The Triple Beginner SEO Guide: The Dive Into Technical SEO
In previous articles, we talked about search engines, On-Page and Off-Page SEO, and how to optimize your site for Google to consider ranking it.
However, what good would all of that do if Google never actually found your page? You can have the best content in the world, but without Technical SEO, it's as good as nothing.
Technical SEO covers what happens behind the ranking process, from crawling and indexing to site architecture and speed, and everything in between.
Seems a bit vague? Let's break it down, starting with a deeper understanding of crawling and indexing.
How Google Builds Its Search Index:
Google uses software programs known as "crawlers" or "spiders" to navigate web pages, following links to discover new URLs – you can also give Google a sitemap to make this process easier.
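For instance, a minimal XML sitemap might look like the sketch below (the example.com URLs and dates are placeholders, not from this article):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want Google to discover -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2021-10-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/technical-seo/</loc>
    <lastmod>2021-10-20</lastmod>
  </url>
</urlset>
```

You can submit this file in Google Search Console (Sitemaps report) or reference it from your robots.txt file so crawlers can find it on their own.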
Afterwards, the processing stage begins: the system handles canonicalization and sends pages off for rendering, allowing Google to see each page through the user's eyes.
Finally, pages are stored in an index, ready to be served to searchers. As for how they rank, that's your On-Page and Off-Page SEO mission.
Any questions?
Of course: what in the world is "canonicalization"?
And, can one control what Google crawls and stores in its index?
If Google finds multiple versions of a web page under different URLs, it chooses one of them as the "canonical" version to show searchers.
However, it's not "Eeny, Meeny, Miny, Moe"; Google weighs many signals when picking the canonical, including sitemap URLs, duplicate pages, canonical tags, redirects, and internal links.
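The canonical tag itself is just one line in a page's <head>; here's a sketch with a placeholder URL:

```html
<!-- Placed in the <head> of every duplicate or variant version of the page -->
<link rel="canonical" href="https://www.example.com/workout-routines/" />
```

Pointing your redirects and internal links at that same URL keeps the signals consistent.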
You can use the URL Inspection tool in Google Search Console to check a URL's indexing status and whether Google considers it the canonical version.
As for the second question: yes. You can analyze the Crawl Stats report in Google Search Console and control what Google crawls on your website. Here are a few ways how:
- Robots.txt: this file tells Google where it can and can't go on your site (see the sample file after this list).
- Crawl Rate:
- There is a crawl-delay directive you can add to the robots.txt file to tell crawlers how long to wait between requests; note that Googlebot ignores it, although some other search engines' bots respect it.
- You can also limit Googlebot's crawl rate in Google Search Console.
- Access Restrictions: if you want a page to be accessible to certain users but not to Google, there are a few things you can do, such as:
- Login System.
- HTTP Authentication.
- IP Whitelisting.
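To tie the robots.txt and crawl-rate points together, here is a hedged sample robots.txt (the paths and example.com domain are placeholders):

```txt
# https://www.example.com/robots.txt
User-agent: *
Disallow: /admin/        # keep all crawlers out of this section
Disallow: /cart/
Crawl-delay: 10          # seconds between requests; honored by some bots, ignored by Googlebot

# Help crawlers find your sitemap
Sitemap: https://www.example.com/sitemap.xml
```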
One thing to keep in mind: Google may still index pages you blocked from crawling in robots.txt if other pages link to them. On the other hand, you can remove URLs from Google Search in various ways, which we will cover in our following guide.
For now, let’s talk about some Technical SEO actions you can take to improve your presence and ranking on the web.
Technical SEO Actions You Should Consider:
When it comes to Technical SEO, there are some MUST-DO steps you should take; let's look at a few:
Are You in Google's Index?
Again, what's the point of the best content ever created if Google never indexed it? No matter how high you expect it to rank, if Google can't see it, neither can your users.
You’re probably wondering why a certain page would not be indexed, right?
Well, start investigating by searching site:yoursite.com and checking whether the number of results roughly matches the number of pages on your site. If it doesn't, here are a few possible reasons why:
- Your robots.txt file contains a "User-agent: * Disallow: /" rule, telling spiders to stay away from your entire site.
- A misconfigured .htaccess file is creating an infinite redirect loop that prevents your site from loading (see the sketch after this list).
- You have a <meta name="robots" content="noindex, nofollow"> tag in your page's <head>.
- Your Sitemap is NOT up-to-date.
- Incorrect configuration of URL parameters in Google Search Console.
- Low page rank.
- Connectivity & DNS issues.
- Second-hand domain with inherited issues.
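To illustrate the .htaccess point above, here is a hedged sketch (Apache mod_rewrite, placeholder domain) of a classic infinite-loop misconfiguration and one way to fix it:

```apache
# Broken (don't use): every request is redirected to the www host, including
# requests that are already on www, so the browser is sent in circles.
RewriteEngine On
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

# Fixed: only redirect when the host is NOT already www.example.com.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```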
Therefore, you should always check whether your pages are indexed; you can use a Site Audit tool to see which pages can't be indexed and why, so you can move on to fixing the issue.
The Lost and Found:
It's no secret that earning backlinks is not an easy task; it takes continuous effort and a great deal of time.
Now, let's take a scenario where you change the URL of a certain page. What happens when users click a link that points to the old one?
They simply won't find the content they're looking for. What a bad user experience! And what a loss of backlink value for you!
The good news is that instead of letting the old URL return a 404 – Not Found error, you can set up a 301 redirect, which basically tells the browser "Hey, I permanently moved my page from here to there."
With that, you recover the lost value of your backlinks, and each time users click a link pointing to the old URL, they are redirected to the new one.
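On an Apache server, for example, a single line in .htaccess can do the job (the paths and domain below are placeholders):

```apache
# Permanently send visitors (and link equity) from the old URL to the new one
Redirect 301 /old-workout-guide/ https://www.example.com/workout-routines/
```

Other stacks (Nginx, or a redirect plugin in your CMS) have their own syntax; what matters is the 301 status code that marks the move as permanent.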
Give Google a Hand; Good Deeds Find Their Way Back:
Google works hard to crawl your page and understand its content thoroughly; you can help it along by providing structured data, coded using in-page markup.
Basically, structured data is a standardized format for providing information about a page and classifying its content; i.e., you're giving Google explicit clues about the meaning of your page. For instance, if your page is about workout routines: what kind of exercises, how many sets, for how long, and so on.
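For the workout example, a hedged JSON-LD sketch using schema.org's HowTo type (the names and values below are made up for illustration) might look like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "Beginner Full-Body Workout Routine",
  "totalTime": "PT30M",
  "step": [
    { "@type": "HowToStep", "text": "Warm up for 5 minutes." },
    { "@type": "HowToStep", "text": "Do 3 sets of 10 squats." },
    { "@type": "HowToStep", "text": "Do 3 sets of 10 push-ups." }
  ]
}
</script>
```

You can check markup like this with Google's Rich Results Test before publishing.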
Not only does Google use structured data to understand content, it also uses it to enable special search result features. With that, your site stands out from the rest on the SERP, and users can find your content by searching for the details you provided.
It’s a win-win situation; help Google → Google helps you.
Eventually, It Is All About The User:
There is no official word-for-word statement proving that ranking and user experience are related, and honestly, there doesn't need to be one. It is simply the logical outcome: your content is aimed at the user, and Google does all of its work in favor of the user. Hence, if you can't provide a good user experience, your page is of little interest to Google.
Here are some aspects of a website you should work on to deliver quality page experience:
- Core Web Vitals: these are speed metrics Google uses to measure a page's UX in terms of visual load, visual stability, and interactivity (see the measurement sketch after this list).
- Largest Contentful Paint, LCP – the time a page takes to render the largest piece of content visible on the screen. You should aim for an LCP of 2.5 seconds or less.
- Cumulative Layout Shift, CLS – the unexpected shifting of page elements while the page is still loading. Aim for a CLS score below 0.1.
- First Input Delay, FID – the time from when a user first interacts with your site to when the browser is actually able to respond to that interaction. You should strive for an FID of 100 milliseconds or less.
- Security: the user's biggest concern.
- HTTPS: this protocol offers confidentiality and authentication, protecting the communication between the browser and server from attackers.
- Safe Browsing: a guarantee that the page is NOT deceptive and doesn’t contain malware or harmful downloads.
- Interstitials: these are the super-duper annoying popups that cover the whole content, denying users visual access until they interact. Seriously, don’t do that to your users.
- Mobile friendliness: if users browse your page on the go from a mobile phone, they should be happy with that experience. Therefore, you need a site that is easy to use across devices, and you can check the Mobile Usability report in Google Search Console.
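As a rough sketch of how the Core Web Vitals above can be observed in the browser (in practice you would more likely rely on PageSpeed Insights or Google's web-vitals library), the standard PerformanceObserver API looks roughly like this:

```html
<script>
  // Largest Contentful Paint: log the render time of the largest element seen so far
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    console.log('LCP (ms):', entries[entries.length - 1].startTime);
  }).observe({ type: 'largest-contentful-paint', buffered: true });

  // Cumulative Layout Shift: sum unexpected shifts not caused by recent user input
  let cls = 0;
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (!entry.hadRecentInput) cls += entry.value;
    }
    console.log('CLS so far:', cls);
  }).observe({ type: 'layout-shift', buffered: true });

  // First Input Delay: gap between the first interaction and the browser's response
  new PerformanceObserver((list) => {
    const first = list.getEntries()[0];
    console.log('FID (ms):', first.processingStart - first.startTime);
  }).observe({ type: 'first-input', buffered: true });
</script>
```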
Last, but never least: is your website doing OK? Yes, we mean health-wise.
You should always keep up with your pages’ health and check for any broken links in order to provide the best user experience. Additionally, stay updated with the latest Google algorithm changes and see what Technical SEO steps you should take accordingly.
Sounds like a bit too much? No worries, we are here for you. Our team at Uranus Agency is ready to offer every SEO service your site might need: On-Page, Off-Page, and Technical SEO.