
YOUR DEFINITIVE TECHNICAL

SEO CHECKLIST
FOR BRANDS

WHAT’S INSIDE

INTRODUCTION

TOOLS

1.0 CRAWLERS LOCATING

2.0 CRAWLERS ACCESS

3.0 CRAWLERS INDEXING

4.0 DUPLICATE CONTENT

5.0 RELATIONSHIP BETWEEN PAGES

6.0 EXTRAS

CONCLUSION
YOUR DEFINITIVE SEO CHECKLIST

The team at Prosperity Media has worked on hundreds of technical SEO site audits over the
years. Mainly, we help medium and large websites with organic visibility growth. We have
decided to share some of our knowledge on Technical SEO.

This guide is designed to assist webmasters and in-house teams with technical SEO
analysis and to share some common areas we look into when completing analysis on
websites. Some webmasters may think that technical SEO is a one-off project. While it is
true that you don't usually need to perform a full technical SEO audit every month, it should be an
important part of your overall digital strategy.

We recommend that you perform a full audit at least once a year. It is also advisable to
complete regular technical SEO sweeps and crawls each month; these will provide a number
of insights into areas that could be improved. You should also monitor Google
Search Console and Bing Webmaster Tools on a weekly basis to ensure SEO performance is
running at an optimal level.

What Tools Do You Need To Perform A Proper Technical SEO Site Audit?
To make the most of our guide:

You will need to have an existing website. If you're responsible for making on-page changes, you will need administrative access to the backend.
You will need a web crawler. We recommend Sitebulb or Screaming Frog SEO Spider Tool & Crawler (both apps provide either a 7-day free trial or a limited number of page crawls). This is all you will need for an initial audit to identify problems.
SEMrush or Ahrefs – powerful crawlers to identify backlinks, anchor text, orphan pages and keyword overlap.
Sitemap Test – by SEO SiteCheckup.
You will need access to your site's Google Search Console.
Chrome browser + User-Agent Switcher for Chrome.
Hreflang.ninja – quickly check whether your rel-alternate-hreflang annotations on a page are correct.
Varvy's mobile SEO tool – https://varvy.com/mobile/
PageSpeed Insights – by Google.
You will need a website speed test. We recommend GTmetrix (creating a user account will give you finer control of the tool, e.g., testing location, connection speed) or Pingdom.
Structured Data Testing Tool – by Google.
Mobile-Friendly Test – by Google.
Copyscape – a free tool that searches for duplicate content across the Internet.
Siteliner – find duplicate content within your own site.

Now that you have all the tools necessary to perform a technical SEO site audit, let's get
started, shall we? We have broken this checklist down into sections to help you perform
a complete analysis.



1.0 Can Crawlers Find All Important Pages?

The Internet is an ever-growing library with billions of books and no central filing
system. Indexation helps search engines quickly find relevant documents for a search
query, and they do this through the use of web crawlers.
A web crawler (or crawler for short) is software used to discover publicly available web
pages. Search engines use crawlers to visit and store URLs, following the links they
find on any given web page.

When a crawler such as Googlebot visits a site, it will request a file called "robots.txt".
This file tells the crawler which files it can request, and which files or subfolders it is not
permitted to crawl. The job of a crawler is to go from link to link and bring
data about those web pages back to a search engine's servers.

Because there is no central filing system on the World Wide Web, search engines index
billions of web pages so that they can retrieve relevant content for a search query. Indexation is
not to be confused with crawling.

Indexation is the process of taking a page and storing it inside the search engine's database. If
a page is indexed, it can appear on Google and other search engines. Blocking a
page via robots.txt means Google won't crawl it; however, it won't stop Google from indexing
it and making it appear in the search results. In order to completely remove a page
from Google, you'll need to use a meta robots noindex tag and make sure you aren't blocking
the page via robots.txt. Never use both together: if the page is blocked in robots.txt, Google
won't be able to crawl it and see the meta robots noindex tag. Indexation plays an integral part
in how search engines display web pages. Therefore, one of the first things to check is whether
or not web crawlers can 'see' your site's structure. The easiest way for this to happen is via XML sitemaps.

1.1 CHECK FOR AN XML SITEMAP

Think of an XML sitemap as a roadmap to your website. Search engines use XML sitemaps as
a clue to identify what is important on your site. Use this free Sitemap Test to verify that your
website has an XML sitemap. If your website does not have one, there are a number of tools
to generate it.
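
For reference, a minimal XML sitemap following the sitemaps.org protocol looks like the sketch below; the URLs and dates are placeholders, not values from your site.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per indexable page you want search engines to know about -->
  <url>
    <loc>https://www.yoursite.com/</loc>
    <lastmod>2020-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.yoursite.com/category/product-page/</loc>
    <lastmod>2020-01-10</lastmod>
  </url>
</urlset>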

1.2 HAS THE XML SITEMAP BEEN SUBMITTED TO SEARCH ENGINES?

An XML sitemap must be manually submitted to each search engine. In order to do this, you
will need a webmaster account with each search engine. For example, you will need Google
Search Console to submit an XML sitemap to Google, a Bing Webmaster account to submit to
Bing and Yahoo!, and Baidu Webmaster and Yandex Webmaster accounts for Baidu and
Yandex respectively.

Each webmaster account will have an XML sitemap testing tool. To check whether an XML
sitemap has been submitted to Google, log into Google Search Console and, on the navigation
panel on the left of your screen, go to Index > Sitemaps.



On the older version of Google Search Console, navigate to Crawl > Sitemaps. If a sitemap
has been submitted, Google Search Console will display the results on this page. If no sitemap
has been submitted, you can submit one on the same page. You can use Screaming Frog to
crawl the site and export an XML sitemap based on indexable URLs.

1.3 DOES THE XML SITEMAP HAVE FEWER THAN 50,000 URLS?
The sitemap protocol limits a single sitemap file to 50,000 URLs (and 50 MB uncompressed). If
your XML sitemap exceeds this limit, search engines may not process it, because crawling and
indexing web pages across the Internet takes up a lot of computational resources. For this
reason, if your site has tens of thousands of pages, Google recommends splitting your large
sitemap into multiple smaller ones referenced by a sitemap index file.
Google Search Console will notify you if there are errors with your submitted sitemap.
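
If you do need to split a large sitemap, the sitemap protocol provides a sitemap index file that references the smaller sitemaps. A minimal sketch, with placeholder file names:

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Each <sitemap> entry points to one of the smaller sitemap files -->
  <sitemap>
    <loc>https://www.yoursite.com/sitemap-products-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.yoursite.com/sitemap-products-2.xml</loc>
  </sitemap>
</sitemapindex>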

1.4 DOES THE XML SITEMAP CONTAIN URLS WE WANT INDEXED?

Submitting an XML sitemap to Google does not mean that Google will index those web pages.
A search engine such as Google will only index a page if it meets two criteria: (a) it has
found and crawled the page, and (b) it determines the content quality to be worth indexing.
A slow and manual way to check whether a page has been indexed by a search engine is to
enter the URL of your domain with "site:" before it (e.g., site:yourdomain.com).

Tools such as Screaming Frog and Sitebulb generate reports on whether web pages in your
site are indexable. Seeing a 'non-indexable' URL in the report accompanied by a 200 status is
not a cause for immediate concern. There are valid reasons why certain URLs should be
excluded from indexation. However, if a high-value page is non-indexable, this should be
rectified as soon as possible because the URL has no visibility in search result pages.

Common reasons for crawling errors usually come down to misconfigurations in one or more of
the following areas: robots.txt, .htaccess, meta tags, a broken sitemap, incorrect URL
parameters, connectivity or DNS issues, or inherited issues (a penalty due to a previous
site's bad practices).

1.5 ARE THERE INTERNAL LINKS POINTING TO THE PAGES YOU WANT TO BE INDEXED?

Internal links are ones that go from one page on a domain to a different page on the same
domain. They are useful in spreading link equity around the website. Therefore, we recommend
internal linking to high-value pages in order to encourage crawling and indexing of said URLs.

1.6 ARE INTERNAL LINKS NOFOLLOW OR DOFOLLOW?

Building on the idea of spreading link equity internally, the rel="nofollow" attribute prevents
PageRank from flowing through the specified link, and search engines will generally not crawl
the target URL via that link. As a result, you do not want nofollow attributes on internal links
pointing to pages you want crawled and indexed. To find out whether you have nofollow links
on your site, run a Sitebulb analysis. Once completed, go to Indexability > Nofollow.
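
For reference, the difference is a single attribute (the URL below is a placeholder):

<!-- Followed internal link: passes link equity and encourages crawling -->
<a href="https://www.yoursite.com/high-value-page/">High-value page</a>

<!-- Nofollow internal link: PageRank does not flow through this link -->
<a href="https://www.yoursite.com/high-value-page/" rel="nofollow">High-value page</a>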



1.7 DO ANY PAGES HAVE MORE THAN 150 INTERNAL LINKS?
According to Moz, search engine bots may have a crawl limit of 150 links per page. Whilst
there is some flexibility in this crawl limit, we recommend that clients limit the number of
internal links to 150, or risk losing the ability to have additional pages crawled by search
engines.

1.8 DOES IT TAKE MORE THAN 4 CLICKS TO GET TO IMPORTANT CONTENT FROM THE HOMEPAGE?

The further down a page is within a website's hierarchy, the lower the visibility the page will
have in SERPs. For large websites with many pages and subpages, this may result in a crawl
depth issue. For this reason, high-value content pages should be no further than three to four
clicks away from the homepage to encourage search engine crawling and indexing.

1.9 DOES INTERNAL NAVIGATION BREAK WITH JS/CSS TURNED OFF?


In the past, Googlebot was unable to crawl and index content created using JavaScript. This
caused many SEOs to disallow crawling of JS/CSS files in robots.txt. This changed when
Google deprecated AJAX crawling in 2015. However, not all search engine crawlers are able
to process JavaScript (e.g. Bing struggles to render and index JavaScript). Therefore, you
should test important pages with JS and CSS turned off to check for crawling errors.

Why is this important? While Google can crawl and index JavaScript, there are some
limitations you need to know about: all resources (images, JS, CSS) must be available in order
to be crawled, rendered and indexed; all links need to have proper HTML anchor tags; and the
rendered page snapshot is taken at around 5 seconds, so content must load within 5 seconds
or it will not be indexed.

A common misconception with JavaScript SEO is that content behind load-more buttons and
infinite scrolling will get crawled. Unfortunately, it won't. Google can only follow <a> tags.
If your load-more button does not contain a link to a paginated page, the extra content will not
get crawled. Googlebot does not click on load-more buttons or scroll when the page is
rendered, therefore it will not see the extra content.
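
One crawl-friendly pattern, sketched below with illustrative URLs and function names, is to make the load-more control a real anchor pointing to a paginated URL, which JavaScript can then enhance, rather than a bare button:

<!-- Crawlable: the control is a real link to a paginated URL that Googlebot can follow -->
<a href="/blog/page/2/" class="load-more">Load more articles</a>

<!-- Not crawlable: no href, the extra content is only fetched by JavaScript on click -->
<button onclick="loadMoreArticles()">Load more articles</button>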

1.10 CAN CRAWLERS SEE INTERNAL LINKS?


Simply put, if a crawler cannot see the links, it cannot index them. Google can follow
links only if they are in an <a> tag with an href attribute. Other formats will not be
followed by Googlebot. Examples of links that Google crawlers cannot follow:
<a routerLink="some/path">
<span href="https://example.com">
<a onclick="goto('https://example.com')">

To check that your page can be crawled, inspect each individual link in
the rendered source code to verify that all links are in an <a> tag with an href attribute.
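
For reference, these are link formats Googlebot can follow (example.com is a placeholder); an onclick handler is fine as long as a real href is also present:

<a href="https://example.com/some/path">Crawlable link</a>
<a href="https://example.com/some/path" onclick="goto('https://example.com/some/path')">Also crawlable, because the href is still present</a>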



1.11 CHECK FOR REDIRECT LOOPS
Redirect loops differ from redirect chains; however, both issues impact your website's
usability and its ability to be crawled.

Redirect chains make it difficult for search engines to crawl your site, and this has a
direct impact on the number of pages that are indexed. Redirect chains also result in
slower page loading times, which increases the risk of visitors clicking away, which is itself a
small ranking factor. Redirect loops result in an error: upon finding a redirect loop, a crawler
hits a dead end and will cease trying to find further pages to index.

To illustrate a redirect loop, consider this: URL #1 > URL #2 > URL #3 > URL #1. It first
begins as a redirect chain then, because it redirects the user or crawler back to the original
link, a loop is established.

Should your website have a redirect loop, Sitebulb will identify it. Refer to their
documentation on how to resolve a redirect chain and/or loop.

1.12 CHECK FOR ORPHANED PAGES


As the name implies, an orphaned page is a page without a parent; that is, there are no
internal links pointing towards it. Orphaned pages often result from human error and, because
there are no references to the page, a search engine bot cannot crawl it, index it or
display it within SERPs.

There are specific situations where orphaned pages are preferred, for example, landing pages
with unique offers that can only be accessed via PPC or email campaigns (these typically
have a noindex meta tag applied), but if a high-value page is orphaned as a result of a
mistake, this needs to be resolved.

SEMrush and Ahrefs have dedicated site audit tools for identifying orphaned pages. They both
require a paid subscription.



2.0 Can Crawlers Access Important Pages?

In the previous section, we covered twelve things to check to ensure that crawlers can find
important pages. By now, you should fully understand why you want spiders to be able to
crawl all pages within your website.

In the second part of our technical SEO audit, your goal is to ensure that crawlers can access
all the pages that drive revenue.

2.1 DOES THE CONTENT LACK A VALID AND UNIQUE URL?


AJAX is a set of development techniques that combine HTML, CSS, XML and JavaScript.
Some sites serve dynamic content using AJAX, often to save visitors from having to load
a new page.

One of the benefits of AJAX is that it allows users to retrieve content without needing to
refresh a page. In the past, this dynamic content was not visible to crawlers, so in 2009 Google
provided a technical workaround. That changed when Google announced it was
deprecating the AJAX crawling scheme.

Using the Fetch & Render tool found in Google Search Console, compare how the page
appears to users with how Google views it. If you have any rendering issues with the landing
page, you will be able to troubleshoot them using the tool.

2.2 DOES ACCESS TO THE CONTENT DEPEND ON USER ACTION?


Some pages require user action in order to display content. User action in this situation
may involve clicking a button, scrolling down the page for a specified amount of time,
or hovering over an object. Since a web crawler cannot perform these tasks, it will not be
able to crawl and index the content on the page.

Perform a manual check to see if a page's content requires user action. If it does, consider
making that content visible without the user action.

2.3 DOES ACCESS TO THE CONTENT REQUIRE COOKIES?


Browser cookies are tiny bits of text stored on your device by your web browser (e.g., Safari,
Chrome, Firefox). Cookies contain information such as your session token, user preferences,
and other data that helps a website tell one user from another. If your website has
user accounts or a shopping cart, cookies are in play.

When it comes to crawlers, Googlebot crawls stateless, that is, without cookies. Therefore,
from a technical SEO perspective, you want to ensure that all your content is accessible
without cookies.

Perform a manual check by loading the page with cookies disabled. Similarly, you can crawl
your site with Screaming Frog or Sitebulb with cookies turned off to identify whether this
issue exists.



2.4 IS THE CONTENT BEHIND A LOGIN SCREEN?
As search engine bots cannot access, crawl or index pages that require password login,
pages behind a login screen will not be visible in SERPs.

Perform a site crawl without authentication to identify whether you have pages that cannot
be accessed due to a login screen.

2.5 DOES CONTENT REQUIRE JAVASCRIPT TO LOAD, AND DOES JAVASCRIPT PREVENT
CRAWLERS FROM SEEING IMPORTANT CONTENT?

Googlebot uses a web rendering service that is based on Chrome 41. Tests have shown that
Google can read JavaScript, but we continue to have reservations about how well Googlebot
and other crawlers handle the millions of JavaScript-reliant websites they encounter every day.

For this reason, you should test important pages with JavaScript and CSS turned off to see if
they load properly. One way to do this is to copy and paste the URL into the Google Search
Console Fetch & Render tool. If content does not load properly, you will need to rectify the
corresponding line(s) of code.

2.6 IS CONTENT INSIDE IFRAME?


It is common practice to use iframes to embed videos from YouTube, slideshows from
SlideShare, maps from Google, or to use an iframe to embed a PDF so that visitors do not
have to download it and then open it.

In the past, crawlers had difficulty crawling iframe content. These days, this is no longer the
case.

Crawlers consider the content found in iframes to belong to another domain. Whilst this does
not hurt rankings per se, iframes certainly do not help. This is why we recommend that our
clients produce their own unique and valuable content.

To ensure that Google can see the content within an iframe, use Fetch & Render found inside
Google Search Console.



2.7 IS CONTENT INSIDE A FLASH OBJECT?
Adobe Flash files are not supported on many mobile devices (for example, Apple has never
supported Flash on any of its iOS devices). Similarly, Google announced in 2017 that it will
remove Flash completely from Chrome towards the end of 2020. Even Microsoft, which is
responsible for Internet Explorer, plans to end Flash support by the end of 2020.

Generally speaking, we do not recommend using Flash on your website. Instead, HTML5
Canvas combined with JavaScript and CSS3 can replace many complex Flash animations.

2.8 IS DIFFERENT CONTENT BEING SERVED TO DIFFERENT USER AGENTS?

Manually check pages using the Chrome add-on User-Agent Switcher for Chrome. There is
also an option in Screaming Frog to crawl the site with spoofed user agents. You can change
your user agent to mimic Googlebot or BingBot.

2.9 IS CONTENT BEHIND A URL FRAGMENT?

Google wants you to stop using URL fragments to serve content. A fragment is an internal
page reference, sometimes called a named anchor. Googlebot typically ignores fragments,
and as such it is effectively impossible for AJAX-driven sites that rely on fragment URLs to
get indexed.

However, hashtag URLs are fine if you are using a table of contents with jump links.

How can you find which pages are behind a URL fragment? Within Sitebulb, navigate to
Filtered URL List > Internal HTML and then do a manual search for URLs containing
the hash character (#).
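
As a reference point, an acceptable use of a fragment is a jump link that points to an element on the same page; content that only exists behind a fragment route is what you want to avoid. A minimal jump-link sketch (names are illustrative):

<!-- Table of contents jump link: the fragment points to an element on the same page -->
<a href="#pricing">Jump to pricing</a>

<!-- Further down the same page -->
<h2 id="pricing">Pricing</h2>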



3.0 Can Crawlers Index All Important Pages?

By this stage of the technical SEO audit, you will know how crawler-friendly your website is
and how to fix any crawlability issues that were present. In this section, we will address the
indexability of your site.

3.1 WHAT URLS ARE ROBOTS.TXT BLOCKING?


The robots.txt file specifies which web crawlers can or cannot access parts of a website.
Some real-world applications of restricting crawler access include: preventing search engines
from crawling certain images/PDFs, keeping internal search result pages from showing up in
public SERPs, and preventing your server from being overloaded by specifying a crawl delay.

A serious problem arises when you accidentally disallow Googlebot from crawling your entire
site. Google Search Console has a robots.txt Tester; use it to see whether any unintended
restrictions have been applied.

3.2 DO ANY IMPORTANT PAGES HAVE NOINDEX META TAG?


A noindex meta tag can be included in a page's HTML code to prevent the page from appearing
in Google Search. You will be able to spot it in a page's source code by looking for
<meta name="robots" content="noindex">.

Therefore, to ensure that your site's important pages are being crawled and indexed by
search engines, check that there are no noindex occurrences on those pages.
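
For reference, the robots meta tag sits in the page <head>; the common "noindex, follow" pattern shown below (as an illustration, not a recommendation for every page) keeps the page out of the index while still letting crawlers follow its links:

<head>
  <!-- Keep this page out of the index but let crawlers follow its links -->
  <meta name="robots" content="noindex, follow">
</head>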



3.3 ARE URLS RETURNING 200 STATUS CODE?
HTTP response status codes are issued by a server in response to a client’s request to the
server. Common status codes include 200 (OK), 301 (moved permanently), and 404 (not
found).

Status code 200 is the standard response for successful HTTP requests. What this means is
that the action requested by a client was received, understood and accepted. A status code
200 implies that a crawler will be able to find the page, crawl and render the content and
subsequently, index the page so that it will appear in search result pages.

A site crawl using either Screaming Frog or Sitebulb will show you the response codes of all
your pages.

Generally speaking:
If you find 302 HTTP status codes, we recommend changing these to 301 redirects.
301 redirects are the safest way to ensure 90-99% of link equity is passed on to the
new pages.
Verify that pages returning a 404 status code are in fact meant to be unavailable (e.g. deleted
pages), then remove internal links to those pages so users do not keep hitting the 404, and
either 301-redirect the URL to a relevant active page or serve a 410 status code.

A 302 is not always bad. We often use it when products are unavailable for the time being, e.g.
Velocity credit cards are not available right now but they might come back later this year.
AEM uses a 307 temporary redirect instead of a 302; it is a newer type of temporary redirect
and clearly states to crawlers that the page will be back.

If you discover 500 (internal server error) and 503 (service unavailable) status codes, this
indicates some form of server-side problem. Specific to site indexation, 5XX-type responses
tell Googlebot to reduce its crawl rate for the site until the errors disappear.
That's right: this is bad news!



4.0 Does Duplicate Content Exist?

As much as 30% of the Internet is duplicate content. Google has openly stated that it does
not penalise duplicate content. While technically not a penalty, in the world of SEO duplicate
content can still impact search engine visibility and rankings. For example, if Google deems a
page to be duplicate content, it may give it little to no visibility in SERPs.

4.1 CHECK FOR POOR REDIRECT IMPLEMENTATION


Using Screaming Frog:
Check how the HTTP and HTTPS versions of your URLs resolve.
Check how the www and non-www versions resolve.
Check how URLs with and without a trailing slash (/) resolve.

Only one version of each pair should return a 200 status code; the alternative version should
redirect to it. If a redirect exists but is not a 301, it should be changed to a 301 redirect. This
can be done within your .htaccess file.

4.2 CHECK URL PARAMETERS


URL parameters used for click tracking, session IDs, and analytics codes can cause
duplicate content issues. To prevent bots from indexing unnecessary URL parameters,
implement canonicalisation or noindex tags, or use robots.txt to block crawlers from accessing
them (see the canonical example below).

To find existing URL parameters, follow the steps outlined in 2.9.
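
For example, a parameterised URL can declare the clean version of itself as canonical via a link element in its <head>; the URLs below are placeholders:

<!-- On https://www.yoursite.com/shoes/?sessionid=12345&utm_source=newsletter -->
<link rel="canonical" href="https://www.yoursite.com/shoes/">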

4.3 ARE THERE THIN AND/OR DUPLICATE PAGES?


Do multiple URLs return the same or similar content? Secondly, does your site have many
pages with fewer than 300 words? If so, what value do these pages provide to visitors? Google
only dedicates a set amount of resources to each site, also known as its crawl budget. Having
many thin or duplicate pages can eat into your website's crawl budget.
Use Sitebulb > Duplicate Content to identify them.



5.0 Can Crawlers Understand the Relationship Between Different Pages?

5.1 HREFLANG ISSUES


In 2011, Google introduced the hreflang attribute to help websites serving multiple countries
or languages increase their visibility to the right users. In a 2017 study, over 100,000 websites
were found to have hreflang issues.

Here are 5 hreflang issues to look out for:

Are hreflang tags used to indicate localised versions of the content?
Does each page have a self-referential hreflang tag?
Has hreflang="x-default" been used to indicate the default version?
Are there conflicts between hreflang and canonical signals?
Do HTML language and region codes match the page content?

Use hreflang.ninja to check whether rel-alternate-hreflang annotations have been
implemented properly. Perform a SEMrush or Ahrefs site audit to check for hreflang
implementation issues.
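
As a reference point, a correct hreflang set for an Australian page with a US alternative and an x-default fallback might look like the sketch below; every URL in the group should carry the same set of tags, including a self-referential one (placeholder URLs):

<!-- In the <head> of https://www.yoursite.com/au/ -->
<link rel="alternate" hreflang="en-au" href="https://www.yoursite.com/au/">
<link rel="alternate" hreflang="en-us" href="https://www.yoursite.com/us/">
<link rel="alternate" hreflang="x-default" href="https://www.yoursite.com/">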

5.2 PAGINATION ISSUES


Paginated pages are a group of pages that follow each other in a sequence and are internally
linked together.

Things to check for specifically:

Do sequential pages have rel="next" and rel="prev" tags in the page header?
Do navigation links have rel="next" and rel="prev"?
Does each page in a series have a self-referential canonical tag, or does the canonical point
to a 'view all' page?

An important thing to note is that in 2019, Google stated that they no longer use rel="next"
and rel="prev" for indexing. However, other search engines may still use these tags.
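
For reference, this is roughly what the annotations look like in the <head> of page 2 of a series, alongside a self-referential canonical (placeholder URLs):

<!-- In the <head> of https://www.yoursite.com/category/page/2/ -->
<link rel="prev" href="https://www.yoursite.com/category/">
<link rel="next" href="https://www.yoursite.com/category/page/3/">
<link rel="canonical" href="https://www.yoursite.com/category/page/2/">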



5.3 CANONICALISATION ISSUES
If you do not explicitly tell Google which URL is canonical, Google will make the choice for
you. Google understands that there are valid reasons why a site may have different URLs
that point to the same page or have duplicate or very similar pages at different URLs.

Check for the following canonicalisation errors using Sitebulb:


Are canonical tags in use?
Do canonical URLs return 200 status code?
Do canonical declarations use absolute URLs?
Are there any conflicting canonical declarations?
Has HTTPS been canonicalised to HTTP?
Do canonical loops exist?

5.4 MOBILE-SPECIFIC PAGE ISSUES


For sites that do not have a responsive design but rely on serving a separate mobile and
desktop version, Google sees this as duplicate versions of the same page. For example –
www.yoursite.com is served to desktop users while m.yoursite.com is served for mobile
users.

To help search engines understand separate mobile URLs, we recommend adding the
following annotations (illustrated below):

Add a link rel="alternate" tag pointing to the corresponding mobile URL (i.e. m.yoursite.com)
on the desktop page (this helps crawlers discover the location of your mobile pages);
Add a rel="canonical" tag on the mobile page pointing to the corresponding desktop
URL (i.e. www.yoursite.com).
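
Put together, the two annotations look roughly like this (yoursite.com is a placeholder):

<!-- On the desktop page, https://www.yoursite.com/page/ -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="https://m.yoursite.com/page/">

<!-- On the mobile page, https://m.yoursite.com/page/ -->
<link rel="canonical" href="https://www.yoursite.com/page/">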

Therefore, when it comes to separate URLs for mobile and desktop users, check:

Do mobile URLs have canonical links to the desktop URLs?
Do mobile URLs resolve correctly for different user agents?
Have redirections been implemented via HTTP and JavaScript redirects?

For detailed instructions on how to implement these changes, refer to Google's support
documentation on separate URLs.



6.0 Moving on from Crawling and Indexing

Beyond optimising your site for search engine crawling and indexing, we've put together a
few more things you should check as part of your annual technical SEO audit. Specifically, this
section relates to ranking factors and the common issues that can prevent your money pages
from gaining the SERP visibility they deserve.

6.1 KEYWORD CANNIBALISATION


Chances are that, if your website has been around for a few years, you'll have a few
pages that target the same keyword(s). This in itself isn't a problem, as many pages rank for
multiple keywords. According to a study by Ahrefs, the average page ranking in first
position also ranked for around one thousand other relevant keywords.

Use SEMrush or Ahrefs to identify your site's organic keywords and export them into a
spreadsheet. Then sort the results alphabetically to see where keyword overlaps exist across
your site.

6.2 ON-PAGE OPTIMISATION


Some basic on-page SEO to check for (see the example markup below):
Does each page have a unique and relevant title tag?
Does each page have a unique and relevant meta description?
Are images on money pages marked up with appropriate alt text?
Is important content/information hidden in tabs/accordions?
Do money pages have helpful content (text + images) for the targeted keyword(s)?

Use Sitebulb or Screaming Frog to help you identify duplicate title tags and meta descriptions.
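
As a quick reference, the basic on-page elements above look like this in the page source (all values are illustrative):

<head>
  <!-- Unique, descriptive title and meta description for this page -->
  <title>Men's Running Shoes | YourSite</title>
  <meta name="description" content="Browse our range of men's running shoes with free delivery across Australia.">
</head>
<body>
  <!-- Descriptive alt text on images that appear on money pages -->
  <img src="/images/blue-running-shoe.jpg" alt="Blue lightweight men's running shoe, side view">
</body>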

6.3 IS YOUR SITE’S CONTENT UNIQUE?


You want to avoid publishing duplicate content on your site because it can impact SERP
visibility. This is particularly important for e-commerce sites where product descriptions and
images are provided by the manufacturer. Search engines want to serve the most relevant
pages to searchers and, as a result, they will rarely show multiple versions of the same content.

We always recommend that our clients write their own content, including product descriptions,
to avoid falling into the trap of duplicate content issues.

Use Copyscape to search for copies of your page across the Internet. If you find that another
site has copied your content in violation of copyright law, you may request that Google
removes the infringing page(s) from their search results by filling out a DMCA request. Use
Siteliner to search for copies of your page within your own site.

6.4 USER-FRIENDLINESS
Remember: the discipline of search engine optimisation is not about trying to game the
system but rather about improving the search engine user's experience.

Therefore, at the most basic level, check for the following:


Are pages mobile-friendly?
Are pages loading in less than 5 seconds?
Are pages served via HTTPS?
Are there excessive ads that push content below the fold?
Are there intrusive pop-ups?

This is because a new visitor to your site will probably be impatient; a resource page that
takes more than 5 seconds to load will result in the user bouncing. Similarly, if the intended
information is hidden behind excessive ads and pop-ups, the user will bounce away. As
tempting as it may be to display a full-screen call-to-action pop-up to a new visitor, this is a
prime example of the sort of poor user-friendliness that Google discourages.

Use GTmetrix, Pingdom and PageSpeed Insights to determine what is causing slow page
loading. Images are the number one cause of delayed page loads. Make sure that you are
serving correctly scaled images that have been compressed.

6.5 STRUCTURED MARKUP


According to Google, 40% of searches (especially local searches) include schema markup;
however, more than 95% of local businesses either do not have schema on their site or have
implemented it incorrectly.

When done correctly, structured markup helps search engines to understand the meaning
behind each page – whether it be an informational article or product page. The better a
search engine understands your content, the more likely it will give it preferential SERP
visibility.
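
As an illustration, structured markup is commonly added as JSON-LD inside a script tag in the page; the product values below are placeholders, and the required properties vary by schema type:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Blue Running Shoe",
  "description": "Lightweight men's running shoe.",
  "offers": {
    "@type": "Offer",
    "priceCurrency": "AUD",
    "price": "129.00",
    "availability": "https://schema.org/InStock"
  }
}
</script>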

To show how Google favours structured data, they have a dedicated case studies page. Use
Google's Structured Data Testing Tool to identify existing errors, and consider hiring a schema
specialist to ensure that your pages have been marked up correctly. Be aware that abuse of
structured markup can lead to a manual penalty.

Learn more about Technical SEO with a recent presentation by Prosperity Media's Technical
SEO Manager: https://www.youtube.com/watch?v=4_CdNskBTFE&t=2s

CLOSING REMARKS

With newfound technical SEO knowledge, you should have a selection of key issues you can
fix on your website. A crucial element of SEO is that you constantly need to be making
changes and staying up to date with industry news. New developments, tools and markups
come out regularly, so it's important to keep on top of them and test everything where possible.

Depending on how large your business is, it might take several months to have
developers implement technical SEO changes. If you work with smaller businesses
and you have direct access to a developer, you should be able to implement these
changes quickly. It’s important to promote the benefits of technical SEO to the
wider organisation as implementing technical SEO changes at scale can have a
highly positive effect on organic traffic.

If you need assistance with Technical SEO or any other area of SEO, do not hesitate to get in
contact with the team at Prosperity Media.

Contact us here: https://prosperitymedia.com.au/contact/

