Google Index Status Falling Fast - What should I be considering?
-
Hi Folks,
I'm working on an ecommerce site and have found a month-on-month fall in Index Status that has continued since late 2015. According to Webmaster Tools, only around 80% of pages are now indexed.
I do not seem to have any bad links or server issues. I am in the early stages of working through the site, updating content and tags, but have yet to see the fall slow down.
If anybody has tips on where to look for issues, or insight into how to resolve this, I would really appreciate it.
Thanks everybody!
Tim
-
Hi dude, thank you so much for taking time to look at this site. It is really kind of you. I will be taking a look at all the points raised over the next week to see what we can achieve. Thanks, Tim
-
Thank you for taking so much time to look at our site. I really appreciate it. I will dig into the points to see what we can achieve. Thanks again, Tim
-
Thanks dude, I will take a look at this. Really appreciate you taking time to respond.
-
Hi Tim,
I agree with Laura on the canonical tags. I've worked on several large Magento sites and I've never seen any issue with the way Magento handles it - by canonicalizing product URLs to the root directory.
In fact, I actually prefer this way over assigning a product to a 'primary' category and using that as the canonical.
As Laura said, a reduction in the total number of indexed pages might actually be a really big positive here! Having more pages indexed does not necessarily mean things are better. If low-quality/duplicate pages have been removed from the index, that's a really good thing.
I did find some issues with your robots.txt file:
- Disallow: /media/ - should be removed because it's blocking images from being crawled (this is a default Magento thing and they should remove it!)
- Disallow: /? - this basically means that any URLs containing a ? will not be crawled, and with the way pagination is set up on the site, any pages after page 1 are not being crawled.
This could be impacting how many product pages you have indexed - which would definitely be a bad thing! You would obviously want your product pages to be crawled and indexed.
Solution: I would leave Disallow: /? in robots.txt because it stops product filter URLs from being crawled, but I would add the following line:
Allow: */?p=
This line will allow your paginated pages to be crawled, which will also allow products linked from those pages to be crawled.
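To make that concrete, here's a rough sketch of how those directives could sit together in the file (an illustration of the lines discussed above, not a drop-in replacement for your full robots.txt):
```
User-agent: *
# Default Magento rule - consider removing it so images under /media/ can be crawled
# Disallow: /media/

# Existing rule that keeps filter/parameter URLs out of the crawl
Disallow: /?

# Proposed addition: explicitly allow paginated listing pages (?p=2, ?p=3, ...)
Allow: */?p=
```
If the Disallow rule is what's catching those paginated URLs, the longer Allow rule should take precedence for Googlebot, since Google gives priority to the most specific (longest) matching rule.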
Hope this helps!
Cheers,
David
-
I would be interested in seeing examples of where this has happened. Were the canonical tags added after the URLs were already indexed or were the canonicals in place when the site launched?
-
However, the canonical is only an advisory signal. I've had a few cases where people relied on the canonical tag on a site with several product URL types (as above, one version with the category in the URL and one with just the product URL), with many references to those different URLs elsewhere (onsite and offsite), and both versions ended up indexed, which is not always ideal. Having a single URL version also means that reporting tools such as Screaming Frog only show the true URLs on the site, and it saves crawl budget, since the crawler doesn't have to fetch both the category-based URL and the canonical URL.
Whilst it's not a major issue, it's something I would look at changing.
-
If I understand you correctly, you are referring to the following two URLs:
https://www.symectech.com/epos-systems/customer-displays/pole-mounting-kit-94591.html
https://www.symectech.com/pole-mounting-kit-94614.html
Both of these have the same canonical referenced, which is https://www.symectech.com/pole-mounting-kit-94614.html.
It doesn't matter what actually shows in the address bar. For the purposes of indexation, what matters is the URL referenced in the canonical tag.
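In other words, both of those pages should carry the same link element in their <head> - something along these lines, using the URLs above:
```html
<!-- Present on both the category-path URL and the short product URL -->
<link rel="canonical" href="https://www.symectech.com/pole-mounting-kit-94614.html" />
```
Google then treats the short product URL as the one to index, regardless of which path a visitor (or crawler) arrived through.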
-
What I've suggested would avoid these duplicate URLs. Here are some actual examples: going via a tier-two category, I get the following product URL:
https://www.symectech.com/epos-systems/customer-displays/pole-mounting-kit-94591.html
With a canonical of:
https://www.symectech.com/pole-mounting-kit-94614.html
Yet when going from https://www.symectech.com/epos-systems/?limit=32&p=2 (a tier-one category), I get the canonical URL.
So if products are listed in multiple tier-two categories, that's multiple URLs for the same product. With the suggestion I made, there would only be one variation of this product URL (the canonical).
-
A reduction in the number of pages indexed does not necessarily mean something is wrong. In fact, it could mean that something is right, especially if your rankings are improving.
How are you determining that only 80% of pages are indexed? Can you provide a specific URL that is not being indexed?
If you made changes to your canonical tags, robots.txt, or meta robots tags, these could all cause a reduction in the number of pages being indexed.
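For reference, these are the page-level signals worth double-checking when you view the source of an affected page (the URL below is just a placeholder):
```html
<!-- A meta robots tag like this tells Google to drop the page from the index -->
<meta name="robots" content="noindex,follow" />

<!-- A canonical pointing at a different URL asks Google to index that URL instead -->
<link rel="canonical" href="https://www.example.com/some-other-page.html" />
```
The same noindex signal can also be sent as an HTTP response header (X-Robots-Tag: noindex), so it's worth checking headers as well as the HTML.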
-
The canonicals appear to be set up correctly, and I would not advise listing the product URLs as their canonicals in the category as suggested above. That will create duplicate URLs with the same content, which is exactly what canonical tags are designed to avoid.
-
Just going through Laura's list as a checklist for ones that are applicable:
- Have you checked your robots.txt file or page-level meta robots tag to see if you are blocking or noindexing anything?
Nothing that I can see that's causing a major issue.
- Is it a large site? If so, check for issues that may affect crawl budget.
The main thing I can see is that the product URLs and their canonicals are different. Is there any way of listing the product URLs as their canonical versions in the category pages?
-
Sorry for the delay in response. The website is symectech.com
We have fixed various issues, including a noindex issue earlier this year, but our index status is continuing to fall. However, rankings seem to be improving week on week according to Moz. Thanks.
Tim
-
Just to echo what Laura has said, if you can share a URL that would be great so we can help you get to the source of the problem.
Try running a tool like Screaming Frog (https://www.screamingfrog.co.uk/seo-spider/) to check the issues Laura has mentioned above, as doing a lot of those checks by hand can be quite time-consuming.
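If you just want a quick spot-check on a handful of URLs before running a full crawl, something like the rough Python sketch below would do it (it assumes the requests and beautifulsoup4 packages are installed, and the URL list is only an example taken from this thread):
```python
# Quick indexability spot-check: status code, X-Robots-Tag, meta robots and canonical.
# Assumes: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

urls = [
    "https://www.symectech.com/pole-mounting-kit-94614.html",
    "https://www.symectech.com/epos-systems/customer-displays/pole-mounting-kit-94591.html",
]

for url in urls:
    resp = requests.get(url, timeout=15, headers={"User-Agent": "indexability-spot-check"})
    soup = BeautifulSoup(resp.text, "html.parser")

    robots_meta = soup.find("meta", attrs={"name": "robots"})
    canonical = soup.find("link", attrs={"rel": "canonical"})

    print(url)
    print("  status:      ", resp.status_code)
    print("  x-robots-tag:", resp.headers.get("X-Robots-Tag", "-"))
    print("  meta robots: ", robots_meta.get("content", "-") if robots_meta else "-")
    print("  canonical:   ", canonical.get("href", "-") if canonical else "-")
```
Screaming Frog will do all of this (and more) across the whole site, so treat the script as a quick sanity check rather than a replacement for a proper crawl.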
Also, have you seen a drop in rankings alongside your pages falling out of the index?
-
Any chance you can share the URL? That would make it much easier for someone to help in this forum. Without the URL, I can offer a few diagnostic questions.
- Has the number of pages on the site stayed the same while pages are being removed from the index? Or have you added more content, but the percentage of it in the index has decreased?
- Have you checked your robots.txt file or page-level meta robots tag to see if you are blocking or noindexing anything?
- Have you submitted an XML sitemap? If so, check the XML sitemap to make sure what's being submitted should be indexed. It's possible to submit a sitemap that includes noindexed pages, especially with some automated tools (there's a rough sketch for checking this at the end of this list).
- Is it a large site? If so, check for issues that may affect crawl budget.
- Have you changed any canonical tags?
- Have you used the Fetch as Google tool to diagnose a specific URL?
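For the sitemap and robots.txt points above, a short script can pull every URL out of the XML sitemap and flag any that robots.txt would block. This is only a rough sketch: it assumes Python with the requests package, and it guesses that the sitemap lives at /sitemap.xml, which may not be true for your site.
```python
# Cross-check: which URLs submitted in the XML sitemap are blocked by robots.txt?
# Assumes: pip install requests; the sitemap location is a guess.
import requests
import urllib.robotparser
import xml.etree.ElementTree as ET

site = "https://www.symectech.com"

# Load robots.txt. Note: urllib.robotparser doesn't understand Google's * wildcards,
# so treat the results as approximate and verify anything suspicious in Search Console.
rp = urllib.robotparser.RobotFileParser(site + "/robots.txt")
rp.read()

# Pull every <loc> out of the sitemap (namespace-agnostic match on the tag name)
sitemap_xml = requests.get(site + "/sitemap.xml", timeout=15).text
locs = [el.text.strip() for el in ET.fromstring(sitemap_xml).iter()
        if el.tag.endswith("loc") and el.text]

blocked = [u for u in locs if not rp.can_fetch("*", u)]
print(f"{len(locs)} URLs in the sitemap, {len(blocked)} blocked by robots.txt")
for u in blocked[:20]:
    print("  blocked:", u)
```
The robots.txt tester in Search Console is the authoritative check, but a pass like this is a quick way to spot whether blocked URLs are being submitted for indexing.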