Sitemap indexed pages dropping
-
About a month ago I noticed that the number of pages indexed from my sitemap was dropping. There are 134 pages in my sitemap and only 11 are indexed. It used to be 117 pages and then just died off quickly. I still seem to be getting consistent search traffic, but I'm just not sure what's causing this. There are no warnings or manual actions required in GWT that I can find.
-
Just wanted to update this. It took about a month, but after I completely removed the canonical tags and instead handled the duplicate content with URL rewrites and 301 redirects, I now have 114 out of 149 pages indexed from my sitemap, which is much better. At one point it had dropped to 5 out of 149.
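For anyone hitting the same thing, the redirect side of it is conceptually just this - a simplified sketch rather than my exact code, and the URL patterns and paths are made up:

    <?php
    // Catch a duplicate/old-style URL and answer with a permanent 301 redirect to
    // the single clean version, so only one URL per article gets crawled and indexed.
    $requestUri = $_SERVER['REQUEST_URI'];

    if (preg_match('#^/article\.php\?id=(\d+)$#', $requestUri, $m)) {
        header('Location: http://goautohub.com/articles/' . $m[1], true, 301);
        exit;
    }
    ?>

The same rule could live in an .htaccess rewrite instead; the important part is that every duplicate variation answers with a 301 to one clean URL rather than serving the same content on two addresses.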
-
Hi Stephen,
Great that you've probably found the cause - this will absolutely cause mass de-indexation. I had a client a year ago canonicalise their entire site (two sites, actually) to the home page. All their rankings and indexed pages dropped off over a matter of about six days (we spotted the tag immediately but the fix went into a "queue" - ugh!).
The bad news is that it took them a long time to get properly re-indexed and regain their rankings (I am talking months, not weeks). Having said this, the sites were nearly brand new - they had very few backlinks and were both less than six months old. I don't believe an older site would have as much trouble regaining rankings, but I can't be sure, and I have only seen that situation take place first-hand once.
-
I may have found the issue today. Most of the articles are pulled from a database, and I think I placed the wrong canonical tag on those pages, which screwed up everything. Does anyone know how long it takes for a fix like this to show?
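For context, the fix is just making sure each article page emits a canonical that points at its own clean URL - roughly like this, simplified, with placeholder variable and field names rather than my real code:

    <?php
    // Each database-driven article page should declare ITS OWN URL as canonical,
    // not the URL of some other page. $article is whatever row the page is built from.
    $canonicalUrl = 'http://goautohub.com/articles/' . $article['slug'];
    ?>
    <link rel="canonical" href="<?php echo htmlspecialchars($canonicalUrl); ?>" />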
-
That's a good catch - I fixed that. I do use that sitemap in WMT and it has been fine for the longest time. I guess it's not that big of an issue; my main concern was the pages being indexed. I was reading another Q&A thread and used the info: qualifier to check some of the pages, and all the ones I checked are indexed - more than the 11 reported. I just don't understand why it dropped all of a sudden, and whether that number really means anything.
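For anyone else checking, the queries I ran were just plain search operators along these lines (the article path is only a placeholder):

    info:goautohub.com/some-article-page
    site:goautohub.com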
-
How are the indexed numbers looking in WMT today? I see 3,370 results for a site: search on the domain, but those counts can be iffy in terms of up-to-date accuracy: https://www.google.co.uk/search?q=site%3Agoautohub.com&oq=site%3Agoautohub.com&aqs=chrome..69i57j69i58.798j0j4&sourceid=chrome&es_sm=119&ie=UTF-8
Not that this should matter too much if you are submitting a sitemap through WMT, but your robots.txt file specifies sitemap.xml, and there is a duplicate sitemap at that URL (http://goautohub.com/sitemap.xml). Are you using sitemap.php, which you mention here, in WMT? .php can be used for sitemaps, but I would update the robots.txt file to reflect the correct URL - http://i.imgur.com/uSB1P1g.png - whichever one is meant to be right. I am not aware of problems with having duplicate sitemaps as long as they are identical, but I'd use just one if it were me.
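If sitemap.php is the file you actually submit in WMT, the robots.txt reference only needs a one-line change so the two agree - something along these lines (assuming sitemap.php is the one you want to keep):

    Sitemap: http://goautohub.com/sitemap.php

(Or point both at sitemap.xml instead - the point is simply that robots.txt and WMT should reference the same sitemap URL.)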
-
Thanks for checking, I haven't found anything yet. The site is goautohub.com. It's a custom site and the sitemap file is auto-generated: goautohub.com/sitemap.php. I've done it like that for over a year. I did start seeing an error message about high response times, and I've been working on improving that. It makes sense, because we have been advertising more to get the site seen. As for the rest of William's points, I have checked those but see no improvement yet. Thank you.
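For reference, the sitemap script is conceptually just a database query plus XML output, something like this (a rough sketch only - the table, column, and credential names are placeholders, not my real setup):

    <?php
    // Serve the auto-generated sitemap with the correct content type, pulling
    // article URLs straight from the database. Names below are illustrative only.
    header('Content-Type: application/xml; charset=utf-8');

    $pdo  = new PDO('mysql:host=localhost;dbname=site', 'dbuser', 'dbpass');
    $rows = $pdo->query('SELECT slug, updated_at FROM articles')->fetchAll(PDO::FETCH_ASSOC);

    echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
    echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
    foreach ($rows as $row) {
        echo "  <url>\n";
        echo '    <loc>http://goautohub.com/articles/' . htmlspecialchars($row['slug']) . "</loc>\n";
        echo '    <lastmod>' . date('Y-m-d', strtotime($row['updated_at'])) . "</lastmod>\n";
        echo "  </url>\n";
    }
    echo "</urlset>\n";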
-
Hi Stephen,
Checking in to see whether you have looked into the points William raised above. Do you see anything that could have resulted in the drop? Also, are you comfortable sharing the site here? We might be able to have a look too (feel free to PM if you are not comfortable sharing publicly).
Cheers,
Jane
-
Try to determine when the drop-off started, and try to remember what kinds of changes the website was going through at that time. That could help point to the reason for the drop in indexing.
There are plenty of reasons why Google may choose not to index pages, so this will take some digging. Here are some places to start the search:
-
Check your robots.txt to ensure those pages are still crawlable
-
Check to make sure the content on those pages isn't duplicated somewhere else on the Web.
-
Check to see whether there were any changes to the canonical tags on the site around the time the drop started
-
Check to make sure the sitemap currently on the site matches the one you submitted to Webmasters, and that your CMS didn't auto-generate a new one
-
Make sure the pages are of a quality worth indexing. You said your traffic didn't really take a hit, so Google isn't de-indexing your quality stuff.