Sitemap Indexed Pages, Google Glitch or Problem With Site?
-
Hello,
I have a quick question about our Sitemaps Web Pages Indexed status in Google Search Console. Because of the drastic drop, I can't tell whether this is a glitch or a serious issue.
In the attached image you can see that the Web Pages Indexed count under Sitemaps dropped suddenly on 3/12/17, from 6,029 to 540.
Our Index Status report shows 7K+ pages indexed.
Other than product updates/additions and homepage layout changes, there have been no significant changes to this website. If it helps, we are operating on the Volusion platform.
Thanks for your help!
-Ryan
-
One last update
Both sitemaps now show only 1,320 indexed pages each, or 2,640 total. So the non-secure URLs in the XML sitemap did fall off a few days after I submitted them.
The site is still fully indexed in Google, but overall impression share has fallen according to GSC. GA shows a slight increase in overall organic traffic.
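For anyone wanting to verify counts like these themselves, here's a minimal sketch (standard library only; the sitemap URL in the example comment is a placeholder) that parses an XML sitemap and tallies how many of its URLs are HTTP vs. HTTPS:

```python
# Hedged sketch: count http vs https <loc> entries in an XML sitemap,
# to see which scheme's URLs are actually listed.
import urllib.request
import xml.etree.ElementTree as ET
from collections import Counter

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def scheme_counts_from_xml(xml_bytes):
    """Parse sitemap XML and tally URL schemes (http vs https)."""
    root = ET.fromstring(xml_bytes)
    locs = [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]
    return Counter(url.split("://", 1)[0] for url in locs)

def scheme_counts(sitemap_url):
    """Fetch a live sitemap and tally its URL schemes."""
    with urllib.request.urlopen(sitemap_url) as resp:
        return scheme_counts_from_xml(resp.read())

# e.g. scheme_counts("https://www.example.com/sitemap.xml")
```

Comparing the tallies for the two submitted sitemaps against what GSC reports should confirm whether it really is the non-secure URLs that dropped off.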
-
Just wanted to pass on an update.
The XML sitemap file with all HTTP URLs is showing 95% of its URLs indexed under Sitemaps Web Pages Indexed.
I'm not sure whether we should have two otherwise identical sitemaps submitted, the only difference being HTTPS vs. HTTP. Is there any reason not to keep both submitted?
-
Hey Oleg,
Thanks for responding.
When I checked Google, everything is still there; at least we still have 8K+ URLs in Google's index. Organic traffic is still good, but impressions have declined a bit, nothing major.
I did call Volusion (our cart host) to ask when they switched from HTTP to HTTPS links in the sitemap.asp file, and they either (a) couldn't give me that information or (b) didn't quite understand my question. However, they did say they are aware of some platform technical issues regarding canonical tags and an HTTP-to-HTTPS redirect, which they are working on with no date of completion, nor would they tell me when these issues came about. They said these errors have nothing to do with what I'm seeing in GSC, but I have a hunch they do.
What I'm going to do is submit a second XML sitemap containing all HTTP links to see whether it shows up any differently in GSC.
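While waiting on Volusion, a rough way to spot-check their redirect issue (standard library only; the domain in the example comment is a placeholder) is to look at the first redirect hop of an http:// URL and see whether it permanently redirects to the exact https:// equivalent:

```python
# Hedged sketch: inspect the first redirect hop of an http:// URL without
# following it, then decide whether it is a clean permanent redirect to the
# matching https:// URL.
import http.client
from urllib.parse import urlparse

def first_hop(url, timeout=10):
    """Issue one HEAD request and return (status, Location header)."""
    parts = urlparse(url)
    conn = http.client.HTTPConnection(parts.netloc, timeout=timeout)
    try:
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")
    finally:
        conn.close()

def is_clean_https_redirect(url, status, location):
    """True only for a permanent redirect to the exact https:// twin."""
    return status in (301, 308) and location == url.replace("http://", "https://", 1)

# e.g. status, loc = first_hop("http://www.example.com/some-product.html")
#      is_clean_https_redirect("http://www.example.com/some-product.html", status, loc)
```

A 302, a missing redirect, or a Location that differs from the sitemap URL (case, trailing slash) would all be consistent with the kind of issue Volusion described.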
-
If the website is still indexed in Google (search site:domain.com to see how many pages are indexed), then you might have a canonical URL/sitemap mismatch, e.g. your sitemap lists site.com/Page-A/ but the canonical URL is site.com/page-a/. Google would still index your site, but the number of URLs indexed from the sitemap would drop. Another example: your sitemap URLs are HTTP but the website is actually HTTPS.
That's the only thing I can think of (aside from a GSC bug) that would cause a drop.
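One rough way to test this theory, assuming each page exposes a plain rel=canonical link tag in its head (standard library only; the URLs and HTML in the example are illustrative), is to fetch each sitemap URL and compare its canonical to the sitemap entry:

```python
# Hedged sketch: for each URL listed in the sitemap, fetch the page, extract
# its rel=canonical, and flag any that disagree with the sitemap entry
# (case, trailing slash, http vs https).
import urllib.request
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

def canonical_of(html):
    """Return the page's rel=canonical href, or None."""
    p = CanonicalParser()
    p.feed(html)
    return p.canonical

def mismatches(sitemap_urls,
               fetch=lambda u: urllib.request.urlopen(u).read().decode("utf-8", "replace")):
    """Return (sitemap_url, canonical_url) pairs that disagree."""
    bad = []
    for url in sitemap_urls:
        canon = canonical_of(fetch(url))
        if canon and canon != url:
            bad.append((url, canon))
    return bad
```

Any pairs it returns are exactly the mismatches described above: pages Google can still index, but under a URL that no longer matches the sitemap entry.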