Sitemaps and Indexed Pages
-
Hi guys,
I created an XML sitemap and submitted it for my client last month.
Now the developer of the site has also been messing around with a few things.
I've noticed on my Moz site crawl that indexed pages have dropped significantly.
Before I put my foot in it, I need to figure out whether submitting the sitemap caused this. Can a sitemap reduce the number of pages indexed?
Thanks
David.
-
Sorry - I missed the part about you looking specifically at the Moz crawler. While useful, it's only a stand-in for what actually determines rankings: the crawls by the search engines themselves. I'd go straight to the source for that info if you're concerned there's an issue, rather than trusting Mozbot alone. You can find the search engine crawl data in Google Search Console and Bing Webmaster Tools. Look for trends and patterns there, especially around the sitemap report.
The challenge with a Screaming Frog-generated sitemap is that it can only include what the crawler finds by following links. If the site has orphaned pages or an ineffective internal linking scheme, a crawl can easily miss pages. It's certainly better than no sitemap, but a sitemap generated by the site's own technology (usually from the database) is safer.
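As a rough illustration of that point, a database-driven sitemap can be built straight from the list of canonical URLs the CMS knows about, so orphaned pages are included even when no internal link points at them. A minimal Python sketch (the `build_sitemap` helper and the example URLs are hypothetical, not any particular CMS's API):

```python
from xml.sax.saxutils import escape

def build_sitemap(urls):
    """Build a minimal XML sitemap from a list of canonical URLs."""
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url in urls:
        lines.append(f"  <url><loc>{escape(url)}</loc></url>")
    lines.append("</urlset>")
    return "\n".join(lines)

# URLs pulled straight from the database, so an orphaned page is
# included even though no internal link points at it.
pages = ["https://example.com/", "https://example.com/orphaned-page"]
print(build_sitemap(pages))
```

A link-following crawler would never see the orphaned page above; a database export catches it.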
P.
-
Thanks, Paul.
Yes, there has been a big clean-up of pages. There were over 80,000 to begin with. I managed to get that down to about 14k, but then last month the Moz bot only crawled about 4,000 pages.
I was just a bit worried that the sitemap generated by Screaming Frog was incorrect and that this was therefore the reason for the drop.
I was referring mainly to the Moz site crawl. I guess I was worried that the Moz bot only followed the sitemap!
There were loads of filter URLs and all sorts going on, so it's a bit of a spider's web!
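One way to keep faceted-filter URLs like those out of a hand-built sitemap is to drop any URL carrying a filter query parameter before writing the list out. A sketch, assuming hypothetical parameter names (`colour`, `size`, `sort`, `page`) that would need adapting to the actual site:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical faceted-filter parameters; adjust to the real site.
FILTER_PARAMS = {"colour", "size", "sort", "page"}

def is_filter_url(url):
    """True if the URL carries any faceted-filter query parameter."""
    params = parse_qs(urlparse(url).query)
    return any(p in FILTER_PARAMS for p in params)

urls = [
    "https://example.com/shoes",
    "https://example.com/shoes?colour=red&sort=price",
]
clean = [u for u in urls if not is_filter_url(u)]
print(clean)  # only the unfiltered category URL survives
```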
-
No - submitting a sitemap won't reduce the crawl of a site. The search engines will crawl the sitemap and add these pages to the index if they consider them worthy. But they'll still also crawl any other links/pages they can find in other ways and index those as well if they consider them worthy.
Note though - having the number of indexed pages drop is not necessarily a bad thing. If removing a large number of worthless/duplicate/canonicalised/no-indexed pages cleans up the site, that will also be reflected in fewer crawled pages - an indication that quality improvement work was effective.
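To tell cleanup apart from genuine loss, one simple check is to diff the URL lists from two crawl exports and eyeball which pages disappeared. A minimal sketch (`crawl_diff` and the sample URLs are illustrative):

```python
def crawl_diff(old_urls, new_urls):
    """Return URLs present in the old crawl but missing from the new one."""
    return sorted(set(old_urls) - set(new_urls))

# If the dropped URLs are duplicates/filter pages, the shrink is healthy.
old = ["https://example.com/", "https://example.com/dup?ref=1"]
new = ["https://example.com/"]
print(crawl_diff(old, new))
```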
That help?
Paul
Related Questions
-
Number of Pages Crawled dropped significantly
Number of total pages crawled on the latest report is about half the number from one week ago. No major changes to the site. Number of issues also dropped (not surprisingly). Why has the number dropped so significantly from week to week?
API | | JThibode
And are the issues actually cleared up, or just not counted because the crawl is so much smaller?
-
Spring is here and so is our May Index Update!
Happy Index Release Day! For the second month in a row, our hard-working, supremely dedicated Big Data team has delivered our index update EARLY! Beyond being punctual, the May index is one of our largest and most comprehensive updates of the year for Moz. Let's dig into the details:
162,225,495,455 (162 billion) URLs
1,135,327,420 (1.1 billion) subdomains
194,346,505 (194 million) root domains
1,168,465,575,815 (1.1 trillion) links
Followed vs. nofollowed links: 2.84% of all links found were nofollowed; 65.80% of nofollowed links are internal, 34.20% external
Rel canonical: 28.89% of all pages employ the rel=canonical tag
The average page has 92 links on it: 76 internal and 16 external on average
Go have fun with your new data! PS - For any questions about DA/PA fluctuations (or non-fluctuations), check out this Q&A thread from Rand: https://moz.com/community/q/da-pa-fluctuations-how-to-interpret-apply-understand-these-ml-based-scores
API | | IanWatson
-
September's Mozscape Update Broke; We're Building a New Index
Hey gang, I hate to write to you all again with more bad news, but such is life. Our Big Data team produced an index this week but, upon analysis, found that our crawlers had encountered a massive number of non-200 URLs, which meant this index was not only smaller but also weirdly biased. PA and DA scores were way off, coverage of the right URLs went haywire, and the metrics we use to gauge quality told us this index simply was not good enough to launch. Thus, we're in the process of rebuilding an index as fast as possible, but this takes, at minimum, 19-20 days, and may take as long as 30 days.
This sucks. There's no excuse. We need to do better, and we owe all of you and all of the folks who use Mozscape better, more reliable updates. I'm embarrassed and so is the team. We all want to deliver the best product, but we continue to find problems we didn't account for and have to go back and build systems into our software to look for them.
In the spirit of transparency (not as an excuse): the problem appears to be a large number of new subdomains that found their way into our crawlers and exposed us to issues fetching robots.txt files that timed out and stalled our crawlers. In addition, some new portions of the link graph we crawled exposed us to websites/pages that we need to find ways to exclude, as these abuse our metrics for prioritizing crawls (akin to PageRank, much like Google, though they're obviously much more sophisticated and experienced with this) and bias us toward junky stuff, which keeps us from getting to the good stuff we need. We have dozens of ideas to fix this, and we've managed to fix problems like this in the past (prior issues like .cn domains overwhelming our index, link wheels, and webspam holes plagued us and have been addressed, but every couple of indices it seems we face a new challenge like this).
Our biggest issue is one of monitoring and processing times. We don't see what's in a web index until it's finished processing, which means we don't know if we're building a good index until it's done. It's a lot of work to rebuild the processing system so there can be visibility at checkpoints, but that appears to be necessary right now. Unfortunately, it takes time away from building the new, real-time version of our index (which is what we really want to finish and launch!). Such is the frustration of trying to tweak an old system while simultaneously working on a new, better one. Tradeoffs have to be made. For now, we're prioritizing fixing the old Mozscape system, getting a new index out as soon as possible, and then working to improve visibility and our crawl rules. I'm happy to answer any and all questions, and you have my deep, regretful apologies for once again letting you down. We will continue to do everything in our power to improve and fix these ongoing problems.
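The robots.txt stall described above can be sketched abstractly: a crawler needs some quarantine rule so hosts that repeatedly time out can't block the whole queue. The class below is purely illustrative of that idea, not Moz's actual system:

```python
from collections import defaultdict

class RobotsFetchTracker:
    """Track robots.txt fetch timeouts per host and quarantine hosts
    that keep failing, so a handful of slow subdomains cannot stall
    the wider crawl. (A hypothetical sketch, not Moz's real code.)"""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = defaultdict(int)

    def record_timeout(self, host):
        self.failures[host] += 1

    def should_crawl(self, host):
        return self.failures[host] < self.max_failures

tracker = RobotsFetchTracker(max_failures=2)
tracker.record_timeout("slow.example.com")
tracker.record_timeout("slow.example.com")
print(tracker.should_crawl("slow.example.com"))  # False: quarantined
print(tracker.should_crawl("fast.example.com"))  # True: unaffected
```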
API | | randfish
-
Lost many links and keyword ranks since moz index update
Hi all, I came back today from a week off work to find my site has gone from 681 external inbound links to 202. With this, my domain authority, MozTrust and MozRank have all also taken a slip. Compounding this, I am seeing a slip in most of my keyword rankings. If I try to use Open Site Explorer to explore my links and see what's going on, I get the message "It looks like we haven't discovered link data for this site or URL." If I check the just-discovered links as it suggests, I get "It looks like there's no Just-Discovered Links data for this URL yet." I know these features worked before the index update, as I used them. Is this all attributable to the Moz index issues that have been noted, or could something have happened to my site? Since I started two months ago I have made many changes, including:
Updating the sitemap, which was four years out of date and included 400 broken URLs
Removing blank pages and other useless pages that contained no content (from the previous administrator)
Editing a few pages' content from keyword-spammy stuff to nicely written, relevant content
Fixing URL rewrites that created loops and inaccessible product pages
All these changes should be for the better, but the latest readings have me a little worried. Thanks.
API | | ATP
-
API - Internal Links to page and related metrics
Hi dear Moz team! I'm currently building a Java application that accesses your API, but there are some metrics I urgently need which I can't get out of the API so far: the total number of internal links to a page; the total number of internal links to a page with a partial anchor-text match; and (would be nice) the MozRank passed by all internal links with partially matching anchor text. For example, my idea was to try this via your links endpoint:
http://lsapi.seomoz.com/linkscape/links/http%3A%2F%2Fwww.jetztspielen.de%2F?AccessID=..
&Expires=..
&Signature=..
&Scope=domain_to_page
&Filter=internal
&Sort=domain_authority
&SourceCols=4 (or any other value)
&SourceDomain=www.jetztspielen.de
&Offset=0
&Limit=50
If I try this, the API says: {"status": "400", "error_message": "Cannot set a source domain when filtering for internal links."} Is there any way to get the data I need through your API endpoints? I'm currently writing my master's thesis, and it is very important to me to solve this somehow. Thank you very much in advance! Best, Andreas Pollierer
API | | pollierer
-
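For what it's worth, the 400 error quoted above suggests the request must simply omit SourceDomain whenever Filter=internal. A hedged Python sketch that assembles the query parameters from the question (the helper name is mine, not part of any official client; credential values are placeholders):

```python
def build_link_params(access_id, expires, signature, **options):
    """Assemble query parameters for the links endpoint, dropping
    SourceDomain whenever Filter=internal -- the combination the API
    rejects with a 400. (Hypothetical helper, not official code.)"""
    params = {"AccessID": access_id, "Expires": expires,
              "Signature": signature, **options}
    if params.get("Filter") == "internal":
        params.pop("SourceDomain", None)
    return params

# Placeholder credentials; parameter names taken from the question.
params = build_link_params(
    "YOUR_ACCESS_ID", "EXPIRES_TIMESTAMP", "REQUEST_SIGNATURE",
    Scope="domain_to_page", Filter="internal", Sort="domain_authority",
    SourceCols="4", SourceDomain="www.jetztspielen.de",
    Offset="0", Limit="50",
)
print("SourceDomain" in params)  # False: dropped to avoid the 400
```

The parameters can then be attached to the GET request however the application already builds its URLs.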
On Page Grader Problem-Sorry But This Page Inaccessible
Greetings: When I try to use the On-Page Grader and enter my URL, an error message appears stating: "Sorry But This Page Inaccessible". The URL is http://www.nyc-officespace-leader.com/commercial-space/office-space, and it works fine when I enter it in my browser. Any page from this domain generates this error. Is there a bug with this tool? Separately, how would I go about tracking rankings for various keywords? I see it is possible to tag keywords, and I have done so for about 250, but I don't know how to generate a ranking report for those keywords; ideally I would like to filter them by the label I have applied. Any suggestions? Thanks,
API | | Kingalan1
Alan
Suggestion - Should OSE include "citation links" within its index?
This is really a suggestion (and a debate, to see if people agree with me) regarding including "citation links" within Moz tools, by default, as just another type of link. NOTE: when I talk about "citation links" I mean a link that is not wrapped in a link tag and is therefore non-clickable, e.g. moz.com. Obviously Moz has released the Mentions tool, which is great, and also FWE, which is also great. However, it seems to me that they are missing a trick in that "citation links" don't feature in the main link index at all. We know that Google at a minimum uses them as an indicator to crawl a page (http://ignitevisibility.com/google-confirms-url-citations-can-help-pages-get-indexed/), and also that they don't pass PageRank. HOWEVER, you would assume that Google does use them as part of its algorithm in some manner, as it does nofollow links. It seems to me that a "citation link" could (possibly) be deemed more important than a nofollow link in Google's algorithm, as a nofollow link is a clear indication by the site owner that they don't fully trust the link, whereas a citation link indicates neither trust nor distrust. So - my request is to get "citation links" into the main link index (and the Just-Discovered index, for that matter). Would others agree?
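For anyone wanting to experiment with the idea, unlinked URL mentions of this kind can be roughly detected by stripping anchor tags from the HTML first and then scanning what remains for URL-like strings. A crude sketch (a real implementation would use an HTML parser; the regexes here are illustrative and only cover a few TLDs):

```python
import re

# Remove whole <a>...</a> elements so their hrefs and text don't count.
ANCHOR_RE = re.compile(r"<a\b[^>]*>.*?</a>", re.I | re.S)
# Very rough bare-domain pattern with an optional simple path.
URL_RE = re.compile(r"\b[\w-]+\.(?:com|org|net)(?:/[\w/-]*)?")

def find_citation_links(html):
    """Return URL mentions NOT wrapped in an <a> tag -- the
    'citation links' described above."""
    without_anchors = ANCHOR_RE.sub(" ", html)
    return URL_RE.findall(without_anchors)

html = '<p>See <a href="https://moz.com">Moz</a> and also moz.com/blog.</p>'
print(find_citation_links(html))  # ['moz.com/blog']
```

The clickable Moz link is ignored; only the bare, non-clickable mention is reported.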
API | | James770