Wild fluctuation in number of pages crawled
-
I am seeing huge fluctuations in the number of pages discovered by the crawl each week. Some weeks the crawl discovers more than 10,000 pages, and other weeks I am seeing 400-500.
So this week, for example, I was hoping to see some changes reflected for warnings from last week's report (which discovered more than 10,000 pages). However, the entire crawl this week was 448 pages.
The number of pages discovered each week seems to go back and forth between these two extremes. The accurate count should be nearer the 10,000 mark than the 400 range.
Thanks.
Mark
-
No problem!
Glad to see Cyrus' response!
-
Hi Mark,
I used to troubleshoot these types of problems (mysteries!) when I worked on the SEOmoz Help Team.
The best thing to do would be to contact the Help Team (help@seomoz.org) and include information about your account, URL, and campaign. They can take that information and see if there is anything odd about your website, whether there is a bug in the crawling software, or whether some strange quirk of incompatibility is causing this behavior.
If you would rather, you can PM me with the info and I can try to troubleshoot it myself, but the Help Team has a few more tools and access to engineers, so they might be the better choice. Either way, let us know if you have any trouble.
-
Thank you for the response. I should have been clearer: it is the weekly SEOmoz crawl that is showing such inconsistent behavior, not Google.
We have very few (if any) broken links, errors, etc.
Thanks.
Mark
-
Hi there Mark!
We used to have the same issue using Joomla here. It turns out that Google will reduce its crawling if your site has too many errors, broken links, and so on.
We used GWT to look into the 404's then redirected the broken links. Afterwards, we resubmitted the site to be reindexed. A few weeks later -VOILA- all is back to normal and our page freshness stays where it should.
I'd recommend looking at your GWT first, fixing broken links, and then resubmitting the site to the search engines...
Good Luck!
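The 404-cleanup workflow described above (pull the broken URLs out of an error export, then map each one to a redirect target) can be sketched in a few lines. This is a minimal, hypothetical sketch: the CSV column names and sample data are assumptions, not the actual Webmaster Tools export format, and the redirect targets are placeholders you would choose by hand.

```python
import csv
import io

# Hypothetical error export -- the columns "URL" and "Response Code"
# are assumptions, not Webmaster Tools' real export format.
sample_export = """URL,Response Code
http://example.com/old-page,404
http://example.com/contact,200
http://example.com/retired-product,404
"""

def broken_urls(csv_text):
    """Return the URLs whose recorded response code is 404."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["URL"] for row in reader if row["Response Code"] == "404"]

# Pair each broken URL with the page it should 301-redirect to.
# The targets here are placeholders; in practice you would pick the
# closest live replacement page for each broken URL by hand.
redirects = {url: "http://example.com/" for url in broken_urls(sample_export)}

for old, new in redirects.items():
    print(f"{old} -> {new}")
```

Once the map is built, the actual 301 rules go in your server or CMS configuration.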
Related Questions
-
Campaigns - crawled
The new crawl shows Pages Crawled: 2. I have many 404 and other errors and wanted to start working on them tomorrow, but the new crawl only crawled two pages and doesn't show any errors. What's the problem and what can I do? Yoseph
Moz Pro | Joseph-Green-SEO -
Duplicate page reported in WordPress site, but I can't find it in All Pages list
In a crawl report a duplicate page content warning has been displayed. The two URLs are 1. http://www.superheroes.com.au/shop and 2. http://www.superheroes.com.au/shop/category/catalog/. It's a WordPress site and I cannot find the second page anywhere in the list of All Pages in the admin (I want to add canonicalisation code). When I view the second page and click Edit Page, it redirects to the first page. Any ideas where SEOmoz would be finding this second page or how it might be being generated? (btw I didn't build this site) Thanks, Simon
Moz Pro | Andyfools -
Number of available links limited?
OK, I've been making use of the free LinkScape API (on behalf of a client of mine) and trying to get links (and info on those links) to a specific domain/page/etc. NOTE : I've been using it without any issue in the past, however we are currently facing some weird issues. Let's take this simple query as an example : http://lsapi.seomoz.com/linkscape/links/wikipedia.org?SourceCols=4&TargetCols=4&Sort=page_authority&Scope=page_to_domain What this one supposedly does is to get links to "wikipedia.org", right? I'm reading : The Page_to_* scopes will by default return 25 links per source domain if no limit is specified, so you can see domain diversity. Due to space limitations in our API, a general link query for a given page will return at most 25 pages for every unique domain linking to that page. And I'm saying OK, that's fine. The thing is that (instead of the 1000 links I had been getting before), I'm now getting just 25 links. NOT per... "source domain"... but obviously per "target domain" (= wikipedia.org) - or am I missing something? (well, probably wikipedia suddenly has just about 25 links pointed to it... makes sense! 🙂 ) Please, let me know what's going on with the above, simply because getting just 25 links is close to worthless... Thanks a lot, in advance!
Moz Pro | drkameleon -
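For reference, the links query discussed in the question above can be built programmatically. This is a minimal sketch: the parameter names are taken from the question's own example URL, not verified against current API documentation, and the authentication parameters a real Linkscape request requires (an access ID and a signed expiry) are omitted.

```python
import urllib.parse

# Base endpoint from the question's example URL (assumed, not verified).
BASE = "http://lsapi.seomoz.com/linkscape/links/"

def links_query(target, source_cols=4, target_cols=4,
                sort="page_authority", scope="page_to_domain", limit=None):
    """Build the query URL for links pointing at `target`."""
    params = {
        "SourceCols": source_cols,
        "TargetCols": target_cols,
        "Sort": sort,
        "Scope": scope,
    }
    if limit is not None:
        # Per the docs quoted in the question, page_to_* scopes default
        # to 25 links per source domain when no limit is specified.
        params["Limit"] = limit
    return BASE + urllib.parse.quote(target) + "?" + urllib.parse.urlencode(params)

print(links_query("wikipedia.org", limit=1000))
```

Passing an explicit limit, as in the last line, is the first thing to try when the response seems capped at 25 results.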
Pages Crawled: 0 ?
I've been with SEO Moz for over a month and a half. Why would this weeks crawl have Pages Crawled: 0? I've made no changes since the crawl last week that had 10k pages crawled...
Moz Pro | mr_w -
Dynamic URL pages in Crawl Diagnostics
The crawl diagnostic has found errors for pages that do not exist within the site. These pages do not appear in the SERPs and are seemingly dynamic URL pages. Most of the URLs that appear are formatted http://mysite.com/keyword,%20_keyword_,%20key_word_/ which appear as dynamic URLs for potential search phrases within the site. The other popular variety among these pages has a URL format of http://mysite.com/tag/keyword/filename.xml?sort=filter which is only generated by a filter utility on the site. These pages comprise about 90% of the 401 errors, duplicate page content/title, overly-dynamic URL, missing meta description tag, etc. Many of the same pages appear in multiple errors/warnings/notices categories. So, why are these pages being included in the crawl test, and how do I stop it to get a better analysis of my site via SEOmoz?
Moz Pro | Visually -
Crawl Diagnostics and missing meta tags on noindex blog pages
Hi Guys/Gals
We do love the Crawl Diagnostics, but we find the missing meta tag warnings ("Missing Meta Description Tag" in this case) somewhat spammy. We use the "All in One SEO Pack" for our blog and it does stick in noindex,follow (as it should) on the pages that are of no use to us, "2008/04/page/2/" and the like. Maybe I'm wrong, but shouldn't the Diagnostics tool respect the noindex tag and just ignore any warnings, since noindex should really mean that these pages are NOT included in the search index, making the other meta tags useless? Any thoughts?
Moz Pro | sfseo -
Crawl test. Bot crawled only 200 or so links when it should have crawled thousands
Hi everyone, I just received my crawl test report and it's only given me 200 or so URLs when my site has thousands. Any thoughts?
Moz Pro | Ev84 -
On-Page Optimisation tool on intranet pages
Does anybody know if there's any easy way to use the On-Page Optimisation tool on intranet or not publicly accessible pages? Thanks!
Moz Pro | neooptic