Wild fluctuation in number of pages crawled
-
I am seeing huge fluctuations in the number of pages discovered by the crawl each week. Some weeks the crawl discovers more than 10,000 pages; other weeks I am seeing 400-500.
So this week, for example, I was hoping to see some changes reflected for warnings from last week's report (which discovered more than 10,000 pages). However, the entire crawl this week was 448 pages.
The number of pages discovered each week seems to go back and forth between these two extremes. The accurate count should be nearer the 10,000 mark than the 400 range.
Thanks.
Mark
-
No problem!
Glad to see Cyrus' response!
-
Hi Mark,
I used to troubleshoot these types of problems (mysteries!) when I worked on the SEOmoz help team.
The best thing to do would be to contact the Help Team (help@seomoz.org) and include your account information, URL, and campaign. They can take this information and see whether there is anything odd about your website, whether there is a bug in the crawling software, or whether some strange quirk of incompatibility is causing this behavior.
If you would rather, you can PM me with the info and I can try to troubleshoot it myself, but the Help Team has a few more tools and access to engineers, so they might be the better choice. Either way, let us know if you have any trouble.
-
Thank you for the response. I should have been clearer: it is the weekly SEOmoz crawl that is showing such inconsistent behavior, not Google.
We have very few (if any) broken links, errors, etc.
Thanks.
Mark
-
Hi there Mark!
We used to have the same issue using Joomla here. It turns out that Google will reduce its crawling if your site has too many errors, broken links, and so on.
We used GWT (Google Webmaster Tools) to look into the 404s, then redirected the broken links. Afterwards, we resubmitted the site to be reindexed. A few weeks later, voilà, all was back to normal and our page freshness stays where it should.
I'd recommend looking at your GWT data first, fixing the broken links, and then resubmitting the site to the search engines.
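If it helps, here is a minimal sketch of how you could double-check a list of URLs (for example, ones exported from GWT's crawl errors report) before resubmitting. The function names and example URLs are my own, not from any Moz or Google tool:

```python
# Sketch: check a list of URLs for broken links using only the standard library.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def fetch_status(url, timeout=10):
    """Return the HTTP status code for a URL, or the error reason as a string."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code  # 4xx/5xx responses raise HTTPError; the code is the status
    except URLError as e:
        return str(e.reason)  # DNS failures, timeouts, refused connections

def split_broken(statuses):
    """Split a {url: status} dict into (broken, ok) by HTTP status class."""
    broken = {u: s for u, s in statuses.items()
              if isinstance(s, int) and s >= 400}
    ok = {u: s for u, s in statuses.items() if u not in broken}
    return broken, ok
```

You would call `fetch_status` for each URL from the report, then `split_broken` on the results to see which links still need a redirect. The redirect itself is configured on the server side (a 301 in your CMS or web server config), as described above.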
Good Luck!