Roger bot taking a long time to crawl site
-
Hi all, I've noticed Roger bot is taking a long time to crawl my new site. It started on 28th Feb 2013 and is still going. There aren't many pages at the moment. Any ideas, please?
thanks a lot, Mark.
-
Hi Peter
thanks for your reply. The crawl has now completed and given me some more areas to work on; it's a great tool.
I was so preoccupied with 'hiding' the site over the last couple of months with this simple rule:
```
User-agent: *
Disallow: /
```
I hadn't thought beyond this.
I've noticed Google has now recognised the new robots.txt, which has allowed the sitemap to be accepted.
I'll look at your notes, thank you, and work out my next move. I'll let you know how I get on too.
I know (well, think) I need to set noindex, follow on 'sorted' category pages...
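In practice, that usually means a robots meta tag in the <head> of each sorted/filtered page, something like `<meta name="robots" content="noindex, follow">`: crawlers keep following the links but leave the duplicate sort orders out of the index. Exactly where to hook that in depends on the platform.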
all the best, Mark.
-
Hi Mike
The crawl has now completed, thank you. I think the results will keep me occupied!
all the best, Mark.
-
Hi Mark,
Sorry it's taking a while to crawl your new site.
While I'm not exactly sure what's causing the delay, one possible reason is your robots.txt. Here's a short snippet of what I see:
```
# Crawlers Setup
User-agent: *
Crawl-delay: 30

# Allowable Index
Allow: /*?p=
Allow: /index.php/blog/
Allow: /catalog/seo_sitemap/category/
Allow: /catalogsearch/result/
Allow: /media/

# Directories
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /errors/
Disallow: /includes/
Disallow: /js/
Disallow: /lib/
Disallow: /magento/
Disallow: /pkginfo/
Disallow: /report/
```
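One thing worth noting: that Crawl-delay: 30 line asks crawlers to wait 30 seconds between requests, so a crawler that honors it (support varies, but that may well include Roger) can only fetch about two pages a minute. A rough back-of-envelope, as a minimal Python sketch with hypothetical page counts:

```python
# Back-of-envelope: how long a crawl takes at one request every
# 30 seconds (the Crawl-delay above). Page counts are hypothetical,
# just to show the scale.
CRAWL_DELAY_SECONDS = 30

for pages in (100, 1_000, 10_000):
    hours = pages * CRAWL_DELAY_SECONDS / 3600
    print(f"{pages:>6} pages -> ~{hours:.1f} hours")
```

At that rate, even a modest site can take many hours or days once parameterized URLs get swept into the crawl, which would fit a crawl that started in late February and is still running.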
Beyond the delay, the formatting looks a little awkward. What's going on is that you're telling Roger bot to only look at these:
```
# Allowable Index
Allow: /*?p=
Allow: /index.php/blog/
Allow: /catalog/seo_sitemap/category/
Allow: /catalogsearch/result/
Allow: /media/
```
While the syntax is OK, not every crawler out there will follow the Allow directive. Here's an example of something you can use:
```
# Crawlers Setup
User-agent: *
Crawl-delay: 30

# Directories
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /errors/
Disallow: /includes/
Disallow: /js/
```
From here, you're telling the crawler to disallow nothing except these directories. Please let us know whether this change actually fixes the crawl once you've implemented it. Thanks for reaching out!
Best,
Peter Li
SEOmoz Help Team
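A quick way to sanity-check rules like these before deploying them is Python's built-in urllib.robotparser. A minimal sketch, with the domain and paths as placeholders:

```python
import urllib.robotparser

# Hypothetical robots.txt along the lines suggested above
ROBOTS_TXT = """\
User-agent: *
Crawl-delay: 30
Disallow: /404/
Disallow: /app/
Disallow: /includes/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Disallowed directories should come back False, everything else True
for path in ("/app/etc/config.xml", "/index.php/blog/", "/404/"):
    url = "http://www.example.com" + path
    print(url, "->", rp.can_fetch("rogerbot", url))

# The pause a polite crawler would take between requests (Python 3.6+)
print("crawl delay:", rp.crawl_delay("rogerbot"), "seconds")
```

If a URL you expect Roger to crawl comes back False, the robots.txt is the likely culprit rather than the crawler.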
-
Hi Mark,
This sounds like a bug or issue with the SEOmoz software.
Contact help@seomoz.org and ask one of the help associates to look into this for you.
If you do not have many pages, it definitely shouldn't take that long.
The help team responds extremely quickly!
Good luck.
Mike
Related Questions
-
Negative SEO & How long does it take for Google to disavow
Following on from a previous problem of this page completely dropping from the index, we have discovered that 150+ spam and porn domains have been pointed at our pages (sometime in the last 3-4 months; we don't have an exact date). Does anyone have experience with how long it may take Google to take notice of a new disavow list? Any estimates would be very helpful in determining our next course of action.
Technical SEO | sathoiue80
-
Are sliders killing our site?
Our website, http://shatterbuggy.com, has what I believe is a systemic issue that stems from its heavy reliance on the Revolution Slider for WordPress. I am not an SEO expert, and our site has vexed many SEOs in the past. We regularly get feedback from customers (especially those who are not tech savvy) expressing gratitude for the ease of following an image-to-image sequence to get to their respective booking. This was our goal when creating the site. Incidentally, in many cases the only linking from page to page is within the slider itself (clickable images), and there is little to no content. That said, we seem to stumble in SERPs against seemingly inferior competition. For example, we should be ranked in spot 1, 2, or 3 ish for "iPhone repair Minneapolis", but instead we are stuck near spot 15. Any thoughts on whether this strategy may be harming us? If so, would simply creating content on these empty (slider-only) pages help? Should we create "static links" that connect to the same places as the slider? Also, is our particular use of the slider creating H1 issues? Thank you all! B.
Technical SEO | BenjaminH0
-
Google has deindexed 40% of my site because it's having problems crawling it
Hi. Last week I got my fifth email saying 'Google can't access your site'. The first one came in early November. Since then my site has gone from almost 80k pages indexed to fewer than 45k, and the number keeps dropping even though we post about 100 new articles daily (it's an online newspaper). The site I'm talking about is http://www.gazetaexpress.com/ We have to deal with DDoS attacks most of the time, so our server guy has implemented a firewall to protect the site from these attacks. We suspect that this firewall is blocking Google's bots from crawling and indexing our site. But then things get more interesting: some parts of the site are being crawled regularly and others not at all. If the firewall were stopping Google's bots from crawling the site, why are some parts being crawled with no problems while others aren't? In the screenshot attached to this post you will see how Google Webmaster Tools is reporting these errors. In this link, it says that if the 'Error' status happens again you should contact Google Webmaster support because something is preventing Google from fetching the site. I used the feedback form in Google Webmaster Tools to report this error about two months ago but haven't heard from them. Did I use the wrong form to contact them? If so, how can I reach them to tell them about my problem? If you need more details, feel free to ask. I will appreciate any help. Thank you in advance.
Technical SEO | Bajram.Kurtishaj1
-
Awful ranking after site redesign
Hello everyone, I have a situation here and I'd like to have your opinion about it. I am working on a site which has recently been redesigned from scratch: in a nutshell, as soon as the new site went live their rankings dropped, and of course so did their visitors. The people who redesigned the site didn't do any 301 redirects whatsoever, so now the old pages are just 404s and blocked by robots. My question is: if they set up 301 redirects now, do you think they could get their rankings back? One more thing: when they launched the new site, the number of indexed pages basically doubled overnight; there were 700 and now there are 1400. Do you think this could affect their rankings as well? Thank you for your insights 🙂 Elio
Technical SEO | Eyah0
-
Crawl Test Report only shows home page and no inner site pages?
Hi, my site is [removed]. When I first tried to set up a new campaign for the site, I received the error: "Roger has detected a problem: We have detected that the root domain [removed] does not respond to web requests. Using this domain, we will be unable to crawl your site or present accurate SERP information." I then ran a Crawl Test per the FAQ. The SEOmoz crawl report only shows my home page URL and does not include any inner site pages. This is a Joomla site. What is the problem? Thanks! Dave
Technical SEO | crave810
-
Site Crawl
I was wondering if there is a way to use SEOmoz's tool to quickly and easily find all the URLs on your site, not just the ones with errors. The site I am working on does not have a sitemap. What I am trying to do is find all the URLs along with their titles and description tags. Thank you very much for your help.
Technical SEO | pakevin0
-
How do you diagnose if your site is only 50% crawled?
Good morning from 7 degrees C; goodbye, arctic conditions, Wetherby UK. If a site had 100 pages, for example, and that site was plugged into Webmaster Tools, how could you diagnose whether all the pages had been crawled? The thing is, I want to learn how to diagnose crawl issues with sites; is there a known methodology for this? Thanks in advance, David
Technical SEO | Nightwing0
-
How long before rankings return?
We recently had a redesign of our website and moved it from a static HTML site to WordPress. During development, the WordPress installation sat on a subdomain. Our programmer and designer had the privacy setting inside WordPress checked so the search engines would not crawl the pages while they were being developed, to avoid duplicate content. Fast forward two weeks, when the new site replaced the old. Guess which privacy setting was not fixed. So about 3 days after the site went live, organic traffic dropped by 80%. We did a little research, and all of our great rankings were gone. We noticed that our PageRank was also gone... Hmm... Upon further inspection, the noindex tag was on every page of the site. It's now removed, and it has been 3-4 days. I've resubmitted inside Webmaster Tools, and I'm just curious what y'all think the likelihood is that everything will come back to how it was before.
Technical SEO | jmacek070