Moz "Crawl Diagnostics" doesn't respect robots.txt
-
Hello, I've just had a new website crawled by the Moz bot. It's come back with thousands of errors saying things like:
- Duplicate content
- Overly dynamic URLs
- Duplicate Page Titles
The duplicate content and dynamic URLs it's found are all blocked in robots.txt, so why am I seeing these errors?
Here's an example of the robots.txt rules that block things like dynamic URLs and directories (which the Moz bot ignored):
Disallow: /?mode=
Disallow: /?limit=
Disallow: /?dir=
Disallow: /?p=*&
Disallow: /?SID=
Disallow: /reviews/
Disallow: /home/
Many thanks for any info on this issue.
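As a quick sanity check, rules like these can be tested against sample URLs with Python's standard-library robots.txt parser. This is a minimal sketch with made-up example.com URLs; note that `urllib.robotparser` does plain prefix matching and treats `*` as a literal character, so it may not behave exactly like Moz's crawler:

```python
from urllib import robotparser

# A few of the rules from the question, under a catch-all user-agent.
RULES = """\
User-agent: *
Disallow: /?mode=
Disallow: /?p=*&
Disallow: /reviews/
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# Plain prefix rules match as expected...
print(rp.can_fetch("*", "http://www.example.com/?mode=list"))        # False (blocked)
print(rp.can_fetch("*", "http://www.example.com/reviews/widget-x"))  # False (blocked)

# ...but this parser treats '*' literally, so the wildcard rule
# fails to block a real paginated URL:
print(rp.can_fetch("*", "http://www.example.com/?p=2&dir=asc"))      # True (allowed)
```

Stricter parsers of this kind follow the original robots.txt spec, where `*` has no special meaning in a path.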
-
Hi Si, has this issue been resolved?
-
Hey Si,
Thanks for writing in. It doesn't seem that we're having an overarching issue with our crawler ignoring robots.txt files, so I did some research in Google Webmaster Tools. It looks like most crawlers require an asterisk in the disallow directive to recognize that all pages of a dynamic URL are being disallowed. The "Pattern Matching" section of this resource should give you more information about setting up your robots.txt with the correct disallow directives to block those pages: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
If you add the asterisk to the disallow directives and are still seeing these pages crawled, please send an email with your campaign information to our support desk at help@moz.com so our engineers can look into this more directly.
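For illustration, here's a rough sketch of the wildcard matching that Google resource describes, where `*` matches any sequence of characters and a trailing `$` anchors the rule to the end of the URL. This is a simplified illustration under those assumptions, not Moz's or Google's actual implementation:

```python
import re

def rule_matches(rule: str, path: str) -> bool:
    """Google-style robots.txt matching: '*' matches any character
    sequence, a trailing '$' anchors to the end of the URL.
    Rules otherwise match as prefixes."""
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.match(pattern, path) is not None

# Without a wildcard, '/?mode=' only matches URLs that BEGIN with '/?mode='
print(rule_matches("/?mode=", "/?mode=list"))           # True
print(rule_matches("/?mode=", "/category?mode=list"))   # False

# With a leading wildcard, the parameter is caught anywhere in the URL
print(rule_matches("/*?mode=", "/category?mode=list"))  # True
print(rule_matches("/*?mode=", "/?mode=list"))          # True
```

So a rule like `Disallow: /*?mode=` would cover mode parameters on any path, while `Disallow: /?mode=` only covers them on the site root.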
I hope this helps.
Chiaryn
-
If you have an "index,(no)follow" meta tag on those pages, I think they will still be crawled even though you have them blocked in robots.txt. So adding "noindex" to those pages might make it work the way you want.
-
Is the / actually in the URL at that spot, or is your link like http://www.example.com/abcd?p=147 ?
If you give a full example URL that includes one of your blocked dynamic URLs, we can take a better look. If your robots.txt is set up correctly, the crawler shouldn't find that stuff, but give us more info if you're able.
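To see why the position of that `?` matters, here's a small sketch of plain prefix matching against a URL's path-plus-query (the example.com URLs are hypothetical):

```python
from urllib.parse import urlsplit

def starts_with_rule(url: str, rule: str) -> bool:
    """Plain prefix match against the URL's path + query,
    as in the original robots.txt spec (no wildcard support)."""
    parts = urlsplit(url)
    target = parts.path + ("?" + parts.query if parts.query else "")
    return target.startswith(rule)

# 'Disallow: /?p=' only blocks URLs whose path is exactly '/'
print(starts_with_rule("http://www.example.com/?p=147", "/?p="))      # True
print(starts_with_rule("http://www.example.com/abcd?p=147", "/?p="))  # False
```

Under prefix matching, `/abcd?p=147` begins with `/abcd`, not `/?p=`, so a rule written for the root would never block it.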