Standard Syntax in robots.txt doesn't prevent Moz bot from crawling
-
A client is getting many false positive site crawl errors for things like duplicate titles and duplicate content on pages that include /tag/ in the URL. An example is https://needquest.com/place_tag/autism-spectrum-disorder/page/4/
To resolve this we have set up a disallow statement in the robots.txt file that says:
Disallow: /page/
For some reason this appears not to work, as the site crawl errors continue to list pages like this. Does anyone understand why that would be and what we need to do to properly disallow crawling these pages?
-
Thanks, Tawny,
If you look at Duplicate titles, check the first one (https://needquest.com/place_tag/autism-spectrum-disorder/). All the URLs with a duplicate title have /page/ in them. I will suggest they move the Allow statement and see if that helps.
-
I'm not seeing that URL come up with Duplicate Title or Duplicate Content issues — when I search by it, no Content issues appear. I do see it in the All Crawled Pages section, but I can't find it triggering Content issues in the app.
That said, I took a look at your robots.txt file, and I think this could be a result of having an Allow directive before the block of Disallow directives. If you move that Allow directive to the end of the block, rogerbot may see the disallow for /page/ and stop crawling those URLs.
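As a rough sketch (the paths and the Allow line below are illustrative, not copied from your actual file), the reordered block would look something like this:

User-agent: *
# Disallow rules first
Disallow: /page/
Disallow: /tag/
Disallow: /category/
# A broad Allow, if your file has one, goes at the end of the block
Allow: /

Some robots.txt parsers evaluate a group's rules from top to bottom and act on the first rule that matches a URL, so a broad Allow sitting above the Disallow lines can match everything first, and the Disallow rules would never be reached by crawlers that work that way.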
If you're still running into trouble, I would suggest writing in to us at help@moz.com so we can take a closer look at the Campaign and what could be going on there.
-
Any reason the Disallow: /page/ isn't preventing URLs like
https://needquest.com/place_tag/autism-spectrum-disorder/page/4/
from generating duplicate description and duplicate title errors in our site crawl? It was my hope that those pages wouldn't be crawled at all.
-
Sorry, Tawny ... I did go back and correct my question. We did apply Disallow: /page/ to address this issue. /place_tag/ appears in many pages we DO want crawled and indexed ... here we only want to disallow the page 2, page 3, page 4, etc. pages.
(We also disallowed /tag/, /category/, and a few other common paths that generate false positives in the site crawl.)
-
Hey there!
Tawny from Moz's Help Team here.
Adding a disallow directive for /tag/ won't help with the example URL you've provided — that URL doesn't have /tag/ in the URL path. To block us from seeing content like the URL you listed, you'd need a disallow directive for /place_tag/.
If you include that disallow directive, that should stop us from seeing duplicate content on pages with /place_tag/ in the URL.
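One caveat, since you mentioned you do want the top-level /place_tag/ pages crawled: a plain Disallow: /page/ only matches URL paths that begin with /page/, which is why it never matched your example URL. If the goal is to block only the paginated sub-pages, a wildcard pattern may do it. This is a minimal sketch, assuming rogerbot honors the * wildcard extension the way major crawlers such as Googlebot do:

User-agent: rogerbot
# Blocks root-level pagination, e.g. /page/2/
Disallow: /page/
# "*" matches any run of characters, so this blocks /page/ deeper in
# the path (e.g. /place_tag/autism-spectrum-disorder/page/4/) while
# leaving /place_tag/autism-spectrum-disorder/ itself crawlable
Disallow: /*/page/

If wildcards turn out not to be supported, disallowing /place_tag/ entirely, as above, is the fallback.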
Hope that helps! If you've still got questions, feel free to shoot us a note over at help@moz.com and we'll do our best to sort things out with you.