Block Moz (or any other robot) from crawling pages with specific URLs
-
Hello!
Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs that contain specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages, and to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. Among these 380 pages there are other pages with no parameters (or different parameters) that I need to take care of, so I need to clean up this list to be able to use the feature properly in the future.
I have read through Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:
User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions:
1. Are the above lines correct? Would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact?
2. Do I need an empty line between the two groups (i.e., between the first "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it not matter?
I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should work for any robot out there.
Thank you for your help!
-
Hello!
Thanks a lot for your feedback and for clearing this up! It worked well.
The robots.txt tester is a good tip!
Thanks!
-
Hi,
What you have there will work absolutely fine, and there's no need to leave blank lines between groups.
Disallow: /*numberOfStars=0
Note that you don't need a wildcard at the end of the rule: a rule already matches any URL that begins with the pattern, so a trailing * is redundant. The leading /* is needed here, though, since numberOfStars=0 appears in the middle of your URLs rather than at the start of the path.
The best way to test is, before you push it live, to use the robots.txt Tester in Search Console (Webmaster Tools): add the lines above and check that none of your other pages are blocked. They won't be, but it's a great way to verify before going live.
I hope this helps
-Andy
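As a supplement to Andy's tip: if you want to sanity-check the matching offline, here is a minimal Python sketch of the wildcard semantics that Google and most modern crawlers apply to Disallow rules ("*" matches any sequence of characters, a trailing "$" anchors the match, and rules otherwise match as prefixes). This is an illustration of the matching logic, not Moz's actual implementation.

```python
import re

def robots_rule_matches(rule: str, path: str) -> bool:
    """Check whether a robots.txt Disallow rule matches a URL path (with query).

    Implements the common wildcard extensions: '*' matches any sequence
    of characters, a trailing '$' anchors the match to the end of the
    path, and otherwise the rule matches as a prefix.
    """
    # Escape regex metacharacters, then restore '*' as '.*'.
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    # re.match anchors at the start of the path, giving prefix semantics.
    return re.match(pattern, path) is not None

# The rule from the question blocks any URL containing numberOfStars=0
# (the example paths below are hypothetical):
rule = "/*numberOfStars=0"
print(robots_rule_matches(rule, "/hotels?numberOfStars=0"))  # True
print(robots_rule_matches(rule, "/hotels?numberOfStars=3"))  # False
print(robots_rule_matches(rule, "/hotels"))                  # False
```

Note that Python's built-in urllib.robotparser follows the original 1994 spec and does not understand "*" wildcards, which is why a small custom matcher is used here instead.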