Block Moz (or any other robot) from crawling pages with specific URLs
-
Hello!
Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs that contain specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages, and I know that to stop it I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. Among these 380 pages there are some other pages with no parameters (or with different parameters) that I do need to take care of, so basically I need to clean up this list to be able to use the feature properly in the future.
I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. So I have done my research and come up with these lines for robots.txt:
User-agent: dotbot
Disallow: /*numberOfStars=0
User-agent: rogerbot
Disallow: /*numberOfStars=0
My questions:
1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only the pages that have the numberOfStars=0 parameter in their URLs, leaving all other pages intact?
2. Do I need an empty line between the two groups, i.e. between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"? Or does it even matter?
I think this would help many people, as there is no clear answer on how to block crawling of only the pages with specific URLs. Moreover, this should be valid for any robot out there.
Thank you for your help!
-
Hello!
Thanks a lot for your feedback and for clearing this up! It worked well.
The robots.txt tester is a good tip!
Thanks!
-
Hi,
What you have there will work absolutely fine, and there is no need to leave an empty line between the groups.
Disallow: /*numberOfStars=0
There is no need to add a wildcard at the end of the rule, since nothing more comes after the parameter value.
The best way to test it before you push it live is to use the robots.txt Tester in Search Console (Webmaster Tools): add the lines above and check that none of your other pages are blocked. They won't be, but it's a great way to verify before going live.
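If you also want a quick offline sanity check, one option is a small script. Note that Python's built-in urllib.robotparser does not understand * wildcards, so the sketch below (with made-up example paths, purely for illustration) instead converts each Disallow rule into a regular expression, roughly the way wildcard-aware crawlers such as Googlebot interpret these rules:

import re

def rule_to_regex(rule):
    # Treat '*' as "match anything" and an optional trailing '$' as an end anchor,
    # which approximates how wildcard-aware crawlers read Disallow rules.
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.compile(pattern)

def is_blocked(url_path, disallow_rules):
    # A URL path is blocked if any Disallow rule matches it from the start.
    return any(rule_to_regex(r).match(url_path) for r in disallow_rules)

rules = ["/*numberOfStars=0"]
print(is_blocked("/hotels/rome?numberOfStars=0&page=2", rules))  # True: parameter present, blocked
print(is_blocked("/hotels/rome?numberOfStars=4", rules))         # False: different value, still crawlable
print(is_blocked("/hotels/rome", rules))                         # False: no parameter, still crawlable

This is only an approximation of real crawler behaviour, so treat the Search Console tester as the authoritative check.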
I hope this helps
-Andy
Related Questions
-
Unsolved: On Page Grader
All of my keywords score a 53 using the On Page Grader. When I look at the notes, it indicates that I don't have the keyword in question anywhere on the page, which, while true in some cases, is not always accurate. Does anyone have a similar experience?
Moz Pro | josayoun
-
Page authority
Hello, how can my page authority be different across various pages built on exactly the same model, when none of them has any links? Thank you,
Moz Pro | seoanalytics
-
@staff When is Moz Analytics coming out?
I want to see a video demo or something for the new Moz Analytics. When is the software going to be available for us subscribers?
Moz Pro | Francisco_Meza
-
Order of URLs in SEOmoz crawl report
Is there any rhyme or reason to the order of URLs in the SEOmoz crawl report, or are the URLs just listed in random order?
Moz Pro | LynnMarie
-
Duplicate Page Content
I'm getting crawl errors for duplicate page title and duplicate content on the same page: www.breeze-air.com, www.breeze-air.com/, and www.breeze-air.com/index-html. What am I doing wrong? Please help. Thank you.
Moz Pro | eoberlender
-
Robots review
Anything in this that would have caused Rogerbot to stop indexing my site? It only saw 34 of 5000+ pages on the last pass. It had no problems seeing the whole site before.
User-agent: Rogerbot
Disallow: /default.aspx?*
// Keep from crawling the CMS urls default.aspx?Tabid=234. Real home page is home.aspx
Disallow: /ctl/
// Keep from indexing the admin controls
Disallow: ArticleAdmin
// Keep from indexing article admin page
Disallow: articleadmin
// same in lower case
Disallow: /images/
// Keep from indexing CMS images
Disallow: captcha
// keep from indexing the captcha image, which appears to be a page to crawlers
General rules lacking wildcards:
User-agent: *
Disallow: /default.aspx
Disallow: /images/
Disallow: /DesktopModules/DnnForge - NewsArticles/Controls/ImageChallenge.captcha.aspx
Moz Pro | sprynewmedia
-
Only crawling one page
Hi there, a campaign was crawling fine, but on the last crawl, for some reason, SEOmoz could only crawl one page... any ideas? If I run a custom crawl I can still access all of the site's pages.
Moz Pro | harryholmes007
-
To block with robots.txt or canonicalize?
I'm working with an apartment community with a large number of communities across the US. I'm running into duplicate content issues where each community has a page such as "amenities" or "community-programs", etc., that is nearly identical (if not exactly identical) across all communities. I'm wondering if there are any thoughts on the best way to tackle this. The two scenarios I came up with so far are: Is it better for me to select the community page with the most authority and put a canonical on all other community pages pointing to that authoritative page? Or should I just remove the directory altogether via robots.txt, to help keep the site lean and keep low-quality content from impacting the site from a Panda perspective? Is there an alternative I'm missing?
Moz Pro | JonClark15