How to Block Rogerbot from Crawling UTM URLs
-
I am trying to block Roger from crawling some UTM URLs we have created, but I'm having no luck. My robots.txt file looks like:
User-agent: rogerbot
Disallow: /?utm_source*
This does not seem to be working. Any ideas?
-
Shoot! There may be something else going on. Give us a shout at help@moz.com and we'll see if we can figure it out!
-
FYI - I tried this and it did not work. Rogerbot is still picking up URLs we don't need. It's making my crawl report a mess!
-
The only difference there is the * wildcard character. Without it, Disallow: /?utm_ is a plain prefix match, so it only blocks URLs that begin with /?utm_ (UTM parameters appended directly to the root URL). With it, Disallow: /*?utm_ blocks any URL that contains ?utm_ after any path - see the sketch below.
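To make the difference concrete, here is a minimal Python sketch of wildcard-style robots.txt matching (the convention where a plain rule is a prefix match and * matches any run of characters). It illustrates the matching rules only; it is not Rogerbot's actual code:

import re

def rule_matches(pattern: str, path: str) -> bool:
    # Translate a Disallow pattern into a regex: '*' becomes '.*',
    # everything else stays literal, and re.match anchors the
    # pattern at the start of the path (i.e. a prefix match).
    regex = re.escape(pattern).replace(r"\*", ".*")
    return re.match(regex, path) is not None

# Without the wildcard, only homepage UTM URLs are caught:
print(rule_matches("/?utm_", "/?utm_source=news"))        # True
print(rule_matches("/?utm_", "/page?utm_source=news"))    # False

# With the wildcard, UTM parameters on any path are caught:
print(rule_matches("/*?utm_", "/?utm_source=news"))       # True
print(rule_matches("/*?utm_", "/page?utm_source=news"))   # True

Running this shows why the original rule missed most of the tagged URLs: it only matched UTM parameters appended directly to the homepage.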
-
What is the difference between Disallow: /*?utm_ and Disallow: /?utm_ ?
-
Hi there! Tawny from the Customer Support team here!
You should be able to add a disallow directive for that parameter and any others to block our crawler from accessing them. It would look something like this:
User-agent: Rogerbot
Disallow: ?utm_
...etc., until you have blocked all of the parameters that may be causing these duplicate content errors. It looks like the _source* part of your original rule might be what's giving our tools some trouble. Logan Ray has made an excellent suggestion - give that formatting a try and see if it helps!
You can also use the wildcard user-agent * to block all crawlers from those pages, if you prefer. Here is a great resource about the robots.txt file that might be helpful: https://moz.com/learn/seo/robotstxt
We always recommend checking your robots.txt file with a handy Robots Checker Tool once you make changes, to avoid any nasty surprises.
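For example, a file applying the same rule to every crawler might look like this (a sketch assuming you want all bots, not just Rogerbot, kept away from UTM-tagged URLs):

User-agent: *
Disallow: /*?utm_

Note that * in the User-agent line means "every crawler," while * inside the Disallow path is a wildcard within the URL pattern - two different roles for the same character.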
-
Skyler,
You're close, give this a shot:
Disallow: /*?utm_
This will cover all UTM tags, regardless of what comes before the tag or which parameter appears first in the query string.
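Putting it together, the finished file for this case would look something like this (a sketch assuming the UTM-tagged URLs are the only thing you want to keep Rogerbot away from):

User-agent: rogerbot
Disallow: /*?utm_

With that in place, URLs such as /page?utm_source=newsletter or /?utm_medium=email should be blocked for Rogerbot, while the same paths without UTM parameters remain crawlable.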
Related Questions
-
Performance Metrics crawl error
I am getting an error:
-
Crawl Error for mobile & desktop page crawl - The page returned a 4xx; Lighthouse could not analyze this page.
I have Lighthouse whitelisted; is there any other site I need to whitelist? Anything else I need to do in Cloudflare or Datadome to allow this tool to work?
-
Why are only two pages of my site being crawled by MOZ?
MOZ is only crawling two pages of my site even though there are links on the homepage, and the robots.txt file allows crawling. Our site address is www.hrparts.com.
-
How to remove staging URLs from being shown in MOZ as critical crawler issues?
Hi Mozzers, A few months ago we were updating our website and used a staging site to do so. Soon after, we realized that it was indexed by Google and shown in Search, so we had it removed from the index. The staging site has since not been visible in search, except for 2 links that I just found out about today. My question is: how do I remove the staging site URLs shown in MOZ as crawl issues, or remove them completely? I want to do this because the digital reports are connected to MOZ and currently show more than 400 4xx errors, all caused by the staging URLs. I already marked the issues to ignore, but they are still showing up. Any help would be much appreciated! Thanks
-
Crawl error robots.txt
Hello, when trying to access the site crawl to analyze our page, the following error appears: "Moz was unable to crawl your site on Nov 15, 2017. Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster." Can you help us? Thanks!
-
Why is comment moderation blocking all my comments? There are no links in them.
For the past few days, every time I try posting a comment, no matter how long or short, how informative or simple, I've been getting the error message that the comment isn't being allowed. I only tried a few times, once on several different posts (not spamming, God forbid!), and I never include links in my comments... So what's wrong?! So frustrated, I am so not up to this right now. Why can't I share my insights and questions on Moz anymore? Gaaaaaaa!
-
Both campaigns are now useless due to URL rewrite?
I have two campaigns on Moz and they were doing fine until I made the decision to rewrite my URL to remove www, so www.thing.com becomes thing.com. Moz sees this as an error, it seems, and I am now getting error code 902. I tried to change my campaign setting, but it won't let me change the URL because it's got historical information that doesn't pertain anymore, I guess. What should I do? Was it a mistake to remove the www? Thanks for any advice, Greg
-
Rogerbot not crawling our site
Has anyone else had issues with Roger crawling your site in the last few weeks? It shows only 2 pages crawled. I was able to crawl the site using Screaming Frog with no problem and we are not specifically blocking Roger via robots.txt or any other method. Has anyone encountered this issue? Any suggestions?
-
What is the difference between the "Crawl Issues" report and the "Crawl Test" report?
I've downloaded the CSV of the Crawl Diagnostics report (which downloads as the "Crawl Issues" report) and the CSV from the Crawl Test report, and pulled out the pages for a specific subdomain. The Crawl Test report gave me about 150 pages, whereas the Crawl Issues report gave 500 pages. Why would there be that difference in results? I've checked for duplicate URLs and there are none within the Crawl Issues report.