How to Block Rogerbot from Crawling UTM URLs
-
I am trying to block Rogerbot from crawling some UTM URLs we have created, but I'm having no luck. My robots.txt file looks like:
User-agent: rogerbot
Disallow: /?utm_source*
This does not seem to be working. Any ideas?
-
Shoot! There may be something else going on. Give us a shout at help@moz.com and we'll see if we can figure it out!
-
FYI - I tried this and it did not work. Rogerbot is still picking up URLs we don't need. It's making my crawl report a mess!
-
The only difference there is the * wildcard character. It matches any sequence of characters, so a rule containing it will block the crawler from accessing any URL that contains that string of characters, regardless of what comes before it.
-
What is the difference between Disallow: /*?utm_ and Disallow: /?utm_ ?
-
Hi there! Tawny from the Customer Support team here!
You should be able to add a disallow directive for that parameter and any others to block our crawler from accessing them. It would look something like this:
User-agent: Rogerbot
Disallow: /*?utm_
...and so on, until you have blocked all of the parameters that may be causing these duplicate content errors. It looks like the _source* in your current rule might be what's giving our tools some trouble. Logan Ray has made an excellent suggestion - give that formatting a try and see if it helps!
You can also use the wildcard user-agent * in order to block all crawlers from those pages, if you prefer. Here is a great resource about the robots.txt file that might be helpful: https://moz.com/learn/seo/robotstxt We always recommend checking your robots.txt file with a robots checker tool once you make changes, to avoid any nasty surprises.
-
Skyler,
You're close, give this a shot:
Disallow: /*?utm_
This will match all UTM tags regardless of what path comes before the parameter or which UTM parameter appears first in the query string.
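To see why /*?utm_ matches more URLs than /?utm_source, here is a minimal sketch of how wildcard Disallow rules are matched against a URL path, following the common convention that * matches any sequence of characters and $ anchors the end of the path. (Python's standard urllib.robotparser does not implement this wildcard extension, so the matcher is written by hand for illustration; it is not Rogerbot's actual implementation.)

```python
import re

def rule_matches(rule: str, path: str) -> bool:
    """Return True if a robots.txt Disallow rule matches a URL path.

    Implements the common wildcard convention: '*' matches any
    sequence of characters, '$' anchors the end of the path, and
    the rule must match starting from the beginning of the path.
    """
    # Escape regex metacharacters, then restore '*' as '.*'.
    pattern = re.escape(rule).replace(r"\*", ".*")
    # A trailing '$' in the rule becomes a regex end anchor.
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.match(pattern, path) is not None

# '/?utm_source' only matches UTM parameters on the homepage...
print(rule_matches("/?utm_source", "/?utm_source=news"))       # True
print(rule_matches("/?utm_source", "/blog/?utm_source=news"))  # False
# ...while '/*?utm_' matches them on any path, and catches
# utm_medium, utm_campaign, etc. as well.
print(rule_matches("/*?utm_", "/blog/?utm_source=news"))       # True
print(rule_matches("/*?utm_", "/?utm_medium=email"))           # True
```

In short: /?utm_source only matches when the parameter sits directly on the root URL, while /*?utm_ matches any path followed by any UTM parameter, which is why Logan's version catches all the tagged URLs.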