Rogerbot directives in robots.txt
-
I feel like I spend a lot of time flagging false positives in my reports so they can be ignored.
Can I prevent Rogerbot from crawling pages I don't care about with robots.txt directives? For example, I have some page types with meta noindex, and the crawler still reports them to me. In theory, I could block Rogerbot from those pages with a robots.txt directive and not have to deal with the false positives.
-
Yes, you can definitely use the robots.txt file to prevent Rogerbot from crawling pages that you don’t want to include in your reports. This approach can help you manage and minimize false positives effectively.
To block specific pages or directories from being crawled, you would add directives to your robots.txt file. For example, if you have certain page types that you’ve already set with meta noindex, you can specify rules like this:
User-agent: rogerbot
Disallow: /path-to-unwanted-page/
Disallow: /another-unwanted-directory/
This tells Rogerbot not to crawl the specified paths, which should reduce the number of irrelevant entries in your reports.
However, keep in mind that while robots.txt directives can prevent crawling, they do not guarantee that these pages won't show up in search results if they are linked from other sites or indexed by different bots.
Additionally, using meta noindex tags is still a good practice for pages that may occasionally be crawled but shouldn’t appear in search results. Combining both methods—robots.txt for crawling and noindex for indexing—provides a robust solution to manage your web presence more effectively.
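For reference, the page-level noindex mentioned above is the standard robots meta tag placed in each page's head section:

<meta name="robots" content="noindex">

A crawler can only see this tag on pages it is allowed to fetch, which is worth keeping in mind when deciding how to combine the two methods.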
-
Never mind, I found this. https://moz.com/help/moz-procedures/crawlers/rogerbot
-
@awilliams_kingston
Yes, you can use robots.txt directives to prevent Rogerbot from crawling certain pages or sections of your site, which can help reduce the number of false positives in your reports. By doing so, you can focus Rogerbot's attention on the parts of your site that matter more to you and avoid reporting issues on pages you don't care about. Here's a basic outline of how you can use robots.txt to block Rogerbot:
Locate or Create Your robots.txt File: This file should be placed in the root directory of your website (e.g., https://www.yourwebsite.com/robots.txt).
Add Directives to Block Rogerbot: You’ll need to specify the user-agent for Rogerbot and define which pages or directories to block. The User-agent directive specifies which web crawlers the rules apply to, and Disallow directives specify the URLs or directories to block.
Here’s an example of what your robots.txt file might look like if you want to block Rogerbot from crawling certain pages:
User-agent: rogerbot
Disallow: /path-to-block/
Disallow: /another-path/
If you want to block Rogerbot from accessing pages with certain parameters or patterns, you can use wildcards:
User-agent: rogerbot
Disallow: /path-to-block/*
Disallow: /another-path/?parameter=
Verify the Changes: After updating the robots.txt file, you can use tools like Google Search Console or other site analysis tools to check if the directives are being applied as expected; a quick way to spot-check individual URLs is sketched below.
Monitor and Adjust: Keep an eye on your reports and site performance to ensure that blocking these pages is achieving the desired effect without inadvertently blocking important pages.
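If you'd rather spot-check URLs yourself, Python's standard-library robots.txt parser can report whether a given user-agent is allowed to fetch a URL. This is a minimal sketch with a placeholder domain and paths; keep in mind that the standard-library parser follows the original robots.txt rules and may not interpret wildcard patterns exactly the way individual crawlers such as Rogerbot do.

from urllib.robotparser import RobotFileParser

# Point the parser at the live robots.txt file (placeholder domain).
rp = RobotFileParser()
rp.set_url("https://www.yourwebsite.com/robots.txt")
rp.read()  # fetch and parse the file

# Check whether Rogerbot may crawl a few representative URLs.
for url in [
    "https://www.yourwebsite.com/path-to-block/page",
    "https://www.yourwebsite.com/important-page/",
]:
    allowed = rp.can_fetch("rogerbot", url)
    print(f"{url} -> {'allowed' if allowed else 'blocked'} for rogerbot")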
By doing this, you should be able to reduce the number of irrelevant or false positive issues reported by Rogerbot and make your reporting more focused and useful.