Rogerbot directives in robots.txt
-
I feel like I spend a lot of time marking false positives in my reports as issues to ignore.
Can I prevent Rogerbot from crawling pages I don't care about with robots.txt directives? For example, I have some page types with meta noindex, and it still reports these to me. Theoretically, I could block Rogerbot from these pages with a robots.txt directive and not have to deal with the false positives.
-
Yes, you can definitely use the robots.txt file to prevent Rogerbot from crawling pages that you don’t want to include in your reports. This approach can help you manage and minimize false positives effectively.
To block specific pages or directories from being crawled, you would add directives to your robots.txt file. For example, if you have certain page types that you’ve already set with meta noindex, you can specify rules like this:
```
User-agent: Rogerbot
Disallow: /path-to-unwanted-page/
Disallow: /another-unwanted-directory/
```
This tells Rogerbot not to crawl the specified paths, which should reduce the number of irrelevant entries in your reports.
However, keep in mind that while robots.txt directives can prevent crawling, they do not guarantee that these pages won't show up in search results if they are linked from other sites or indexed by different bots.
Additionally, using meta noindex tags is still a good practice for pages that may occasionally be crawled but shouldn’t appear in search results. Combining both methods—robots.txt for crawling and noindex for indexing—provides a robust solution to manage your web presence more effectively.
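To sanity-check which of your page types actually carry the noindex directive before you decide what to block, you can scan their HTML with the standard library. This is a minimal sketch, assuming the meta tag uses the conventional `name="robots"` form; the sample HTML is hypothetical:

```python
# Sketch: detect whether a page's HTML carries a robots "noindex" meta tag,
# using only Python's standard library. The sample HTML below is made up.
from html.parser import HTMLParser


class NoindexDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        # Look for <meta name="robots" content="...noindex...">
        if tag != "meta":
            return
        attr_map = dict(attrs)
        if (attr_map.get("name", "").lower() == "robots"
                and "noindex" in attr_map.get("content", "").lower()):
            self.noindex = True


html = '<html><head><meta name="robots" content="noindex, follow"></head><body></body></html>'
detector = NoindexDetector()
detector.feed(html)
print(detector.noindex)  # True
```

Running this against the pages your reports flag tells you which ones are already noindexed and are therefore candidates for a robots.txt block.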
-
Never mind, I found this. https://moz.com/help/moz-procedures/crawlers/rogerbot
-
@awilliams_kingston
Yes, you can use robots.txt directives to prevent Rogerbot from crawling certain pages or sections of your site, which can help reduce the number of false positives in your reports. By doing so, you can focus Rogerbot's attention on the parts of your site that matter most to you and avoid reported issues on pages you don't care about. Here's a basic outline of how to use robots.txt to block Rogerbot:
Locate or Create Your robots.txt File: This file should be placed in the root directory of your website (e.g., https://www.yourwebsite.com/robots.txt).
Add Directives to Block Rogerbot: You’ll need to specify the user-agent for Rogerbot and define which pages or directories to block. The User-agent directive specifies which web crawlers the rules apply to, and Disallow directives specify the URLs or directories to block.
Here’s an example of what your robots.txt file might look like if you want to block Rogerbot from crawling certain pages:
```
User-agent: Rogerbot
Disallow: /path-to-block/
Disallow: /another-path/
```
If you want to block Rogerbot from accessing pages with certain parameters or patterns, you can use wildcards:
```
User-agent: Rogerbot
Disallow: /path-to-block/*
Disallow: /another-path/?parameter=
```
Verify the Changes: After updating the robots.txt file, use a robots.txt testing tool (such as the one in Google Search Console) to check that the directives are being applied as expected.
Monitor and Adjust: Keep an eye on your reports and site performance to ensure that blocking these pages achieves the desired effect without inadvertently blocking important pages.
By doing this, you should be able to reduce the number of irrelevant or false positive issues reported by Rogerbot and make your reporting more focused and useful.
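As a quick way to verify the rules before (or after) deploying them, you can run your robots.txt text through Python's standard-library parser and check specific URLs against the Rogerbot user-agent. A minimal sketch; the rules and URLs are the hypothetical examples from above (note this parser does not understand wildcard patterns, so test literal paths):

```python
# Sketch: verify which URLs a robots.txt blocks for Rogerbot,
# using Python's standard-library robots.txt parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Rogerbot
Disallow: /path-to-block/
Disallow: /another-path/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A URL under a disallowed path is blocked for Rogerbot...
print(parser.can_fetch("Rogerbot", "https://www.yourwebsite.com/path-to-block/page"))  # False
# ...while everything else remains crawlable.
print(parser.can_fetch("Rogerbot", "https://www.yourwebsite.com/blog/some-post"))  # True
```

This gives you a fast local check that a new rule blocks exactly the paths you intend, without waiting for the next crawl.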