Rogerbot directives in robots.txt
-
I feel like I spend a lot of time marking false positives in my reports to be ignored.
Can I prevent Rogerbot from crawling pages I don't care about with robots.txt directives? For example, I have some page types with meta noindex, and they still get reported to me. Theoretically, I could block Rogerbot from these with a robots.txt directive and not have to deal with the false positives.
-
Yes, you can definitely use the robots.txt file to prevent Rogerbot from crawling pages that you don’t want to include in your reports. This approach can help you manage and minimize false positives effectively.
To block specific pages or directories from being crawled, you would add directives to your robots.txt file. For example, if you have certain page types that you’ve already set with meta noindex, you can specify rules like this:
```
User-agent: Rogerbot
Disallow: /path-to-unwanted-page/
Disallow: /another-unwanted-directory/
```
This tells Rogerbot not to crawl the specified paths, which should reduce the number of irrelevant entries in your reports.
However, keep in mind that while robots.txt directives can prevent crawling, they do not guarantee that these pages won't show up in search results if they are linked from other sites or indexed by different bots.
Additionally, using meta noindex tags is still a good practice for pages that may occasionally be crawled but shouldn’t appear in search results. Combining both methods—robots.txt for crawling and noindex for indexing—provides a robust solution to manage your web presence more effectively.
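For instance, here is a minimal sketch of the two layers side by side (the /print/ section and the placement are hypothetical placeholders):

```
# robots.txt – keep Rogerbot from crawling a section you already noindex
User-agent: Rogerbot
Disallow: /print/
```

```
<!-- in the <head> of each page under /print/ -->
<meta name="robots" content="noindex">
```

One caveat worth remembering: a crawler that is blocked in robots.txt never fetches the page, so it never sees the noindex tag. The two directives cover different bots and situations rather than reinforcing each other on the same request.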
-
Never mind, I found this. https://moz.com/help/moz-procedures/crawlers/rogerbot
-
@awilliams_kingston
Yes, you can use robots.txt directives to prevent Rogerbot from crawling certain pages or sections of your site, which can help reduce the number of false positives in your reports. By doing so, you can focus Rogerbot's attention on the parts of your site that matter more to you and avoid reporting issues on pages you don't care about.
Here's a basic outline of how you can use robots.txt to block Rogerbot:
1. Locate or create your robots.txt file: This file should be placed in the root directory of your website (e.g., https://www.yourwebsite.com/robots.txt).
2. Add directives to block Rogerbot: You'll need to specify the user-agent for Rogerbot and define which pages or directories to block. The User-agent directive specifies which crawlers the rules apply to, and Disallow directives specify the URLs or directories to block.
Here's an example of what your robots.txt file might look like if you want to block Rogerbot from crawling certain pages:
```
User-agent: Rogerbot
Disallow: /path-to-block/
Disallow: /another-path/
```
If you want to block Rogerbot from accessing pages with certain parameters or patterns, you can use wildcards:
```
User-agent: Rogerbot
Disallow: /path-to-block/*
Disallow: /another-path/?parameter=
```
3. Verify the changes: After updating the robots.txt file, you can use tools like Google Search Console or other site analysis tools to check that the directives are being applied as expected.
4. Monitor and adjust: Keep an eye on your reports and site performance to ensure that blocking these pages is achieving the desired effect without inadvertently blocking important pages.
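For a quick local check before waiting on the next crawl report, Python's standard-library urllib.robotparser can evaluate a live robots.txt against specific URLs. This is a minimal sketch; the domain and paths are the hypothetical placeholders from the examples above, and note that the standard-library parser handles plain prefix rules but not wildcard extensions:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's live robots.txt (placeholder domain).
rp = RobotFileParser()
rp.set_url("https://www.yourwebsite.com/robots.txt")
rp.read()

# Ask whether Rogerbot is allowed to fetch specific URLs.
for url in (
    "https://www.yourwebsite.com/path-to-block/page.html",
    "https://www.yourwebsite.com/some-page-you-care-about/",
):
    print(url, "->", "allowed" if rp.can_fetch("Rogerbot", url) else "blocked")
```

With the example rules above, the first URL should print as blocked and the second as allowed.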
By doing this, you should be able to reduce the number of irrelevant or false positive issues reported by Rogerbot and make your reporting more focused and useful.