Preventing Rogerbot from crawling pagination
-
Hello,
I have a site that has around 300k static pages but each one of these has pagination on it.
I would like to stop Rogerbot from crawling the paginated pages, and maybe even Google.
The paginated pages are results that change daily so there is no need to index them.
What's the best way to prevent them from being crawled?
The pages are dynamic so I don't know the URLs.
I have seen people mention adding nofollow to the pagination links. Would this do it, or is there a better way?
Many thanks
Steve
-
Robots.txt Rules
If you have URL architecture like example.com/page/2/, then use:
User-agent: rogerbot
Disallow: /page/
If you have URL architecture like example.com/results?p=2, then use:
User-agent: rogerbot
Disallow: /*?p=
If you have URL architecture like example.com/results?page=2, then use:
User-agent: rogerbot
Disallow: /*?page=
That should pretty much stop Rogerbot from crawling paginated content. It would certainly stop Googlebot, though I don't know for sure whether Rogerbot respects the "*" wildcard the way Googlebot does. Give it a try and see what happens.
Don't worry: in robots.txt only "*" is treated as a wildcard, so the "?" won't cause any problems and no escape character is needed.
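If you want to sanity-check which URLs a given Disallow pattern would match, here is a minimal sketch of the Google-style matching rule in Python. The function name and the URLs are made up for illustration; the point is that "*" matches any run of characters, everything else (including "?") is literal, and the pattern matches as a prefix of the path:

```python
import re

def robots_pattern_matches(pattern, path):
    """Check whether a Google-style robots.txt Disallow pattern
    matches a URL path. '*' matches any character sequence, a
    trailing '$' anchors the match at the end, and all other
    characters (including '?') are literal. Matches as a prefix."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then turn the escaped '*' back into '.*'
    regex = re.escape(pattern).replace(r"\*", ".*")
    regex = "^" + regex + ("$" if anchored else "")
    return re.match(regex, path) is not None

# Illustrative paths; substitute your own site's pagination URLs.
print(robots_pattern_matches("/page/", "/page/2/"))           # True
print(robots_pattern_matches("/*?p=", "/results?p=3"))        # True
print(robots_pattern_matches("/*?page=", "/results?page=2"))  # True
print(robots_pattern_matches("/*?page=", "/results"))         # False
```

Note that Python's standard urllib.robotparser does not implement these wildcards, so a quick regex check like this is a reasonable way to preview what wildcard-aware crawlers such as Googlebot would block.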
-
Hi,
Let's separate the topics here:
- Preventing crawling is done with robots.txt, but that won't de-index pages that are already indexed.
- Preventing indexing (and de-indexing pages that are already indexed) is done with a robots meta tag carrying a noindex value.
Here is an article from Google about that: Block search indexing with 'noindex' - Google Search Console Help
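For reference, the noindex directive goes in the page's head (or can be sent as an X-Robots-Tag HTTP header). This snippet is just an illustration of the tag itself:

```html
<!-- In the <head> of each paginated page: -->
<meta name="robots" content="noindex">

<!-- Or, to target only Google's crawler: -->
<meta name="googlebot" content="noindex">
```

Note that for noindex to take effect, the crawler has to be able to fetch the page, so don't also block it in robots.txt until it has been de-indexed.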
That said, another action you might take is adding nofollow to the pagination links. Nofollow only tells Google: "I don't want that page to be considered important." It will probably reduce the page's chances of ranking high, but it won't prevent crawling or indexing.
Another way, though a little more expensive in development, is adding a specific parameter to the URL when you know it is pagination. Then you can block that parameter in robots.txt. Again, this won't remove what has already been indexed.
Hope it helps.
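As a hypothetical sketch of that last approach: if the pagination links were generated with a dedicated query parameter such as paginated=1 (a made-up name for illustration), the matching robots.txt rule would look like:

```
User-agent: *
Disallow: /*paginated=
```

Because the parameter appears only on paginated URLs, one wildcard rule covers them all without needing to know each URL in advance.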
Best luck.
Gaston