Blocking pages from Moz and Alexa robots
-
Hello,
We want to block all pages in this directory from Moz and Alexa robots - /slabinventory/search/
Here is an example page - https://www.msisurfaces.com/slabinventory/search/granite/giallo-fiesta/los-angeles-slabs/msi/
Let me know if this is a valid disallow for what I'm trying to do:
User-agent: ia_archiver
Disallow: /slabinventory/search/*

User-agent: rogerbot
Disallow: /slabinventory/search/*

Thanks.
-
Hi,
Firstly, yes, that robots.txt is valid and would work for your purpose.
There's a great tool (https://technicalseo.com/tools/robots-txt/) that lets you paste in your proposed robots.txt contents, enter the URL you want to test, and even pick the robot to test against, and it tells you the result.
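If you'd rather sanity-check the rules locally, Python's standard-library urllib.robotparser can run the same sort of test. A minimal sketch, with one caveat: the standard-library parser does plain prefix matching and doesn't support Googlebot-style "*" wildcards, so the trailing "*" from the proposed rules is dropped here (it's redundant for a prefix rule anyway). The rules and test URL are taken from the question above:

# Minimal local robots.txt check using only the Python standard library.
# Caveat: urllib.robotparser treats Disallow values as plain path prefixes,
# so Googlebot-style "*" wildcards are not supported; the trailing "*" from
# the proposed rules is omitted here (it's redundant for a prefix rule).
from urllib import robotparser

RULES = """\
User-agent: ia_archiver
Disallow: /slabinventory/search/

User-agent: rogerbot
Disallow: /slabinventory/search/
"""

URL = ("https://www.msisurfaces.com/slabinventory/search/"
       "granite/giallo-fiesta/los-angeles-slabs/msi/")

parser = robotparser.RobotFileParser()
parser.parse(RULES.splitlines())

# ia_archiver and rogerbot should come back blocked; Googlebot has no
# matching record, so it should still be allowed.
for agent in ("ia_archiver", "rogerbot", "Googlebot"):
    verdict = "allowed" if parser.can_fetch(agent, URL) else "blocked"
    print(f"{agent}: {verdict}")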
-
That looks valid to me. It's possible you may not need the "*" at the end of each rule, but I can't see it doing any harm either.
I might go with something more like:
User-agent: ia_archiver
Disallow: /*/search/

User-agent: rogerbot
Disallow: /*/search/

^ This would stop all search URLs being crawled, so even if you introduced new search facilities later in other directories, they would 'probably' be caught too (assuming that is your intention, and assuming they still sat in /search/ subdirectories).
Don't think what you have done is wrong though.
Always check using Google's robots.txt tester to be safe. Just put your rules into the tester (altering them to apply to all user-agents) and try out some different URL patterns. When it works as you like, update your real robots.txt file (remembering, of course, to restore your rogerbot / alexa user-agent targeting if you don't want the rules to also apply to Google!)
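The standard-library parser above won't help with a mid-path wildcard like /*/search/, but the wildcard semantics are easy to approximate if you want to experiment locally: "*" matches any run of characters and a trailing "$" anchors the end of the URL. A rough sketch of just that matching logic - not a full robots.txt implementation (no Allow/Disallow precedence or user-agent handling) - where the first test path is from the question and the other two are hypothetical:

# Toy approximation of Googlebot-style robots.txt pattern matching.
# "*" matches any run of characters; a trailing "$" anchors the end.
# This sketches the matching logic only - it is not a full robots.txt
# implementation (no Allow/Disallow precedence, no user-agent handling).
import re

def google_style_match(pattern: str, path: str) -> bool:
    # Translate the robots pattern into a regex anchored at the path start.
    anchored_end = pattern.endswith("$")
    if anchored_end:
        pattern = pattern[:-1]
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    if anchored_end:
        regex += "$"
    return re.match(regex, path) is not None

# First path is from the question; the other two are hypothetical examples.
tests = [
    "/slabinventory/search/granite/giallo-fiesta/los-angeles-slabs/msi/",
    "/newsection/search/quartz/",
    "/slabinventory/browse/granite/",
]
for path in tests:
    verdict = "blocked" if google_style_match("/*/search/", path) else "allowed"
    print(f"{verdict}: {path}")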
Related Questions
-
Blocking subdomains with Robots.txt file
We noticed that Google is indexing our pre-production site ibweb.prod.interstatebatteries.com in addition to indexing our main site interstatebatteries.com. Can you all help shed some light on the proper way to no-index our pre-prod site without impacting our live site?
-
Pages Fighting Over Keywords
Hi Guys, Just after some general advice. Since manipulation of keywords through links is no longer a feasible way of ranking these days, I was wondering how people get round the issue of pages bouncing for the same keyword, or Google deciding that a blog post is a better signal than your service page. For instance, if you are doing local and national search, how do you stop the local keywords ranking for national pages without diluting the local signals? I have some ideas:
- stronger internal linking to the page
- review content
But obviously redirects or canonicals won't be a good solution, as I still want these pages to exist in their own right. Regards, Neil
-
Why are crawlers not picking up these pages?
Hi there, I've been asked to audit a new subdomain for a travel company. It's all a bit messy, so it's going to take some time to remedy. However, one thing I couldn't understand was the low number of pages appearing in certain crawlers. The subdomain has many pages: a homepage, category pages, then product pages. Unfortunately, tools like Screaming Frog and xml-sitemaps.com are only picking up 19 pages, and I can't figure out why. Google has so far indexed around 90 pages - this is by no means all of them, but that's probably because of the new domain and lack of sitemap, etc. After looking at the crawl results, only the homepage and category (continent) pages are showing, so all the product pages are not. For example, tours.statravel.co.uk/trip/Amsterdam_Kings_Day_(Start_London_end_London)-COCCKDM11 is not appearing in the crawl results. After reviewing the source code, I can't see anything that would prevent this page from being crawled. Am I missing something? At the moment, the crawl should be picking up around 400+ product pages, but it's not picking up any. Thanks
-
Will a robots.txt 'disallow' of a directory keep Google from seeing 301 redirects for pages/files within the directory?
Hi - I have a client that had thousands of dynamic php pages indexed by Google that shouldn't have been. He has since blocked these php pages via a robots.txt disallow. Unfortunately, many of those php pages were linked to by high quality sites multiple times (instead of the static URLs) before he put up the php 'disallow'. If we create 301 redirects for some of these php URLs that are still showing high value backlinks and send them to the correct static URLs, will Google even see these 301 redirects and pass link value to the proper static URLs? Or will the robots.txt keep Google away, and we lose all these high quality backlinks? I guess the same question applies if we use the canonical tag instead of the 301. Will the robots.txt keep Google from seeing the canonical tags on the php pages? Thanks very much, V
-
How come only 2 pages of my 16-page infographic are being crawled by Moz?
Our infographic titled "What Is Coaching" was officially launched 5 weeks ago: http://whatiscoaching.erickson.edu/ We set up campaigns in Moz & Google Analytics to track its performance. Moz is reporting no organic traffic and is only crawling 2 of the 16 pages we created (see first and third attachments). Google Analytics is seeing hundreds of some very strange random pages (see second attachment). Both campaigns are tracking the URL above. We have no idea where we've gone wrong. Please help!! Attachments: 16_pages_seen_in_wordpress.png, how_google_analytics_sees_pages.png, what_moz_sees.png
-
How can I prevent duplicate content between www.page.com/ and www.page.com?
SEOMoz's recent crawl showed me that I had an error for duplicate content and duplicate page titles. This is a problem because it found the same page twice due to a '/' on the end of one URL, e.g. www.page.com/ vs. www.page.com. My question is: do I need to be concerned about this? And is there anything I should put in my .htaccess file to prevent this from happening? Thanks!
Karl
-
Product category paging
Hi, My product categories have 2-3 pages each. I have paging implemented with rel=next and rel=prev. For some reason, Google Webmaster Tools now reports the pages as having duplicate titles and descriptions. Should I be worried? Should I set a different title, like "blue category - page x"? Thanks, Asaf
-
Invisible robots.txt?
So here's a weird one... A client comes to me for some simple changes, and it turns out there are some major issues with the site, one of which is that none of the correct content pages are showing up in Google, just ancillary (outdated) ones. It looks like an issue because even the main homepage isn't showing up with a "site:domain.com" search. So, I add the site to Webmaster Tools and, after an hour or so, I get the red bar of doom: "robots.txt is blocking important pages." I check it out in Webmasters and, sure enough, it's a "User-agent: *" / "Disallow: /". ACK! But wait... there's no robots.txt to be found on the server. I can go to domain.com/robots.txt and see it, but there's nothing via FTP. I upload a new one and, thankfully, that is now showing, but I've never seen that before. Question is: can a robots.txt file be stored in a way that can't be seen? Thanks!