Blocking out specific URLs with robots.txt
-
I've been trying to block out a few URLs using robots.txt, but I can't seem to get the specific one I'm trying to block. Here is an example.
I'm trying to block
but not block
It seems that if I set up my robots.txt like so:
Disallow: /cats
it's blocking both URLs. When I crawl the site with Screaming Frog, that Disallow is causing both URLs to be blocked. How can I set up my robots.txt to specifically block /cats? I thought doing it the way I was would work, but that doesn't seem to solve it.
Any help is much appreciated, thanks in advance.
-
Be careful when editing robots.txt, as it can block whole series of pages and folders from the index.
The correct directive, as Lesley stated, is Disallow: /cats/. Refer to the official documentation:
https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt
-
You can use either /cats/ or /cats/*; either should block just the cats folder and not the other URL. Note that the first form is the preferred one.
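For anyone who wants to sanity-check this, here is a minimal sketch using Python's built-in urllib.robotparser to show how the trailing slash changes the matching. The example URLs are assumptions standing in for the two URLs omitted from the question:

from urllib import robotparser

# "Disallow: /cats" matches any path that merely starts with "/cats",
# while "Disallow: /cats/" only matches paths inside the /cats/ folder.
rules = ["User-agent: *", "Disallow: /cats/"]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Blocked: lives inside the /cats/ folder (hypothetical URL)
print(rp.can_fetch("*", "http://example.com/cats/persian"))  # False

# Not blocked: only shares the /cats prefix (hypothetical URL)
print(rp.can_fetch("*", "http://example.com/catskills"))     # True

Change the rule to Disallow: /cats and both calls print False, which is the behaviour Screaming Frog was reporting.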
Related Questions
-
Specific page does not index
Hi, First question: Working on the indexation of all pages for a specific client, there's one page that refuses to index. Google Search Console says there's a robots.txt file, but I can't seem to find any trace of that in the backend, nor in the code itself. Could someone reach out to me and tell me why this is happening? The page: https://www.brody.be/nl/assistentiewoningen/
Second question: Google is showing a different meta description than the one our client entered in the Yoast Premium snippet. Could it be that another plugin is overwriting this description? Or do we have to wait for it to change after a specific period of time? Hope you guys can help
Intermediate & Advanced SEO | conversal0
-
How to make Google index your site? (Blocked with robots.txt for a long time)
The problem is that for a long time we had a website, m.imones.lt, but it was blocked with robots.txt.
But after a long time we want Google to index it. We unblocked it 1 week or 8 days ago, but Google still does not recognize it. When I type site:m.imones.lt it says it is still blocked with robots.txt. What should be the process to make Google crawl this mobile version faster? Thanks!
Intermediate & Advanced SEO | FCRMediaLietuva
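For reference, once the block is lifted, serving an open robots.txt that also points at a sitemap can help Google rediscover the pages faster. A minimal sketch, assuming a sitemap exists at /sitemap.xml (a hypothetical path, not mentioned in the question):

User-agent: *
Disallow:

Sitemap: http://m.imones.lt/sitemap.xml

An empty Disallow blocks nothing; submitting the same sitemap in Webmaster Tools is the usual way to nudge a recrawl.
-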
URL Formatting - Magento
Hi, We are working with a client on Magento whose URLs are formatted Google-friendly (e.g. productname.html, as seen in site search in Google), but when you click the link through to the site it appends #.VEWKQxbc754 (or similar) to the URL. The site is also having some page indexing problems. Thoughts? Specific settings/add-on in Magento?
Intermediate & Advanced SEO | Pure-SEO0
-
Switching URL
I started working with a roofer/contractor about a year ago. His website is http://www.lancasterparoofing.com/. The name of his business is Spicher Home Improvements. He used to have spicherhomeimprovements.com; well, he still does. He was focusing on roofing and siding but now would like to branch into other areas like interior remodeling, so adding interior work under LancasterPaRoofing.com is not applicable. I do not think starting another domain and having two is the best option. I think he should go back to using SpicherHomeImprovements.com; I assume he would take a small hit, but in time he should be better off. Plus the URL is more applicable to the real name of his business. Thanks for any feedback I receive. Chad
Intermediate & Advanced SEO | ChadEisenhart0
-
Blog URL Canonical
Hi guys, I would like to know your thoughts on the following set-ups for blog canonicals.

Option 1:
domain.com/blog = <link rel="canonical" href="domain.com/blog">
domain.com/blog-category/general = <link rel="canonical" href="domain.com/blog">
domain.com/blog-article/how-to-set-canonical = no canonical

Option 2:
domain.com/blog = <link rel="canonical" href="domain.com/blog"> (as option 1)
domain.com/blog-category/general = <link rel="canonical" href="domain.com/blog-category/general"> (this time the canonical is the category URL)
domain.com/blog-article/how-to-set-canonical = <link rel="canonical" href="domain.com/blog-article/how-to-set-canonical"> (this time the canonical is the full article URL)

Just not sure which is the best option, or even if it is any of the above! Thanks, Dan
Intermediate & Advanced SEO | Dan1e10
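For reference, canonical URLs are normally written as absolute URLs rather than the bare domain.com/... shorthand above. A sketch of option 2's self-referencing tag for the article page, with the https protocol assumed:

<!-- self-referencing canonical for the article page; the protocol is an assumption -->
<link rel="canonical" href="https://domain.com/blog-article/how-to-set-canonical" />
-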
Googlebot crawling partial URLs
Hi guys, I've checked my email this morning and I've got a number of 404 errors over the weekend where Google has tried to crawl some of my existing pages but not found the full URL. Instead of hitting 'domain.com/folder/complete-pagename.php' it's hit 'domain.com/folder/comp'. This is definitely Googlebot/2.1 (http://www.google.com/bot.html, 66.249.72.53), but I can't find where it would have found only the partial URL. It certainly wasn't on the domain it's crawling, and I can't find any links from external sites pointing to us with the incorrect URL. Googlebot is doing the same thing across a single domain but in different sub-folders. Having checked Webmaster Tools, there aren't any hard 404s, and the soft ones aren't related and haven't occurred since August. I'm really confused as to how this is happening. Thanks!
Intermediate & Advanced SEO | panini0
-
Block an entire subdomain with robots.txt?
Is it possible to block an entire subdomain with robots.txt? I write for a blog that has its root domain as well as a subdomain pointing to the exact same IP. Getting rid of the subdomain is not an option, so I'd like to explore other options to avoid duplicate content. Any ideas?
Intermediate & Advanced SEO | kylesuss12
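For reference, robots.txt works per host, so the usual approach is to serve a separate robots.txt from the subdomain itself. A minimal sketch, assuming the subdomain is something like blog.example.com (the actual hostname isn't given in the question):

# served at blog.example.com/robots.txt (hypothetical host)
User-agent: *
Disallow: /

The root domain keeps its own robots.txt, so only the subdomain is blocked.
-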
Block all search results (dynamic) in robots.txt?
I know that Google does not want to index "search result" pages for a lot of reasons (dup content, dynamic URLs, blah blah). I recently optimized the entire IA of my sites to have search-friendly URLs, which includes search result pages. So, my search result pages changed from: /search?12345&productblue=true&id789 to /product/search/blue_widgets/womens/large As a result, Google started indexing these pages thinking they were static (no opposition from me :)), but I started getting WMT messages saying they are finding a "high number of URLs being indexed" on these sites. Should I just block them altogether, or let it work itself out?
Intermediate & Advanced SEO | rhutchings0
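For reference, if blocking wins out, a minimal robots.txt sketch based on the URL pattern quoted in the question (assuming all rewritten search-result pages live under /product/search/):

User-agent: *
Disallow: /product/search/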