Confirming robots.txt rules for deep directories
-
Just want to make sure I understand exactly what I am doing.
If I place this in my robots.txt:
Disallow: /root/this/that
By doing this I want to make sure that I am ONLY blocking the directory /that/ and anything inside it. I want to make sure that /root/this/ still stays in the index; it's just the /that/ directory I want gone.
Am I correct in understanding this?
-
That's right!
Disallow: /root/this/ will block the complete directory, whereas
Disallow: /root/this/that will only block the "that" directory within "this".
Hope this helps!
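For illustration, here is a minimal robots.txt sketch using the paths from the question (the User-agent line is an assumption, added because Disallow rules only take effect inside a User-agent group):

User-agent: *
# Blocks /root/this/that and everything beneath it; /root/this/ itself stays crawlable
Disallow: /root/this/that

One caveat: Disallow rules match by URL prefix, so the rule above would also block a sibling path such as /root/this/that-other-page. Ending the rule with a trailing slash (Disallow: /root/this/that/) limits it to that directory and its contents. Also keep in mind that robots.txt controls crawling rather than indexing, so a blocked URL can still appear in search results if other sites link to it.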
Related Questions
-
Text to code ratio <10% warning from website audit by SiteChecker.Pro - how important is it?
Hi to everyone, I used Sitechecker.Pro for a website audit of a client website https://bizpages.org and there was this warning (not an error!): TEXT TO CODE RATIO <10% https://sitechecker.pro/app/main/project/1839063/audit/summary How important is this for achieving good rankings? What are good ratios? I understand that more text needs to be added to improve it?
Technical SEO | astweb
-
No: 'noindex' detected in 'robots' meta tag
Pages on my site show "No: 'noindex' detected in 'robots' meta tag". However, when I inspect the pages' HTML, it does not show noindex. In fact, it shows index, follow. The majority of pages show the error and are not indexed by Google... Not sure why this is happening. The page below in Search Console shows the error above...
Technical SEO | Sean_White_Consult
-
422 vs 404 Status Codes
We work with an automotive industry platform provider, and whenever a vehicle is removed from inventory, a 404 error is returned. Because inventory moves so quickly, we have a host of 404 errors in Search Console. The fix that the platform provider proposed was to return a 422 status code instead of a 404. I'm not familiar with how a 422 may impact our optimization efforts. Is this a good approach, since there is no scalable way to 301 redirect all of those dead inventory pages?
Technical SEO | AfroSEO
-
How to tell when a directory backlink or other backlink is worthy of the disavow tool? Especially when a keyword is not ranking past where it should.
Hello, I jumped aboard as SEO for a client who seems to have been hit by Panda and Penguin back in April 2012. The Panda part I feel I've fixed by creating better content, combining pages on the same topic into one, and basically creating a better content experience that relates better to the search terms users are searching for. Once the site was redesigned and relaunched, all keywords improved minus one: the main keyword they want to rank for. I created a landing page for it that is very nicely optimized for that keyword and its brothers and sisters; however, that page isn't used by Google since it's brand new with a PA of 1. Doing a backlink audit, I found 102 links out of 400 using the same anchor text as the keyword they want to rank for; they also have synonym anchor text on other links, but not quite as much. Most of those 102 domains using the main keyword anchor text are directories, and in my opinion I'd declare all of them spam. However, there are a few with DAs higher than 50, making me a little more nervous to disavow, since I want to make sure we get out of the penalty if we were hit by Penguin, but I also don't want to ruin the rankings for other keywords we're doing better with, since they are long-tails and short, but very relevant to users. What is the best way to determine if a site/directory is spammy enough that it's penalizing you, and how could I approach the anchor text issue with backlinks? 99% of these links I cannot have changed; since they're directories, I doubt many have had a human touch them in a while. Sidenote: If you're going to post a link as a response, try to summarize what that link will be about, as many times links are given as an answer but end up not really providing the meat we were seeking. Thank you!
Technical SEO | Deacyde
-
Doubt between sub-directory and sub-domain for developing a blog for my business website
Hi, I am fairly familiar with SEO, but there is still some confusion in my mind. I am in the process of making a blog for my business website. I am confused about whether my blog should be in a sub-directory (www.metaoption.com/blog) or on a sub-domain (blog.metaoption.com). There are many articles on this topic, but they have created much confusion. Please suggest the best option and also give me the reason behind it if you can. Thanks
Technical SEO | Perfect007
-
Meta-robots Nofollow on logins and admins
In my Moz SEO reports I am getting over 400 errors flagged as Meta-robots Nofollow. These all lead to my admin login page, which I do not want robots in. Should I put some code on these pages so the robots know this and don't attempt to crawl them, so I stop getting these errors in my reports? (See the robots.txt sketch below.)
Technical SEO | Endora
-
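For the meta-robots question above, a minimal robots.txt sketch (assuming a hypothetical /admin/ login path; substitute the real login URL path) might look like:

User-agent: *
# Hypothetical admin/login path; discourages crawling of the login area
Disallow: /admin/

Note that this only discourages crawling: it does not remove pages that are already indexed, and crawlers blocked here will no longer see any meta-robots tags on those pages.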
406 Error Code?
Hi, The Crawl Diagnostics section of my campaign reporting is displaying a 406 error code on almost all the PDF files of my website. But I can access them normally and there is no problem while doing so. Then why this error status code? Regards, Shailendra Sial
Technical SEO | IM_Learner
-
Subdomain Robots.txt
I have a subdomain (a blog) whose tags and categories are being indexed when they should not be, because they are creating duplicate content. Can I block them using a robots.txt file? Can I, or do I need to, have a separate robots.txt file for my subdomain? If so, how would I format it? Do I need to specify that it is a subdomain robots file, or will the search engines automatically pick this up? (See the sketch below.) Thanks!
Technical SEO | JohnECF
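For the subdomain question above, a minimal sketch of a robots.txt served at the root of the blog subdomain (assuming hypothetical /tag/ and /category/ paths, which would need to match the blog's actual URL structure):

User-agent: *
# Hypothetical tag and category archive paths on the blog subdomain
Disallow: /tag/
Disallow: /category/

Each hostname, including a subdomain, serves its own robots.txt at its own root (e.g. blog.example.com/robots.txt), so no special subdomain declaration is needed inside the file; search engines pick it up per host automatically. Since robots.txt blocks crawling rather than indexing, a meta robots noindex tag is the more direct way to remove tag and category pages that are already in the index.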