Blocking robots.txt
-
Do you know of any methods to block the robots.txt file from external users?
-
You can block every IP except Googlebot's on the server side for any file, but it may lead to a ban, because it amounts to cloaking.
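A minimal sketch of that idea, assuming a Python/Flask front end; the framework choice, the function name, and the Disallow path are illustrative, not anything from the original post. It follows Google's documented advice to verify Googlebot by reverse DNS rather than by a hard-coded IP list:

    import socket

    from flask import Flask, abort, request

    app = Flask(__name__)

    def is_verified_googlebot(ip: str) -> bool:
        """Verify a crawler IP the way Google documents it: reverse-DNS the IP,
        require the hostname to end in googlebot.com or google.com, then confirm
        the forward lookup of that hostname resolves back to the same IP."""
        try:
            host = socket.gethostbyaddr(ip)[0]
            if not host.endswith((".googlebot.com", ".google.com")):
                return False
            return ip in socket.gethostbyname_ex(host)[2]
        except OSError:
            return False

    @app.route("/robots.txt")
    def robots():
        # Serve robots.txt only to verified Googlebot; everyone else gets a 403.
        if not is_verified_googlebot(request.remote_addr):
            abort(403)
        return "User-agent: *\nDisallow: /private/\n", 200, {"Content-Type": "text/plain"}

Note that the cloaking risk stands either way: serving different responses to Google and to human visitors is exactly what can get a site penalized.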
-
There is no setting or feature to do this, but you can do it with code.
You could detect the user agent and generate robots.txt dynamically depending on the user agent.
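A minimal sketch of that approach, again assuming a Flask app; the route handler name and the rules served are illustrative:

    from flask import Flask, Response, request

    app = Flask(__name__)

    @app.route("/robots.txt")
    def robots_txt():
        # Pick a robots.txt body based on the requesting user agent.
        ua = request.headers.get("User-Agent", "").lower()
        if "googlebot" in ua:
            # Rules you only want Google to see (paths are made up for illustration).
            body = "User-agent: Googlebot\nDisallow: /private/\n"
        else:
            # Generic rules served to everyone else.
            body = "User-agent: *\nDisallow: /\n"
        return Response(body, mimetype="text/plain")

Keep in mind that user-agent strings are trivially spoofed, so this hides nothing from a determined visitor, and it carries the same cloaking risk mentioned in the previous answer.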
But I can't see why you would want to do this.
-
Hi Zsolt
I don't believe there is a way to do this, not in a Windows environment anyway.
Robots.txt has to be hosted at website root level, as in example.com/robots.txt, which is where search engine bots look for it. If a bot can find it (which it has to, for the file to do anything), then a person can too.
If there are pages or folders that you don't want anybody to know about, leave them out of robots.txt and instead edit the meta robots tag on each page concerned, e.g. to noindex, follow (<meta name="robots" content="noindex, follow">) or whatever is appropriate.
Regards
Simon
Related Questions
-
Robots.txt on pages with a 301 redirect
We currently have a series of help pages that we would like to disallow in our robots.txt. The thing is that these help pages are located on our old website, which now has a 301 redirect to the current site. What is the proper way to go about this? 1. Add the pages we want to disallow to the robots.txt of the new website? 2. Break the redirect momentarily and add the pages to the robots.txt of the old one? Thanks
Technical SEO | Kilgray
-
What are the negative implications of listing URLs in a sitemap that are then blocked in the robots.txt?
In running a crawl of a client's site, I can see several URLs listed in the sitemap that are then blocked in the robots.txt file. Other than perhaps using up crawl budget, are there any other negative implications?
Technical SEO | richdan
-
Is using JavaScript-injected text in line with best practice for making blocks of text non-crawlable?
I have an ecommerce website that has common text on all the product pages, e.g. delivery and returns information. Is it OK to use non-crawlable, JavaScript-injected text as a method to make this content invisible to search engines? Or is this method frowned upon by Google? By way of background: I'm concerned about duplicate/thin content, so I want to tackle this by reducing this common text as well as boosting the unique content on these pages. Any advice would be much appreciated.
Technical SEO | Coraltoes77
-
Block Quotes and Citations for duplicate content
I've been reading about the proper use of block quotes and citations lately, and wanted to see if I was interpreting it the right way. This is what I read: http://www.pitstopmedia.com/sem/blockquote-cite-q-tags-seo So basically my question is: if I wanted to reference Amazon's or another store's product reviews, could I use the blockquote and citation tags around their content so it doesn't look like duplicate content? I think it would be great for my visitors, but also for the source, as I am giving them credit. It would also be a good source to link to on my product pages, as I am not competing with the manufacturer for sales. I could also do this for product information straight from the manufacturer. I want to do this for a contact lens site. I'd like to use Acuvue's reviews from their website, as well as some of their product descriptions. Of course I have my own user reviews and content for each product on my website, but I think some official copy could do well. Would this be the best method? Is this how Rottentomatoes.com does it? On every movie page they have 2-3 sentences from 50 or so reviews, and not much unique content of their own. Cheers, Vinnie
Technical SEO | vforvinnie
-
Client accidentally blocked entire site with robots.txt for a week
Our client was having a design firm do some website development work for them. The work was done on a staging server that was blocked with a robots.txt to prevent duplicate content issues. Unfortunately, when the design firm made the changes live, they also moved over the robots.txt file, which blocked the good, live site from search for a full week. We saw the error (!) as soon as the latest crawl report came in. The error has been corrected, but... Does anyone have any experience with a snafu like this? Any idea how long it will take for the damage to be reversed and the site to get back in the good graces of the search engines? Are there any steps we should take in the meantime that would help to rectify the situation more quickly? Thanks for all of your help.
Technical SEO | pixelpointpress
-
What is the best method to block a sub-domain, e.g. staging.domain.com/ from getting indexed?
Now that Google considers subdomains as part of the TLD, I'm a little leery of testing robots.txt on staging.domain.com with something like:

    User-agent: *
    Disallow: /

for fear it might get www.domain.com blocked as well. Has anyone had any success using robots.txt to block sub-domains? I know I could add a meta robots tag to the staging.domain.com pages, but that would require a lot more work.
Technical SEO | fthead9
-
How to block/notify Google that your domain has been added to sites with very low trustworthiness?
Hey guys, I am writing to the SEOmoz community because a problem occurred that I do not know how to solve: my domain (xyz.com) appeared on very strange sites with very low trustworthiness (some even blocked by Google). Checking the sites, I found out that all of the pictures had ALT=xyz.com. Could this hurt my site's position in the Google rankings? How can I prevent such actions, and what should I do? Thanks for your help in advance!
Technical SEO | Kajmany
-
Sitelinks: anchor text and blocking
1. Does anyone know where Google pulls the anchor text for the organic sitelinks? Is there a way to manipulate the anchor text of the sitelinks to get our more important pages to stick out more (capitalization, punctuation, etc.)? 2. If I block a few of my sitelinks from showing, will Google replace them with new sitelinks, or will I be left with fewer? Thanks! Srs
Technical SEO | Morris77