Is there any value in having a blank robots.txt file?
-
I've read an audit in which the writer recommended creating and uploading a blank robots.txt file; there was no file in place at the time. Is there any merit in having a blank robots.txt file?
What is the minimum you would include in a basic robots.txt file?
-
I know this is four years old, but there's value in having a blank robots.txt as some tools (including the latest version of the Moz crawler) will baulk at sites without a robots.txt file.
-
Thanks for both of your replies. As per my question, it was about whether there is any value in having a blank robots.txt file. Philipp's answer was right on the money.
-
I mentioned the same thing: "User-agent: *" means the section applies to all robots, and "Disallow: /" tells the robot that it should not visit any pages on the site. I had also added that more and more people use robots.txt to disallow access to some administration or private folders of the site.
-
There's no use in having a blank robots.txt. The minimum requirement, if you want your site crawled, is this:
User-agent: *
Allow: /
Note that Gagan's example above will block the entire site.
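For clarity, a quick sketch of the two forms side by side (each is a complete robots.txt on its own); the permissive one differs from the blocking one only by the trailing slash:

# Permissive: an empty Disallow value blocks nothing
User-agent: *
Disallow:

# Blocking: a lone slash disallows the whole site
User-agent: *
Disallow: /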
-
Hi, this is what I found:
" Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called_The Robots Exclusion Protocol_. It works likes this: a robot wants to vists a Web site URL, say http://www.example.com/welcome.html. Before it does so, it firsts checks for http://www.example.com/robots.txt, and finds:
User-agent: *
Disallow: /
The "<tt>User-agent: *</tt>" means this section applies to all robots. The "<tt>Disallow: /</tt>" tells the robot that it should not visit any pages on the site."
More and more people use robots.txt to disallow access to some administration or private folders of the site. If you don't want to hide anything, then maybe you can leave it blank.
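To illustrate that use with a sketch (the folder names here are hypothetical), such a robots.txt might look like:

# Hypothetical folder names, for illustration only
User-agent: *
Disallow: /admin/
Disallow: /private/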
Related Questions
-
CSS and JavaScript files - website redesign project
UPDATED: We ran a crawl of the old website and have a list of CSS and JavaScript links that are part of the old website content. As the website is being redesigned from scratch, I don't think these old CSS and JavaScript files are being used for anything on the new site. I've read elsewhere online that you should redirect "all" content files when launching/migrating to a new site. We are debating whether this is needed for CSS and JavaScript files. Examples: (A) http://website.com/wp-content/themes/style.css (B) http://website.com/wp-includes/js/wp-embed.min.js?ver=4.8.1
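If the decision were to redirect them, a minimal sketch in .htaccess (old paths taken from the examples above; the new destinations are hypothetical placeholders) could be:

# Hypothetical 301s for retired asset URLs; destination paths are placeholders
Redirect 301 /wp-content/themes/style.css /assets/css/style.css
Redirect 301 /wp-includes/js/wp-embed.min.js /assets/js/wp-embed.min.js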
Technical SEO | CREW-MARKETING
-
Adding directories to robots disallow causes pages to have blocked resources
In order to eliminate duplicate/missing title tag errors for a directory (and sub-directories) under www that contains our third-party chat scripts, I added the parent directory to the robots disallow list. We are now receiving a blocked resource error (in Webmaster Tools) on all of the pages that link to a JavaScript file (for live chat) in the parent directory. My host is suggesting that the warning is only a notice and we can leave things as they are without worrying about the pages being de-ranked/penalized. I am wondering if this is true, or if we should remove the one directory that contains the JS from the robots file and find another way to resolve the duplicate title tags?
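One hedged option, with placeholder paths since the real ones aren't given: keep the directory disallowed but explicitly allow the script itself. Major crawlers apply the most specific matching rule, so the longer Allow wins over the Disallow:

# Placeholder paths; the more specific Allow overrides the Disallow
User-agent: *
Allow: /chat-scripts/livechat.js
Disallow: /chat-scripts/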
Technical SEO | miamiman100
-
Best way to retain backlink value when moving site?
Hi all, I want to get some opinions on what the best practice is when transferring backlink value from an old site to a new one. On the old site, I currently have a product page, and this particular product has multiple models all listed on one single page. However, on the new site, every model of this particular product has its own page. These product model pages would have relatively similar content apart from several key details which differentiate the models. Firstly, would you recommend this splitting of models of the same product onto different pages? If so, my initial thought is to 301 redirect the old product page to the new model page that is most popular, and add rel=canonical tags to the other model pages. Would you consider this best practice? Or are there better ways of doing this that retain backlink value without risking a penalty for possible content duplication? Thanks! Jac - sent from my manager's account.
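For the 301 part of that plan, a minimal sketch with hypothetical URLs (the actual pages aren't named in the question):

# Hypothetical: the old combined product page 301s to the most popular model's page
Redirect 301 /products/widget /products/widget-model-a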
Technical SEO | RuchirP
-
Magento robots.txt & overly dynamic URLs
How can I block all URLs on a Magento store that have 2 or more dynamic parameters in them, given that the parameters contain the attribute name rather than some uniform ID? Would something like: Disallow: /?&* work? The only thing that is constant throughout all the custom parameters is that they are separated with "&". Thanks 🙂
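A hedged sketch using the wildcard syntax that Google and Bing honor: any URL with two or more parameters contains a "?" followed somewhere by an "&", so matching on that pair catches them regardless of the parameter names:

# Matches any URL with a "?" followed later by an "&", i.e. two or more parameters
User-agent: *
Disallow: /*?*&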
Technical SEO | tilenkrivec
-
How should I properly set up my .htaccess file?
I have searched Google for 'how to set up .htaccess file' and it seems that every website has some variation. For example:

RewriteCond %{HTTP_HOST} ^yoursite.com
RewriteRule ^(.*)$ http://www.yoursite.com/$1 [R=permanent,L]

On SEOMOZ someone posted this:

RewriteCond %{HTTP_HOST} !^www.yoursite.com [NC]
RewriteRule (.*) http://www.yoursite.com/$1 [L,R=301]

On yet another website, I found this:

RewriteEngine On
RewriteCond %{HTTP_HOST} !^your-site.com$ [NC]
RewriteRule ^(.*)$ http://your-site.com/$1 [L,R=301]

As you can see, there are slight differences. Which one do I use? I'm on Apache on CentOS, and I have HTML5 websites and several Joomla! websites. Would the .htaccess file be different for both?
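For what it's worth, all three variants are after the same result; a consolidated sketch that forces the www host (swap in your own domain, and note the escaped dots in the condition) would be:

# Sketch: redirect any non-www host to www.yoursite.com
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.yoursite\.com$ [NC]
RewriteRule ^(.*)$ http://www.yoursite.com/$1 [L,R=301]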
Technical SEO | maxduveen
-
How to allow one directory in robots.txt
Hello, is there a way to allow a certain child directory in robots.txt but keep all others blocked? For instance, we've got external links pointing to /user/password/, but we're blocking everything under /user/. And there are too many /user/somethings/ to just block every one BUT /user/password/. I hope that makes sense... Thanks!
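A sketch using the paths from the question; major crawlers apply the most specific (longest) matching rule, so the Allow takes precedence for that one child directory:

User-agent: *
Allow: /user/password/
Disallow: /user/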
Technical SEO | poolguy
-
Robots exclusion
Hi All, I have an issue whereby print versions of my articles are being flagged as "duplicate" content / page titles. The easiest way around this, I feel, is to just add them to my robots.txt document with a disallow. Here is my URL makeup:

Normal article: www.mysite.com/displayarticle=12345
Print version: www.mysite.com/displayarticle=12345&printversion=yes

I know that having dynamic parameters in my URL is not best practice, to say the least, but I'm stuck with this for the time being... My question is, how do I add just the print versions of articles to my robots file without disallowing the articles too? Can I just add the parameter to the document like so? Disallow: &printversion=yes I also know that I can add a meta noindex, nofollow tag into the head of my print versions, but I feel a robots.txt disallow will be somewhat easier... Many thanks in advance. Matt
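For what it's worth, a hedged sketch: robots.txt rules match from the start of the URL path, so the bare parameter on its own won't match anything, but a leading wildcard (supported by Google and Bing) makes it work:

# Blocks any URL containing the print-version parameter
User-agent: *
Disallow: /*&printversion=yes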
Technical SEO | Horizon
-
What to do about "blocked by meta-robots"?
The crawl report tells me "Notices are interesting facts about your pages we found while crawling". One of these interesting facts is that my blog archives are "blocked by meta robots". Articles are not blocked, just the archives. What is a "meta" robot? I think it's just normal (since the article need only be crawled once), but I want a second opinion. Should I care about this?
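For reference, a "meta robots" directive is a tag in a page's <head>; a typical one on blog archive pages (the exact values depend on your theme) looks like:

<meta name="robots" content="noindex, follow">

With noindex, follow, the archive page stays out of the index while the links on it are still crawled, which matches the behavior described.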
Technical SEO | GPN