Role of robots.txt and Search Console parameter settings
-
Hi, wondering if anyone can point me to resources or explain the difference between these two. If a site has URL parameters disallowed in robots.txt, is it redundant to change the Search Console URL parameters settings to anything other than "Let Googlebot Decide"?
-
Thank you! That helps a lot.
-
Regarding NOINDEX vs. DISALLOW, there is a significant difference between the two.
If you disallow a page in robots.txt, you are asking the search engine not to crawl that page at all, whereas if you NOINDEX it in the page head, the search engine may still crawl the page but should not index it.
This difference has a few practical impacts. For one, if you use NOINDEX but still allow the search engine to FOLLOW, it may discover pages which otherwise might not have been discovered (if that page has unique links, for example). So in that case, you might prefer to use (NOINDEX, FOLLOW) if you want that discovery to happen. On the other hand, if you have many pages and you are trying to use the search engine's crawl "budget" wisely, you might in some cases prefer to disallow some paths in the robots.txt file.
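To make the two options concrete, here is a minimal sketch (the /internal-search/ path is just a hypothetical example). The robots.txt version, which asks crawlers to stay out of the path entirely, looks like:

User-agent: *
Disallow: /internal-search/

whereas the page-level directive, which lets the crawler fetch the page and follow its links but asks it not to index it, goes in the head of each page:

<meta name="robots" content="noindex, follow">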
It's also common to use robots.txt to disallow files where you do not have control over the response: non-HTML files, where you may not be able to easily add noindex directives, or dynamic pages your web application serves but does not let you administer head tags for.
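For example, if your application generates PDF exports that you cannot attach noindex directives to, robots.txt rules along these lines (the paths here are hypothetical) are a common workaround:

User-agent: *
Disallow: /exports/
Disallow: /*.pdf$

(Google supports the * wildcard and the $ end-of-URL marker in robots.txt, though not every crawler does.)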
All of that said, robots.txt files have been shrinking ever since the search engines began to render JavaScript, since they now need access to a lot of resource files which they previously did not. Much of the old advice about disallowing script and admin folder paths may be obsolete now, if those files are needed to render pages properly.
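For instance, rather than blanket rules like Disallow: /scripts/ or Disallow: /includes/ that may block files needed for rendering, a narrower pattern keeps the back end blocked while still allowing a resource the front end relies on. This example mirrors what recent WordPress versions generate by default; the paths will vary by platform:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php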
-
Thanks so much for the reply. I am still struggling to understand when it's best to use robots.txt.
I think I understand that URL parameters are best handled in the Search Console URL parameters tool, and that if you want to keep a page out of the index, it's best to use meta noindex rather than blocking it in robots.txt.
What would be an example of when you would want to disallow something in robots.txt?
-
For one, the GSC functionality is much easier to use for dealing with URLs that have multiple query string parameters. In robots.txt, you often have to set up a broad disallow followed by more specific allows to achieve the same result, which can be managed more easily in GSC.
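As a rough sketch of what that looks like in robots.txt (the parameter names are hypothetical):

User-agent: *
Disallow: /*?
Allow: /*?page=

This blocks any URL containing a query string but allows those where ?page= appears. A URL like /shop?sort=asc&page=2 would still be blocked, because the Allow rule doesn't match it, and Google resolves conflicts by preferring the longest (most specific) matching rule. Getting several parameters right this way takes some care, which is why the GSC parameter settings are often easier.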
Also, GSC is useful for the "representative URL" setting, for cases where your pages don't necessarily get crawled without the parameter present at all, but you only want one version of the page indexed if the crawler encounters multiple versions. This works a little like a dynamic canonical, except you are not specifying which version.
Related Questions
-
Search Console rejecting XML sitemap files as HTML files, despite them being XML
Hi Moz folks, We have launched an international site that uses subdirectories for regions and have had trouble getting pages outside of USA and Canada indexed. Google Search Console accounts have finally been verified, so we can submit the correct regional sitemap to the relevant Search Console account. However, when submitting non-USA and CA sitemap files (e.g. AU, NZ, UK), we are receiving a submission error that states, "Your Sitemap appears to be an HTML page," despite them being .xml files, e.g. http://www.t2tea.com/en/au/sitemap1_en_AU.xml. Queries on this suggest it's a W3 Cache plugin problem, but we aren't using WordPress; the site is running on Demandware. Can anyone guide us on why Google Search Console is rejecting these sitemap files? Page indexation is a real issue. Many thanks in advance!
Technical SEO | SearchDeploy
-
What should I consider before setting up a sub domain?
Morning all! We've just been approached by IT. They've been asked to develop an online 'portal' where clients can upload and download materials. IT will be developing a portal that sits on the company network perimeter (hosted on our internal servers). The concept is that 3rd parties can get and update information regarding cases in progress; the first use will be for agencies who will retrieve records via the portal and then post reports after a consultation. However, I would like an automatic link that forwards to the portal from the web address oursite.com/dave. We will look to create a robots.txt file and anything else needed to prevent listing/indexing. Does any of the above mess with your SEO? The Directors have asked if they can have this on a sub-domain of our site. Is this wise? And are there any major SEO considerations for my team to worry about? Better still, have any of you had to deal with this before? If so, what happened? All the best, John
Technical SEO | Muhammad-Isap
-
XML Sitemap and unwanted URL parameters
We currently don't have an XML sitemap for our site. I generated one using Screaming Frog and it looks OK, but it also contains my tracking URL parameters (ref=), which I don't want Google to use, as specified in GWT. Cleaning it will require time and effort which I currently don't have. I also think that having one could help us on Bing. So my question is: is it better to submit a "so-so" sitemap than to have none at all, or are the risks just too high? Could you explain what could go wrong? Thanks!
Technical SEO | jfmonfette
-
Google insists robots.txt is blocking... but it isn't.
I recently launched a new website. During development, I'd enabled the option in WordPress to prevent search engines from indexing the site. When the site went public (over 24 hours ago), I cleared that option. At that point, I added a specific robots.txt file that only disallowed a couple of directories of files. You can view the robots.txt at http://photogeardeals.com/robots.txt. Google (via Webmaster Tools) is insisting that my robots.txt file contains a "Disallow: /" on line 2 and that it's preventing Google from indexing the site and preventing me from submitting a sitemap. These errors are showing both in the sitemap section of Webmaster Tools and in the Blocked URLs section. Bing's webmaster tools are able to read the site and sitemap just fine. Any idea why Google insists I'm disallowing everything even after telling it to re-fetch?
Technical SEO | ahockley
-
How can you tell if a 301 has been set up correctly?
How can you tell if a 301 has been set up correctly? We had a .co.uk site that has moved over to a .com. However, we have had problems with the old webmaster handing over control of the .co.uk, but he has said he has implemented a 301 redirect to the new .com site, and if you click on the .co.uk it does transfer you to the .com site. How can we check that it is a 301 that has been set up and not another sort of redirect?
Technical SEO | ocelot
-
Robots.txt query
Quick question: if this appears in a client's robots.txt file, what does it mean? Disallow: /*/_/ Does it mean no pages can be indexed? I have checked and there are no pages in the index, but it's a new site too, so I'm not sure if this is the problem. Thanks, Karen
Technical SEO | Karen_Dauncey
-
Subdomain Robots.txt
If I have a subdomain (a blog) that is having its tags and categories indexed when they should not be (because they create duplicate content), can I block them using a robots.txt file? Can I, or do I need to, have a separate robots.txt file for my subdomain? If so, how would I format it? Do I need to specify that it is a subdomain robots file, or will the search engines automatically pick this up? Thanks!
Technical SEO | JohnECF