I have two robots.txt files, one for the www and one for the non-www version. Will that be a problem?
-
There are two robots.txt files: one for the www version and another for the non-www version, although I have moved to the non-www version.
-
It won't affect your SEO; you just don't need the file on the www version any more.
-
Hi ramb,
Short answer: no, it won't affect your ability to rank in Google, unless both sites (the non-www and the www version) are left accessible to Google and end up competing for the same search terms.
If you can, make sure there is a redirect rule so that everything on the hostname you dropped goes to the one you kept (see the sketch near the end of this answer).
It does puzzle me that you aren't redirecting one version completely to the other.
Two possibilities come to mind:
- You can't redirect the whole hostname due to some app or technical need. In this case both versions, if accessible to Google, will be treated as different sites, so you must make sure that each robots.txt file is correct for its subdomain.
- You have a separate website that contains different content from the www version (this usually happens with subdomains serving different page types, such as products.abc.com and categories.abc.com). In this case, be sure you know what you want blocked on each host and keep each robots.txt file on its own subdomain.
Keep in mind that robots.txt only controls where you don't want Googlebot to go on the public version of your website. When a page or group of pages is blocked in robots.txt, Google stops fetching them and can no longer tell whether they have what it takes to rank for a given search term. Those pages may rank lower, and users will see a note on the listing in the search results, leading to a lower CTR.
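For example, on Apache a site-wide 301 of this kind can be set up in .htaccess along these lines (a sketch only, assuming mod_rewrite is available and using abc.com as a stand-in for your domain):

    # Send every www URL to its non-www equivalent with a permanent redirect
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^www\.abc\.com$ [NC]
    RewriteRule ^(.*)$ https://abc.com/$1 [R=301,L]

With something like that in place, Google only ever ends up on one hostname, so only one robots.txt file matters.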
Hope it helps.
Best of luck.
Gaston
-
Are you redirecting everything on www to non-www? If so, you don't really need a robots.txt to be served for the www subdomain: when the www robots.txt URL returns a 301, Google follows the redirect rather than treating it as a separate file.
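You can check which of the two you have with a quick header request (abc.com is a placeholder):

    # A 301 here means Google follows the redirect instead of reading a www file
    curl -I https://www.abc.com/robots.txt

If that returns a 200 with the file body instead, Google treats it as the www subdomain's own robots.txt.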
-
Hi Gaston,
Thank you for your response. Currently, the www version of the site is redirected to the non-www version, which is the primary (or root) domain.
But the problem is that I have two robots.txt files running for the same site, i.e. the same robots.txt file loads on both the www and non-www versions (for example, https://www.abc.com/robots.txt and https://abc.com/robots.txt).
Does it affect my site's SEO?
Should I redirect the www version of the file to the non-www version?
Your feedback will be highly appreciated. Thank you,
R.
-
Hi ramb,
It's totally fine to have different robots.txt files for different subdomains.
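For instance, Google applies to each hostname only the rules of the file served on that hostname (placeholder domains, sketch only):

    # Served at https://abc.com/robots.txt - applies only to abc.com
    User-agent: *
    Disallow: /admin/

    # Served at https://www.abc.com/robots.txt - applies only to www.abc.com
    User-agent: *
    Disallow: /drafts/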
That said, http://domain.com and http://www.domain.com are different subdomains; consider the non-www one to be the full root domain. In case it's needed, here is Google's official resource about robots.txt:
Learn about robots.txt files - Search Console Help
Hope it helps.
Best of luck.
Gaston
Related Questions
-
Do I have a problem with missing pages in Screaming Frog?
We have category pages, and some of those pages are paginated because we have additional items. Screaming Frog could not find the items that were after page 1. Is this a problem for Google? These item pages are still in the sitemap. I am sure Google can find them to index them, but does it hurt rankings at all?
Technical SEO | EcommerceSite
-
Google indexing despite robots.txt block
Hi,
This subdomain has about 4'000 URLs indexed in Google, although it's blocked via robots.txt: https://www.google.com/search?safe=off&q=site%3Awww1.swisscom.ch&oq=site%3Awww1.swisscom.ch
This has been the case for almost a year now, and it doesn't look like Google tends to respect the blocking in http://www1.swisscom.ch/robots.txt
Any clues why this is, or what I could do to resolve it? Thanks!
Technical SEO | zeepartner
-
Adding multi-language sitemaps to robots.txt
I am working on a revamped multi-language site that has moved to Magento. Each language runs off the core code, so there are no sub-directories per language. The developer has created sitemaps, which have been uploaded to their respective GWT accounts. They have placed the sitemaps in new directories such as:
/sitemap/uk/sitemap.xml
/sitemap/de/sitemap.xml
I want to add the sitemaps to robots.txt but can't figure out how to do it. Also, should they have placed the sitemaps in a single location, with the file name identifying each language:
/sitemap/uk-sitemap.xml
/sitemap/de-sitemap.xml
What is the cleanest way of handling these sitemaps, and can/should I get them into robots.txt?
Technical SEO | MickEdwards
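For what it's worth, robots.txt accepts any number of Sitemap lines, and the standard shows them as fully qualified URLs; a sketch using the directories from the question, with example.com standing in for the real domain:

    # At https://example.com/robots.txt - one line per sitemap, full URLs
    Sitemap: https://example.com/sitemap/uk/sitemap.xml
    Sitemap: https://example.com/sitemap/de/sitemap.xml

-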
Will an XML sitemap override a robots.txt
I have a client whose robots.txt file is blocking an entire subdomain, entirely by accident. Their original solution, not realizing the robots.txt error, was to submit an XML sitemap to get their pages indexed. I did not think this tactic would work, as the robots.txt would take precedence over the XML sitemap. But it worked... I have no explanation as to how or why. Does anyone have an answer to this, or any experience with a website that has had a clear Disallow: / for months, yet somehow has pages in the index?
Technical SEO | KCBackofen
-
Two different canonical tags on one page
Due to an error, some of my pages now have two canonical tags on them. One is correct and the other goes to a nonsense URL (404 page). I know I should ideally remove the incorrect ones, but it's a big manual job. Are they doing any harm? Can I just leave them there and let Google figure it out? The correct ones are higher up in the code. Will this make a difference? Any help appreciated.
Technical SEO | ShearingsGroup
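To picture the situation: two canonical link elements in the same head, something like this (example.com as a placeholder):

    <head>
      <link rel="canonical" href="https://www.example.com/correct-page/">
      <!-- leftover erroneous tag pointing at a 404 URL -->
      <link rel="canonical" href="https://www.example.com/broken-page/">
    </head>

-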
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks; we're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
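For reference, the directive being debated (not necessarily the right call, given the advice in the video above) is a short one; a sketch assuming the folder sits at the site root:

    # Block crawling of everything under /js/ - Matt Cutts advises against this
    User-agent: *
    Disallow: /js/

-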
Robots.txt Sitemap with Relative Path
Hi Everyone, In robots.txt, can the sitemap be indicated with a relative path? I'm trying to roll out a robots file to ~200 websites, and they all have the same relative path for a sitemap but each is hosted on its own domain. Basically I'm trying to avoid needing to create 200 different robots.txt files just to change the domain. If I do need to do that, though, is there an easier way than just trudging through it?
Technical SEO | MRCSearch
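For what it's worth, both sitemaps.org and Google's documentation show the Sitemap directive with a fully qualified URL, so relative paths aren't reliably supported; each site would need its own absolute line (example.com as a placeholder):

    # Relative paths such as "Sitemap: /sitemap.xml" are outside the spec
    Sitemap: https://example.com/sitemap.xml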