Using one robots.txt for two websites
-
I have two websites that are hosted in the same CMS. Rather than having two separate robots.txt files (one for each domain), my web agency has created one which lists the sitemaps for both websites, like this:
User-agent: *
Disallow:

Sitemap: https://www.siteA.org/sitemap
Sitemap: https://www.siteB.com/sitemap
Is this OK? I thought you needed one robots.txt per website, each providing the URL of that site's sitemap. Will having both sitemap URLs listed in one robots.txt confuse the search engines?
-
Hi @gpainter,
Thanks for your help. I can't see anything specific in that link that says you can't have two sitemaps in one robots.txt. Where it mentions sitemaps it does say "You can specify multiple sitemap fields", although I'm not sure whether that means listing several sitemap URLs, each on its own 'Sitemap:' line, within a single file.
-
@ciehmoz Hey, I've replied to the other thread too.
The best option here is to use a separate robots.txt file for each website.
You could only have shared one robots.txt file if both sites lived on the same host; robots.txt is read per host, so each domain (and each subdomain) needs its own.
Don't forget to include the corresponding sitemap in each new robots.txt file, e.g. as sketched below. Hope this works out, cheers.
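For illustration, assuming the sitemap URLs from the question, the two files might look something like this:

https://www.siteA.org/robots.txt:

User-agent: *
Disallow:
Sitemap: https://www.siteA.org/sitemap

https://www.siteB.com/robots.txt:

User-agent: *
Disallow:
Sitemap: https://www.siteB.com/sitemap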
-
Hey @ciehmoz
Just replied to your other thread; you will need one robots.txt per site. Referring to two sitemaps in one robots.txt will confuse Google.
Info here - https://developers.google.com/search/docs/advanced/robots/robots_txt
Good Luck
-
Related Questions
-
Merging two sites into a new one: best way?
Hi, I have one small blog in a specific niche; let's call it firstsite.com (.com extension), and it's hosted on my server. I am going to take over a second blog in the same niche with lots more links, posts, authority and traffic, but it is on a .info domain; let's call it secondsite.info, and for now it's on a different server. I have a third domain, a .com, where I would like to join both blogs; the domain is better and reflects the niche better, so let's call it thirdsite.com. How should I proceed to get the best result? I was thinking of creating a new account on my server with the domain thirdsite.com, then uploading all the content from secondsite.info, going to Google Webmaster Tools to let them know the site now sits on a new domain, and doing a full 301 redirect. Should it be page by page, or just one 301 redirect? And finally, I would insert the posts (they are not many) from firstsite.com into thirdsite.com and do specific redirects. Is this a good option? Or should I first move secondsite.info to my server, keep updating it, and only a few weeks later make the transition to thirdsite.com? I am worried that it could be too many changes at once.
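For illustration only, assuming an Apache server with mod_rewrite (the question doesn't say what the hosting runs), the "full" redirect and the page-by-page variant might look like this; all domains and paths are placeholders from the question:

# Hypothetical .htaccess on secondsite.info (assumes Apache + mod_rewrite)
RewriteEngine On

# Blanket 301: every URL is sent to the same path on the new domain
RewriteCond %{HTTP_HOST} ^(www\.)?secondsite\.info$ [NC]
RewriteRule ^(.*)$ https://www.thirdsite.com/$1 [R=301,L]

# Page-by-page alternative, for URLs whose paths change on the new site:
# Redirect 301 /old-post/ https://www.thirdsite.com/new-post/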
Technical SEO | delta440
-
Canonical tags: how do I use them?
Hi, I have this coming up on the report for my URL www.in2town.co.uk, but I am not sure how to use the canonical tag. I am using Joomla and would be grateful if anyone could give me advice on how to use this.

Canonical URL Tag Usage (Moderate fix)

Number of canonical tags: 0

Explanation: Although the canonical URL tag is generally thought of as a way to solve duplicate content problems, it can be extremely wise to use it on every (unique) page of a site to help prevent any query strings, session IDs, scraped versions, licensing deals or future developments from potentially creating a secondary version and pulling link juice or other metrics away from the original. We believe the canonical URL tag is a best practice to help prevent future problems, even if nothing is specifically duplicate/problematic today.

Recommendation: Add a canonical URL tag referencing this URL to the header of the page.

Many thanks for your help.
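For illustration, a canonical tag is one line inside the page's <head>; assuming the homepage URL from the question, it might look like this (each page would point at its own preferred URL):

<head>
  <link rel="canonical" href="http://www.in2town.co.uk/" />
</head>

In Joomla this can typically be handled by an SEO extension rather than edited into templates by hand.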
Technical SEO | ClaireH-184886
-
Do You Have To Have Access to Website Code to Use Open Graph
I am not a website programmer, and all of our websites are in WordPress; I never change the code on the back end. Is access to the code a necessity if one wants to use Open Graph?
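For context, Open Graph markup is just a handful of meta tags in the page's <head>; a minimal hypothetical example (all values are placeholders) might be:

<meta property="og:title" content="Page Title" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://www.example.com/page/" />
<meta property="og:image" content="https://www.example.com/image.jpg" />

In WordPress these are typically added by an SEO or social plugin, so editing theme code by hand isn't strictly required.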
Technical SEO | dahnyogaworks
-
BEST Wordpress Robots.txt Sitemap Practice??
Alright, my question comes directly from this article by SEOmoz: http://www.seomoz.org/learn-seo/robotstxt. Yes, I have submitted the sitemap to Google's and Bing's webmaster tools, and I want to add the location of our site's sitemaps. Does that mean I erase everything in the robots.txt right now and replace it with:

User-agent: *
Disallow:
Sitemap: http://www.example.com/none-standard-location/sitemap.xml

??? I ask because WordPress comes with some default disallows like wp-admin, trackback, and plugins. I have also read other questions, but was wondering if this is the correct way to add a sitemap to a WordPress robots.txt:

http://www.seomoz.org/q/robots-txt-question-2
http://www.seomoz.org/q/quick-robots-txt-check
http://www.seomoz.org/q/xml-sitemap-instruction-in-robots-txt-worth-doing

I am using Multisite with the Yoast plugin, so I have more than one sitemap.xml to submit. Do I erase everything in robots.txt and replace it with what SEOmoz recommended? Hmm, that sounds not right. My current file is:

User-agent: *
Disallow:
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-login.php
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /trackback
Disallow: /comments

ERASE EVERYTHING??? And change it to:

User-agent: *
Disallow:

Sitemap: http://www.example.com/sitemap_index.xml
Sitemap: http://www.example.com/sub/sitemap_index.xml

?????????
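One detail worth a sketch: Sitemap lines don't have to replace Disallow rules; the Sitemap field is independent of the user-agent groups, so a file that keeps the WordPress defaults and adds both sitemap locations (URLs as in the question) would look roughly like this:

User-agent: *
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-login.php
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /trackback
Disallow: /comments

Sitemap: http://www.example.com/sitemap_index.xml
Sitemap: http://www.example.com/sub/sitemap_index.xml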
Technical SEO | joony2008
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU), but what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number (?v=1.1... 1.2... 1.3... etc.), and the legacy versions show up in Google Webmaster Tools as 404s. For example:

http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1

Wouldn't it just be easier to prevent Googlebot from crawling the /js/ folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks; we're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
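For reference, the rule under debate would be tiny; blocking the folder for Google's crawler (which also covers the versioned ?v= URLs, since robots.txt matches by URL prefix) would look like this:

User-agent: Googlebot
Disallow: /js/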
Technical SEO | AndreVanKets
-
I have 2 websites with the same content
Hello everyone, this is my first post here on SEOmoz and I have a question that I cannot seem to figure out. Here is my scenario: I have two websites that are identical; the only difference between them is the domain name. This was done a while back for marketing purposes; however, I no longer need the second website. What is the best way to get rid of it? I still have about one paying customer a day convert on this second website and I do not want to lose them; however, I know that I am getting penalized by the search engines because of this duplicate content. Please let me know the best way of going about this. PS: I have read about 301 redirects, canonicalizing URLs, and other methods, but do not know which one to choose. Any help is greatly appreciated!
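As a hypothetical sketch of the 301 route, assuming Apache (the actual setup isn't stated) and a placeholder domain for the site being kept: a permanent, path-preserving redirect from the retired domain hands its visitors and link equity to the surviving site, which also resolves the duplicate content:

# Hypothetical .htaccess on the second (retired) domain - assumes Apache mod_alias
# Sends every request, path included, to the main site with a permanent redirect
RedirectMatch 301 ^/(.*)$ https://www.mainsite.com/$1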
Technical SEO | threebiz
-
Help needed with robots.txt regarding WordPress!
Here is my robots.txt from Google Webmaster Tools. These are the pages that are being blocked, and I am not sure which of these rules to get rid of in order to unblock blog posts from being searched:

http://ensoplastics.com/theblog/?cat=743
http://ensoplastics.com/theblog/?p=240

These category pages and blog posts are blocked, so do I delete the "/?" rule? I am new to SEO and web development, so I am not sure why the developer of this robots.txt file would block pages and posts in WordPress. It seems to me that the reason someone has a blog is so it can be searched and get more exposure for SEO purposes. Is there a reason I should block any pages contained in WordPress?

Sitemap: http://www.ensobottles.com/blog/sitemap.xml

User-agent: Googlebot
Disallow: /*/trackback
Disallow: /*/feed
Disallow: /*/comments
Disallow: /?
Disallow: /*?
Disallow: /page/

User-agent: *
Disallow: /cgi-bin/
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/themes/
Disallow: /trackback
Disallow: /comments
Disallow: /feed
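As an illustrative sketch only: the "Disallow: /*?" pattern blocks every URL containing a query string, which is what catches ?cat=743 and ?p=240 above. Removing it and "Disallow: /?" from the Googlebot group would unblock those category pages and posts while leaving the other rules alone:

User-agent: Googlebot
Disallow: /*/trackback
Disallow: /*/feed
Disallow: /*/comments
Disallow: /page/

Whether /page/ (pagination) should also stay blocked is a separate judgment call.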
Technical SEO | ENSO
-
Subdomain Robots.txt
I have a subdomain (a blog) whose tag and category pages are being indexed when they should not be, because they create duplicate content. Can I block them using a robots.txt file? Can I, or do I need to, have a separate robots.txt file for my subdomain? If so, how would I format it? Do I need to specify that it is a subdomain robots file, or will the search engines automatically pick this up? Thanks!
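To illustrate the formatting question: crawlers simply request /robots.txt on whatever host they are crawling, so a separate plain file served at the subdomain root needs no special subdomain declaration. A hypothetical example (the subdomain name and paths are placeholders, assuming typical blog tag/category URLs):

# Served at http://blog.example.com/robots.txt
User-agent: *
Disallow: /tag/
Disallow: /category/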
Technical SEO | JohnECF