Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Robots.txt on subdomains
-
Hi guys!
I keep reading conflicting information on this and it's left me a little unsure. Am I right in thinking that a website with a subdomain of shop.sitetitle.com will share the same robots.txt file as the root domain?
-
That's about as comprehensive an answer as I could have hoped for. Thanks Ryan, really appreciated.
-
Mostly no. I say 'mostly' because when a site resolves on both www and non-www, both versions are almost always pulling files from the same location (hence the warnings around duplicate content), so www.domain.com/robots.txt and domain.com/robots.txt will both work. That is the most common example of a subdomain appearing to share a robots.txt file. However, subdomains that are set up as their own sites have their own robots.txt; take a look at the many differences between subdomain1-1000.wordpress.com/robots.txt vs. wordpress.com/robots.txt. If you set up a subdomain that isn't just a reflection of your root domain, you'll need to create a robots.txt file for it as well. Cheers!
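To illustrate the answer above: if shop.sitetitle.com is set up as its own site, crawlers will request https://shop.sitetitle.com/robots.txt directly and apply only the rules found in that file to the shop's URLs; the root domain's robots.txt is never consulted for them. A minimal sketch of what that file might contain (the paths and sitemap location here are hypothetical, not taken from the original question):
User-agent: *
Disallow: /cart/
Disallow: /checkout/
Sitemap: https://shop.sitetitle.com/sitemap.xml
The specific rules matter less than where the file lives: it has to be served from the root of the subdomain itself.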
Related Questions
-
Moving E-Commerce Store to Subdomain?
Hi all, We have a customer who currently uses Square for their in-store point-of-sale system as well as for their e-commerce website. From my understanding, a Square site is a watered-down version of Weebly, and is proving to be highly restrictive from an SEO and content structuring standpoint. It's been an uphill battle to try and get traction for their site in SERPs. Would it be a bad idea to move the entire Square online store to a subdomain, and install WordPress on the root domain? This way their online store would remain as-is, but the primary pages on the site would be on WordPress which would give us a lot more control over the content. I just want to make sure this doesn't negatively impact their SEO. Thanks!
Technical SEO | suarezventures
-
Robots.txt Syntax for Dynamic URLs
I want to Disallow certain dynamic pages in robots.txt and am unsure of the proper syntax. The pages I want to disallow all include the string ?Page=. Which is the proper syntax?
Disallow: ?Page=
Disallow: ?Page=*
Disallow: ?Page=
Or something else?
Technical SEO | btreloar
-
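For reference, the form that Google and Bing support for a query parameter like this uses a leading path wildcard; a minimal sketch, assuming ?Page= can appear on any path:
User-agent: *
Disallow: /*?Page=
A trailing * is unnecessary, because a robots.txt rule already matches any URL that begins with the disallowed pattern.
-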
Moving from a subdomain to subfolder
Hello, I am currently working on a site that is leveraging multiple subdomains. I wanted to see whether it's advisable to migrate them into subfolders. One of the subdomains is a .shop and the other is location-specific. Thanks, T
Technical SEO | Tucker_10
-
Does using a reverse proxy to make a subdomain appear as a subdirectory affect SEO?
Using a reverse proxy only makes it appear that a subdomain is really a subfolder; the underlying links remain the same. Does this have any negative (or positive) impact on SEO? Does it make it difficult for the blog's (subdomain's) sitemap or robots.txt file to be properly read by search engines?
Technical SEO | rodelmo4
-
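Worth illustrating for the robots.txt part of this question: once the blog is proxied to a path such as www.example.com/blog/ (a hypothetical setup, not the poster's actual configuration), crawlers read the root domain's robots.txt for those URLs, so any rules and sitemap references for the blog need to live there and use the proxied paths. A minimal sketch of what www.example.com/robots.txt might then include:
User-agent: *
Disallow: /blog/wp-admin/
Sitemap: https://www.example.com/blog/sitemap.xml
-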
Robots.txt on http vs. https
We recently changed our domain from http to https. When a user enters any URL on http, there is a global 301 redirect to the same page on https. I cannot find instructions about what to do with robots.txt. Now that https is the canonical version, should I block the http version with robots.txt? Strangely, I cannot find a single resource about this...
Technical SEO | zeepartner
-
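As a point of reference for this scenario: robots.txt is fetched per protocol and host, and Google follows a 301 on the robots.txt URL and applies the rules of the file it is redirected to. With a site-wide redirect in place the exchange looks roughly like this (example.com is a placeholder):
GET http://www.example.com/robots.txt
301 Moved Permanently
Location: https://www.example.com/robots.txt
So there is no separate http robots.txt left to block anything with; the https file governs both versions.
-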
Staging & Development areas should not be indexable (i.e. noindexed/nofollowed in meta robots etc.)
Hi, I take it that if there's a staging or development area on a subdomain for a site, whose content is hence usually duplicate, then it should not be indexable (i.e. noindexed and nofollowed in meta robots)? That would prevent duplicate content problems, as well as people outside the project seeing work in progress or finding it accidentally in search engine listings. Also, if there's no such info in meta robots, is there any other way it may have been made non-indexable, or at least had the duplicate content problem removed by canonicalising the page to the equivalent page on the live site? In the case in question I am finding it listed in the SERPs when I search for the staging/dev area URL, so I presume this needs urgent attention? Cheers, Dan
Technical SEO | Dan-Lawrence
-
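For the mechanics this question refers to, the usual signals are a robots meta tag in each page's head, or the equivalent X-Robots-Tag response header; a minimal sketch (either one is sufficient on its own):
<meta name="robots" content="noindex, nofollow">
X-Robots-Tag: noindex, nofollow
Note that blocking the staging subdomain in robots.txt stops crawling but does not remove URLs that are already indexed, which is why noindex is generally preferred for staging environments.
-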
Are robots.txt wildcards still valid? If so, what is the proper syntax for setting this up?
I've got several URLs that I need to disallow in my robots.txt file. For example, I've got several documents that I don't want indexed, and filters that are getting flagged as duplicate content. Rather than typing in thousands of URLs, I was hoping that wildcards are still valid.
Technical SEO | mkhGT
-
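Wildcards are still honored by the major engines: Google and Bing support * for any sequence of characters and $ to anchor the end of a URL. A minimal sketch for the two cases mentioned, with hypothetical paths and parameter names:
User-agent: *
Disallow: /documents/*.pdf$
Disallow: /*?filter=
-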
Oh no, Googlebot cannot access my robots.txt file
I just received an error message from Google Webmaster Tools. I wonder if it was something to do with the Yoast plugin. Could somebody help me with troubleshooting this? Here's the original message:
Over the last 24 hours, Googlebot encountered 189 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%.
Recommended action
If the site error rate is 100%: Using a web browser, attempt to access http://www.soobumimphotography.com//robots.txt. If you are able to access it from your browser, then your site may be configured to deny access to googlebot. Check the configuration of your firewall and site to ensure that you are not denying access to googlebot. If your robots.txt is a static page, verify that your web service has proper permissions to access the file. If your robots.txt is dynamically generated, verify that the scripts that generate the robots.txt are properly configured and have permission to run. Check the logs for your website to see if your scripts are failing, and if so attempt to diagnose the cause of the failure.
If the site error rate is less than 100%: Using Webmaster Tools, find a day with a high error rate and examine the logs for your web server for that day. Look for errors accessing robots.txt in the logs for that day and fix the causes of those errors. The most likely explanation is that your site is overloaded. Contact your hosting provider and discuss reconfiguring your web server or adding more resources to your website.
After you think you've fixed the problem, use Fetch as Google to fetch http://www.soobumimphotography.com//robots.txt to verify that Googlebot can properly access your site.
Technical SEO | BistosAmerica