Do I need a separate robots.txt file for my shop subdomain?
-
Hello Mozzers!
Apologies if this question has been asked before, but I couldn't find an answer so here goes...
Currently I have one robots.txt file hosted at https://www.mysitename.org.uk/robots.txt
We host our shop on a separate subdomain https://shop.mysitename.org.uk
Do I need a separate robots.txt file for my subdomain? (Some Google searches are telling me yes and some no, and I've become awfully confused!)
-
Thank you. I want to disallow specific URLs on the subdomain and add the shop sitemap in the robots.txt file. So I'll go ahead and create another!
-
You'll be fine without one. You only need one if you want to manage that subdomain: add specific XML sitemap links in robots.txt, or cut access to specific folders on that subdomain.
If you don't need any of that, just move forward without one.
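For example, a shop-subdomain robots.txt covering both of those needs might look like the sketch below (the folder names and sitemap path are hypothetical placeholders, not taken from the actual site):

```text
# Served at https://shop.mysitename.org.uk/robots.txt
User-agent: *
Disallow: /checkout/   # hypothetical folder to keep out of the index
Disallow: /cart/       # hypothetical

Sitemap: https://shop.mysitename.org.uk/sitemap.xml
```

Note that robots.txt is per-host, so directives in this file only apply to URLs on shop.mysitename.org.uk, not the www site.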
-
Currently we just have: User-agent: *
I'm in the process of optimising.
-
It depends on what is currently in your robots.txt. Usually it is useful to have a separate one for your subdomain.
-
Yes, I would have a separate robots.txt file.
Related Questions
-
Website URL, Robots.txt and Google Search Console (www. vs non www.)
Technical SEO | | Badiuzz
Hi MOZ Community,
I would like to request your kind assistance on domain URLs: www. vs non-www. Recently, my team moved to a new website where a 301 redirection has been done.
Original URL: https://www.example.com.my/ (with www.)
New URL: https://example.com.my/ (without www.)
Our current robots.txt sitemap: https://www.example.com.my/sitemap.xml (with www.)
Our Google Search Console property: https://www.example.com.my/ (with www.)
Questions:
1. How should I standardize these so that the Google crawler can effectively crawl my website?
2. Do I have to change my website URLs back to (with www.), or do I just need to update my robots.txt?
3. How can I update my Google Search Console property to reflect the change (without www.)? I cannot see the option in the dashboard.
4. Are there any to-dos, such as canonicalization, or should I wait for Google to automatically detect the change, especially in the GSC property?
Really appreciate your kind assistance. Thank you,
Badiuzz0 -
One robots.txt file for multiple sites?
I have 2 sites hosted with Blue Host and was told to put the robots.txt in the root folder and just use the one robots.txt for both sites. Is this right? It seems wrong. I want to block certain things on one site. Thanks for the help, Rena
Technical SEO | | renalynd270 -
Need Advice: How should we handle this situation?
Hi Folks,
We have a blog post on one of our sites that ranked very highly for a lucrative term for a period of about two months. It had over 2,000 Facebook likes, about 20 tweets, and the same number of Google +1's. The post ended up receiving several high-quality natural links, and we also pointed a few authoritative links to it from our network of sites.
After we saw the ranking starting to slip we did a bit of link building (which we shouldn't have done) and ended up making a big mistake. The link building company was only supposed to do 30 links and they ended up doing 600. Once we figured it out, we immediately submitted a disavow request and told Google about our mistake. I also thought maybe we then had a manual spam penalty applied, so I also submitted a reconsideration request (and also told them about our mistake) but got back a canned reply saying "no manual penalties" were found. After we did all that, we saw the rankings fall out of the top 50 within the next 10 days.
I'm confident we can throw up a new, similar blog post and see close to the same rankings we experienced with the original post. But before I do that, I have two questions:
1. Should we 301 the old post to the new post? Could that somehow "pass" the bad rankings along to the new post?
2. What should we do about the natural links we received? Should we try and reach out to the sites and get them to change their links to the new post?
Any help would be appreciated. Thanks!
Technical SEO | | shawn810 -
Site (Subdomain) Removal from Webmaster Tools
We have two subdomains that have been verified in Google Webmaster Tools. These subdomains were used by 3rd parties which we no longer have an affiliation with (the subdomains no longer serve a purpose). We have been receiving an error message from Google: "Googlebot can't access your site. Over the last 24 hours, Googlebot encountered 1 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 100.00%". I originally investigated using Webmaster Tools' URL Removal Tool to remove the subdomain, but there are no indexed pages. Is this a case of simply 'deleting' the site from the Manage Site tab in the Webmaster Tools interface?
Technical SEO | | Cary_PCC0 -
Best practice needed for translating content
Hi all, I was after some advice on the best solution for translating website content into multiple languages. I am working on a content-rich UK site and want to know a good way to translate this content into other languages following SEO best practice. Would anyone have any recommendations on best practice to follow, as well as the best solutions? Many thanks, Simon
Technical SEO | | simonsw0 -
How to write a robots.txt file to point to your sitemap
Technical SEO | | Nightwing
Good afternoon from still wet & humid Wetherby, UK... I want to write a robots.txt file that instructs the bots to index everything and gives a specific location for the sitemap. The sitemap URL is: http://business.leedscityregion.gov.uk/CMSPages/GoogleSiteMap.aspx
Is this correct:
User-agent: *
Disallow:
SITEMAP: http://business.leedscityregion.gov.uk/CMSPages/GoogleSiteMap.aspx
Any insight welcome 🙂0 -
Is the Sandbox Real? Need Help!
To start, I'm very new at this, so I've likely made a ton of mistakes, but here is the breakdown of what's happened/what's been done to my site.
I own a wedding photography company which was based in Portland; we decided about six months prior that we wanted to relocate to San Diego. It was too soon to optimize our website for our new town of San Diego, so I created a brand new site. It was born around June 2011. It looks just like the old site, but all the content is different (different titles, re-uploaded images, text, etc. optimized for San Diego). What may be my pitfall is that I imported our blog posts from the old site to the new site and we continued to keep both blogs live (writing the post in one, importing to the other).
San Diego site: http://continuumweddings.com
Old site (now optimized for LA): http://continuumphotography.com
From there I began link building. I signed up for the SEO Scheduler and began making the changes suggested there. It told me to sign up for Linxboss, and I did it. Other than that, my links have been built naturally and I have quite a few of them, definitely enough to compete with my top competitors. At one point I was #3 for "San Diego Wedding Photographer" and I stayed there for a couple of weeks. Then I began to drop. Now I'm somewhere on page 10.
I've read a lot of articles on here and I know I have a lot of things potentially hurting me: site age, duplicate content, etc. I'm just not sure why I dropped (I still rank on the 1st page in Yahoo & Bing) and what I should do about it. I tend to get overwhelmed, and every post I read seems to talk about something new I may have done wrong. I'm willing to put in the time to fix this; I just need to know where my time is best spent.
Technical SEO | | mrsmelmitch0 -
Is robots.txt a must-have for 150 page well-structured site?
By looking in my logs I see dozens of 404 errors each day from different bots trying to load robots.txt. I have a small site (150 pages) with clean navigation that allows the bots to index the whole site (which they are doing). There are no secret areas I don't want the bots to find (the secret areas are behind a Login so the bots won't see them). I have used rel=nofollow for internal links that point to my Login page. Is there any reason to include a generic robots.txt file that contains "user-agent: *"? I have a minor reason: to stop getting 404 errors and clean up my error logs so I can find other issues that may exist. But I'm wondering if not having a robots.txt file is the same as some default blank file (or 1-line file giving all bots all access)?
Technical SEO | | scanlin0
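On that last point: most crawlers and parsers treat a missing robots.txt the same as a fully permissive one, so a minimal "allow all" file mainly serves to silence the 404s in your logs. A quick sketch with Python's standard-library `urllib.robotparser` (the URLs below are hypothetical placeholders) shows that an empty Disallow blocks nothing, while an explicit Disallow rule does:

```python
from urllib.robotparser import RobotFileParser

# A minimal "allow everything" file, like the one-liner discussed above.
allow_all = RobotFileParser()
allow_all.parse([
    "User-agent: *",
    "Disallow:",  # an empty Disallow means nothing is blocked
])
print(allow_all.can_fetch("*", "https://www.example.org.uk/any/page"))  # True

# By contrast, a file that blocks a specific folder.
block_checkout = RobotFileParser()
block_checkout.parse([
    "User-agent: *",
    "Disallow: /checkout/",  # hypothetical folder to keep bots out of
])
print(block_checkout.can_fetch("*", "https://www.example.org.uk/checkout/cart"))    # False
print(block_checkout.can_fetch("*", "https://www.example.org.uk/products/widget"))  # True
```

Also note that `rel=nofollow` on internal links hints that you don't vouch for the link, but it does not guarantee the Login page stays uncrawled; a robots.txt Disallow (or a noindex meta tag) is the more direct control.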