How to block Google robots from a subdomain
-
I have a subdomain that lets me preview changes before I put them on my live site.
The live site URL is www.site.com; the working preview version is www.site.edit.com.
The content on both is almost identical.
I want to block the preview version (www.site.edit.com) from Google's robots, so that I don't get penalized for duplicate content.
Is this the right way to do it?
User-Agent: *
Disallow: .edit.com/*
-
Thanks so much for your help!
-
Hi,
Probably without the www, so: site.edit.com/robots.txt, because otherwise you would have a subdomain of a subdomain ;-). But the rest is perfect!
-
Thanks a lot for your answer, Martijn!
So just to make sure I got it correctly: this robots.txt file URL should be site.edit.com/robots.txt?
-
Hi,
Google's robots look for a robots.txt in the root of each individual host, so you need the robots.txt in the root of the subdomain, not just in the domain root. That's why it's also possible to include a complete disallow in there rather than .edit.com/* (which isn't valid robots.txt syntax anyway, since Disallow rules must be URL paths starting with /).
Example:
User-agent: *
Disallow: /

Hope this helps!
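The full-disallow rule above can be sanity-checked with Python's standard-library robots.txt parser. This is a minimal sketch, assuming the preview host serves exactly the two lines shown; the rules are parsed from a local string here rather than fetched, so the example is self-contained:

```python
# Check that "User-agent: * / Disallow: /" blocks every crawler
# from every path on the preview subdomain.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot falls under the wildcard * group, so no URL may be fetched.
print(parser.can_fetch("Googlebot", "http://site.edit.com/"))          # False
print(parser.can_fetch("Googlebot", "http://site.edit.com/any/page"))  # False
```

Serving that file at site.edit.com/robots.txt keeps compliant crawlers off the entire preview host while leaving www.site.com untouched.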