I am looking for the best way to block a domain from getting indexed.
-
We have a website, http://www.example.co.uk/, which leads to another domain (https://online.example.co.uk/) when a user clicks a link, in this case the "Apply now" button on my website page. We are getting metadata issues in the crawler errors from the https://online.example.co.uk/ domain, as we are not targeting any meta content on that domain. So we are looking to block this domain from getting indexed to clear these errors. Will it affect the SERPs of this domain (https://online.example.co.uk/) if we use a noindex tag on it?
-
Hi, the real challenge here is that we are not using any Google services like Webmaster Tools on https://online.example.co.uk, so to my knowledge robots.txt won't work. I'll be waiting to hear from you if we have any other options here.
Thanks
P
-
I'd recommend putting a robots.txt file on the https://online.example.co.uk site you don't want indexed.
Just save the following as robots.txt:

User-agent: *
Disallow: /

Then add it to the root folder of the site, so it is served at https://online.example.co.uk/robots.txt
This tells all well-behaved search spiders not to crawl any pages on the entire site. One caveat: robots.txt blocks crawling rather than indexing, so a URL that is linked from elsewhere can still show up in the index as a bare listing; if you need the pages kept out of results entirely, add a meta noindex tag to them as well.
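If you want to sanity-check that a blanket-disallow robots.txt really blocks every path for every crawler, Python's standard library can parse the rules for you. This is just an illustrative sketch; the online.example.co.uk URLs stand in for your own domain, and the rules are parsed from a string so nothing is fetched over the network.

```python
from urllib.robotparser import RobotFileParser

# The two-line blanket-disallow robots.txt from the answer above
rules = "User-agent: *\nDisallow: /"

parser = RobotFileParser()
parser.parse(rules.splitlines())

# No path on the site may be fetched by any compliant crawler
print(parser.can_fetch("Googlebot", "https://online.example.co.uk/"))       # False
print(parser.can_fetch("*", "https://online.example.co.uk/any/page.html"))  # False
```

Any crawler that honors the Robots Exclusion Protocol will reach the same answer, since `Disallow: /` under `User-agent: *` matches every URL path on the host.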
-
Hi, thanks for letting me know. It would be great if you have any wildcard rule to solve this.
-
If you use the meta noindex tag on a page, it will be blocked from the index. So, yes, it will affect the SERPs of that domain by removing its results from the SERPs.
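For reference, the meta noindex tag is a line like `<meta name="robots" content="noindex">` inside each page's `<head>`. A quick sketch of how a crawler would detect it, using only Python's standard library (the sample HTML below is purely illustrative):

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Flags pages whose meta robots tag carries a noindex directive."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            # content may hold several comma-separated directives
            if "noindex" in a.get("content", "").lower():
                self.noindex = True

page = '<html><head><meta name="robots" content="noindex, nofollow"></head><body></body></html>'
detector = NoindexDetector()
detector.feed(page)
print(detector.noindex)  # True: this page asks search engines not to index it
```

One thing to keep in mind when combining approaches: crawlers can only see the noindex tag on pages they are allowed to crawl, so a page that is both disallowed in robots.txt and tagged noindex may never have its tag read.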