Removing/redirecting bad URLs from the main domain
-
Our users create content that we host at a separate URL for the web version. Originally this was hosted on our main domain.
This was causing problems because Google was seeing all these different types of content on our main domain. The page content was all over the place and (we think) may have harmed our main domain's reputation.
About a month ago, we added a robots.txt rule to block the URLs in that particular folder, so that Google doesn't crawl those pages and ignores them in the SERPs.
We have now gone a step further and are 301-redirecting all those user-created URLs to a completely new domain (not affiliated with our brand or main domain).
This should have been done from the beginning, but it wasn't.
Any suggestions on how we can remove all those original URLs and make Google see them as not affiliated with the main domain? Or should we just give it the good ol' time recipe and let it fix itself?
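For reference, the setup described above might look roughly like this (the /user-content/ folder and newdomain.example are hypothetical placeholders, not the actual paths):

```
# robots.txt on the main domain -- keeps crawlers out of the user-content folder
User-agent: *
Disallow: /user-content/

# Apache config / .htaccess -- 301-redirect each old URL to the new domain
RedirectMatch 301 ^/user-content/(.*)$ https://newdomain.example/$1
```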
-
Yes, that's correct Kurt. We want to disassociate our brand from those pages. Thanks for your FB!
-
Yes, very helpful... Thanks!
-
It sounds to me like you don't want the search engines to know that you're moving the content, but rather have them think that you have dropped the pages from your site, because you don't want the search engines associating those pages with your site. Correct?
If that's the case, then you do want to keep the noindex on the old pages and set up 301 redirects as well. The redirects are for real users who happen to use links/bookmarks to the old pages. By keeping the old pages noindexed, hopefully the search engines won't crawl them and won't follow the redirects. I'd also remove the pages from the Google and Bing indexes via their webmaster tools for good measure.
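One wrinkle: a 301 response has no HTML body, so a meta robots tag can't carry the noindex once the redirect is live. A sketch of one way to send noindex alongside the redirect (hypothetical folder name, Apache 2.4 with mod_headers) is an X-Robots-Tag response header:

```
# Apache 2.4 sketch: attach a noindex header to all responses for the old
# folder; "always" makes it apply to non-2xx responses such as the 301s
<LocationMatch "^/user-content/">
  Header always set X-Robots-Tag "noindex"
</LocationMatch>
```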
If you are linking from your site to the new location of the user content, you may want to nofollow those links or, better yet, create the links in JavaScript or something to hide them. If all the links to the content just shift to the new location, Google and Bing may still associate your site with the new site. Then again, if all the content from the old pages is on the new site, they may figure it all out anyway.
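A minimal sketch of that idea, assuming a hypothetical newdomain.example: build the outbound link client-side so the href isn't in the server-rendered HTML, and add rel="nofollow" to signal no endorsement either way:

```javascript
// Sketch: construct an outbound link to the (hypothetical) new domain at
// runtime instead of hard-coding it in the page's HTML source.
function buildUserContentLink(slug) {
  const href = "https://newdomain.example/" + encodeURIComponent(slug);
  return '<a href="' + href + '" rel="nofollow noopener">View content</a>';
}

// In the browser you would inject it after page load, e.g.:
// document.getElementById("content-link").innerHTML = buildUserContentLink("some-page");
console.log(buildUserContentLink("some-page"));
```

Worth noting that search engines which render JavaScript may still discover such links, so this is obfuscation rather than a guarantee.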
-
You need to get rid of the robots.txt block on those URLs you want to redirect, Alec.
As it is now, with the robots block in place you've told the search engines NOT to crawl those URLs, so it's going to be very difficult for them to discover the 301 redirects and learn that they should be dropping the old URLs from the index. After that, it is just a matter of time. (It can also help to leave those old URLs in the XML sitemap for a while to make it easier for the engines to crawl them and discover the 301s.)
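After deleting the Disallow line for that folder from robots.txt, the temporary sitemap might look like this sketch (the domain and paths are hypothetical examples):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Keep the old, now-redirecting URLs listed for a while so the
       engines recrawl them and discover the 301s -->
  <url><loc>https://www.example.com/user-content/page-1/</loc></url>
  <url><loc>https://www.example.com/user-content/page-2/</loc></url>
</urlset>
```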
If none of those URLs were generating any substantial amount of traffic or incoming links, you could also use Google and Bing Webmaster Tools to request that the pages be removed from the index. This will only really work if the pages are organised in a specific directory, as it would likely take far too long to submit each URL for removal otherwise.
Hope that helps?
Paul