How do I let Google know I am a new site owner and get all backlinks removed or de-valued?
-
I am looking to buy a new domain for a brand.
The problem is that the domain was registered in 1996 and has around 6k backlinks (according to Ahrefs) that I need removed, as the old content will have no relevance to my new site.
Should I just disavow all of them? Is there anything "special" I can do to let Google know that there will be a new site owner and new content, and to have the current links removed or discounted?
-
A three-letter, brandable EMD sounds like the kind of thing someone would want to keep and use, or sell for a bunch of money. But when you add the 6k junk links, it sounds like the kind of thing someone would just want to sell and hope to get a bunch of money for. Playing games with the registrar isn't going to help you in this matter. Did you try offering them half price? It still may not be worth it.
-
Hey, thanks for the responses.
The site in question does not have any manual penalty. It just has a bunch of junk links that I need to get rid of before I start building an authority site.
I would like to avoid using this domain name, but it is a three-letter EMD that's brandable.
What if I transferred the domain to a new owner that does not have the WHOIS blocked, thereby maybe letting Google know that there is a new owner? Does anyone have experience with this?
-
I think it is impossible to know every backlink you have to disavow. I had a site where I tried to disavow everything, and new links that I had not found kept appearing.
Although, if the domain has a manual action and you want to focus on other keywords, maybe it won't hurt you.
-
Do you know if there is a manual action against the site? If so, in some cases, you can show Google evidence that you are the new owner and they will lift the action. If they don't lift the action then you have to go through the full process of cleaning up the backlink profile (trying to remove as many bad links as possible and then disavowing the remaining unnatural links) and then filing for reconsideration. This is usually what happens.
If there is no manual action on the site then disavowing the bad links will probably do the trick. The problem is that there are so many unknowns surrounding the disavow tool.
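If you do go the disavow route, the file Google expects is just a plain text list of URLs or domain: lines. Here's a minimal sketch of generating one, assuming a hypothetical bad_domains.txt export with one referring domain per line (the filename and format are placeholders, so adapt them to whatever your link tool gives you):

```python
# A minimal sketch, assuming a hypothetical "bad_domains.txt" file with one
# referring domain per line (filename and format are placeholders; adapt them
# to your link tool's export). Writes a disavow.txt for upload in Search Console.

def build_disavow(input_path: str, output_path: str) -> None:
    with open(input_path, encoding="utf-8") as f:
        # De-duplicate and normalise the domains.
        domains = sorted({line.strip().lower() for line in f if line.strip()})

    with open(output_path, "w", encoding="utf-8") as out:
        # Lines starting with "#" are treated as comments in a disavow file.
        out.write("# Junk links inherited from the previous owner of the domain\n")
        for domain in domains:
            # "domain:" discounts every link from that domain; individual URLs
            # can also be listed one per line for a narrower disavow.
            out.write(f"domain:{domain}\n")

if __name__ == "__main__":
    build_disavow("bad_domains.txt", "disavow.txt")
```

The usual caveat applies: only disavow links you are reasonably sure are unnatural, since a domain: line discounts every link from that domain.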
If this is a site that is going to be important to you, and if the domain currently has an unnatural links problem then I wouldn't risk trying to clean it up. I'd probably buy a different domain name.
With all of that being said, are the links to the site actually bad ones? If they are primarily self made, low quality links then everything I said above still applies. If it's just that the links are not relevant to your content then I wouldn't worry about that and I'd just go ahead and create my new site.
-
You have to watch out for that, Richard. Here is a video from Matt Cutts on the issue. Basically, you're putting yourself in deep water by purchasing such a domain. As Matt says, it can be "a little bit difficult" (that's Matt-speak for "next to impossible") to pull the domain out of a penalty because of its backlinks. There is often good reason to pass on buying such a domain, and you really need to do your homework before you make the decision to buy.
Related Questions
-
Google Search Console Site Map Anomalies (HTTP vs HTTPS)
Hi, I've just done my usual Monday morning review of clients' Google Search Console (previously Webmaster Tools) dashboards and was disturbed to see that for one client the Sitemaps section is reporting 95 pages submitted yet only 2 indexed (last time I looked, last week, it was reporting an expected level of indexed pages). It says the sitemap was submitted on the 10th of March and processed yesterday. However, the 'Index Status' section shows a graph of growing indexed pages up to and including yesterday, where they numbered 112 (so it looks like all pages are indexed after all). Also, the 'Crawl Stats' section shows 186 pages crawled on the 26th.
It then lists sub-sitemaps, all of which are non-HTTPS (http), which seems very strange since the site is HTTPS and has been for a few months now, and the main sitemap index URL is HTTPS: https://www.domain.com/sitemap_index.xml. The sub-sitemaps are: http://www.domain.com/marketing-sitemap.xml, http://www.domain.com/page-sitemap.xml and http://www.domain.com/post-sitemap.xml. There are no 'Sitemap Errors' reported, but there are 'Index Error' warnings for the above post-sitemap, copied below: "When we tested a sample of the URLs from your Sitemap, we found that some of the URLs were unreachable. Please check your webserver for possible misconfiguration, as these errors may be caused by a server error (such as a 5xx error) or a network error between Googlebot and your server. All reachable URLs will still be submitted."
There is also this warning, "Some URLs listed in this Sitemap have a high response time. This may indicate a problem with your server or with the content of the page", for http://domain.com/en/post-sitemap.xml, https://www.domain.com/page-sitemap.xml and https://www.domain.com/post-sitemap.xml.
I take it from all the above that the HTTPS sitemap is mainly fine, that despite the reported 0 pages indexed in the GSC sitemap section the pages are in fact indexed as per the main 'Index Status' graph, and that somehow some HTTP sitemap elements have been accidentally attached to the main HTTPS sitemap and are causing these problems. What's the best way forward to clean up this mess? Resubmitting the HTTPS sitemap sounds like the right option, but seeing as the master indexed URL is an HTTPS URL, I can't see it making any difference until the HTTP aspects are deleted or removed. But how do you do that, or even check that that's what's needed? Or should Google just sort this out eventually? I also see the graph in 'Crawl > Sitemaps > Web Pages' shows a consistent blue line of submitted pages, but the red line of indexed pages drops to 0 for 3-5 days every 5 days or so. So fully indexed pages are reported for 5-day stretches, then zero for a few days, then indexed for another 5 days, and so on!? Many thanks, Dan
Technical SEO | Dan-Lawrence
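Here's a rough sketch of the first check I was planning to run myself, assuming the placeholder sitemap index URL above and the standard sitemap namespace: list the child sitemaps in the index and flag anything still on http://, so I know exactly which entries need regenerating before I resubmit in Search Console.

```python
# A rough sketch, not a full audit: list the child sitemaps in the index and
# flag anything still on http://. The index URL is the placeholder from the
# question; swap in the real one.
import requests
import xml.etree.ElementTree as ET

SITEMAP_INDEX = "https://www.domain.com/sitemap_index.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

resp = requests.get(SITEMAP_INDEX, timeout=30)
resp.raise_for_status()
root = ET.fromstring(resp.content)

for loc in root.findall(".//sm:loc", NS):
    url = (loc.text or "").strip()
    flag = "OK (https)" if url.startswith("https://") else "STILL HTTP - regenerate"
    print(f"{flag}: {url}")
```
-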
Just Launched New Site - First Steps to Get it to Rank?
Good morning, Mozzers... We just recently launched a brand new site and now the fun part begins: trying to get it to appear in the SERPs. I'm wondering if you can share your best and most proven secrets/tricks to get brand new sites to rank in Google. For example, what are the first directories you add the site to? What are some links you try to acquire first? Looking for some tips and ideas for a brand new site. Thanks in advance.
Technical SEO | Prime85
-
I noticed all my SEO'd sites are constantly getting attacked by viruses. I do WordPress sites. Does anyone have a good recommendation to protect my clients' sites? Thanks
We have tried all different kinds of security plugins but none seem to work long term.
Technical SEO | Carla_Dawson
-
How does Google find /feed/ at the end of all pages on my site?
Hi! In Google Webmaster Tools I find */feed/ URLs reported as 404 pages in crawl errors. The problem is that none of these pages exist and they have no inbound links (except from the start page). FYI, it's a WordPress site. Examples: www.mysite.com/subpage1/feed/, www.mysite.com/subpage2/feed/, www.mysite.com/subpage3/feed/, etc. Does Google look for /feed/ by default, or why do I keep getting these 404s every day?
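Here's a quick diagnostic sketch I put together (the page URLs are the placeholders from above): it checks whether each page's HTML references its /feed/ URL, which is one common way crawlers discover them, and what status code the /feed/ URL actually returns.

```python
# A quick diagnostic sketch; the page URLs are placeholders from the question.
# For each page: does its HTML reference a /feed/ URL, and what does that
# /feed/ URL actually return?
import requests

pages = [
    "http://www.mysite.com/subpage1/",
    "http://www.mysite.com/subpage2/",
    "http://www.mysite.com/subpage3/",
]

for page in pages:
    html = requests.get(page, timeout=30).text
    referenced = "/feed/" in html  # feed <link> tags or hrefs in the markup
    feed_url = page.rstrip("/") + "/feed/"
    status = requests.get(feed_url, timeout=30).status_code
    print(f"{page}: feed referenced in HTML={referenced}, {feed_url} -> {status}")
```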
Technical SEO | Vivamedia
-
Merged old WordPress site to a new theme and now have a crazy amount of 4xx and duplicate content that wasn't there before?
URL is awardrealty.com. We have a new website that we merged into a new WordPress theme. I just crawled the site with my SEOmoz crawl tool and it is showing a ridiculous amount of 4xx pages (200+), and we can't find the 4xx pages in the sitemap or within WordPress. Need some help. Am I missing something easy?
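Here is a minimal sketch of the check I've started running (homepage only so far; extend it to more pages as needed): it pulls the internal links the new theme outputs and reports any that answer with a 4xx, which is usually where a crawler like the Moz tool picks them up from.

```python
# A minimal sketch (homepage only; extend to more pages as needed): collect the
# internal links the new theme outputs and report any that come back as 4xx.
from urllib.parse import urljoin, urlparse
import re
import requests

START = "http://awardrealty.com/"
host = urlparse(START).netloc

html = requests.get(START, timeout=30).text
links = {urljoin(START, href).split("#")[0]
         for href in re.findall(r'href=["\']([^"\']+)["\']', html, flags=re.I)}

for url in sorted(links):
    if urlparse(url).netloc != host:
        continue  # only check internal URLs
    status = requests.head(url, allow_redirects=True, timeout=30).status_code
    if 400 <= status < 500:
        print(status, url)
```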
Technical SEO | Mark_Jay_Apsey_Jr.
-
Does 301 redirecting a site multiple times keep the value of the original site?
Hi, all! If I 301 redirect site www.abc.com to www.def.com, it should pass (almost) all link juice, rank, trust, etc. What happens if I then redirect site www.def.com to www.ghi.com? Does the value of the original site pass indefinitely as long as you do the redirects correctly, or does it start to be devalued at some point? If anyone has had experience redirecting a site more than once and has seen reportable good/bad/neutral results, that would be very helpful. Thanks in advance! -Aviva B
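For context, here is the hop-by-hop check I use to make sure a chain like this is actually set up correctly (the domains are the placeholders from above): it follows each redirect manually and prints the status code, so a sneaky 302 or an unexpectedly long chain shows up immediately.

```python
# A small sanity check with placeholder domains: follow the chain hop by hop so
# a 302 or an unexpectedly long chain is easy to spot.
import requests

def trace_redirects(url: str, max_hops: int = 10) -> None:
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=30)
        print(f"{resp.status_code}  {url}")
        location = resp.headers.get("Location")
        if resp.status_code not in (301, 302, 307, 308) or not location:
            break
        url = requests.compat.urljoin(url, location)  # handle relative Location headers

trace_redirects("http://www.abc.com/")  # placeholder
```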
Technical SEO | debi_zyx
-
Getting Google to index new pages
I have a site, called SiteB, that has 200 pages of new, unique content. I made a table of contents (TOC) page on SiteB that points to about 50 pages of SiteB content. I would like to get SiteB's TOC page crawled and indexed by Google, as well as all the pages it points to. I submitted the TOC to Pingler 24 hours ago, and from the logs I see that Googlebot visited the TOC page but did not crawl any of the 50 pages that are linked from it. I do not have a robots.txt file on SiteB. There are no robots meta tags (nofollow, noindex). There are no 'rel=nofollow' attributes on the links. Why would Google crawl the TOC (when I Pinglered it) but not crawl any of the links on that page? One other fact, and I don't know if this matters: SiteB lives on a subdomain and the URLs contain numbers, like this: http://subdomain.domain.com/category/34404. Yes, I know the number part is suboptimal from an SEO point of view. I'm working on that, too. But first I wanted to figure out why Google isn't crawling the links on the TOC. The site is new and so hasn't been penalized by Google. Thanks for any ideas...
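In case it's useful, this is roughly how I verified there's no noindex/nofollow in play and that the links are plain anchors a crawler can see (the URL is the placeholder from above):

```python
# A minimal sketch with the placeholder URL from above: check for noindex/nofollow
# signals and list the links a crawler would actually see on the TOC page.
import re
import requests

TOC_URL = "http://subdomain.domain.com/category/34404"  # placeholder

resp = requests.get(TOC_URL, timeout=30)
print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag", "none"))

html = resp.text
meta_robots = re.findall(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, flags=re.I)
print("robots meta tags:", meta_robots or "none")

links = re.findall(r'<a[^>]+href=["\']([^"\']+)["\']', html, flags=re.I)
print(f"{len(links)} <a href> links found on the TOC page")
```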
Technical SEO | scanlin
-
Google caching meta tags from another site?
We have several sites on the same server. On the weekend we relocated some servers, changing IP address. A client has since noticed something freaky with the meta tags. 1. They search for their company name, and another site from the same server appears in position 1. It is completely unrelated, has never happened before, and the company name is not used in any incoming text links. E.g. search for company1 on Google: company1.com.au appears at position 2, but at position 1 is school1.com.au. The word company1 doesn't appear anywhere on that site. I've analysed all incoming links with a gazillion tools and can't find any link text of company1 linking to school1. 2. Even more freaky: searching for company1.com.au at Google, the result in position 1 for the last three days has been:
Meta title for school1 (but hovering/clicking actually goes to the URL for company1)
Meta description for school1
URL for company1.com.au
Clicking on the cached copy of result 1 shows a cached version of school1 taken on March 18; today is 29 March. Logically, we are trying to get Google to spider both sites again quickly. We've asked the clients to update their home pages, resubmitted XML sitemaps, and checked the HTTP status codes (both are happily returning 200s). The sites use different cookies. I found another instance on a forum: http://webmasters.stackexchange.com/questions/10578/incorrect-meta-information-in-google Any ideas?
Technical SEO | ozgeekmum
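One thing worth ruling out is a virtual-host mix-up after the IP change. A rough sketch of that check, using the placeholder domains from above: request each hostname directly and print the title it actually serves, to confirm the server isn't answering one host with the other site's content.

```python
# A rough check with placeholder domains: request each hostname and print the
# <title> it serves, to confirm the server isn't answering one host with the
# other site's content after the IP move.
import re
import requests

hosts = ["http://company1.com.au/", "http://school1.com.au/"]  # placeholders

for host in hosts:
    html = requests.get(host, timeout=30).text
    match = re.search(r"<title>(.*?)</title>", html, flags=re.I | re.S)
    title = match.group(1).strip() if match else "(no title found)"
    print(f"{host} -> {title}")
```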