Can I use content from an existing site that is not up anymore?
-
I want to take down a current website and create a new site or two (with a new URL, IP, and server). Can I use the content from the deleted site on the new sites, since I own it? How will Google see that?
-
Thank you. That is a great answer!
-
Hi there,
I would say that, taking William's point into account, canonicals might work to remove any possibility that Google would see the new site as copying the old one. That said, I can't guarantee that they could not either manually or automatically (manually would be much easier) note that the two sites are owned by the same person and that the domain change is a measure taken to avoid a penalty. The truly safest thing to do is to re-write the content and start afresh. The next safest is to remove the content from the old site, force a re-crawl or wait for Google to update its cache of the old site without that content, and then re-publish on the new site.
Canonicals will make this process quicker, but I don't believe it can be guaranteed that they won't result in Google making a stronger connection between the two sites, which might not go well. Again, this is only if there are enough similarities for Google to understand that this is not a scraper / scrapee situation but a situation where one entity owns both sites.
I'm sorry not to give a definitive answer.
-
After reading Jane & William's discussion, do you both agree that canonicals are the way to go? The site will be similar (trying to create a non-penalized site). The sites will have different IPs and servers but a lot of the same content. None of the same backlinks... I just don't want to do the work if it's going to end up hurting me more than it helps. I don't see how I can get all those bad backlinks removed.
-
Really good point. Taking that into account, I might guess that an anti-manipulation method Google might employ is to grab registration details, hosting data, analytics codes, and other identifying factors to determine whether the canonicalised content is owned by the same person. That is, canonicals between tightly-linked sites where the "duplicate" is penalised could hurt the canonical source, stopping people from using this in place of the old 301 trick. If the scraper site has nothing in common with the source, Google does not pass on any negative metric from the duplicate.
This is just a theory too of course! I'd be confident assuming that they're taking precautions to stop this becoming a common trick. Awesome point!
-
The thought behind canonicals is this:
-
One of their uses is to fight scrapers and the like: when these spammy sites grab your content, your canonical tags come along with it, still pointing back to the original.
-
If penalties passed through canonicals, then the penalties these scrapers have would affect your site terribly. This is not the case, in my experience.
-
So, unless Google has already implemented the human tracking that was discussed a few Whiteboard Fridays ago, this should work. And even with hardcore human tracking for penalties, I think it's yet to be seen whether this would focus on small sites trying to fix penalties as opposed to the large black hat spammers.
There is a bit of theorycrafting here, but in RoxBrock's specific situation, it looks like he has to pick the lesser of all evils.
-
-
The idea of using canonicals interests me, but I am not 100% sure it is risk-free. It used to be the case that you could 301 a penalised website and remove the penalty (we're talking 2010 and earlier here). Google is very keen on transferring penalties these days, so I would be surprised if they have left a loophole like this open for canonical tags, or if they will keep it open for long.
You would ideally leave the site live and remove its content as William says - once you see that the cached version of the site no longer contains the content you want to move, you can feel free to take the old site down and put the content up on the new site.
We don't know what lengths Google goes to, or will go to, to stop people re-using previously penalised content (including good content from penalised websites), but the safest thing you can do whilst using this old content right now is to ensure it has been deindexed before putting it up again elsewhere.
The actual safest thing you can do is re-write the content, but I realise this might not be possible.
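As a rough illustration of the "make sure it has been deindexed" step (the URLs here are made up, and the noindex tag is one common option rather than something specifically prescribed in this thread): you can check whether a page is still indexed or cached with ordinary Google queries, and, while the old site is still live, request removal with a meta robots tag in the head of each page.

    site:old-example.com "a distinctive sentence from the article"
    cache:old-example.com/great-article/

    <!-- hypothetical example: asks search engines to drop this page from the index -->
    <meta name="robots" content="noindex">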
-
Put the canonical tags in the old content and point them to the new pages (see the sketch below).
If you believe there are penalties, then 301ing is a little risky.
De-indexing content doesn't mean Google forgets it was there; they still have it cached, so this isn't ideal.
It looks like canonical may be your best bet.
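To make that concrete, a minimal sketch of the tag (URLs are made up): it sits in the head of each old page and names the matching page on the new domain as the canonical version.

    <!-- hypothetical URLs: placed in the <head> of the old page -->
    <link rel="canonical" href="http://www.new-example.com/great-article/" />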
-
So you suggest leaving the old site up and adding the content to the new site, with the canonical tag pointing to the old site? Any other options you can think of?
-
You would need to keep the site live to speed up the de-indexation. Then block all bots through robots.txt and force a crawl.
Make sure this is what you want to do. There are other options for this situation depending on your intent. Canonical tags, for example, would not transfer penalties and still show Google where the good source of the content is.
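For reference, a minimal sketch of a robots.txt that blocks all compliant bots, as suggested above (purely illustrative):

    # hypothetical example: disallow all compliant crawlers from the entire site
    User-agent: *
    Disallow: /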
-
Many bad links were built on the old website by a questionable SEO firm, so I do believe the URL has been hit, but not with a formal penalty.
In order to redirect the old web pages I would need to keep the website live, which really does not serve my purpose: to use great content that was written in-house on a clean website with no backlinks (starting from scratch).
How would one go about "de-indexing" content?
Thank you for the prompt responses.
-
301 redirect the old web pages to the new ones using an .htaccess file on the old website. This will show Google that the content has moved to the new web pages. Check out the link for more information: http://moz.com/learn/seo/redirection
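As a rough sketch (domains and paths are made up, and this assumes an Apache server with .htaccess overrides allowed, plus mod_rewrite for the second rule), the redirects could look like either of these:

    # hypothetical example: redirect a single old page to its new home
    Redirect 301 /old-page.html http://www.new-example.com/new-page.html

    # hypothetical example: redirect every old URL to the same path on the new domain
    RewriteEngine On
    RewriteRule ^(.*)$ http://www.new-example.com/$1 [R=301,L]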
-
Interesting question!
I had to do some research on this; there is not much out there. One place I was sure to find an answer was in the depths of the underworld: black hat forums. I found a whole discussion on it from six months back. (Not going to link to a black hat site, sorry.)
However, what they said they had tried and tested was that the old site, and every one of its pages, must be de-indexed first so that the re-published content did not trip a duplicate content filter.
But let's back things up a little. Why are you doing this? Does the original have a penalty?
Why not keep the original live and put a canonical link on its pages pointing to the new site, stating that it is the original content owner? This way you will get traffic right away and not have to start ranking from scratch.
I need to know more about your reasons, please.