I've copied content from a government site, as it's necessary. Should I add a canonical tag or just a reference link?
-
Thanks!
-
You may find this helpful - https://www.youtube.com/watch?v=hy3_Rjc0Tso
I suppose you could get around it by putting the content in an image, or presenting it in some other way that Googlebot wouldn't see as duplicate content, but it's iffy.
Alternatively, don't copy the content at all - just link to it instead. Then you don't have a duplicate content problem, but users can still see the content.
-
Then you'd want to avoid the canonical, but it's unlikely that the page will rank well if you have copied it from a reliable resource like a government website. Google tends to try and filter copies like this, although sometimes you see the same thing ranking over and over again on different sites because those duplicated resources are legitimately the only relevant results for a user's query. When Google does filter duplicate results, it will try to pick the most authoritative resource to rank, discarding the rest. In a case like this, it'll pick the government website 99.9% of the time and discard copies.
If you really want that page to rank, you'd want to avoid linking to the original source too, as linking was the accepted way of specifying the source before canonicalisation existed. I wouldn't say it's a good idea, though - there's no point adding duplicate content that lacks canonicalisation to your website when you don't need to, even if the content is a good resource.
-
What if I still want the page to rank in Google? It's a useful resource, even though it's duplicate content.
-
The link might be enough, but I'm not sure what a Googler would say to the question. They might advise you to add a canonical tag, since the entire page is a duplicate. Using the canonical certainly can't hurt your site at all, besides the fact that that page won't rank (which isn't an issue here). The rest of the site remains totally unaffected.
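For reference, a cross-domain canonical is just a `<link>` element in the page's `<head>` pointing at the original URL. A minimal sketch (the government URL here is a placeholder, not a real address):

```html
<!-- In the <head> of the page carrying the copied content -->
<!-- The href is a placeholder; point it at the actual government page -->
<link rel="canonical" href="https://www.example.gov/original-page/" />
```

With this in place, Google is asked to consolidate indexing signals onto the government page rather than yours.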
-
Yes, I copied an entire page for a legitimate reason. Is it fine if I just add a link below the copied content, for example "Original source: [url]"?
-
Depending on how extensive your quoting of the government content is, you might just be able to link, or you might be better off canonicalising. A simple quote on an otherwise unique page is no reason to canonicalise - it's the same as quoting a newspaper website in an article about a subject; there's no way you'd need to canonicalise your own article to the source.
If you've lifted and republished an entire page for legitimate reasons, though, you could canonicalise it to avoid any duplication confusion (even though a link was the traditional way of identifying the original source of the content).
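If you go the attribution-link route instead, it's just ordinary visible markup below the copied content - something like this sketch (the URL and agency name are placeholders):

```html
<!-- Visible attribution placed directly below the copied content -->
<!-- The URL and agency name are placeholders for illustration -->
<p>Original source:
  <a href="https://www.example.gov/original-page/">Example Government Agency</a>
</p>
```

Unlike the canonical, this is visible to users as well as to search engines.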
-
Both do much the same thing, with the exception that the user can see one and not the other. I would recommend the canonical, which should help you avoid duplicate content issues - the content is already there, and I don't foresee the user needing a link.
In short: canonical it.
more info - https://support.google.com/webmasters/answer/139066?hl=en