I've copied content from a government site because it's necessary. Should I add a canonical tag or just a reference link?
-
Thanks!
-
You may find this helpful - https://www.youtube.com/watch?v=hy3_Rjc0Tso
I suppose you could get around it by presenting the content as an image, or in some other way that Googlebot wouldn't see as duplicate content, but it's iffy.
Alternatively, don't copy the content at all; just reference it with a link. Then you don't have a duplicate content problem, but users can still reach the content.
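If you go the link-only route, a minimal sketch of the attribution markup might look like this (the URL and wording here are hypothetical placeholders, not the asker's actual page):

```html
<!-- A short summary in your own words, followed by a visible link to the
     original source; the href is a hypothetical placeholder URL -->
<p>
  Full guidance is available from the official source:
  <a href="https://www.example.gov/official-guidance">example.gov guidance page</a>
</p>
```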
-
Then you'd want to avoid the canonical, but it's unlikely that the page will rank well if you have copied it from a reliable resource like a government website. Google tends to try and filter copies like this, although sometimes you see the same thing ranking over and over again on different sites because those duplicated resources are legitimately the only relevant results for a user's query. When Google does filter duplicate results, it will try to pick the most authoritative resource to rank, discarding the rest. In a case like this, it'll pick the government website 99.9% of the time and discard copies.
If you really want that page to rank, you'd want to avoid linking to the original source as well, since linking was a common way of specifying the source before canonicalisation existed. I wouldn't say that it's a good idea, though - there's no point adding duplicate content that lacks canonicalisation to your website when you don't need to, even if the content is a good resource.
-
What if I still want the page to rank in Google? It's a useful resource, even though it's duplicate content.
-
The link might be enough, but I'm not sure what a Googler would say to the question. They might advise you to add a canonical tag, since the entire page is a duplicate. Using the canonical certainly can't hurt your site, beyond the fact that the canonicalised page itself won't rank (which isn't an issue here). The rest of the site remains totally unaffected.
-
Yes, I copied an entire page for a legitimate reason. Is it fine if I just add a link below the copied content, for example "Original source: [url]"?
-
Depending on how extensive your quoting of the government content is, you might just be able to link, or you might be better off canonicalising. A simple quote on an otherwise unique page is not a reason to canonicalise, just as if you had quoted from a newspaper website in an article about a subject; you wouldn't need to canonicalise your own article to the newspaper's page.
If you've lifted and republished an entire page for legitimate reasons, though, you could canonicalise it to avoid any duplication confusion (even though a link was the traditional way of identifying the original source of the content).
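For the full-page case, the cross-domain canonical goes in the `<head>` of the republished copy and points at the original page. A minimal sketch, with a hypothetical placeholder URL:

```html
<!-- In the <head> of your copied page; the href is a hypothetical
     placeholder pointing at the original government page -->
<link rel="canonical" href="https://www.example.gov/original-resource" />
```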
-
Both do much the same thing, with the exception that the user can see one but not the other. I would recommend the canonical, which should help avoid duplicate content issues; the content is already on your page, and I don't foresee the user needing a link.
In short: canonicalise it.
More info: https://support.google.com/webmasters/answer/139066?hl=en