Using unique content from "rel=canonical"ized page
-
Hey everyone, I have a question about the following scenario:
Page 1: Text A, Text B, Text C
Page 2 (rel=canonical to Page 1): Text A, Text B, Text C, Text D
Much of the content on Page 2 duplicates Page 1, so Page 2 carries a rel=canonical pointing to Page 1 to signal the duplication. However, Page 2 also contains some unique text not found on Page 1.
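For clarity, here's a minimal sketch of the setup described above, using hypothetical URLs (page-1 and page-2 are placeholders, not real paths):

```html
<!-- On Page 2 (https://example.com/page-2), inside <head> -->
<!-- Points search engines at Page 1 as the preferred version -->
<link rel="canonical" href="https://example.com/page-1" />
```

Note the tag sits on the duplicate (Page 2) and points at the version you want indexed (Page 1).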
How safe is it to use the unique content from Page 2 on a new page (Page 3) if the intention is to rank Page 3?
Does that make any sense?
-
Yeah, I tend to agree with Maximilian and Mike - I'm not clear on the use-case scenario here and, technically, pages 1 and 2 aren't duplicated. Rel=canonical probably will still work, in most cases, and will keep page 2 from looking like a duplicate (and from ranking), but I'd like to understand the situation better.
If Google did honor the canonical tag on page 2, then the duplication between pages 2 and 3 shouldn't be a problem. I'm just thinking there may be a better way.
-
Technically, Page 1's content is a subset of Page 2's, but Page 1 is likely older, ranking better, and the page you want to keep, so it takes precedence. In that case Page 2's content would be treated as duplicating Page 1's, and Page 2 should be canonicalized to Page 1. Of course, rel=canonical is a suggestion, not a directive, so the search engines reserve the right to ignore it if they feel the tag isn't relevant.
The real question here is why you are reusing all of that copy in the first place: would those pages be better served with more unique content instead of continuing to reuse and canonicalize?
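To get a rough feel for how much of Page 2's copy duplicates Page 1 before deciding whether a canonical even makes sense, you could compare the two bodies of text. A quick sketch using Python's standard-library difflib, with placeholder text standing in for the scenario above (this is not how search engines measure duplication, just a sanity check):

```python
from difflib import SequenceMatcher

def duplication_ratio(text_a: str, text_b: str) -> float:
    """Rough similarity score between two pages' body copy (0.0 to 1.0)."""
    return SequenceMatcher(None, text_a, text_b).ratio()

# Placeholder copy mirroring the scenario: Page 2 = Page 1 plus Text D
page1 = "Text A. Text B. Text C."
page2 = "Text A. Text B. Text C. Text D."

score = duplication_ratio(page1, page2)
print(f"Pages 1 and 2 are {score:.0%} similar")
```

A score near 1.0 suggests the pages really are near-duplicates and a canonical is defensible; a lower score suggests Page 2 has enough unique content that the tag may be ignored.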
-
Hey Mak,
One thing to bear in mind is that the canonical tag should only be used on pages with the same content. If there is extra content on Page 2 that doesn't appear on Page 1, Google could ignore the canonical tag altogether:
_The rel="canonical" attribute should be used only to specify the preferred version of many pages with identical content (although minor differences, such as sort order, are okay)._