Rel=Canonical to Longer Page?
-
We've got a series of articles on the same topic, and we consolidated the content all together on a single page. We linked from each individual article to the consolidated page, and we put a noindex on the consolidated page.
The problem: Inbound links to individual articles in the series will only count toward the authority of those individual pages, and inbound links to the full article will be worthless.
I am considering removing the noindex from the consolidated article and putting rel=canonicals on each individual post pointing to the consolidated article. That should consolidate the PageRank. But I am concerned about pointing a rel=canonical to an article that is not an exact duplicate (although it does contain the full text of the original--it's just that it contains quite a bit of additional text).
An alternative would be not to use rel=canonicals, nor to place a noindex on the consolidated article. But then my concern would be duplicate content and unconsolidated PageRank.
Any thoughts?
-
Nice.
-
I doubt it. Seems like an either/or thing. I'll probably do rel=canonical to the view-all.
Thanks everybody.
-
Now I just want to know if that usage of rel=canonical can coexist with rel=next & rel=prev.
-
"rel=”canonical” can specify the superset of content" -http://googlewebmastercentral.blogspot.com/2011/09/view-all-in-search-results.html
-
Rel=canonical is supposed to point to an identical page.
-
Consolidating for usability - I want to provide access to both formats. I hate the idea of noindexing a page that could be the target of inbound links.
-
Using rel=prev and rel=next would do the trick if it's still cool to point the rel=canonical to the view-all page. Anyone know?
-
Maybe you're better off noindexing the partial articles and linking from them to the main article with a "Read the full article" link, or something like that.
How many of these articles you have relative to the rest of your content could make a difference--a very small percentage probably wouldn't be an issue in the overall health of your site if you just left them as is.
Why are you consolidating?
-
"Should I make the consolidated article a PDF download? Let them all be indexed without canonicals or 301s? I just don't know the best practice here."
First off, I would say not to use a PDF unless you have the photographs and other content to make it flow and give a good end-user experience.
PDFs in Google search results
Thursday, September 01, 2011 at 7:23 AM
Webmaster level: All
Our mission is to organize the world’s information and make it universally accessible and useful. During this ambitious quest, we sometimes encounter non-HTML files such as PDFs, spreadsheets, and presentations. Our algorithms don’t let different filetypes slow them down; we work hard to extract the relevant content and to index it appropriately for our search results. But how do we actually index these filetypes, and—since they often differ so much from standard HTML—what guidelines apply to these files? What if a webmaster doesn’t want us to index them?
Google first started indexing PDF files in 2001 and currently has hundreds of millions of PDF files indexed. We’ve collected the most often-asked questions about PDF indexing; here are the answers:
Q: Can Google index any type of PDF file?
A: Generally we can index textual content (written in any language) from PDF files that use various kinds of character encodings, provided they’re not password protected or encrypted. If the text is embedded as images, we may process the images with OCR algorithms to extract the text. The general rule of thumb is that if you can copy and paste the text from a PDF document into a standard text document, we should be able to index that text.

Q: What happens with the images in PDF files?
A: Currently the images are not indexed. In order for us to index your images, you should create HTML pages for them. To increase the likelihood of us returning your images in our search results, please read the tips in our Help Center.

Q: How are links treated in PDF documents?
A: Generally links in PDF files are treated similarly to links in HTML: they can pass PageRank and other indexing signals, and we may follow them after we have crawled the PDF file. It’s currently not possible to "nofollow" links within a PDF document.

Q: How can I prevent my PDF files from appearing in search results; or if they already do, how can I remove them?
A: The simplest way to prevent PDF documents from appearing in search results is to add an X-Robots-Tag: noindex in the HTTP header used to serve the file. If they’re already indexed, they’ll drop out over time if you use the X-Robots-Tag with the noindex directive. For faster removals, you can use the URL removal tool in Google Webmaster Tools.

Q: Can PDF files rank highly in the search results?
A: Sure! They’ll generally rank similarly to other webpages. For example, at the time of this post, [mortgage market review], [irs form 2011] or [paracetamol expert report] all return PDF documents that manage to rank highly in our search results, thanks to their content and the way they’re embedded and linked from other webpages.

Q: Is it considered duplicate content if I have a copy of my pages in both HTML and PDF?
A: Whenever possible, we recommend serving a single copy of your content. If this isn’t possible, make sure you indicate your preferred version by, for example, including the preferred URL in your Sitemap or by specifying the canonical version in the HTML or in the HTTP headers of the PDF resource. For more tips, read our Help Center article about canonicalization.

Q: How can I influence the title shown in search results for my PDF document?
A: We use two main elements to determine the title shown: the title metadata within the file, and the anchor text of links pointing to the PDF file. To give our algorithms a strong signal about the proper title to use, we recommend updating both.

If you want to learn more, watch Matt Cutts’s video about optimizing PDF files for search, and visit our Help Center for information about the content types we’re able to index. If you have feedback or suggestions, please let us know in the Webmaster Help Forum.
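For the X-Robots-Tag and HTTP-header-canonical techniques mentioned in the answers above, here is a hedged sketch of how they might be set up on an Apache server with mod_headers enabled (the file names and URLs are placeholders, not from the thread):

```apache
# Keep all PDFs out of the index (the noindex technique above)
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>

# Or point a specific PDF at its preferred HTML version
# (the "canonical in the HTTP headers" technique above)
<Files "whitepaper.pdf">
  Header set Link '<http://www.example.com/whitepaper.html>; rel="canonical"'
</Files>
```

Other servers can send the same two headers; the header values, not the Apache syntax, are what matter.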
As an end-user, my personal preference would be a regular webpage over a PDF. A PDF will of course still get indexed and can be of great value, but I personally would go with the individual page rather than the extra clicking it takes to download or view a PDF. If a person does not have the right plug-in installed in their browser, the PDF cannot appear in the browser and will have to be downloaded, and some people are apprehensive about downloading anything at all. For instance, this PDF is something that does not rank as well as its website counterpart.
I hope this has been of help to you. Sincerely, Thomas
-
I apologize for that confusing post. I am using a voice recognition system, so I apologize for any errors.
I should have worded my question differently. Instead of assuming with "I assume that you're using WordPress, correct?", I should have asked whether or not you are using WordPress. If you were using WordPress, in all honesty it would be easier to reference a plug-in for pagination, and that's why I was asking.
What I said below was unfortunately an error, and I apologize for that:
"I would of course index your house.
I don't know what this means."
What I meant was that I would of course index the webpage. I'm sorry for the error.
"What exactly should I check using OSE, and what actions should I take in response to what findings?"
Please check the inbound links to your articles that you know come from quality websites. If you were to lose those links, even the ones pointing to partial articles rather than full articles, do you believe that your site would lose PageRank?
My reference to Open Site Explorer was in reply to your statement of the problem: that inbound links will not count toward the authority of the individual pages and that inbound links to the full article will be worthless.
"The problem: Inbound links to individual articles in the series will only count toward the authority of those individual pages, and inbound links to the full article will be worthless."
Using Open Site Explorer you can figure out how many inbound links of value are pointing to your articles. OSE gives you the totals for your root domain as well as for each page and subdomain. Good links, regardless of whether they point at individual articles or at a single page, make a stronger website altogether if they point to any page on your site at all; they give you stronger domain trust along with stronger PageRank.
Please review these articles about the new pagination options. I would strongly recommend handling your pages the way they describe:
http://searchengineland.com/google-provides-new-options-for-paginated-content-92906
http://googlewebmastercentral.blogspot.com/2011/09/view-all-in-search-results.html
http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
I am very sorry that my first answer was not very helpful to you, and I appreciate you letting me know that I made an error. I sincerely hope this one is of more help and answers your question fully.
New Handling of View All Pages
Google has been evolving their detection of a series of component pages and the corresponding view all page. When you have a view all page and paginated URLs with a detectable pattern, Google clusters those together and consolidates the PageRank value and indexing relevance. Basically, all of the paginated URLs are seen as components in a series that rolls up to the view all page. In most cases, Google has found that the best experience for searchers is to rank the view all page in search results. (You can help this process along by using the rel=”canonical” attribute to point all pages to the view all version.)
If You Don’t Want The View All Page To Rank Instead of Paginated URLs
If you don’t want the view all version of your page shown and instead want individual paginated URLs to rank, you can block the view all version with robots.txt or meta noindex. You can also use the all new rel=”next”/rel=”prev” attributes, so read on!
New Pagination Options
If you don’t have a view all page, or you don’t want the view all page to be what appears in search results, you can use the new attributes rel=”next” and rel=”prev” to cluster all of the component pages into a single series. All of the indexing properties for all components in the series are consolidated and the most relevant page in the series will rank for each query. (Yay!)
You can use these attributes for article pagination, product lists, and any other types of pagination your site might have. The first page of the series has only a rel=”next” attribute and the last page of the series has only a rel=”prev” attribute, and all other pages have both. You can still use the rel=”canonical” attribute on all pages in conjunction.
Typically, in this setup, as Google sees all of these component pages as series, the first page of the series will rank, but there may be times when another page is more relevant and will rank instead. In either case, the indexing signals (such as incoming links) are consolidated and shared by the series.
Make sure that the rel=”next” and rel=”prev” values match the URL as displayed (even if it’s non-canonical), as the values across the series have to match up (you will likely need to write the values dynamically based on the display URL).
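As a sketch of the markup described above, assume a hypothetical three-page series with a view-all version (all URLs here are illustrative, not from the thread); the rel=canonical lines apply only if you want the view-all page to be the version that ranks:

```html
<!-- Page 1 of the series: only rel="next", no rel="prev" -->
<link rel="canonical" href="http://www.example.com/article-view-all/" />
<link rel="next" href="http://www.example.com/article?page=2" />

<!-- Page 2 (a middle page): both rel="prev" and rel="next" -->
<link rel="canonical" href="http://www.example.com/article-view-all/" />
<link rel="prev" href="http://www.example.com/article?page=1" />
<link rel="next" href="http://www.example.com/article?page=3" />

<!-- Page 3 (the last page): only rel="prev" -->
<link rel="canonical" href="http://www.example.com/article-view-all/" />
<link rel="prev" href="http://www.example.com/article?page=2" />
```

These tags belong in the <head> of each component page.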
There are lots of intricacies to consider here, and I’m working on an in-depth article that runs through everything that came up in the session, so if you have questions, post them here and I’ll add them in!
if you strongly desire your view-all page not to appear in search results: 1) make sure the component pages in the series don’t include rel=”canonical” to the view-all page, and 2) mark the view-all page as “noindex” using any of the standard methods
A few points to mention:
-
The first page only contains rel=”next” and no rel=”prev” markup.
-
Pages two to the second-to-last page should be doubly-linked with both rel=”next” and rel=”prev” markup.
-
The last page only contains markup for rel=”prev”, not rel=”next”.
-
rel=”next” and rel=”prev” values can be either relative or absolute URLs (as allowed by the <link> tag). And, if you include a <base> link in your document, relative paths will resolve according to the base URL.
-
rel=”next” and rel=”prev” only need to be declared within the <head> section, not within the document <body>.
-
We allow rel=”previous” as a syntactic variant of rel=”prev” links.
-
rel="next" and rel="previous" on the one hand and rel="canonical" on the other constitute independent concepts. Both declarations can be included in the same page. For example, http://www.example.com/article?story=abc&page=2&sessionid=123 may contain:
-
rel=”prev” and rel=”next” act as hints to Google, not absolute directives.
-
When implemented incorrectly, such as omitting an expected rel="prev" or rel="next" designation in the series, we'll continue to index the page(s), and rely on our own heuristics to understand your content.
Sincerely,
Thomas
-
-
Well, the individual articles are entries in a blog while the consolidated article is a separate page altogether. I could 301 the individual article URLs all over to the consolidated article I suppose, but I'd rather not for usability reasons. For the end users I think the ideal really is to have all the individual blog posts as they are, but provide access to the consolidated article.
Should I make the consolidated article a PDF download? Let them all be indexed without canonicals or 301s? I just don't know the best practice here.
-
Since you have consolidated the content onto a single page, why not 301 the old pages to the new consolidated page, which has all the content from the old ones?
If you can answer NO to both of these questions, then I would suggest doing a 301:
- Can a user land on the old page and get any information he/she would not have received from the new page?
- Can you think of any reason why the user would want to see the old page ( as opposed to the new one ) ?
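If the answer to both questions is NO, the 301s themselves are simple. A sketch for Apache's mod_alias, using hypothetical paths in the style of this thread (substitute the real article URLs):

```apache
# Permanently redirect each individual article to the consolidated page
Redirect 301 /article-one/ http://www.example.com/article-all/
Redirect 301 /article-two/ http://www.example.com/article-all/
```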
-
Expresso, why not just 301 the individual pages to the consolidated article?
-
I assume that you're using WordPress correct?
Nope.
Normally I would say use [rel=canonicals] 100% of the time. For instance if I had a blog post example.com/ABC/
I would obviously use a [rel=canonical]
Same thing if it is example.com/blog/ABC/
use rel=canonicals
To clarify: I am not talking about placing a rel=canonical on a resource located by more than one URL (eg. www.example.com/article-one/ and www.example.com/article-one/?fp=email). I am talking about placing a rel=canonical on each resource in a series, that points to a distinct resource that contains all of the content from all of the resources (eg. www.example.com/article-one/ -> www.example.com/article-all/, www.example.com/article-two/ -> www.example.com/article-all/, etc.).
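In markup terms, the pattern just described is nothing more than a rel=canonical in the <head> of each component resource pointing at the superset resource (using the example URLs above):

```html
<!-- In the <head> of www.example.com/article-one/ -->
<link rel="canonical" href="http://www.example.com/article-all/" />

<!-- In the <head> of www.example.com/article-two/ -->
<link rel="canonical" href="http://www.example.com/article-all/" />
```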
I would of course index your house.
I don't know what this means.
In all honesty, I would check using Open Site Explorer before you actually change anything, and make sure that your inbound links are not the problem that you're talking about.
What exactly should I check using OSE, and what actions should I take in response to what findings?
-
I assume that you're using WordPress correct?
If you were willing to post the domain I can tell you whether or not you should use rel=canonicals
Normally I would say use rel=canonical 100% of the time. For instance, if I had a blog post example.com/ABC/
I would obviously use a rel=canonical
Same thing if it is example.com/blog/ABC/
use rel=canonical
I would of course index your website. If it is only showing partial views of your content and you use rel=canonical, you will not have to worry about duplicate content issues.
If you're talking about simply changing the post so that it counts as a full page, I believe you can already do that; you don't have to worry about the page example.com/blog/ taking all the PageRank and leaving you with nothing, and your articles will rank. Alternatively, you can simply create new pages instead of new posts in WordPress, and that way you would get the complete inbound link juice to that one single page.
In all honesty, I would check using Open Site Explorer before you actually change anything, and make sure that your inbound links are not the problem that you're talking about.
http://www.opensiteexplorer.org/
I also recommend using managed WordPress hosting from WP Engine, ZippyKid, Web Synthesis, or Pagely; they truly are worth every cent with the added speed and helpfulness of a WordPress-only host.
I hope this is of help. Sincerely,
Thomas