Have Your Thoughts Changed Regarding Canonical Tag Best Practice for Pagination? - Google Ignoring rel=next/prev Tagging
-
Hi there,
We have a good-sized eCommerce client that is gearing up for a relaunch. At this point, the staging site follows the previous best practice for pagination (self-referencing canonical tags on each page, plus rel=next/prev tags referencing the previous and next pages within the category).
Knowing that Google does not support rel=next/prev tags, does that change your thinking on how to set up canonical tags within a paginated product category? Some of our categories have 500-600 products, so creating and canonicalizing to a 'view all' page is not ideal for us. That leaves us with the following options (I feel it is worth noting that we are leaving rel=next/prev tags in place):
- Leave canonical tags as-is: page 2 of the product category will have a canonical tag referencing the ?page=2 URL
- Canonicalize to page 1 of the product category on all pages within the series: page 2 would have a canonical tag referencing page 1 (/category/). This is admittedly the option I am leaning toward.
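For clarity, the two options would look something like this in the `<head>` of page 2 (a rough sketch with hypothetical URLs, assuming an example.com domain and a ?page= query parameter):

```html
<!-- Option 1: self-referencing canonical on each paginated page -->
<!-- <head> of the hypothetical URL https://example.com/category/?page=2 -->
<link rel="canonical" href="https://example.com/category/?page=2" />
<link rel="prev" href="https://example.com/category/" />
<link rel="next" href="https://example.com/category/?page=3" />

<!-- Option 2: every page in the series canonicalizes to page 1 -->
<!-- same page-2 URL, but the canonical now points at /category/ -->
<link rel="canonical" href="https://example.com/category/" />
<link rel="prev" href="https://example.com/category/" />
<link rel="next" href="https://example.com/category/?page=3" />
```

Under option 2, the canonical asserts that page 2 is a duplicate of page 1, which it strictly isn't, since the product listings differ.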
Any and all thoughts are appreciated! If this were an existing website that is not experiencing indexing issues, I wouldn't worry about it. But since we are launching a new site, now is the time to make such a change.
Thank you!
Joe
-
An old question, but I thought I'd weigh in to report that Google seems to be ignoring self-referencing pagination canonicals on a news site that I'm working on.
Pages such as /news/page/36/ have themselves as declared canonicals, but Search Console reports that Google is selecting the base page /news/ as the canonical instead.
Would be interested to know if anyone else is seeing that.
-
Hi,
I'm also very interested in what the new best approach for pagination would be.
In a lot of webshops, option 2 is used. However, this article describes the possible negative outcome of that option (search the article for 'Canonicalize to the first page'). In my opinion, this risk applies mainly to paginated blog articles, and less to paginated product listings in webshops. The root category page is the one you want to rank in the end.
What you certainly don't want is to create duplicate content. Yes, your products (and of course their links to the product pages) differ from page to page. And yes, there will also be more internal links pointing to the root category page than to the second or third results page. But if you invested time in writing content for your category, and in all the other on-page optimizations, those elements will be identical across all your results pages.
So in the end, you leave it to Google and hope it recognizes your pagination. Is this the best option? Maybe, maybe not. After all, we didn't know for several years that Google wasn't using rel=next/prev, and things mostly worked fine.
So in the end I think EffectDigital is right: just do nothing. If you see problems, I would try option 2 and use your first results page as the canonical.
-
The only thing it changes, IMO, is that you can delete the rel=prev/next tags to save on code bloat. Other than that, nothing changes in my opinion. It's still best to allow Google to rank paginated URLs if Google chooses to do so - that usually happens for a reason!
I might even lift the self-referencing canonicals. Just leave the pages without directives of any kind and force Google to determine what to do with them via URL structure ('?p=', '/page/', '?page=', etc.). If Google is so confident it doesn't need these tags now, maybe adding any directives at all just creates polluting signals that unnecessarily interfere.
In the end, I think I'd just strip it all off, monitor, and see what happens.