Have Your Thoughts Changed Regarding Canonical Tag Best Practice for Pagination? - Google Ignoring rel=next/prev Tagging
-
Hi there,
We have a good-sized eCommerce client that is gearing up for a relaunch. At this point, the staging site follows the previous best practice for pagination (self-referencing canonical tags on each page; rel=next and rel=prev tags referencing the next and previous pages within the category).
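For concreteness, here is roughly what the head of page 2 looks like under that setup - a sketch only, with placeholder example.com URLs and assuming a ?page= parameter:
<!-- Page 2 of /category/: self-referencing canonical plus rel=prev/next (placeholder URLs) -->
<link rel="canonical" href="https://www.example.com/category/?page=2" />
<link rel="prev" href="https://www.example.com/category/" />
<link rel="next" href="https://www.example.com/category/?page=3" />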
Knowing that Google does not support rel=next/prev tags, does that change your thoughts on how to set up canonical tags within a paginated product category? We have some categories with 500-600 products, so creating and canonicalizing to a 'view all' page is not ideal for us. That leaves us with the following options, both sketched below (I feel it is worth noting that we are leaving the rel=next/prev tags in place):
- Leave canonical tags as-is; page 2 of the product category will have a canonical tag referencing the ?page=2 URL.
- Reference page 1 of the product category on all pages within the category series; page 2 would have a canonical tag referencing page 1 (/category/). This is admittedly what I am leaning toward.
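Sketching the canonical tag on page 2 under each option, with the same placeholder URLs:
<!-- Option 1: self-referencing canonical on page 2 -->
<link rel="canonical" href="https://www.example.com/category/?page=2" />
<!-- Option 2: canonical pointing back to page 1 -->
<link rel="canonical" href="https://www.example.com/category/" />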
Any and all thoughts are appreciated! If this were an existing website that is not experiencing indexing issues, I wouldn't worry about it. But since we are launching a new site, now is the time to make such a change.
Thank you!
Joe
-
An old question, but I thought I'd weigh in to report that Google seems to be ignoring self-referencing pagination canonicals on a news site that I'm working on.
Pages such as /news/page/36/ declare themselves as canonical, but Search Console reports that Google is selecting the base page /news/ as the canonical instead.
Would be interested to know if anyone else is seeing that.
-
Hi,
I'm also very interested in what the new best approach for pagination would be.
In a lot of webshops, option 2 is used. However, in this article the possible negative outcome of that option is described (search the article for 'Canonicalize to the first page'). In my opinion, this applies particularly to paginated blog articles, and less to paginated product results within a webshop category. I think the root page is the one you want to rank in the end.
What you certainly don't want is to create duplicate content. Yes, your products (and of course their links to the product pages) are different on each page. And yes, there will also be more internal links pointing to the root category page than to the second or third results page. But if you invested time in writing content for your category, and in all the other on-page optimizations, these will be the same across all your result pages.
So in the end, we leave it to Google and hope that it recognizes your pagination. Is this the best option? Maybe, maybe not. After all, for several years we didn't know that Google wasn't using rel=next/prev, and mostly it worked fine.
So I think in the end EffectDigital is right: just do nothing. If you see problems, I would try option 2, using your first results page as the canonical.
-
The only thing it changes, IMO, is that I'd delete the rel=prev/next tags to save on code bloat. Other than that, nothing changes. It's still best to allow Google to rank paginated URLs if Google chooses to do so - when that happens, it usually happens for a reason!
I might lift the self-referencing canonicals too, maybe. Just leave the pages without directives of any kind and force Google to determine what to do with them via URL structure ('?p=', '/page/', '?page=', etc.). If Google is so confident it doesn't need these tags now, maybe using any directives at all just creates polluting signals that unnecessarily interfere.
In the end, I think I'd just strip it all off and monitor it, and see what happens.
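For what it's worth, a minimal sketch of that stripped-down approach, assuming a /page/ URL structure (the URL and title are placeholders): page 2 carries no canonical and no rel=prev/next, leaving the URL itself as the only pagination signal.
<!-- /category/page/2/ after stripping all directives -->
<head>
  <title>Example Category - Page 2</title>
  <!-- no rel="canonical", no rel="prev"/rel="next" -->
</head>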