Rel="prev" / "next"
-
Hi guys,
The tech department implemented rel="prev" and rel="next" on this website a long time ago.
We also added a self-referencing canonical tag to each page. The situation we're seeing is this: a lot of the paginated pages are still visible in the SERPs.
Is this just a case of rel="prev" and rel="next" being suggestions to Google rather than strict directives? And in this specific case, has Google decided not to show only the first page in the SERPs, but to keep showing most of the paginated pages as well? Please let me know what you think.
Regards,
Tom -
An interesting development that may be relevant to you, Ernst:
Google admitted just the other day that they "haven't supported rel=next/prev for years." https://searchengineland.com/google-apologizes-for-relnext-prev-mixup-314494
"Should you remove the markup? Probably not. Google has communicated this morning in a video hangout that while it may not use rel=next/prev for search, it can still be used by other search engines and by browsers, among other reasons. So while Google may not use it for search indexing, rel=prev/next can still be useful for users. Specifically some browsers might use those annotations for things like prefetching and accessibility purposes."
-
I was looking into this today and happened across this line in Google's Search Console Help documents:
rel="next" and rel="prev" are compatible with rel="canonical" values. You can include both declarations in the same page. For example, a page can contain both of the following HTML tags:
Here's the link to the doc - https://support.google.com/webmasters/answer/1663744?hl=en
But I wouldn't combine a canonical pointing somewhere else with the rel="next"/"prev" annotations.
-
I had never actually considered that. My thought is: no. I'd leave canonicals entirely off ambiguous URLs like that. I've seen a lot of instances lately where over-zealous sculpting has led to a loss of traffic. To be clear, this is just my hunch, but I'd remove the tag entirely. There's always risk in adding layers of unneeded complexity, even if it isn't immediately obvious.
-
I'm going to second what @effectdigital is outlining here. Google does what it wants, and sometimes it indexes paginated pages on your site. If you have things set up properly and you're still seeing paginated pages when you do a site: search in Google, then you likely need to strengthen your content elsewhere, because Google still sees those paginated URLs as authoritative for your domain.
I have a question for you, @effectdigital: do you still self-canonical alongside rel="prev"/"next"? I knew you wouldn't want to canonical to another URL, but I hadn't thought about the self-canonical case until I read what you said above. Haha.
Thanks!
-
Both are hints to Google rather than strict directives. All of the rel= family links work this way, including hreflang, alternate/mobile, AMP, and prev/next.
It's not really necessary to use a canonical tag in addition to any of the other rel= family links.
A canonical tag says to Google: "I am not the real version of this page, I am non-canonical. For the canonical version of the page, please follow this canonical tag. Don't index me at all, index the canonical destination URL"
The pagination-based prev/next links say to Google: "I am the main version of this page, or one of the other paginated URLs. Did you know that if you follow this link, you can find and index more pages of content if you want to?"
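To make that concrete, here's a hypothetical page 2 (the URLs are my illustration, not from the original question) carrying both signals at once:

    <!-- head of https://www.example.com/widgets?page=2 -->
    <link rel="canonical" href="https://www.example.com/widgets" />    <!-- "don't index me, index page 1" -->
    <link rel="prev" href="https://www.example.com/widgets" />         <!-- "page 1 comes before me in this series" -->
    <link rel="next" href="https://www.example.com/widgets?page=3" />  <!-- "there's a page 3 worth crawling" -->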
So the problem with using both is that you create the following dialogue with Google:
1.) "Hey Google. Follow this link to index paginated URLs if they happen to have useful content on"
*Google goes to paginated URL
2.) "WHAT ARE YOU DOING HERE Google!? I am not canonical, go back where you came from #buildawall"
*Google goes backwards to non-paginated URL
3.) "Hey Google. Follow this link to index paginated URLs if they happen to have useful content on"
*Google goes to paginated URL
4.) "WHAT ARE YOU DOING HERE Google!? I am not canonical, go back where you came from"
*Google goes backwards to non-paginated URL
... etc.
As you can see, it's confusing to tell Google to crawl and index URLs with one tag, then tell it not to with another. All of your indexation signals (canonical tags, other rel links, robots meta tags, the X-Robots-Tag HTTP header, your sitemap, your robots.txt) should tell the SAME logical story, not different stories that directly contradict each other.
If you point to a web page via any indexation method (rel links, sitemap links), don't then turn around and say "actually, I've changed my mind, I don't want this page indexed" by canonicalling that URL elsewhere. If you didn't want a page to be indexed, don't point to it via other indexation methods in the first place.
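For example, if a paginated URL genuinely shouldn't be indexed, every channel that can speak should say so (a hypothetical setup; the second snippet is the HTTP-header equivalent of the meta tag):

    <!-- in the HTML head -->
    <meta name="robots" content="noindex, follow" />

    # in the HTTP response headers
    X-Robots-Tag: noindex, follow

...and the same URL should also be left out of the XML sitemap, so nothing invites Google back in.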
A) If you do want those URLs to be indexed by Google:
1) Keep in mind that by using rel="prev"/"next", Google will know they are pagination URLs and won't weight them very strongly. If, however, Google decides that some paginated content is very useful, it may decide to rank those URLs.
2) If you want this, remove the canonical tags and leave the rel="prev"/"next" deployment as-is (see the option A sketch below the list).
B) If you don't want those URLs to be indexed by Google:
1) Even then, the canonical is only a hint and Google can disregard it, but it will be much more effective because you won't be contradicting yourself.
2) Remove the rel="prev"/"next" markup completely from the paginated URLs. Leave the canonical tag in place, and also add a meta noindex tag to the paginated URLs (see the option B sketch below the list).
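Sketches of both setups for a hypothetical page 2 (illustrative URLs, not from the original site):

    Option A - let paginated URLs be indexed (prev/next only, no canonical):

    <link rel="prev" href="https://www.example.com/widgets" />
    <link rel="next" href="https://www.example.com/widgets?page=3" />

    Option B - keep paginated URLs out of the index (no prev/next; canonical plus noindex):

    <link rel="canonical" href="https://www.example.com/widgets" />
    <meta name="robots" content="noindex" />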
Keep in mind that just because you block Google from indexing the paginated URLs, it doesn't necessarily mean the non-paginated URL will rank in the same place (with the same power) as the paginated URLs, which will be mostly lost from the rankings. You may get lucky there, you may not. It depends on how similar the content of the two URLs is, and on whether Google's perceived reason to rank a URL hinged on a piece of content that exists only in the paginated variant.
My advice? Don't be a control freak with option (B). Use option (A) instead. Free traffic is free traffic; don't turn your nose up at it.