Value in adding rel=next/prev when pages 2-n are "noindex, follow"?
-
Category A spans over 20 pages (a "view all" page isn't possible because it would get far too long), so I have pages 1-20. Page 1 has unique content, whereas pages 2-20 of the series do not. I have "noindex, follow" on pages 2-20, and I also have rel=next/prev markup across the series.
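For illustration, the <head> of one of the middle pages (say page 3) looks roughly like this (the example.com URLs are placeholders, not my real paths):

```html
<!-- Sketch of the <head> on a middle page (page 3); URLs are placeholders -->
<head>
  <title>Category A - Page 3</title>
  <!-- Keep this page out of the index, but let crawlers follow its links -->
  <meta name="robots" content="noindex, follow">
  <!-- Declare the page's position in the paginated series -->
  <link rel="prev" href="https://www.example.com/category-a/page/2/">
  <link rel="next" href="https://www.example.com/category-a/page/4/">
</head>
```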
Question: Since pages 2-20 are "noindex, follow", doesn't that defeat the purpose of rel=next/prev? Don't I run the risk of Google thinking, "Hmm, this is odd. This website has noindexed pages 2-20, yet it's using rel=next/prev"?
And even if I don't run that risk, what is the downside of keeping rel=next/prev when, again, pages 2-20 are noindex, follow?
Thank you!
-
I don't see a downside to keeping rel=next/prev; I see only an upside.
Google doesn't have to obey the noindex directive, although it almost always does. Furthermore, adding rel=next/prev markup makes your code more correct, and since Google isn't the only service that looks for these tags, I'd feel more comfortable having them present.
From an accessibility standpoint, it also makes sense to keep the rel=next/prev tags.