Rel=next/prev for paginated pages: so no need for "noindex, follow"?
-
I have a real estate website and use rel=next/prev on paginated real estate result pages. I understand that "noindex, follow" is not needed on paginated pages when rel=next/prev is in place. However, my case is a bit unique: this is a real estate site, and the listings also appear on competitors' sites. So I thought that if I added "noindex, follow" to the paginated pages, it would reduce the amount of duplicate content indexed on my site and ultimately help my site rank well.
Again, I understand "noindex, follow" is not needed for paginated pages when using rel=next/prev, but since my content will probably be considered fairly duplicate, I wonder whether I should add it anyway.
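For concreteness, here is a rough sketch of what the head of a paginated page might look like under the setup being weighed here; the /listings URLs are hypothetical placeholders, not the site's actual structure:

```html
<!-- Hypothetical <head> of page 2 in a paginated listings series -->
<link rel="prev" href="https://example.com/listings?page=1">
<link rel="next" href="https://example.com/listings?page=3">

<!-- The addition being considered in this question: -->
<meta name="robots" content="noindex, follow">
```

With both present, crawlers can still follow links through to the individual listing pages while the paginated result pages themselves stay out of the index.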
-
Adding canonical tags does not sound right, since the paginated pages are not duplicates but rather part of a series, which I have already addressed by adding rel=next/prev.
-
Hi,
Since these MLS listings are technically not your content but listings from other sites, including the noindex would be on the safe side. However, I checked a few other real estate websites, and it seems they use canonical tags pointing to the search page instead of a noindex tag.
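As a sketch of that alternative (with placeholder URLs), each paginated result page would carry a canonical pointing back at the main search page:

```html
<!-- On a paginated page such as /listings?page=2 -->
<link rel="canonical" href="https://example.com/listings">
```

Worth noting: Google has cautioned against canonicalizing the component pages of a series to the first page, since page 2 is not a true duplicate of page 1, which is exactly the concern raised elsewhere in this thread.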
-
Thank you. Let me clarify: all real estate agents post their listings into the "MLS", and any agency can upload these MLS listings to its own website. My site carries these MLS listings, which means they also appear on many other sites. I have lots of unique content and no duplicate content issues of my own; the duplication comes from the MLS listings I show, which can also be seen on 100+ other real estate sites. I use rel=next/prev, and according to a Google blog post I read, there is no need to include "noindex, follow" on such paginated pages. However, in my case I thought it might make sense to add "noindex, follow", since these are MLS property listings, and doing so would reduce the amount of duplicate content on my site being indexed.
I appreciate your view on this, and I would appreciate your reasoning for why you would or would not "noindex, follow" these paginated MLS pages.
-
One doesn't relate to the other.
I personally wouldn't go with the noindex,follow if the duplicate content is on other sites besides yours. If you are having duplicate content issues within your site, that's a different story.
If your content within your site is unique, then do not add the noindex (keep the prev/next). Just try to make your pages stand out from your competitors.
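In other words, the recommended setup keeps only the pagination hints and no robots meta tag; a minimal sketch for a hypothetical page 2 (placeholder URLs):

```html
<link rel="prev" href="https://example.com/listings?page=1">
<link rel="next" href="https://example.com/listings?page=3">
<!-- no <meta name="robots" content="noindex, follow"> here -->
```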
On the other hand, if you are getting duplicate content warnings within your own site, then you could perhaps use the noindex or find another solution to avoid duplicate content.
Hope that helps!
Related Questions
-
E-Commerce Site Collection Pages Not Being Indexed
Hello Everyone, So this is not really my strong suit, but I'm going to do my best to explain the full scope of the issue and really hope someone has any insight. We have an e-commerce client (can't really share the domain) that uses Shopify; they have a large number of products categorized by Collections. The issue is that when we do a site: search of our Collection pages (site:Domain.com/Collections/), they don't seem to be indexed. Also, not sure if it's relevant, but we also recently did an overhaul of our design. Because we haven't been able to identify the issue, here's everything we know/have done so far: We ran a Moz Crawl Check, and the Collection pages came up. We checked Organic Landing Page analytics (source/medium: Google), and the pages are getting traffic. We submitted the pages to Google Search Console. The URLs are listed in the sitemap.xml, but when we tried to submit the Collections sitemap.xml to Google Search Console, 99 were submitted but none came back as being indexed (unlike our other pages and products). We tested the URL in GSC's robots.txt tester and it came up as "allowed", but just in case, below is the language used in our robots:
Intermediate & Advanced SEO | Ben-R
User-agent: *
Disallow: /admin
Disallow: /cart
Disallow: /orders
Disallow: /checkout
Disallow: /9545580/checkouts
Disallow: /carts
Disallow: /account
Disallow: /collections/+
Disallow: /collections/%2B
Disallow: /collections/%2b
Disallow: /blogs/+
Disallow: /blogs/%2B
Disallow: /blogs/%2b
Disallow: /design_theme_id
Disallow: /preview_theme_id
Disallow: /preview_script_id
Disallow: /apple-app-site-association
Sitemap: https://domain.com/sitemap.xml
A Google cache: search currently shows a collections/all page we have up that lists all of our products. Please let us know if there are any other details we could provide that might help. Any insight or suggestions would be very much appreciated. Looking forward to hearing all of your thoughts! Thank you in advance. Best,
-
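One observation on the robots.txt in the question above: the only collection rules block URLs containing a "+" (Shopify's tag-filter URLs), so a plain collection page should be crawlable. A quick way to sanity-check this outside of GSC's tester is Python's built-in robots.txt parser; domain.com and the collection paths below are the question's placeholders, not real URLs:

```python
from urllib.robotparser import RobotFileParser

# The collection-related subset of the robots.txt from the question above
robots_txt = """\
User-agent: *
Disallow: /collections/+
Disallow: /collections/%2B
Disallow: /collections/%2b
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A plain collection page is not matched by any Disallow rule...
print(parser.can_fetch("*", "https://domain.com/collections/shirts"))    # True
# ...but a URL whose path starts with /collections/+ is blocked
print(parser.can_fetch("*", "https://domain.com/collections/+special"))  # False
```

If the plain collection URLs come back as fetchable here too, the indexing problem most likely lies elsewhere (e.g. canonical tags or page quality) rather than in robots.txt.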
Best to Combine Listing URLs? Are 300 Listing Pages a "Thin Content" Risk?
We operate www.metro-manhattan.com, a commercial real estate website. There are about 550 pages. About 300 pages are for individual listings and about 150 are for buildings. Most of the listing pages have 180-240 words. Would it be better from an SEO perspective to have multiple listings on a single page, say all Chelsea listings on the Chelsea neighborhood page? Are we shooting ourselves in the foot by having separate URLs for each listing? Are we at risk of a thin content Google penalty? Would the same apply to the building pages (about 150)? Sample listing: http://www.nyc-officespace-leader.com/listings/364-madison-ave-office-lease-1802sf Sample building: http://www.nyc-officespace-leader.com/for-a-new-york-office-space-rental-consider-one-worldwide-plaza-825-eighth-avenue My concern is that the existing site architecture may result in some form of Google penalty. If we have to consolidate these pages, what would be the best way of doing so? Thanks,
Intermediate & Advanced SEO | Kingalan1
Alan
-
How does Googlebot evaluate performance/page speed on Isomorphic/Single Page Applications?
I'm curious how Google evaluates pagespeed for SPAs. Initial payloads are inherently large (resulting in 5+ second load times), but subsequent requests are lightning fast, as these requests are handled by JS fetching data from the backend. Does Google evaluate pages on a URL-by-URL basis, looking at the initial payload (and "slow"-ish load time) for each? Or do they load the initial JS+HTML and then continue to crawl from there? Another way of putting it: is Googlebot essentially "refreshing" for each page and therefore associating each URL with a higher load time? Or will pages that are crawled after the initial payload benefit from the speedier load time? Any insight (or speculation) would be much appreciated.
Intermediate & Advanced SEO | mothner
-
Rel=canonical on image pages
Hi, I'm working on a WordPress-hosted blog site. I recently did a "site:" search in Google for a specific article page to make sure it was getting crawled, and it returned three separate URLs in the search results. One was the article page, and the other two were the URLs that host the images found in the article. Would you suggest adding the rel=canonical tag to the pages that host the images so they point back to the actual article page? Or are they fine being left alone? Thank you!
Intermediate & Advanced SEO | dbfrench
-
Indexed nonexistent pages; the problem appeared after we 301'd the url/index to the url.
I recently read that if a site has two live pages such as http://www.url.com/index and http://www.url.com/, they will come up as duplicates, and that it's best to 301 redirect http://www.url.com/index to http://www.url.com/. This helps avoid duplicate content and keeps all the link juice on one page. We did the 301 for one of our clients, and we got about 20,000 errors for pages that do not exist. The errors are for pages that are indexed but do not exist on the server. We are assuming that these indexed (nonexistent) pages are somehow linked to http://www.url.com/index. The links are showing 200 OK. We took off the 301 redirect from the http://www.url.com/index page; however, we still have two exact duplicate pages, http://www.url.com/index and http://www.url.com/. What is the best way to solve this issue?
Intermediate & Advanced SEO | Bryan_Loconto
-
Trailing slash and rel="canonical"
Our website is in a directory format: http://www.website.com/website.asp. Our homepage display URL is http://www.website.com, which currently matches our rel="canonical" to eliminate the possibility of duplicate content. However, I noticed that in the SERPs, Google displays the homepage with a trailing slash: http://www.website.com/ My question: should I change the rel="canonical" to have a trailing slash? I noticed one of our competitors uses the trailing slash in their rel="canonical". Do the potential benefits outweigh the risks? I can PM further information if necessary. Thanks for the assistance in advance...
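For illustration, the change being considered is just the trailing slash on the canonical URL (using the question's placeholder domain):

```html
<!-- current -->
<link rel="canonical" href="http://www.website.com">
<!-- with trailing slash, matching what Google displays -->
<link rel="canonical" href="http://www.website.com/">
```

For the root of a host specifically, the empty path and "/" identify the same resource, so the risk either way is low; matching the form Google already displays in the SERPs is the tidier choice.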
Intermediate & Advanced SEO | BethA
-
Help with setting up 301 redirects from /default.aspx to the "/" in ASP.NET using MasterPages?
Hi SEOMoz Moderators and Staff, My web developer and I are having a world of trouble setting up the best way to 301 redirect from www.tisbest.org/default.aspx to www.tisbest.org, since we use sessions very heavily in our ASP.NET site built on MasterPages. We're hoping for some help, since our homepage has dropped 50+ positions for all of our search terms since our first attempt at setting this up 10 days ago. = ( A very bad result. We've rolled back the redirects after realizing that our session system was redirecting www.tisbest.org back to www.tisbest.org/default.aspx?AutoDetectCookieSupport=1, which would redirect to a URL with the session ID like this one: http://www.tisbest.org/(S(whukyd45tf5atk55dmcqae45))/Default.aspx, which would then redirect again and throw the spider into an unending redirect loop. The Google gods got angry, stopped indexing the page, and we are now missing from our previous rankings, though, thankfully, several of our other pages do still exist on Google. So, has anyone dealt with this issue? Could this be solved by simply setting up the 301 redirects again and also configuring ASP.NET to recognize Google's spider as supporting cookies, and thus not serving it the session ID that has caused issues for us in the past? Any help (even just commiserating!) would be great. Thanks! Chad
Intermediate & Advanced SEO | TisBest
-
Why are so many pages indexed?
We recently launched a new website, and it doesn't consist of that many pages. When you do a "site:" search on Google, it shows 1,950 results. Obviously we don't want this to be happening, and I have a feeling it's affecting our rankings. Is this just a straight-up robots.txt problem? We addressed that a while ago and the number of results isn't going down. It's very possible that we still have it implemented incorrectly. What are we doing wrong, and how do we start getting pages un-indexed?
Intermediate & Advanced SEO | MichaelWeisbaum