NOINDEX,FOLLOW on product pages
-
Hi
Can I have people's thoughts on something, please? We sell wedding stationery, and while we can generate lots of good content describing a particular range of stationery, we can't realistically differentiate at a product level. So imagine we have three ranges:
Range 1 - A Bird
Range 2 - A Heart
Range 3 - A Flower
Within each of these ranges we would have invitations, menus, place cards, magnets, etc. The ranges vary quite a lot, so we can write good, keyword-rich textual descriptions that attract traffic (i.e. one about the bird, one about the heart and one about the flower). However, the individual products within a range just reflect the design for the range as a whole (as all items in a range match). Therefore we can't just copy the content down to the product level, and if we just describe the generic attributes of the products they will all be very similar. We easily have over 1,000 "products", so I am conscious of creating too much duplication across the site in case Mr Panda comes to call.
So I was thinking that I "might" NOINDEX, FOLLOW the product pages to avoid this duplication and put lots of effort into making my category pages much better and content-rich. The site would be smaller in the index, BUT I do not really expect to generate traffic from the product pages anyway: they are not branded items, and any searches for particular features of our stationery would be picked up, much more effectively, by the category pages.
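If it helps to make the idea concrete, here's a minimal sketch of what that would mean in the <head> of each product page (the robots meta tag is the standard directive; the comment is just illustrative):

```html
<!-- Keep this product page out of the index, but let crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```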
Any thoughts on this one?
Gary
-
Thanks for helping me bounce the ideas around. Always valuable comments from SeoMozzers! Have a good day!
-
Yes, that's a very good idea. It's a much stronger signal in its execution than the noindex/follow method, given my concerns.
-
OK, thanks for your detailed response.
I am wondering whether I might just have a dynamically generated URL for the "product level" pages, i.e. page.php?id=1, page.php?id=2, etc., and then have a canonical tag that is the same across them all. I could then limit the indexable product pages to those that are genuinely different in some way. That way, I can avoid the noindex,follow issue, have very few duplicate product pages and avoid Panda-related issues. Sound sensible?
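As a rough sketch of that setup (the URLs and the canonical target here are hypothetical; the href would be whichever single page you want the duplicate signals consolidated to, e.g. the range page):

```html
<!-- In the <head> of page.php?id=1, page.php?id=2, etc. -->
<link rel="canonical" href="https://www.example.com/range/a-bird/">
```

Note that rel="canonical" is a hint, not a directive, so Google may ignore it if the product pages don't look like near-duplicates of the canonical target.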
Gary
-
The issue with a high volume of noindex,follow pages is understanding intent, and potentially having the message be confused. "We have x pages with noindex,follow" means "we don't think these pages are important enough to index, but the links on them are". Except if those links exist elsewhere, on enough indexed pages, what's the point being made?
Is it an attempt to artificially boost the signals for the pages being linked to, by saying "look at all these extra links we have pointing to these other pages"? That's the concern, especially since the implementation of over-optimization factors in the algorithms. While it may not be Google's intent to devalue a site due to innocent behavior, its ability to understand that algorithmically is limited.
Over the past year and a half I've seen more and more situations where Google's many layers of algorithmic decisions have resulted in client sites suffering because of a lack of human review that can determine "this was not an intentional attempt to over-optimize". I've seen it with internal linking, I've seen it when use of noindex/follow conflicts with canonical signals, and I've seen it where either of those conflicts with robots.txt instructions.
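To illustrate the kind of conflict described, here's a hypothetical page sending two contradictory instructions at once — a combination worth avoiding:

```html
<!-- Contradictory signals on one page:
     "noindex" says "drop this URL from the index entirely",
     while the canonical says "treat this as a duplicate of another URL that should be indexed". -->
<meta name="robots" content="noindex, follow">
<link rel="canonical" href="https://www.example.com/some-other-page/">
```

(And blocking the same URL in robots.txt would add a third conflict: the crawler couldn't fetch the page to see either tag in the first place.)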
While no single case is guaranteed to be problematic (hundreds of factors are evaluated across multiple algorithms), no single case is guaranteed to be safe either, and as a professional audit consultant I am not comfortable leaving that consideration out. Thus, my view of "best practices" is to avoid potentially significant problems.
-
Alan, thanks for this. Can I check that I understand your comments? Are you suggesting that a large number of noindex,follow pages causes Google to lose interest in following the links from those pages? Do you know this to be the case through an empirical study? I like your suggestion of integrating the product purchase onto the category pages. I agree that would be ideal, but the products themselves have a lot of options and some are designed online, so it could end up quite complex. Food for thought, though, as it would be a good solution SEO-wise. I'm just a little concerned on the usability front. Gary
-
One alternative method would be to integrate the products onto their individual "range" pages, with purchase capability and options right there. You'd need to ensure the overwhelming majority of the content on those pages is still unique; however, it would avoid the potential confusion that comes from "noindex,follow" being used on a massive scale, which can itself be problematic. (Google needs to understand WHY there are so many noindex pages, and what unique links exist on those pages that you want the crawler to follow them for.)
-
Sounds like it makes sense and that you have thought it out. If the category pages are conversion-friendly, it sounds like it can be done. But if there is a way you can get the product pages to have unique content, I would personally prefer the product pages to rank. By doing what you are suggesting, you're putting the point of purchase a click further away, which isn't the end of the world.