Category Pages - Canonical, Robots.txt, Changing Page Attributes
-
A site has category pages structured like this: www.domain.com/category.html, www.domain.com/category-page2.html, and so on.
This is producing duplicate meta descriptions (page titles have page numbers in them so they are not duplicate). Below are the options that we've been thinking about:
a. Keep meta descriptions the same except for adding a page number (this would keep internal juice flowing to products that are listed on subsequent pages). All pages have unique product listings.
b. Use canonical tags on subsequent pages and point them back to the main category page.
c. Robots.txt on subsequent pages.
d. ?
Options b and c will orphan, or at least choke off link juice to, some of our product pages.
Any help on this would be much appreciated. Thank you.
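For concreteness, here is roughly what options (b) and (c) would look like in practice; the URLs are placeholders based on the pattern above, not from an actual implementation:

```html
<!-- Option (b): on category-page2.html, category-page3.html, etc. -->
<link rel="canonical" href="http://www.domain.com/category.html" />
```

```
# Option (c): robots.txt prefix rule; this matches category-page2.html,
# category-page3.html, and so on. Note it blocks crawling entirely, so
# links on those pages stop being followed - which is the orphaning worry.
User-agent: *
Disallow: /category-page
```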
-
I see. I think the concern is with duplicate content though, right?
-
Either way, it will be tough to go that route and still get indexed. It's a pagination issue that everyone would like a solution to, but there just isn't one. It won't hurt you to do this, but it won't ultimately get all those pages indexed like you want.
-
Disagree. I think you are missing out big time here: category pages are the bread and butter of eCommerce sites. Search engines have confirmed that these pages are of high value to users, and they give you a chance to have optimized static content on a page that also shows product results. All the major e-retailers rely heavily on these pages (Amazon, eBay, Zappos, etc.).
-
Sorry, I don't think I was clear. The page titles and meta descriptions would be unique, but they would be almost the same except for saying "Page [x]" somewhere within them.
-
Option A doesn't do anything for you. I think the search engines flag duplicated title tags, even with different products on the page.
-
Thanks for the comprehensive response, Ryan; really great info here!
Would option A be out of the question in your mind because the page attributes would be too similar, even though unique content is on all the subsequent category pages? I know this method isn't typical; however, it would be the most efficient way to address the issue.
Note: A big downside to this is also that we would have multiple pages targeting the same keyword. However, since the main category pages are getting more link love both internally and externally, would it still hurt to have all those subsequent pages indexed?
-
Ahh... the ultimate IA question that still doesn't have a clear answer from the search engines. There was a ton of talk about this at the recent SMX Advanced in Seattle (as there is at almost every one). I'll try to summarize the common sentiment I gathered from other pros. I won't claim this is the correct way, but for now this is what I heard a bunch of people agree on:
- Add a "noindex, follow" meta robots tag to all paginated pages except page 1, so the pagination links are still followed.
- Do not block/handle it with robots.txt (in your case you really can't anyway, since you have no identifying parameters in your URLs).
- If you had pagination parameters in the URL, you could also manage those in Google and Bing Webmaster Tools by telling the search engines to ignore those parameters.
- Canonical to page 1 is a strategy that some retailers were using, and others want to try. Google reps have said this is not the way to do it, but others claim success with it.
- If you have a "View All" link that displays all the products in longer form on a single page, canonical to that page (if it's a reasonable size).
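A minimal sketch of the first and last bullets, assuming the markup below goes in the head of page 2 and beyond (the "view all" URL is hypothetical):

```html
<!-- On every paginated page EXCEPT page 1: keep it out of the index
     but let crawlers follow the product links on it. -->
<meta name="robots" content="noindex, follow" />

<!-- Alternative, if a reasonably sized "View All" page exists:
     point each paginated page at it instead. -->
<link rel="canonical" href="http://www.domain.com/category-view-all.html" />
```

Only one of the two approaches would be used on a given page.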
Notes: Depending on how your results/pages are generated, remember that they probably aren't passing "juice". Dynamically generated links are often not "flow through" links from an SEO perspective (and sometimes are not even crawled).
The better approach to not orphaning your product pages is finding ways to link to them from sources other than the results pages. For larger sites it's a hassle, but that's a challenge we all face.
Here are some SEO tips for attacking the "orphan" issue:
- If you have product feeds, create a "deal" or "price change" feed. Create a Twitter account that people can follow for these new deals or price changes on products. Push your feed into tweets; these will link to your product pages, creating an in-link for search engines to follow.
- You can do the same with blogs or Facebook, but not on a mass scale. Make it something a bit more useful for users, like "Top 10 deals of the week" linking to 10 products, or "Favorites for gifts". Over time, keep track of which products you recommend and make sure you eventually hit all of them. Again, the point is creating at least one inbound link for search engines to follow.
- Create a static internal "product index page" (this is not your sitemap page, FYI) where, by category or some other structure, you make a static link to every product page on the site. Developers can have these links dynamically updated/inserted with some extra effort, which avoids manual updating.
- Create an XML sitemap index. Instead of everything being clumped into one XML sitemap for your site, try creating a sitemap index with your product pages in their own sitemap. This may help with indexing those pages.
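A minimal sitemap index along those lines (the filenames are hypothetical) would look something like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.domain.com/sitemap-categories.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.domain.com/sitemap-products.xml</loc>
  </sitemap>
</sitemapindex>
```

Each child file is then a normal urlset sitemap, and the index file is what you submit in Webmaster Tools.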
Hope that helps? Anyone else want to chime in?
-
I think that generally speaking you want to block search engines from indexing your category pages (use your sitemap and robots.txt to do this). I could be totally wrong here, but that is how I set up my sites.
Related Questions
-
Internal search pages (and faceted navigation) solutions for 2018! Canonical or meta robots "noindex,follow"?
Intermediate & Advanced SEO | SWEMII
There seems to be conflicting information on how best to handle internal search results pages. To recap: they are problematic because these pages generally result in lots of query parameters being appended to the URL string for every kind of search, while the title, meta description and general framework of the page remain the same, which is flagged in Moz Pro Site Crawl as duplicate meta descriptions/h1s etc. The general advice these days is NOT to disallow these pages in robots.txt anymore, because there is still value in their being crawled for all the links that appear on the page. But for handling the duplicate issues, the advice splits into two camps:
1. Add a meta robots tag with "noindex,follow" to the page. This means the page will not be indexed with all its myriad queries and parameters, and so takes care of any duplicate meta/markup issues, while any other links from the page can still be crawled and indexed: better crawling and indexing of the site, although you lose any value the page itself might bring. This is the advice Yoast recommends in 2017 (https://yoast.com/blocking-your-sites-search-results/); they are adamant that Google just doesn't like or want to serve this kind of page anyway.
2. Just add a canonical link tag, which ensures that the search results page is still indexed. All the different query string URLs, and the array of results they serve, are 'canonicalised' as the same. However, this seems a bit duplicitous, as the results in the page body could all be very different. Also, all the paginated results pages would be 'canonicalised' to the main search page, which we know Google states is not correct implementation of the canonical tag: https://webmasters.googleblog.com/2013/04/5-common-mistakes-with-relcanonical.html
This picks up on an older discussion from 2012 (https://moz.com/community/q/internal-search-rel-canonical-vs-noindex-vs-robots-txt), where the advice was leaning towards using canonicals because the user was seeing a percentage of inbound traffic into these search result pages. But I wonder if that will still be the case? As the older discussion is now 6 years old, I'm just wondering if there is any new approach, or how others have chosen to handle internal search. I think a lot of the same issues occur with faceted navigation, as discussed here in 2017: https://moz.com/blog/large-site-seo-basics-faceted-navigation
-
Scary bug in Search Console: all our pages reported as being blocked by robots.txt after https migration
Intermediate & Advanced SEO | lcourse
We just migrated to https and created a new property in Search Console for the https domain 2 days ago. The Webmaster Tools account for the https domain now shows, for every page in our sitemap, the warning: "Sitemap contains urls which are blocked by robots.txt." The Search Console dashboard also shows a red triangle warning that our root domain is blocked by robots.txt.
1) When I test the URLs in the Search Console robots.txt test tool, all looks fine.
2) When I fetch as Google and render the page, it renders and indexes without problem (it would not if it were really blocked by robots.txt).
3) We temporarily emptied the robots.txt completely, submitted it in Search Console and uploaded the sitemap again: same warnings, even though no robots.txt was online.
4) We ran a Screaming Frog crawl on the whole website and it indicates that no page is blocked by robots.txt.
5) We carefully revised the whole robots.txt and it does not contain any row that blocks relevant content on our site or our root domain (the same robots.txt was online for the last decade in the http version without problem).
6) In Bing Webmaster Tools I could upload the sitemap, and so far no error is reported.
7) We resubmitted the sitemaps: same issue.
8) I already see our root domain with https in the Google SERPs.
The site is https://www.languagecourse.net. Since the site has significant traffic, if Google really interpreted our site as blocked by robots.txt for any reason, we would be in serious trouble. This is really scary, so even if it is just a bug in Search Console and does not affect crawling of the site, it would be great if someone from Google could look into the reason for it, since for a site owner this can really raise cortisol to unhealthy levels. Has anybody ever experienced the same problem? Does anybody have an idea where we could report/post this issue?
-
Ecommerce category pages & improving rankings
Intermediate & Advanced SEO | BeckyKey
Hi Moz 🙂 I work on an ecommerce site & am getting stuck with how to improve rankings on category pages. I have a competitor who writes loads of content for their category pages under tabs & they perform very well. The content isn't particularly helpful; it's more about their range and what they offer. I have tested adding similar content under a tab on some of our category pages, with some performing well & others not as well. I know this isn't ideal, and I'd like some help with an alternative. Does anyone have tips on improving rankings on category pages? I don't have much control over the layout; this is controlled by our parent company, which restricts us. I am researching writing user guides, but these will be on other pages, not directly on the category page, & the way we have to add them is a lot of manual work for our webmaster, so I can't get them up as quickly as I'd like. I have seen REI have a small bit of content at the top of their pages that links to guides, e.g. https://www.rei.com/c/static-and-rescue-ropes - but obviously their domain authority is so high already that they don't need as much help as me 🙂 At the moment I have some new chair pages I need to rank; these are competitive and any ideas would be great 🙂 Here are some examples: http://www.key.co.uk/en/key/ergonomic-office-chairs http://www.key.co.uk/en/key/executive-office-chairs Thank you!
-
I've got duplicate pages. For example, blog/page/2 is the same as author/admin/page/2. Is this something I should just ignore, or should I keep the author/admin/page/2 and then 301 redirect?
Intermediate & Advanced SEO | shift-inc
I'm going through the crawl report and it says I've got duplicate pages. For example, blog/page/2 is the same as author/admin/page/2/. Now, the author/admin/page/2 I can't even find in WordPress, but it is the same thing as blog/page/2 nonetheless. Is this something I should just ignore, or should I take the author/admin/page/2 and 301 redirect it to blog/page/2?
-
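If the redirect route were taken on Apache, a sketch (assuming mod_alias is enabled; the paths are the ones from the question) might be:

```apache
# 301 each /author/admin/page/N/ URL to the matching /blog/page/N/
RedirectMatch 301 ^/author/admin/page/([0-9]+)/?$ /blog/page/$1/
```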
Is it worth changing themes to be responsive, and risk a SERP change?
Intermediate & Advanced SEO | bizzer
I've got a site that ranks #1 for its term. It's WordPress on Thesis 1.85. The site is not responsive and cannot be, because Thesis 1.x is not (and Thesis 1.x is a dead end). I really would like my site to be responsive, but I fear changing things might affect my #1 rank. The least impactful change I could make is a move to Thesis 2.x, but I have come to really dislike the company and hate to get locked in again. There are other frameworks I would prefer to move to, but their impact on my pages' source would be much greater. So, my question is: is it worth moving to a new theme (keeping the layout looking exactly the same, although the source would look different) just to make the site responsive? Is it that important?
-
Why does the SEOmoz bot see duplicate pages even though I am using the canonical tag?
Intermediate & Advanced SEO | fablau
Hello here, today the SEOmoz bot found and marked as "duplicate content" the following pages on my website: http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=mp3 and http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=pdf. I am wondering why, considering that on both of those pages I am using a canonical tag pointing to the main product page below: http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html Shouldn't the SEOmoz bot follow the canonical directive and not report those two pages as duplicates? Thank you for any insights I am probably missing here!
-
Good category pages - do you have examples?
Intermediate & Advanced SEO | DavidLenehan
Hello all. Currently doing a major update to my e-commerce website, which sells tractor spare parts. I would like to optimize the category pages, which feature the parts from a particular manufacturer of tractor parts. Does anyone have good examples of well-optimized category pages which do not have a detrimental effect on the visual quality of the site? It is important to see the products. The best I have found is http://www.simplyelectricals.co.uk/, but I'm sure a better solution must exist. Thanks, David
-
Should I prevent Google from indexing blog tag and category pages?
Intermediate & Advanced SEO | PaulRogers
I am working on a website that has a regularly updated WordPress blog and am unsure whether or not the category and tag pages should be indexable. The blog posts are often outranked by the tag and category pages, and they are ultimately leaving me with a duplicate content issue. With this in mind, I assumed that the best thing to do would be to remove the tag and category pages from the index, but after speaking to someone else about the issue, I am no longer sure. I have tried researching online, but there isn't anything that provides any further information. Please can anyone with experience of dealing with issues like this, or with any knowledge of the topic, help me to resolve this annoying issue? Any input will be greatly appreciated. Thanks, Paul