Category Pages - Canonical, Robots.txt, Changing Page Attributes
-
A site has category pages like this: www.domain.com/category.html, www.domain.com/category-page2.html, etc.
This is producing duplicate meta descriptions (the page titles include page numbers, so they are not duplicates). Below are the options we've been thinking about:
a. Keep the meta descriptions the same except for adding a page number (this would keep internal juice flowing to products listed on subsequent pages). All pages have unique product listings.
b. Use canonical tags on subsequent pages and point them back to the main category page.
c. Block subsequent pages with robots.txt.
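For reference, option c would look something like the pattern below (a sketch only; the major engines support the * wildcard, but the exact pattern depends on your real URL scheme, and note that this would also match any other URL containing "-page"):

```text
User-agent: *
# Block paginated category pages (category-page2.html, category-page3.html, ...)
Disallow: /*-page
```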
d. ?
Options b and c will orphan or french fry some of our product pages.
Any help on this would be much appreciated. Thank you.
-
I see. I think the concern is with duplicate content though, right?
-
Either way, it will be tough to go that route and still get indexed. It's a pagination issue that everyone would like a solution to, but there just isn't one. It won't hurt you to do this, but it won't ultimately get all those pages indexed like you want.
-
Disagree. I think you are missing out big time here: category pages are the bread and butter of eCommerce sites. Search engines have confirmed that these pages are of high value to users, and they give you a chance to have optimized static content on a page that also shows product results. All the major e-retailers rely heavily on these pages (Amazon, eBay, Zappos, etc.).
-
Sorry, I don't think I was clear. The page titles and meta descriptions would be unique, but they would be almost the same except for saying "Page [x]" somewhere within them.
-
Option A doesn't do anything for you. I think the search engines flag duplicated title tags, even with different products on the page.
-
Thanks for the comprehensive response, Ryan; really great info here!
Would option A be out of the question in your mind because the page attributes would be too similar, even though unique content exists on all the subsequent category pages? I know this method isn't typical; however, it would be the most efficient way to address the problem.
Note: A big downside to this is also that we would have multiple pages targeting the same keyword. However, since the main category pages are getting more link love both internally and externally, would it still hurt to have all those subsequent pages indexed?
-
Ahh... the ultimate IA question that still doesn't have a clear answer from the search engines. There was a ton of talk about this at the recent SMX Advanced in Seattle (as there is at almost every one). I will try to summarize the common sentiment I gathered from other pros. I won't claim that this is the correct way, but for now this is what I heard a bunch of people agree on:
- Noindex, follow the pagination links for all pages except page 1
- Do not block/handle it with robots.txt (in your case, you really can't, since you have no identifying parameters in your URLs)
- If you had pagination parameters in the URL, you could also manage those in Google & Bing Webmaster Tools by telling the search engines to ignore those parameters.
- Canonical to page 1 is a strategy that some retailers were using, and others want to try. Google reps tried to say this is not the way to do it, but others claim success with it.
- If you have a "View All" link that displays all the products in longer form on a single page, canonical to that page (if it's reasonable)
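To make the noindex and "View All" bullets concrete, the head of a paginated page (page 2 and beyond) would carry tags like these (a sketch; "category-view-all.html" is a hypothetical URL, and the canonical is an alternative to use only if such a page actually exists):

```html
<!-- On /category-page2.html and beyond: keep it out of the index,
     but let crawlers follow links to the products listed on it -->
<meta name="robots" content="noindex, follow">

<!-- Alternative approach (only if a reasonable View All page exists): -->
<link rel="canonical" href="http://www.domain.com/category-view-all.html">
```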
Notes: Depending on how your results/pages are generated, remember that they probably aren't passing "juice". Dynamically generated links are usually not "flow-through" links from an SEO perspective (and sometimes aren't even crawled).
The better approach to not orphaning your product pages is finding ways to link to them from sources besides the results pages. For larger sites it's a hassle, but that's a challenge we all face.
Here are some SEO tips for attacking the "orphan" issue:
- If you have product feeds, create a "deal" or "price change" feed. Create a Twitter account that people can follow for these new deals or price changes on products. Push your feed into tweets, and these will link to your product pages, creating an in-link for search engines to follow.
- You can do the same with blogs or Facebook, but not on a mass scale. Do something a bit more useful for users, like "top 10 deals of the week" linking to 10 products, or "favorites for gifts", or something similar. Over time, you can keep track of which products you recommend and make sure you eventually hit all your products. Again, the point is creating at least one inbound link for search engines to follow.
- Create a static internal "product index page" (this is not your sitemap page, FYI) where, either by category or some other structure, you make a static link to every product page on the site. Developers can have these links dynamically updated/inserted with some extra effort, which avoids manual updates.
- Create an XML sitemap index. Instead of everything being clumped into one XML sitemap for your site, try creating a sitemap index with your product pages in their own sitemap. This may help with indexing those pages.
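A sitemap index along those lines is just a small XML file pointing at per-section sitemaps (the filenames here are made up for illustration; each child sitemap is a normal urlset file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.domain.com/sitemap-categories.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.domain.com/sitemap-products.xml</loc>
  </sitemap>
</sitemapindex>
```

You then submit the index file itself in Webmaster Tools, which also lets you see indexing stats per child sitemap.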
Hope that helps? Anyone else want to chime in?
-
I think that, generally speaking, you want to block search engines from indexing your category pages (use your sitemap and robots.txt to do this). I could be totally wrong here, but that is how I set up my sites.
Got a burning SEO question?
Subscribe to Moz Pro to gain full access to Q&A, answer questions, and ask your own.
Browse Questions
Explore more categories
-
Moz Tools
Chat with the community about the Moz tools.
-
SEO Tactics
Discuss the SEO process with fellow marketers
-
Community
Discuss industry events, jobs, and news!
-
Digital Marketing
Chat about tactics outside of SEO
-
Research & Trends
Dive into research and trends in the search industry.
-
Support
Connect on product support and feature requests.
Related Questions
-
Is their value in linking to PPC landing pages and using rel="canonical"
I have ppc landing pages that are similar to my seo page. The pages are shorter with less text with a focus on converting visitors further along in the purchase cycle. My questions are: 1. Is there a benefit for having the orphan ppc pages indexed or should I no index them? 2. If indexing does provide benefits, should I create links from my site to the ppc pages or should I just submit them in a sitemap? 3. If indexed, should I use rel="canonical" and point the ppc versions to the appropriate organic page? Thanks,
Intermediate & Advanced SEO | | BrandExpSteve0 -
Attribution of port number to canonical links...ok?
Hi all A query has recently been raised internally with regard to the use of canonical links. Due to CMS limitations with a client who's CMS is managed by a third party agency, canonical links are currently output with the port number attributed, e.g. example.com/page:80 ...as opposed to the correct absolute URL: example.com/page Note port number are not attributed to the actual page URLs. We have been advised that this canonical link functionality cannot be amended at present. My personal interpretation of canonical link requirements is that such a link should exactly match the absolute URL of the intended destination page, my query is does this extend to the attribution of port number to URLs. Is the likely impact of the inclusion of such potentially incorrect URLs likely to be the same as purely incorrect canonical links. Thanks
Intermediate & Advanced SEO | | 26ryan0 -
My blog's categories are winning over my landing pages, what to do?
Hi My blogs categories for the ecommerce site are by subject and are similar to the product landing pages. Example Domain.com/laptops that sells laptops Domain.com/blog/laptops that shows news and articles on laptops Within the blog posts the links of anchor laptop are to the store. What to do? Thanks
Intermediate & Advanced SEO | | BeytzNet1 -
Should comments and feeds be disallowed in robots.txt?
Hi My robots file is currently set up as listed below. From an SEO point of view is it good to disallow feeds, rss and comments? I feel allowing comments would be a good thing because it's new content that may rank in the search engines as the comments left on my blog often refer to questions or companies folks are searching for more information on. And the comments are added regularly. What's your take? I'm also concerned about the /page being blocked. Not sure how that benefits my blog from an SEO point of view as well. Look forward to your feedback. Thanks. Eddy User-agent: Googlebot Crawl-delay: 10 Allow: /* User-agent: * Crawl-delay: 10 Disallow: /wp- Disallow: /feed/ Disallow: /trackback/ Disallow: /rss/ Disallow: /comments/feed/ Disallow: /page/ Disallow: /date/ Disallow: /comments/ # Allow Everything Allow: /*
Intermediate & Advanced SEO | | workathomecareers0 -
Can use of the id attribute to anchor t text down a page cause page duplication issues?
I am producing a long glossary of terms and want to make it easier to jump down to various terms. I am using the<a id="anchor-text" ="" attribute="" so="" am="" appending="" #anchor-text="" to="" a="" url="" reach="" the="" correct="" spot<="" p=""></a> <a id="anchor-text" ="" attribute="" so="" am="" appending="" #anchor-text="" to="" a="" url="" reach="" the="" correct="" spot<="" p="">Does anyone know whether Google will pick this up as separate duplicate pages?</a> <a id="anchor-text" ="" attribute="" so="" am="" appending="" #anchor-text="" to="" a="" url="" reach="" the="" correct="" spot<="" p="">If so any ideas on what I can do? Apart from not do it to start with? I am thinking 301s won't work as I want the URL to work. And rel=canonical won't work as there is no actual page code to add it to. Many thanks for your help Wendy</a>
Intermediate & Advanced SEO | | Chammy0 -
Should I use the canonical tag on all my mobile pages?
I've seen flavors of this question asked but did not see the exact response I was looking for. If I have a site at: www.site.com And I am creating a mobile version at: m.site.com (let's say a responsive design is not feasible at this time) And all the content on m.site.com is duplicative of the content on www.site.com What's the best way to handle that from an SEO perspective? Should I put a canonical tag on every mobile page pointing back to the www page? I assume that is better than a 'no index' tag on all pages of the mobile site?
Intermediate & Advanced SEO | | hbrown1080 -
Category Pages in competition with Homepages
I am finding it a real uphill task with a few of our clients with there either product or category pages competing against other sites on main keywords. The sites either categories or product specific pages are in direct competition with other sites homepages and I am finding it increasingly more difficult to break into positions. What are other peoples experiences with this ? Do you feel the way the pages are ranked within the xml sitemap with priority could also be a factor.
Intermediate & Advanced SEO | | onlinemediadirect0 -
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place in our search results to prevent Google from crawling through pagination links and other parameter based variants of our results (sort order, etc). The idea was to 'preserve crawl budget' in order to speed the rate at which Google could get our millions of pages back in the index by focusing attention/resources on the right pages. The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But, in doing a little SEOMoz reading this morning, I came to wonder whether that approach may now be harming us... http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
Intermediate & Advanced SEO | | kurus
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions Specifically, I'm concerned that a) we're blocking the flow of link juice and that b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low quality pages, etc, but we have yet to find 'the fix'... Thoughts? Kurus0