Category Pages - Canonical, Robots.txt, Changing Page Attributes
-
A site has category pages structured like this: www.domain.com/category.html, www.domain.com/category-page2.html, etc.
This is producing duplicate meta descriptions (page titles have page numbers in them so they are not duplicate). Below are the options that we've been thinking about:
a. Keep meta descriptions the same except for adding a page number (this would keep internal juice flowing to products that are listed on subsequent pages). All pages have unique product listings.
b. Use canonical tags on subsequent pages and point them back to the main category page.
c. Block the subsequent pages with robots.txt.
d. ?
Options b and c will orphan or french fry some of our product pages.
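To illustrate, here is roughly what we mean by options b and c (the paths are just placeholders based on the example URLs above):

```html
<!-- Option b: on category-page2.html, category-page3.html, etc. -->
<link rel="canonical" href="http://www.domain.com/category.html" />
```

```
# Option c: robots.txt, one pattern per paginated category (placeholder path)
User-agent: *
Disallow: /category-page
```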
Any help on this would be much appreciated. Thank you.
-
I see. I think the concern is with duplicate content though, right?
-
Either way, it will be tough to go that route and still get indexed. It's a pagination issue that everyone would like a solution to, but there just isn't one. It won't hurt you to do this, but it ultimately won't get all those pages indexed the way you want.
-
Disagree. I think you are missing out big time here: category pages are the bread and butter for eCommerce sites. Search engines have confirmed that these pages are of high value for users, and they give you a chance to have optimized static content on a page that also shows product results. All the major e-retailers rely heavily on these pages (Amazon, eBay, Zappos, etc.).
-
Sorry, I don't think I was clear. The page titles and meta descriptions would be unique; however, they would be almost the same except for saying "Page [x]" somewhere within them.
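For example, page 2 would have something like this (the wording is just illustrative):

```html
<title>Category Name - Page 2 | Domain.com</title>
<meta name="description" content="Browse our full range of category products. Page 2.">
```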
-
Option A doesn't do anything for you. I think the search engines flag duplicated title tags, even with different products on the page.
-
Thanks for the comprehensive response, Ryan; really great info here!
Would option A be out of the question in your mind because the page attributes would be too similar, even though unique content is on all the subsequent category pages? I know this method isn't typical; however, it would be the most efficient way to address it.
Note: a big downside to this is also the fact that we will have multiple pages targeting the same keyword. However, since the main category pages are getting more link love both internally and externally, would it still hurt to have all those subsequent pages indexed?
-
Ahh... the ultimate IA question that still doesn't have a clear answer from the search engines. There was a ton of talk about this at the recent SMX Advanced in Seattle (as there is at almost every one). I will try to summarize the common sentiment that I gathered from other pros. I won't claim that this is the correct way, but for now this is what I heard a bunch of people agree on:
- Noindex, follow the pagination links for all pages except page 1 (see the snippet after this list)
- Do not block/handle it with robots.txt (in your case you really can't, since you have no identifying parameters in your URLs)
- If you had pagination parameters in the URL, you could also manage those in Google and Bing Webmaster Tools by telling the search engines to ignore those parameters.
- Canonical to page 1 was a strategy that some retailers were using, and others want to try. Google reps tried to say this is not the way to do it, but others claim success from it.
- If you have a "View All" link that displays all the products in longer form on a single page, canonical to that page (if it's reasonable)
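To make that concrete, a paginated page (page 2 and beyond) would carry something along these lines in its head. This is only a sketch, and the "View All" URL is hypothetical:

```html
<!-- On /category-page2.html and deeper: stay out of the index, but let link equity flow to products -->
<meta name="robots" content="noindex, follow">

<!-- Alternative (instead of the noindex), if a reasonable "View All" page exists (hypothetical URL): -->
<!-- <link rel="canonical" href="http://www.domain.com/category-view-all.html" /> -->
```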
Notes: Depending on how your results/pages are generated, you will need to remember that they probably aren't passing "juice". Dynamic content usually doesn't "flow through" links from an SEO perspective (and sometimes isn't even crawled).
The better approach to not orphaning your product pages is finding ways to link to them from sources other than the results pages. For larger sites it's a hassle, but that's a challenge we all face. Here are some SEO tips for attacking the "orphan" issue:
- If you have product feeds, create a "deal" or "price change" feed. Create a Twitter account that people can sign up for to follow these new deals or price changes on products. Push your feed into tweets, and these will link to your product pages, creating an in-link for search engines to follow.
- You can do the same with blogs or Facebook, but not on a mass scale. Do something a bit more useful for users, like "Top 10 deals of the week" or "Favorites for gifts", and link to 10 products. Over time, you can keep track of which products you recommend and make sure you eventually hit all your products. Again, the point is creating at least one inbound link for search engines to follow.
- Create a static internal "product index page" (this is not your sitemap page, FYI) where, by category or some other structure, you make a static link to every product page on the site. With some extra effort, developers can have these links dynamically updated/inserted, which avoids having to maintain them manually.
- Create an XML sitemap index. Instead of everything being clumped into one XML sitemap for your site, try creating a sitemap index with your product pages in their own sitemap. This may help with indexing those pages (a minimal example is below).
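A sketch of what that sitemap index file could look like (the filenames and domain are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Category and other static pages in one sitemap -->
  <sitemap>
    <loc>http://www.domain.com/sitemap-categories.xml</loc>
  </sitemap>
  <!-- Product pages in their own sitemap, so their indexing can be monitored separately -->
  <sitemap>
    <loc>http://www.domain.com/sitemap-products.xml</loc>
  </sitemap>
</sitemapindex>
```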
Hope that helps? Anyone else want to chime in?
-
I think that, generally speaking, you want to block search engines from indexing your category pages (use your sitemap and robots.txt to do this). I could be totally wrong here, but that is how I set up my sites.
Related Questions
-
Pages blocked by robots
**It was a mistake made during the software development process.** How can I fix the problem quickly? Please help me. [XTRjH](https://imgur.com/a/XTRjH)
Intermediate & Advanced SEO | mihoreis0
-
Start page and shop page show the same thing, shall I set a canonical URL?
Our start page http://siga-sverige.se/ and http://siga-sverige.se/butik/ show the same WooCommerce loop of all our products. Shall I set the canonical URL for http://siga-sverige.se/butik/ to http://siga-sverige.se/? Thanks! / Jonas
Intermediate & Advanced SEO | knubbz0
-
What to do when your home page is an index for a series of pages
I have created an index stack. My home page is http://www.southernwhitewater.com. The home page is the index itself and also the first page, http://www.southernwhitewater.com/nz-adventure-tours-whitewater-river-rafting-hunting-fishing. My home page (if you look at it through the Moz bar for Chrome) incorporates all the pages in the index. Is this bad? I would prefer to index each page separately, as per my site index in the footer. What is the best way to optimize all these pages individually and still have customers arrive at the top to a picture? rel=canonical? Any help would be great! http://www.southernwhitewater.com
Intermediate & Advanced SEO | VelocityWebsites0
-
Pagination on a product page with reviews spread out on multiple pages
Our current product pages' markup only has the canonical URL on the first page (each page loads more user reviews). Since we don't want to increase load times, we don't currently have a canonical "view all" product page. Do we need to mark up each subsequent page with its own canonical URL? My understanding was that canonical and rel next/prev tags are independent of each other, so if we mark up the middle pages with a paginated URL, e.g.:
Product page #1:
`<link rel="canonical" href="http://www.example.co.uk/Product.aspx?p=2692" />`
`<link rel="next" href="http://www.example.co.uk/Product.aspx?p=2692&pageid=2" />`
Product page #2:
`<link rel="canonical" href="http://www.example.co.uk/Product.aspx?p=2692&pageid=2" />`
`<link rel="prev" href="http://www.example.co.uk/Product.aspx?p=2692" />`
`<link rel="next" href="http://www.example.co.uk/Product.aspx?p=2692&pageid=3" />`
that would mean each canonical page suggests to Google another piece of unique content, which this obviously isn't. Is the PREV/NEXT able to "override" the canonical and explain to Googlebot that it's part of a series? Wouldn't the canonical then be redundant? Thanks
Intermediate & Advanced SEO | Don340
-
Issue with Robots.txt file blocking meta description
Hi, can you please tell me why the following error is showing up in the SERPs for a website that was just re-launched 7 days ago with new pages (301 redirects are built in)?
"A description for this result is not available because of this site's robots.txt – learn more."
Once we noticed it yesterday, we made some changes to the file and reduced the number of items in the disallow list. Here is the current robots.txt file:
# XML Sitemap & Google News Feeds version 4.2 - http://status301.net/wordpress-plugins/xml-sitemap-feed/
Sitemap: http://www.website.com/sitemap.xml
Sitemap: http://www.website.com/sitemap-news.xml
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Other notes: the site was developed in WordPress and uses the following plugins: WooCommerce, All-in-One SEO Pack, Google Analytics for WordPress, XML Sitemap & Google News Feeds.
Currently, in the SERPs, it keeps jumping back and forth between showing the meta description for the www domain and showing the error message (above). Originally, WP Super Cache was installed; it has since been deactivated, removed from wp-config.php, and deleted permanently.
One other thing to note: we noticed yesterday that there was an old XML sitemap still on file, which we have since removed, and we resubmitted a new one via WMT. Also, the old pages are still showing up in the SERPs. Could it just be that this will take time to review the new sitemap and re-index the new site? If so, what kind of timeframes are you seeing these days for new pages to show up in the SERPs? Days, weeks? Thanks, Erin
Intermediate & Advanced SEO | HiddenPeak0
-
Robot.txt error
I currently have this in my robots.txt file:
User-agent: *
Disallow: /authenticated/
Disallow: /css/
Disallow: /images/
Disallow: /js/
Disallow: /PayPal/
Disallow: /Reporting/
Disallow: /RegistrationComplete.aspx
WebMatrix 2.0
In Webmaster Tools > Health Check > Blocked URLs, I copy and paste the code above and click Test, and everything looks OK. But when I log out and log back in, I see the code below under Blocked URLs:
User-agent: *
Disallow: /
WebMatrix 2.0
Currently Google doesn't index my domain and I don't understand why this is happening. Any ideas? Thanks, Seda
Intermediate & Advanced SEO | Rubix
-
Is there a negative effect of showing categories and products on the same page?
I mean having, say, 5 different categories on a page and showing the products that are in those categories below the categories, just in case people don't want to dig deeper to find their product because they already know what they need. I would also want those categories for the people that need to do a little more searching and want a better reference guide. So are there any negatives for my SEO in doing it that way?
Intermediate & Advanced SEO | Mike.Bean0
-
Block an entire subdomain with robots.txt?
Is it possible to block an entire subdomain with robots.txt? I write for a blog that has its root domain as well as a subdomain pointing to the exact same IP. Getting rid of the subdomain is not an option, so I'd like to explore other options to avoid duplicate content. Any ideas?
Intermediate & Advanced SEO | kylesuss12