Category Pages - Canonical, Robots.txt, Changing Page Attributes
-
A site has category pages as such: www.domain.com/category.html, www.domain.com/category-page2.html, etc...
This is producing duplicate meta descriptions (page titles have page numbers in them so they are not duplicate). Below are the options that we've been thinking about:
a. Keep meta descriptions the same except for adding a page number (this would keep internal juice flowing to products that are listed on subsequent pages). All pages have unique product listings.
b. Use canonical tags on subsequent pages and point them back to the main category page.
c. Robots.txt on subsequent pages.
d. ?
Options b and c will orphan or french fry some of our product pages.
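For reference, options (b) and (c) would look roughly like this (URL patterns are just illustrations based on the structure described above):

```html
<!-- Option (b): on category-page2.html, category-page3.html, etc. -->
<link rel="canonical" href="http://www.domain.com/category.html" />
```

```text
# Option (c): robots.txt — only workable if the paginated URLs share a pattern
# that no product URL matches
User-agent: *
Disallow: /category-page
```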
Any help on this would be much appreciated. Thank you.
-
I see. I think the concern is with duplicate content though, right?
-
Either way, it will be tough to go that route and still get indexed. It's a pagination issue that everyone would like a solution to, but there just isn't one. It won't hurt you to do this, but it won't ultimately get all those pages indexed like you want.
-
Disagree. I think you are missing out big time here: category pages are the bread and butter for eCommerce sites. Search engines have confirmed that these pages are of high value to users, and they give you a chance to have optimized static content on a page that also shows product results. All the major e-retailers rely heavily on these pages (Amazon, eBay, Zappos, etc.).
-
Sorry, I don't think I clarified. The page titles and meta descriptions would be unique; however, they would be almost the same except for saying "Page [x]" somewhere within them.
-
Option A doesn't do anything for you. I think the search engines flag duplicated title tags, even with different products on the page.
-
Thanks for the comprehensive response, Ryan; really great info here!
Would option A be out of the question in your mind because the page attributes would be too similar, even though unique content is on all the subsequent category pages? I know this method isn't typical; however, it would be the most efficient way to address the issue.
Note: A big downside to this is also that we will have multiple pages targeting the same keyword. However, since the main category pages are getting more link love both internally and externally, would it still hurt to have all those subsequent pages indexed?
-
Ahh... the ultimate IA question that still doesn't have a clear answer from the search engines. There was a ton of talk about this at the recent SMX Advanced in Seattle (as there is at almost every one). I will try to summarize the common sentiment I gathered from other pros. I won't claim that this is the correct way, but for now this is what I heard a bunch of people agree on:
- noindex, follow the pagination links for all pages except page 1
- Do not block/handle it with robots.txt (in your case, you really can't, since you have no identifying parameters in your URLs)
- If you had pagination parameters in the URL, you could also manage those in Google & Bing Webmaster Tools by telling the search engines to ignore those certain parameters.
- Canonical to page 1 is a strategy that some retailers were using, and others want to try. Google reps tried to say this is not the way to do it, but others claim success with it.
- If you have a "View All" link that displays all the products in longer form on a single page, canonical to that page (if it's reasonable)
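As a sketch of the noindex and "View All" bullets above (the URLs are illustrative, and the "view all" page name is hypothetical):

```html
<!-- On /category-page2.html and deeper: drop the page from the index,
     but keep its product links crawlable -->
<meta name="robots" content="noindex, follow" />

<!-- Or, if a reasonable "View All" page exists, canonical to it instead -->
<link rel="canonical" href="http://www.domain.com/category-view-all.html" />
```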
Notes: Depending on how your results/pages are generated, remember that they probably aren't passing "juice". Dynamic content usually doesn't produce "flow through" links from an SEO perspective (and sometimes isn't even crawled).
The better approach to avoid orphaning your product pages is finding ways to link to them from sources other than the results pages. For larger sites it's a hassle, but that's a challenge we all face.
Here are some SEO tips for attacking the "orphan" issue:
- If you have product feeds, create a "deal" or "price change" feed. Create a Twitter account that people can sign up for to follow these new deals or price changes on products. Push your feed into tweets, and these will link to your product pages, creating an in-link for search engines to follow.
- You can do the same with blogs or Facebook, but not on a mass scale. Make it a bit more useful for users, like a "Top 10 Deals of the Week" linking to 10 products, or "Favorites for Gifts" or something. Over time, you can keep track of which products you recommend and make sure you eventually hit all your products. Again, the point is creating at least one inbound link for search engines to follow.
- Create a static internal "product index page" (this is not your sitemap page, FYI) where, by category or some other structure, you make a static link to every product page on the site. With some extra effort, developers can have these links dynamically updated/inserted, which avoids manual updates.
- Create an XML sitemap index. Instead of everything being clumped into one XML sitemap for your site, try creating a sitemap index with your product pages in their own sitemap. This may help with indexing those pages.
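If your sitemaps are generated programmatically, the sitemap-index idea can be sketched like this with the standard library (the file names and URLs are hypothetical placeholders, and this is just one way to build the XML):

```python
# Split product URLs into their own sitemap file and reference
# both child sitemaps from a single sitemap index.
from xml.etree.ElementTree import Element, SubElement, tostring

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Return a <urlset> sitemap (as bytes) listing the given page URLs."""
    urlset = Element("urlset", xmlns=SITEMAP_NS)
    for url in urls:
        loc = SubElement(SubElement(urlset, "url"), "loc")
        loc.text = url
    return tostring(urlset, encoding="utf-8", xml_declaration=True)

def build_sitemap_index(sitemap_urls):
    """Return a <sitemapindex> (as bytes) pointing at the child sitemaps."""
    index = Element("sitemapindex", xmlns=SITEMAP_NS)
    for url in sitemap_urls:
        loc = SubElement(SubElement(index, "sitemap"), "loc")
        loc.text = url
    return tostring(index, encoding="utf-8", xml_declaration=True)

# Category pages and product pages go into separate sitemaps...
category_sitemap = build_sitemap(["http://www.domain.com/category.html"])
product_sitemap = build_sitemap(["http://www.domain.com/product-1.html",
                                 "http://www.domain.com/product-2.html"])
# ...and the index references both files.
index = build_sitemap_index(["http://www.domain.com/sitemap-categories.xml",
                             "http://www.domain.com/sitemap-products.xml"])
print(index.decode("utf-8"))
```

You'd write each `build_sitemap` result to its own file and submit only the index in Webmaster Tools.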
Hope that helps? Anyone else want to chime in?
-
I think that generally speaking you want to block search engines from indexing your category pages (use your sitemap and robots.txt to do this). I could be totally wrong here, but that is how I set up my sites.