Robots.txt & Duplicate Content
-
In reviewing my crawl results I have 5,666 pages of duplicate content. I believe this is because many of the indexed pages are just different ways to get to the same content. There is one primary culprit: a series of URLs related to CatalogSearch. For example: http://www.careerbags.com/catalogsearch/result/index/?q=Mobile
I have 10,074 of those links indexed according to my Moz crawl. Of those, 5,349 are tagged as duplicate content; the other 4,725 are not.
Here are some additional sample links:
http://www.careerbags.com/catalogsearch/result/index/?dir=desc&order=relevance&p=2&q=Amy
http://www.careerbags.com/catalogsearch/result/index/?color=28&q=bellemonde
http://www.careerbags.com/catalogsearch/result/index/?cat=9&color=241&dir=asc&order=relevance&q=baggallini
All of these links are just different ways of searching through our product catalog. My question is: should we disallow /catalogsearch/ via the robots.txt file? Are these links doing more harm than good?
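For reference, the rule being considered would look something like this in robots.txt (assuming all of the search URLs live under /catalogsearch/, as in the examples above):

```text
User-agent: *
Disallow: /catalogsearch/
```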
-
For product pages, I would canonical the page with the most descriptive URL.
For category pages, I agree with you, I would noindex them.
I think I just answered my own question!!
-
Okay, the question concerning rel="canonical" is which URL becomes the canonical version. Since there is no page on the website which would be appropriate (as far as I've seen), I recommended the meta robots tag.
I do agree that rel="canonical" is the preferred option, but in this situation I can't see a way to implement it properly. Which page would you highlight as the canonical?
-
I agree entirely that "Search result pages are too varied to be included in the index".
That said, my understanding is that if you canonical a page, it doesn't get indexed. So we wouldn't have to worry about the appearance / user-friendliness of the URL. But (again, in my opinion) we should still worry about link equity being passed, and that won't happen if you noindex.
This gets complicated fast. I like your solution because it's a lot cleaner and easier to implement. Still not convinced it's the "best" way to go, though.
-
Where is the evidence that these work? I have never seen them work. Google totally ignores the URL parameters tool in Google Webmaster Tools.
-
I do agree that rel="canonical" is a good option for the problem at hand.
As Jeremy has stated, however, the link we are referring to in the href section redirects to the home page: http://www.careerbags.com/catalogsearch/result/index/
In my original answer I did not test this. I assumed there would be a list of all products there, not filtered by search results. Since this is not the case and this page in fact does not exist, it's hard to point at a URL to be canonical.
Therefore I changed my answer to include the robots meta tag. This would indeed remove the search pages from the search index, but I do think this is a positive thing.
Look at the following URL: http://www.careerbags.com/catalogsearch/result/?q=rolling+laptop+bags
Not really the type of URL I would click on in the search results. The following URL, however, is something I would want to click on: http://www.careerbags.com/laptop-bags/women-s/rolling-laptop-bags.html
Search result pages are too varied to be included in the index, in my opinion.
Hope you agree with this; if not, I would like to hear your thoughts.
-
Simon, Wesley, Michael...
These customer-facing search result pages are the ones often bookmarked and shared by site visitors. How worried does one need to be about losing link equity? I realize every site is going to be different, and social shares don't carry link equity - at least for now - but this could add up over time. The rel canonical will enable capture of link equity, whereas the robots noindex will not.
Am I over thinking this?
-
In this case you could add the meta robots tag on the search result pages like this:
<meta name="robots" content="noindex, follow">
Search results can indeed spawn an infinite number of different URLs. This can be avoided by making sure they are not included in the index but are still followed.
-
Google's Webmaster Guidelines specifically request that you prevent crawling of search results pages using a robots.txt file. The relevant section reads: "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines."
-
There are two distinct possible issues here:
1. Search results are creating duplicate content
2. Search results are creating lots of thin content
You want to give the user every possibility of finding your products, but you don't want those search results indexed, because you should already have your source product page indexed and aiming to rank well. If not, see the last paragraph.
I slightly misread your post and took the URLs to be purely filtered. You should add Disallow: /catalogsearch/ to your robots.txt, and if any of those pages are already indexed you can remove the directory in Webmaster Tools > Google Index > Remove URLs > Reason: Remove Directory. This from Google: http://www.mattcutts.com/blog/search-results-in-search-results/
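A disallow rule like this can be sanity-checked locally with Python's standard-library robots.txt parser before deploying it. A minimal sketch, assuming the proposed rule (this is not careerbags.com's live robots.txt):

```python
from urllib.robotparser import RobotFileParser

# The suggested rule, as a list of robots.txt lines.
rules = [
    "User-agent: *",
    "Disallow: /catalogsearch/",
]

parser = RobotFileParser()
parser.parse(rules)

# Catalog-search result URLs would be blocked from crawling:
blocked = parser.can_fetch("*", "http://www.careerbags.com/catalogsearch/result/index/?q=Mobile")
# Regular product/category URLs would remain crawlable:
allowed = parser.can_fetch("*", "http://www.careerbags.com/laptop-bags/women-s/rolling-laptop-bags.html")
print(blocked, allowed)  # False True
```

Keep in mind that robots.txt only blocks crawling; it won't by itself drop URLs that are already indexed, which is why the Remove URLs step above still matters.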
If your site has any other parameters not in that directory you can add them in Webmaster Tools > Crawl > URL Parameters > Let Googlebot Decide. Google will understand they are not the main URLs and treat them accordingly.
As a side issue, it would be a good idea to analyse your search results in Analytics. You might find a trend - perhaps something frequently searched for that isn't a perfect match for the returned results - where you could create new, more targeted content.
-
I'm not sure this is the right approach. The catalog search is based on the search box on the website, and the query parameter can be anything the customer enters. Are you suggesting that the backend code be modified to always return the canonical tag in every result?
And why that page? That URL just redirects to the home page, because there is no query parameter provided for the search.
In terms of losing link equity, how much equity do they have if they are duplicate content?
-
Hi Jeremy.
Yours is a common problem. The best way to deal with it is, as Wesley mentions, by putting canonical tags on all the duplicate pages - the one you want indexed and to show up in search results AND all the others that you can arrive at via catalog search or any other means of navigation.
Michael's suggestion will prevent the duplicate pages from getting indexed by Google. Unfortunately you lose any link equity going that route, so I'd suggest starting with canonical tags first.
-
To back up the detail Wesley gave you, you can also add URL parameters in Google Webmaster Tools.
-
You could add a canonical tag to link to the default page. This way Google will know that it should only index that.
The code for this would be a rel="canonical" link tag: <link rel="canonical" href="..." />. This should be placed in the <head> section of your HTML code.
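A minimal sketch of the placement (the href below is an illustrative example, not a confirmed target page on the site):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Rolling Laptop Bags</title>
  <!-- Canonical tag pointing search-result variants at one preferred URL.
       The href here is an assumed example for illustration only. -->
  <link rel="canonical" href="http://www.careerbags.com/laptop-bags/women-s/rolling-laptop-bags.html" />
</head>
<body>
  ...
</body>
</html>
```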