Duplicate Content Question
-
Currently, we manage a site that generates content from a database based on user search criteria such as location or type of business. Although we currently rank well -- we built the website around providing value to the visitor, with options for viewing the content -- we are concerned about duplicate content issues and whether they would apply to us.
For example, the listing pulled up for one search could have the same content as another search, but in a different order -- similar to hotels that offer room booking by room type or by rate.
Would this dynamically generated content count as duplicate content?
The site has done well, but we don't want to risk any future Google penalties caused by duplicate content. Thanks for your help!
-
Thank you for the example you provided; that's exactly what I meant.
You have the following "default" display:
http://www.neworleansrestaurants.com/restaurants/
and the following one which is a "variant" of the first one:
http://www.neworleansrestaurants.com/restaurants/?loc=all
You are actually showing the "same" listings ordered differently... so a rel=canonical, in my opinion, will keep you safe:
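For example, something like this in the <head> of the ?loc=all variant (a sketch -- the exact markup depends on your templates):
<link rel="canonical" href="http://www.neworleansrestaurants.com/restaurants/" />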
-
I can't give you the specific example of the site because it's undergoing a redesign.
However, we have a similar issue on a sister site. It has 2 separate pages with the same listings but different categories:
By location: http://www.neworleansrestaurants.com/restaurants/?loc=all
By type of restaurant: http://www.neworleansrestaurants.com/restaurants/
Thanks for the feedback and information, Fabrizio.
-
I don't understand why the content of those 2 pages is the same if they show different categories... Are the same listings just ordered differently? Can we have a look at those pages?
-
On our site, the only difference is that different pages show different results. I.e., the page with results A has a title tag and content related to A, and the page with results B is likewise a unique page with its own title tag. In that case, the listings are the same, but they appear on two pages, each with a unique category that should have its own page. Here, the categories are "location" and "type."
-
I would need to have a look at your website to understand how it is structured, but I have a very similar case on my site virtualsheetmusic.com, and I think it is a common situation for e-commerce websites in general. In my opinion, the best way to avoid any issues is to use a rel=canonical tag.
For example, if your page URL for a search can vary in the following way:
http://www.yoursite.com/search.php [assuming this is the "default" page display]
http://www.yoursite.com/search.php?sort=title
http://www.yoursite.com/search.php?sort=title&filter=NY
I would put a rel=canonical tag in the <head> of each of those URL variants, something like:
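<link rel="canonical" href="http://www.yoursite.com/search.php" />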
pointing to the "default" version of the page. That would avoid any duplicate issues very easily!
Also, if you have paginated content (2 or more pages of results) you may want to add the rel=prev and rel=next definitions, as suggested by Google:
http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
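For example, on page 2 of a paginated result set that might look roughly like this (assuming a hypothetical "page" parameter -- adjust to whatever your URLs actually use):
<link rel="prev" href="http://www.yoursite.com/search.php?page=1" />
<link rel="next" href="http://www.yoursite.com/search.php?page=3" />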
I hope this helps.
-
Hi, if you're trying to make your website better for the end user, you almost can't lose. Google wants what the end user wants: fast page load times, relevant content, and easy navigation, to name a few of the things that are important to both Google and visitors. You'll find that if you match the two, you will almost always get it right.
I hope this has been of help. Sincerely,
Thomas
-
Thanks for the feedback Thomas. I should note that this situation is all on one website.
-
I believe the easiest way to answer this is: if you have websites A & B, will I get the exact same answer when I query whatever the keyword "example" is on both websites? And will I always get the same answer?
If the answer is yes -- the same query returns the same answer from each website -- then I would say you will have trouble with duplicate content. If the answer is no, I would examine further to see how much of the content is identical.
I'm not a fan of having identical content, especially when you control it. If it is the same result, then yes, you'll get duplicate content issues with Google, and I would not recommend creating an additional website that pulls content from the same database, because it sounds to me like you would be getting identical answers for queries -- is that correct?
From what I understand of how you're gathering content from the database, it would have to be identical, right? If that's the case, I would not create an additional website; I would only create one if you need to cover a different subject. If you already have one, just focus on creating a better version of that website.
I hope this is of help,
Thomas