Glossary index and individual pages create duplicate content. How much might this hurt me?
-
I've got a glossary on my site with an index page for each letter of the alphabet. The M index page, for example, lists every definition that starts with M in full (the whole definition).
But each definition also has its own individual page (and we link to those pages internally so the user doesn't have to hunt through the entire M page).
So I definitely have duplicate content ... 112 instances (112 terms). Maybe it's not so bad because each definition is just a short paragraph(?)
How much does this hurt my potential ranking for each definition? How much does it hurt my site overall?
Am I better off making the individual pages noindex? Or canonicalizing them?
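To make the question concrete, here's roughly what each option would look like in the <head> of one of the individual definition pages (the domain and paths below are just placeholders, not our real URLs):

<!-- Option A: keep the definition page out of the index, but still let its links be crawled -->
<meta name="robots" content="noindex, follow">

<!-- Option B: leave the page indexable, but declare the letter index page as the canonical version -->
<link rel="canonical" href="https://www.example.com/glossary/m/">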
-
Thanks, Ryan!
-
From here: http://moz.com/messages/write (send it to Dirk's username, DC1611). There used to be a button in profiles, but it looks like it got shuffled in the redesign.
-
PM? Does Moz offer that function?
-
It's a bit difficult to assess which of the pages is more important without knowing the site. Having a lot of content is good, but if the only thing connecting the content is that the terms all start with the same letter, the index page could be pretty weak or pretty strong depending on the situation.
I'll give two examples:
Suppose the index is about first names starting with S. In that case the index page is a valuable one, because a lot of people are searching for exactly that, and the search volume is potentially bigger than the number of people looking for the first name Steve (= one specific item).
Suppose the index is about illnesses starting with S. In that case the index page has very little value for a searcher, because people search for illnesses based on their symptoms; the fact that the illnesses all start with S doesn't link them together.
It could be helpful if you send me the actual URLs via PM if you don't want to disclose them here.
rgds
Dirk
-
Oops. Sorry. Poor wording there. Meant to say ...
Definitely not concerned that the M index page and the M definition page BOTH show up in the search results.
We definitely do want at least one of the pages to not only show up in the rankings, but to rank highly. I'm guessing the M index page would actually have a chance of ranking high because it will have so many long-tail terms related to our short-tail keyword.
But it would seem weird to put a noindex on the individual definition pages ... since we have multiple internal links to those pages.
Thanks again for your patience. Really appreciate the feedback.
Steve
-
That's exactly what I am saying - your index page with all the definitions is, from Google's perspective, completely different from the detailed definition pages (the first being much richer in content than the second). If getting these pages ranked is the least of your concerns, you can keep it as it is. If you want to play it safe, you can put a noindex on the index page.
rgds,
Dirk
-
Just having a bit of a dilemma. We're trying to make it easier for people who come to the glossary and then go to ... say ... the M page. They don't have to keep clicking away to see each definition. Result: more user-friendly.
But we also want to have a very specific definition page so that when we link from an article to the definition, the user doesn't have to see all of the M definitions. Result: more user-friendly.
Definitely not concerned that both the M index page and the M definition page show up in the search results. That would actually be swell. Just more concerned that our overall site ranking or domain authority will somehow suffer.
If you're saying that the M index page and the M definition page are dramatically different (because the M index page is much, much longer) and so I shouldn't worry, that's great. (Hope that's what you're saying.)
Thanks!
-
Hi,
As far as I understand it, this is not really a question of duplicate content in the SEO sense. Although all the definitions starting with M are on the M index page, that page is quite different from the pages that contain the individual definitions of the terms that start with M.
A problem on many sites is that the pages that only contain the explanation of a single term are very light on content, and that the page listing all these terms is generally not very interesting from a user (and search) perspective. I don't know your site, so it's difficult to assess whether this is the case.
You could make the index page noindex/follow and just list the terms, linking to the explanation pages. For the explanation pages, which are probably the most interesting for users & search engines, try to enrich them by adding more content, like links to articles on your site that use the term, or more information about the term itself.
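As a rough sketch (the terms and URLs here are made-up examples, not taken from the actual site), such a letter-index page could look like this:

<!-- Hypothetical noindex/follow letter-index page: kept out of the index, links still crawled -->
<head>
  <meta name="robots" content="noindex, follow">
  <title>Glossary: M</title>
</head>
<body>
  <!-- Only the term names, each linking to its full definition page -->
  <ul>
    <li><a href="/glossary/meta-description/">Meta description</a></li>
    <li><a href="/glossary/meta-keywords/">Meta keywords</a></li>
    <li><a href="/glossary/mirror-site/">Mirror site</a></li>
  </ul>
</body>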
Hope this helps,
Dirk
-
Related Questions
-
Removing duplicate content
Due to URL changes and parameters on our ecommerce sites, we have a massive amount of duplicate pages indexed by Google, sometimes up to 5 duplicate pages with different URLs.
1. We've instituted canonical tags site-wide.
2. We are using the parameters function in Webmaster Tools.
3. We are using 301 redirects on all of the obsolete URLs.
4. I have had many of the pages fetched so that Google can see and index the 301s and canonicals.
5. I created HTML sitemaps with the duplicate URLs, and had Google fetch and index the sitemap so that the dupes would get crawled and deindexed.
None of these seems to be terribly effective. Google is indexing pages with parameters in spite of the parameter (clicksource) being called out in GWT. Pages with obsolete URLs are indexed in spite of them having 301 redirects. Google also appears to be ignoring many of our canonical tags as well, despite the pages being identical. Any ideas on how to clean up the mess?
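For illustration only (the domain and parameter below are invented, not this poster's actual URLs), a parameterized duplicate would normally carry a canonical pointing at the clean URL, e.g. in the <head> of https://www.example.com/widgets?clicksource=email:

<!-- Tells search engines the parameter-free URL is the preferred version of this page -->
<link rel="canonical" href="https://www.example.com/widgets">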
-
Removing pages from index
My client is running 4 websites on ModX CMS, using the same database for all the sites. Roger has discovered that one of the sites has 2,050 302 redirects pointing to the client's other sites. The sitemap for the site in question includes 860 pages. Google Webmaster Tools has indexed 540 pages. Roger has discovered 5,200 pages, and a site: query on Google reveals 7,200 pages. Diving into the SERP results, many of the indexed pages point to the other 3 sites. I believe there is a configuration problem with the site, because the other sites do not have a huge volume of redirects when crawled. My concern is: how can we remove from Google's index the 2,050 pages that are redirecting to the other sites via a 302 redirect?
-
Opinion on Duplicate Content Scenario
So there are 2 pest control companies owned by the same person - Sovereign and Southern. (The two companies serve different markets.) They have two different website URLs, but the website code is actually all the same... the code is hosted in one place... it just uses an if/else structure with dynamic PHP to determine whether the user sees the Sovereign site or the Southern site... know what I am saying? Here are the two sites: www.sovereignpestcontrol.com and www.southernpestcontrol.com. This is a duplicate content SEO nightmare, right?
-
Need help with huge spike in duplicate content and page title errors.
Hi Mozzers, I come asking for help. I have a client who has reported a staggering increase in errors: over 18,000! The errors include duplicate content and duplicate page titles. I think I've found the culprit, and it's the News & Events calendar on the following page: http://www.newmanshs.wa.edu.au/news-events/events/07-2013 Essentially each day of the week is an individual link, and events stretching over a few days get reported as duplicate content. Do you have any ideas how to fix this issue? Any help is much appreciated. Cheers
-
Our login pages are being indexed by Google - How do you remove them?
Each of our login pages shows up under a different subdomain of our website. Currently these are accessible by Google, which is a huge competitive advantage for our competitors looking for our client list. We've done a few things to try to rectify the problem:
- Added noindex/noarchive to each login page
- Added robots.txt to all subdomains to block search engines
- Gone into Webmaster Tools, added the subdomain of one of our bigger clients, and requested to remove it from Google (this would be great to do for every subdomain, but we have a LOT of clients and it would require tons of backend work to make this happen)
Other than the last option, is there something we can do that will remove subdomains from being viewed by search engines? We know the robots.txt files are working since the message on search results says: "A description for this result is not available because of this site's robots.txt – learn more." But we'd like the whole link to disappear. Any suggestions?
-
News sites & Duplicate content
Hi SEOMoz, I would like to know, in your opinion and according to 'industry' best practice, how do you get around duplicate content on a news site if all news sites buy their "news" from a central place in the world? Let me give you some more insight into what I am talking about. My client has a website that is purely focused on news - local news in one of the African countries, to be specific. Now, what we noticed over the past few months is that the site is not ranking to its full potential. We investigated, checked our keyword research, our site structure, interlinking, site speed, code-to-HTML ratio - you name it, we checked it. What we did pick up when looking at duplicate content is that the site is flagged by Google as duplicated, BUT so are most of the news sites, because they all get their content from the same place. News gets sold by big companies in the US (no, I'm not from the US so I can't say specifically where it is from), and they usually attach disclaimers to these content pieces saying that you can't change the headline and story significantly. We do have quite a few journalists who rewrite the news stories; they try to keep them as close to the original as possible, but they still change them to fit our targeted audience - which is where my second point comes in. Even though the content has been duplicated, our site is more relevant to what our users are searching for than the bigger news-related websites in the world, because we do hyper-local everything: news, jobs, property, etc. All we need to do is get off this duplicate content issue. In general we rewrite the content completely to be unique if a site has duplication problems, but on a media site I'm a little bit lost, because I haven't had something like this before. Would like to hear some thoughts on this. Thanks,
Chris Captivate
-
Duplicate blog content and NOINDEX
Suppose the "Home" page of your blog at www.example.com/domain/ displays your 10 most recent posts. Each post has its own permalink page (where you have comments/discussion, etc.). This obviously means that the last 10 posts show up as duplicates on your site. Is it good practice to use NOINDEX, FOLLOW on the blog root page (blog/) so that only one copy gets indexed? Thanks, Akira
-
How do I index these parameter-generated pages?
Hey guys, I've got an issue with a site I'm working on. A big chunk of the content (roughly 500 pages) is delivered using parameters on a dynamically generated page. For example: www.domain.com/specs/product?=example - where "example" is the product name. Currently there is no way to get to these pages unless you enter the product name into the search box and access them from there. Correct me if I'm wrong, but unless we find some other way to link to these pages they're basically invisible to search engines, right? What I'm struggling with is a method to get them indexed without doing something like creating a directory-map-type page with all of the links on it, which I guess wouldn't be a terrible idea as long as it was done well. I've not encountered a situation like this before. Does anyone have any recommendations?