Why do I get duplicate content errors just for tags I place on blog entries?
-
In the SEOmoz crawl diagnostics for my site, www.heartspm.com, I am getting over 100 duplicate content errors on links built from tags on blog entries. My sitemap includes the original base blog entries but does not reference the tag URLs.
Similarly, I am getting almost 200 duplicate meta description errors in Google Webmaster Tools, all associated with links automatically generated from tags on my blog. I could better understand getting these errors from my forum, since the forum entries are not in the sitemap, but the blog entries are in the sitemap.
I thought the tags were only there to help people search by category. I don't understand why every tag becomes its own link, and I can see how this falsely creates the impression of a lot of duplicate content.
As seen in GWT:
Pages with duplicate meta descriptions:

"Customer concerns about the use of home water by pest control companies."
/category/job-site-requirements
/tag/cost-of-water
/tag/irrigation-usage
/tag/save-water
/tag/standard-industry-practice
/tag/water-use
"Pest control operator draws analogy between Children's Day and the state of the pest control industry"
/tag/children-in-modern-world
/tag/children
/tag/childrens-day
/tag/conservation-medicine
/tag/ecowise-certified
/tag/estonia
/tag/extermination-service
/tag/exterminator
/tag/green-thumb
/tag/hearts-pest-management
/tag/higher-certification
/tag/higher-education
/tag/tartu
/tag/united-states
-
It's nice if you can get your tags to reflect the broad range of topics covered by your site, but sometimes it can give a visitor the perception that your content is a little thin when there is only one article against each topic.
Say you're reading an article and see that it's tagged with a topic you're interested in; you click on that tag and get only the one article you've just read.
When deciding on tags to use, try to think of how your visitors may wish to explore your content. Remember to maintain an external perspective and use tags that are meaningful to your visitors.
If you've got the time and enough traffic to give you the data, you could use your analytics to see how people use tags vs. site search, for example.
Of course, as you point out, this doesn't have anything to do with the duplicate content problem.
-
Doug, thank you so much for responding quickly. I will have to explore the archive. I really haven't ever explored this section. You've given me a new place to research and I appreciate that.
As for reducing the number of tags, I don't see that as the core issue. If I am doing something wrong, i.e. presenting the whole article via tags rather than a reference to it, that is the problem whether I have 2 tags or 20. I want a solution that addresses the core issue regardless of the number of tags created.
Is there a different reason you suggest reducing the number of tags? I have heard around town that it is good not to have so many tags, but shouldn't I create them if a post covers a wide range of topics? For example, if I wrote one post about a trip to Europe, choosing not to divide it into 15 posts for 15 countries, wouldn't I want to create a tag for each of the countries I mention in the post?
Doug, thanks again.
Gerry Weitz
-
As you mention, you use tags to give visitors the ability to browse articles that cover the same subject matter. Normal practice is to give the visitor a list of the articles that have been tagged with that term.
A list of articles shouldn't flag up as a duplicate of any of the other article pages.
However, on a couple of the tag pages I looked at, I get just one article, and it is displayed in full. That is obviously going to be a duplicate of the original article page.
I suspect that this is because for a number of the tags there is only one article to be returned.
I would look at how you are presenting articles in the "tag archive" view and perhaps display only an excerpt with a link to the full article.
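Judging by the /tag/ and /category/ URL structure, the blog looks like WordPress, though that's an assumption on my part. If so, the tag archive template in your theme (typically tag.php or archive.php) is probably outputting the full post via the_content(). A minimal sketch of the excerpt approach, using the standard WordPress template tags, would be:

```php
<?php
// Hypothetical tag archive loop (e.g. tag.php) -- adapt to your theme.
// the_excerpt() prints a short summary instead of the full post body,
// so the tag page no longer mirrors the original article.
if ( have_posts() ) :
    while ( have_posts() ) : the_post(); ?>
        <article>
            <h2>
                <a href="<?php the_permalink(); ?>"><?php the_title(); ?></a>
            </h2>
            <?php the_excerpt(); ?>
        </article>
    <?php endwhile;
endif;
?>
```

With excerpts, even a tag page that lists a single post no longer duplicates the full article, so it should stop tripping the duplicate content checks.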
You may also want to think about the tags you are using so that you have fewer tags with only one article.
Hope this helps!