Duplicate content + WordPress tags
-
According to SEOmoz, one of my WordPress websites has duplicate content because of the tags I use. How should I fix it? Is it advisable to remove tag links from the post pages?
-
William
I wouldn't do that. Make sure to just noindex tags, and also noindex subpages of archives. Categories can stay indexed; it's usually the subpages that cause issues (i.e. /page/2/ etc.).
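For anyone wanting to implement Dan's advice in code rather than through a plugin, here is a minimal sketch for a theme's functions.php. It assumes a modern WordPress (5.7+, where the `wp_robots` filter exists); the conditional tags and the noindex/follow split are from the advice above, and the exact placement is a hypothetical illustration, not the only way to do it.

```php
<?php
// Sketch: noindex tag archives and any paginated archive subpage
// (/page/2/ etc.), while leaving page 1 of categories indexable.
add_filter( 'wp_robots', function ( array $robots ) {
    if ( is_tag() || ( is_archive() && is_paged() ) ) {
        $robots['noindex'] = true; // keep the page out of the index...
        $robots['follow']  = true; // ...but let crawlers follow its links
    }
    return $robots;
} );
```

Since the snippet depends on WordPress's template loop and conditional tags, it only runs inside a live WordPress install.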
-Dan
-
Hey There
I wrote a post which walks through how to decide what to do with tags. Essentially, here's what I would do:
(This assumes you have Yoast SEO for your SEO plugin).
- Noindex tags by default.
- Then set only the tags that are receiving traffic back to index (this is the part you need Yoast for).
- Do not overlap your categories and tags (keep them different).
- Minimize tag usage in general: use a few on each post, but not many more.
- You can use them for navigation (like a tag cloud) if you think your readers will find them useful.
- Do not include them in your XML sitemap.
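As a sanity check for the steps above: once tags are set to noindex, the source of a tag archive should carry a robots meta tag along these lines (the exact markup can vary by Yoast version, so treat this as an illustration):

```html
<!-- expected in the <head> of a noindexed tag archive, e.g. /tag/blah-blah/ -->
<meta name="robots" content="noindex, follow" />
```

The "follow" part matters: the page stays out of the index, but crawlers can still follow its links to your posts.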
Hope that helps!
I also did this post on setting up WordPress for the Moz blog which you may find helpful.
-Dan
-
I also have the same problem with categories and tags.
Should I add www.site.com/categories/ to robots.txt, or is that a bad idea?
-
I also wanted to mention that you might want to read this post. Cyrus Shepard suggested it when I was asking a similar question, and I think it really helps:
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
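To make the robots.txt question concrete: a Disallow rule only blocks crawling, it does not deindex anything. URLs Google already knows can still appear in results, and Google can never see a noindex tag on a page it isn't allowed to fetch. The rule asked about would look like this (shown purely for illustration; the post above explains why this is usually the wrong tool):

```
# robots.txt -- hypothetical rule matching the question above.
# This blocks crawling of /categories/ but does NOT deindex it;
# already-indexed URLs under that path can still appear in results.
User-agent: *
Disallow: /categories/
```

A noindex robots meta tag on the category pages is generally the safer way to get them out of the index.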
-
If you're receiving a good amount of traffic through tags, then it would be a bad idea to noindex them, because you'll lose that traffic. I think you understand what noindex means: search engines will no longer index your tag pages.
For any site there are two types of navigation:
- through categories
- through tags
Most people recommend using only one as the primary navigation and deindexing the other, since indexing both can cause duplicate content. Some experts, like Yoast, recommend deindexing both, which is the rule I'm following now.
So, now that I've explained what happens: if you feel you're getting more traffic through tags, then you should deindex categories, or vice versa.
If you need to index both, then I recommend waiting for suggestions from other pro members.
-
-
I'm dealing with similar issues on another platform (as are many others, I'm sure). I would think twice before deleting tags, especially if you are getting traffic from them. You have to weigh the advantages and disadvantages and realize it will probably never be "perfect", something I personally have a hard time coming to terms with!
If you aren't using excerpts (and are instead showing entire articles on index pages), switching to excerpts has helped immensely in reducing duplicate content on a couple of websites I've worked on.
-
Thank you for your reply. Currently I'm receiving a good amount of traffic to some www.domain.com/tag/blah-blah pages; will those pages be harmed if I make tags noindex?
-
Hi Giankar,
If tags are not the primary means of navigation for your blog, then you can remove them entirely (I mean delete them). Otherwise, you can noindex your tags so they won't cause duplicate content issues.
I hope this helps!