Duplicate Content
-
The crawl shows a lot of duplicate content on my site. Most of the URLs it's showing are categories and tags (WordPress).
So what does this mean exactly? Are the categories too much like other categories? And how do I go about fixing this the best way?
Thanks
-
Greg
Thanks so much for helping out! If you don't mind, I'm just going to correct a few finer details so nobody gets confused.
"Essentially the tags display the exact content as the original URL so the pages are identical but the URL is different."
It's totally true that this happens, but it's not what causes the duplicate content errors in the crawl report. Those errors usually come from the sub-pages of any given tag archive sharing the same title tag.
"Remove the tags"
By this I'm sure you just mean noindex tags. You don't need to remove them from the site altogether, just remove them from the index.
"If you want the Tags and Categories for user experience, Install Yoast SEO plugin which allows you to insert a canonical URL on the duplicate category pages."
You should leave categories indexed and noindex tags. Yoast adds canonicals regardless; you don't need to think about them, and they are not what handles duplicate category pages.
Everything else stated is more or less OK, but I just don't want people to be confused.
Thanks again!
-Dan
-
Justin
Sorry to hear about your trouble with the new settings. For one, my guide on SEOmoz about setting up WordPress for SEO should be helpful; I'd recommend familiarizing yourself with it.
In these cases, the "duplicate content" is usually not the page itself but just the title tags.
To see why, imagine you have tag archives like this:
- mydomain.com/tag/pink-elephants/
- mydomain.com/tag/pink-elephants/page/2/
- mydomain.com/tag/pink-elephants/page/3/
Usually the title tags respectively end up being the same:
- Pink Elephants | My Domain
- Pink Elephants | My Domain <-- title tag for page 2
- Pink Elephants | My Domain <-- title tag for page 3
The same title repeats for every single tag "subpage".
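(Just as an aside, and purely a rough sketch rather than something Yoast requires: if you wanted to make those paginated titles unique yourself, a small filter in the theme's functions.php can append the page number. Recent WordPress versions may already add a "Page N" part on their own, so treat this as an illustration only.)

```php
// Hypothetical sketch: append the page number to paginated archive titles so
// /page/2/, /page/3/ etc. stop sharing page 1's title tag.
// 'document_title_parts' is a core WordPress filter.
add_filter( 'document_title_parts', function ( $parts ) {
    $paged = (int) get_query_var( 'paged' );
    if ( $paged >= 2 ) {
        // e.g. "Pink Elephants - Page 2 | My Domain"
        $parts['title'] .= ' - Page ' . $paged;
    }
    return $parts;
} );
```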
Normally, the protocol would be to:
- Noindex subpages
- Noindex tags
- Noindex dated archives
- Disable author archives (single author blog only)
- Index categories
You can still link to tag pages and use tags within the site all you want, but you just don't want to index them.
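If you're curious what those defaults boil down to, here's a rough sketch (illustration only; you wouldn't hand-code this with a plugin doing it for you) of emitting a noindex robots meta tag for those archive types from a theme's functions.php:

```php
// Rough sketch of the defaults above: noindex tag archives, dated archives,
// and paginated archive "subpages", while category archives stay indexed.
// These are all core WordPress conditionals; a plugin normally does this
// for you through its settings.
add_action( 'wp_head', function () {
    if (
        is_tag()                          // tag archives
        || is_date()                      // dated archives
        || ( is_archive() && is_paged() ) // any archive "subpage" (/page/2/ etc.)
    ) {
        echo '<meta name="robots" content="noindex,follow" />' . "\n";
    }
    // Disabling author archives is a separate setting (or a redirect) and
    // isn't covered here.
} );
```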
These are just default settings. It's impossible to know exactly what you should be doing without seeing your site, but I hope all of that points you in the right direction!
-Dan
-
You should only no-follow your tags and archives, not your categories...
In the plugin settings, under Permalinks, there is an option: "Strip the category base (usually /category/) from the category URL." That will just stop the duplicate pages from appearing. Blocking the categories must have caused the drop.
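To illustrate what that option does (hypothetical category URL, just showing the effect):

```
With the category base:  https://mydomain.com/category/news/
Base stripped:           https://mydomain.com/news/
```

The old /category/ URLs get redirected to the stripped versions automatically.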
Greg
-
Changed to Yoast. I ticked no-follow on archives, categories, and tags. One hour later, the website went from #7 to page four.
-
Well, the duplicate content is causing issues on its own. Google does not like duplicate pages at all...
If you select which pages are your primary ones and tell Google to ignore the rest, it can only help your ranking.
With the Yoast SEO plugin, all you need to do is set tags to no-follow and no-index, and also strip the category base from the URL (it redirects automatically, as well).
Greg
-
Thanks for the reply. Would this affect ranking, or can it be left alone?
-
WordPress does this when you use tags...
Essentially the tags display the exact content as the original URL so the pages are identical but the URL is different.
There are two options that I can think of:
1.) Remove the tags, strip the category segment from the URL, and stop using them in the future. This will require redirects from the duplicate URLs to the main article (this will take planning, a lot of time, and is quite complicated).
2.) If you want the tags and categories for user experience, install the Yoast SEO plugin, which allows you to insert a canonical URL on the duplicate category pages. This tells Google where the original page can be found. Tags are only there for user experience, so you can set these to no-follow and no-index.
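For reference, a canonical just ends up as a link element in the page head that points at the preferred URL. You wouldn't hand-code it (the plugin outputs it, and WordPress core already adds one on single posts), but a rough sketch of the mechanism looks like this:

```php
// Illustration only: a canonical is a <link> in the <head> pointing at the
// preferred (original) URL of the content. WordPress core already outputs
// this for single posts; SEO plugins extend it to other page types.
add_action( 'wp_head', function () {
    if ( is_singular() ) {
        echo '<link rel="canonical" href="' . esc_url( get_permalink() ) . '" />' . "\n";
    }
} );
```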
Greg