WordPress Duplicate Content Issues
-
Everyone knows that WordPress has some duplicate content issues with tags, archive pages, category pages, etc.
My question is: how do you handle these issues?
Is the smart strategy to use the robots meta tag to noindex/nofollow category pages, archive pages, tag pages, etc.?
By doing this, are you missing out on the additional internal links to your important pages from your category pages and tag pages?
I hope this makes sense.
Regards,
Bill
-
Hey Bill
I like to start with this standard setup (image/chart from my WordPress post on Moz):
Pages, Posts, Categories: index
Tags, Dated Archives, Subpages, Author Archives: noindex
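In the rendered HTML, that setup comes down to which robots meta tag each template type emits in the page head — a rough sketch (a plugin like Yoast outputs these for you; you don't hand-edit templates):

```html
<!-- Pages, posts, category archives: left indexable
     (often no robots tag at all, or an explicit one) -->
<meta name="robots" content="index, follow">

<!-- Tag, dated, subpage, and author archives: kept out of the index,
     but links on them are still followed -->
<meta name="robots" content="noindex, follow">
```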
You can check out the full post - I will be updating the Yoast Screenshots very soon!
-Dan
-
Thanks for the article.
Now, two years on, are there any important updates for preventing duplicate content/titles?
-
Most of the SEO plugins for WordPress use canonical URLs.
-
Unless I'm missing something here, wouldn't it be easier to set the canonical tag for the main post? There are also plugins like SEO Ultimate that handle this automatically.
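For reference, the canonical tag goes in the head of each duplicate-prone page and points at the version you want indexed — a minimal sketch with a hypothetical URL:

```html
<!-- On a tag/archive/paginated variant, point search engines
     at the main post as the preferred URL -->
<link rel="canonical" href="http://www.example.com/main-post/">
```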
-
I posted this article I wrote the other day for someone asking a similar question.
With the Yoast SEO Plugin I noindex everything except Categories. You can see how I set mine up under section 3, Indexation.
Here is the original question that Sha submitted:
http://www.seomoz.org/q/what-is-with-wordpress-dupe-issues
-
Bill-
There are several SEO plugins available for WP that will handle these issues. Yes, you are right that adding "noindex" will be beneficial on tag, category, and archive pages. The idea here is avoiding duplicate content issues. BTW, check out Yoast SEO for WordPress.
Here is how the values for the robots meta tag work:
- noindex keeps a page out of the search index (the page can still be crawled)
- nofollow prevents a page's links from being followed
I agree with noindex'ing these pages, though I would argue that nofollow is worth leaving off: if these pages have any link juice, you want to allow it to flow to the other links on the page.
-
The WP on my blog is set up as follows (this is a blog that gets between four and ten short posts per day, about two to four sentences each, with each post linking to an article or other content on a topic-related website):
Homepage: Full text of the most recent 25 posts are displayed. Pagination pages are not indexed (blocked by robots.txt).
Post Pages: Full text is displayed and the title plus a few words of 20 related posts are displayed.
Category Pages: I have over 100 categories and each post is placed into at least two categories (one by location and one by topic). Some posts go into three or four categories, sometimes more. Each category page displays the full text of the most recent 25 posts. Categories do not have pagination pages (blocked by robots.txt).
All of the above pages are fully indexed and a long list of category pages appears in the left-side navigation. I don't use tag pages or archive pages. There is a lot of dupe content in this system but so far I am lucky that it does not cause a problem. The category pages pull a lot of organic search traffic.
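The pagination blocking described above might look like this in robots.txt (the paths are assumptions; actual WordPress pagination URLs depend on the permalink structure):

```
User-agent: *
# Block homepage pagination (/page/2/, /page/3/, ...)
Disallow: /page/
# Block category pagination (/category/news/page/2/, ...)
Disallow: /category/*/page/
```

Note that robots.txt blocks crawling rather than indexing, and the `*` wildcard is a non-standard extension, though Google supports it.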
In January of each year I delete all of the posts that are over a year old. Before doing that, I identify those that are pulling reasonable traffic and either redirect them to a permanent page about the same topic, write an article about that topic and redirect, or recycle the post. All the rest are redirected to the homepage of the blog.
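On Apache, those January redirects could be sketched in .htaccess like this (URLs are hypothetical; mod_alias's `Redirect` matches URL path prefixes):

```apacheconf
# A retired post that pulled traffic goes to a permanent page on the same topic
Redirect 301 /2011/03/old-post/ http://www.example.com/topic-page/
# A deleted post with no notable traffic goes to the blog homepage
Redirect 301 /2011/05/forgotten-post/ http://www.example.com/
```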
Related Questions
-
Duplicate Content and Subdirectories
Hi there and thank you in advance for your help! I'm seeking guidance on how to structure a resources directory (white papers, webinars, etc.) while avoiding duplicate content penalties. If you go to /resources on our site, there is a filter function. If you filter for webinars, the URL becomes /resources/?type=webinar We didn't want that dynamic URL to be the primary URL for webinars, so we created a new page with the URL /resources/webinar that lists all of our webinars and includes a featured webinar up top. However, the same webinar titles now appear on the /resources page and the /resources/webinar page. Will that cause duplicate content issues? P.S. Not sure if it matters, but we also changed the URLs for the individual resource pages to include the resource type. For example, one of our webinar URLs is /resources/webinar/forecasting-your-revenue Thank you!
Technical SEO | SAIM_Marketing
-
Tricky Duplicate Content Issue
Hi MOZ community, I'm hoping you guys can help me with this. Recently our site switched our landing pages to include a 180 item and 60 item version of each category page. They are creating duplicate content problems with the two examples below showing up as the two duplicates of the original page. http://www.uncommongoods.com/fun/wine-dine/beer-gifts?view=all&n=180&p=1 http://www.uncommongoods.com/fun/wine-dine/beer-gifts?view=all&n=60&p=1 The original page is http://www.uncommongoods.com/fun/wine-dine/beer-gifts I was just going to do a rel=canonical for these two 180 item and 60 item pages to the original landing page but then I remembered that some of these landing pages have page 1, page 2, page 3 ect. I told our tech department to use rel=next and rel=prev for those pages. Is there anything else I need to be aware of when I apply the canonical tag for the two duplicate versions if they also have page 2 and page 3 with rel=next and rel=prev? Thanks
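For reference, the setup described would put a canonical on each item-count variant pointing at the original landing page, while the paginated pages carry prev/next links (markup sketched from the URLs in the question; the pagination parameter is assumed from those URLs):

```html
<!-- In the head of the n=180 and n=60 variants -->
<link rel="canonical" href="http://www.uncommongoods.com/fun/wine-dine/beer-gifts">

<!-- In the head of page 2 of the original landing page -->
<link rel="prev" href="http://www.uncommongoods.com/fun/wine-dine/beer-gifts?p=1">
<link rel="next" href="http://www.uncommongoods.com/fun/wine-dine/beer-gifts?p=3">
```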
Technical SEO | znotes
-
Problem with duplicate content
Hi, My problem is this: SEOmoz tells me I have duplicate content because it is picking up my index page in three different ways: http://www.web-writer-articles.co.uk http://www.web-writer-articles.co.uk/ and http://www.web-writer-articles.co.uk/index.php Can someone give me some advice as to how I can deal with this issue? thank you for your time, louandel15
Technical SEO | louandel15
-
Press Releases & Duplicate Content
How do you do press releases without duplicating the content? I need to post it on my website along with having it on PR websites. But isn't that considered bad for SEO since it's duplicate content?
Technical SEO | MercyCollege
-
Duplicate Content Issue
Hi Everyone, I ran into a problem I didn't know I had (Thanks to the seomoz tool) regarding duplicate content. my site is oxford ms homes.net and when I built the site, the web developer used php to build it. After he was done I saw that the URL's looking like this "/blake_listings.php?page=0" and I wanted them like this "/blakes-listings" He changed them with no problem and he did the same with all 300 pages or so that I have on the site. I just found using the crawl diagnostics tool that I have like 3,000 duplicate content issues. Is there an easy fix to this at all or does he have to go in and 301 Redirect EVERY SINGLE URL? Thanks for any help you can give.
Technical SEO | blake-76624
-
Complex duplicate content question
We run a network of three local web sites covering three places in close proximity. Each sitehas a lot of unique content (mainly news) but there is a business directory that is shared across all three sites. My plan is that the search engines only index the business in the directory that are actually located in the place the each site is focused on. i.e. Listing pages for business in Alderley Edge are only indexed on alderleyedge.com and businesses in Prestbury only get indexed on prestbury.com - but all business have a listing page on each site. What would be the most effective way to do this? I have been using rel canonical but Google does not always seem to honour this. Will using meta noindex tags where appropriate be the way to go? or would be changing the urls structure to have the place name in and using robots.txt be a better option. As an aside my current url structure is along the lines of: http://dev.alderleyedge.com/directory/listing/138/the-grill-on-the-edge Would changing this have any SEO benefit? Thanks Martin
Technical SEO | mreeves
-
Is this considered as duplicate content?
One of my clients has a template page they have used repeatedly each time they have a new news item. The template includes a two-paragraph customer quote/testimonial for the company. So, they now have 100+ pages with the same customer quote. The rest of the page content / body copy is unique. Is there any likelihood of this being considered duplicate content?
Technical SEO | bjalc2011
-
Solution for duplicate content not working
I'm getting a duplicate content error for: http://www.website.com and http://www.website.com/default.htm. I searched the Q&A for the solution and found: access the .htaccess file and add this line: redirect 301 /default.htm http://www.website.com I added the redirect to my .htaccess and then got the following error from Google when trying to access the http://www.website.com/default.htm page: "This webpage has a redirect loop. The webpage at http://www.webpage.com/ has resulted in too many redirects. Clearing your cookies for this site or allowing third-party cookies may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer." "Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many redirects." How can I correct this? Thanks
Technical SEO | Joeuspe
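A common cause of that loop is the server internally serving default.htm as the directory index for /, so the redirect fires again on the redirected request. Restricting the redirect to requests that literally asked for /default.htm avoids it — a sketch, assuming Apache with mod_rewrite enabled:

```apacheconf
RewriteEngine On
# Redirect only when the client explicitly requested /default.htm,
# not when the server internally maps / to default.htm
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /default\.htm[\ ?]
RewriteRule ^default\.htm$ http://www.website.com/ [R=301,L]
```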