Duplicate Content on WordPress.com
-
Hi Mozers!
I have a client with a blog on WordPress.com:
http://newsfromtshirts.wordpress.com/
It just took a ranking drop because of a new Panda update, and I know it's a duplicate content problem.
There are 3,900 duplicate pages, basically because there is no noindex or canonical tag in use, so archive and category pages are fully indexed by Google.
If I could install my usual SEO plugin, this would be a piece of cake, but since WordPress.com is a closed environment, I can't.
How can I put a noindex on all category, archive, and author pages on WordPress.com?
I think this could be done by writing a good robots.txt, but I am not sure about the syntax I should use to achieve that.
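Something like the following is what I have in mind, assuming the standard WordPress.com URL structure (a sketch only, and I realize robots.txt blocks crawling rather than strictly applying noindex):

User-agent: *
Disallow: /category/
Disallow: /tag/
Disallow: /author/
# date archives live under /YYYY/, so each year needs its own rule, e.g.:
Disallow: /2013/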
Thank you very much,
DoMiSol Rossini
-
Hi Mike,
I have been looking for a solution for almost a week.
My website http://mysay.in has over 3,000 duplicate content issues and an equal number of warnings and notices. I am very new to the Moz world, and my SEO knowledge is close to negligible. My site is hosted on WordPress.com as well. Now, after reading the suggested solution, I removed the Tag Cloud widget. Will that help? And could you suggest how many tags per post are optimal? Will removing tags from previous posts and removing the Tag Cloud remove these duplicate pages on their own, or is there anything else I am supposed to do?
I am just confused. Please assist!
Thanks
Vikash
-
Depending on your theme, you can sometimes change Tag Archives and Category pages to show a post summary instead of the full article, which can help but will not fully solve your problem.
If you could convince them to self-host instead of using WordPress.com's hosting, you'd be able to install an SEO plugin that would do much more to fix the problem.
Tag things less. There are a number of tags that have only one post associated with them. Take, for instance, the post from February 5th that has 8 tags, 5 of which go to Tag Archives containing only that one post.
Also, consider getting rid of the Tag Cloud because it just adds unnecessary and irrelevant links pointing to those Tag pages that are causing your dupe content problem and probably harming the flow of link equity through the site.
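If they do end up self-hosting, even without a plugin you could noindex the problem archives with a small theme snippet. This is a sketch, assuming a standard theme where you can edit functions.php:

// functions.php: print a robots meta tag on category, tag, author, and date archives
add_action( 'wp_head', function () {
    if ( is_category() || is_tag() || is_author() || is_date() ) {
        echo '<meta name="robots" content="noindex, follow">' . "\n";
    }
} );

Using "noindex, follow" keeps those pages out of the index while still letting crawlers follow the links on them.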
-
Related Questions
-
Pages with Duplicate Content
When I crawl my site through Moz, it shows lots of pages with duplicate content. The thing is, all of those pages are pagination pages. How should I solve this issue?
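From what I've read, the usual markup for a paginated series is rel="prev"/"next" in the head of each page; a sketch with made-up URLs:

<!-- on page 2 of a hypothetical /blog/ series -->
<link rel="prev" href="http://example.com/blog/page/1/">
<link rel="next" href="http://example.com/blog/page/3/">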
Technical SEO | 100offdeal
-
Duplicate Content for Locations on my Directory Site
I have a pretty big directory site using WordPress with lots of "locations", "features", "listing-category", etc. Duplicate content: https://www.thecbd.co/location/california/ and https://www.thecbd.co/location/canada/ (the referring URL is www.thecbd.co). Is it a matter of just putting a canonical URL on each location page, or just on the main page? Would this be the correct code to put on the main page? Thanks, everyone!
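If it helps, what I have in mind for each location page is a self-referencing canonical; a sketch using one of the URLs above:

<!-- in the <head> of https://www.thecbd.co/location/california/ -->
<link rel="canonical" href="https://www.thecbd.co/location/california/" />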
Technical SEO | kay_nguyen
-
Does adding a noindex tag reduce duplicate content?
I've been working under the assumption for some time that if I have two (or more) pages which are very similar, I can add a noindex tag to the pages I don't need, and that will reduce duplicate content. As far as I know, this removes the tagged pages from Google's index and stops any potential issues with duplicate content. It's the second part of that assumption that I'm now questioning. Despite pages having the noindex tag, they continue to appear in Google Search Console flagged as duplicate content, soft 404s, etc. That is, new pages are appearing regularly that I know to have the noindex tag. My thinking so far is that Google can still crawl these pages (although it won't index them), so it shows them in GSC due to a crude issue-flagging process. I mainly want to know: a) Is the actual Google algorithm sophisticated enough to ignore these pages even though GSC doesn't? b) How do I explain this to a client?
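For clarity, the tag I'm referring to is the robots meta tag:

<!-- in the page <head>: keep the page out of the index, still follow its links -->
<meta name="robots" content="noindex, follow">

As I understand it, the equivalent for non-HTML resources is an X-Robots-Tag: noindex HTTP header.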
Technical SEO | ChrisJFoster
-
Do you think my client is being hit for duplicate content?
WordPress website. The client's website is http://www.denenapoints.com/. The URL that we purchased so that we could set up the hosting account is http://houston-injury-lawyers.com, which shows 1 page indexed in Google when I search for site:http://houston-injury-lawyers.com. On http://www.denenapoints.com/ there is <link rel="canonical" href="http://houston-injury-lawyers.com/">, but http://houston-injury-lawyers.com says the same thing: <link rel="canonical" href="http://houston-injury-lawyers.com/" />. Is this how it should be set up, assuming that we want everything to point to http://denenapoints.com/? Maybe we should do a 301 redirect to be 100% sure? Hopefully I explained this well enough. Please let me know if anyone has any thoughts, thanks!
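If denenapoints.com is the domain everything should resolve to, a site-wide 301 from the spare domain would be the usual fix; a sketch assuming Apache with mod_rewrite enabled:

# .htaccess on houston-injury-lawyers.com
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?houston-injury-lawyers\.com$ [NC]
RewriteRule ^(.*)$ http://www.denenapoints.com/$1 [R=301,L]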
Technical SEO | georgetsn
-
Avoiding Cannibalization and Duplication with Content
Hi, for the example I will use a computer e-commerce store... I'm working on creating guides for the store:
How to choose a laptop
How to choose a desktop
I believe that each guide will be great on its own and that it answers a specific question (meaning that someone looking for a laptop will search specifically for laptop info, and the same goes for desktop). This is why I didn't create a "How to choose a computer" guide. I also want each guide to contain all the information and not to start sending the user to secondary pages to fill in missing info. However, even though several details differ between laptops and desktops, like the importance of weight, screen size, etc., a lot of the checklist (like deciding on how much memory is needed, graphics card, core, etc.) is the same. Please advise on how to pursue this. Should I just write two guides and make sure that the duplicated content ideas are simply written in a different way?
Technical SEO | BeytzNet
-
How damaging is duplicate content in a forum?
Hey all! I hunted around for this in previous Q&A questions and didn't see anything. I'm just coming back to SEO after a few years out of the field and am preparing recommendations for our web dev team. We use custom-coded software for our forums, and it creates a giant swathe of duplicate content, since each post has its own link. For example: domain.com/forum/post_topic, domain.com/forum/post_topic/post1, domain.com/forum/post_topic/post2, and so on. However, since every page of the forum defaults to showing 20 posts, every forum thread that's 20 posts long has 21 different pages with identical content. Now, our forum is all user-generated content and is not generally a source of much inbound traffic (with occasional exceptions), but I was curious whether having a mess of duplicate content in our forums could damage our ability to rank well in a different directory of the site. I've heard that Panda is really cracking down on duplicate content, and the last time I was current on SEO trends, rel="canonical" was the hot new thing everyone was talking about, so I've got a lot of catching up to do. Any guidance from the community would be much appreciated.
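One pattern I'm considering recommending (a sketch, not specific to our forum software) is a canonical from each per-post URL back to the page of the thread it appears on:

<!-- in the <head> of domain.com/forum/post_topic/post1 -->
<link rel="canonical" href="http://domain.com/forum/post_topic">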
Technical SEO | TheEnigmaticT
-
Duplicate content on HTTP and HTTPS
Within my Moz crawl report, I have a ton of duplicate content caused by identical pages served at both http and https URLs. For example: http://www.bigcompany.com/accomodations and https://www.bigcompany.com/accomodations. The strange thing is that 99% of these URLs are not sensitive in nature and do not require any security features: no credit card information, booking, or carts. The web developer cannot explain where these extra URLs came from or provide any further information. Advice or suggestions are welcome! How do I solve this issue? THANKS, MOZZERS
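One fix I've seen suggested is to pick a single scheme as canonical and 301 the other to it; a sketch assuming Apache, with the http version winning (flip the logic if you standardize on https):

# .htaccess: send all https requests to the http equivalent
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]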
Technical SEO | hawkvt1
-
I have a ton of "duplicate content" and "duplicate titles" on my website. Solutions?
Hi, and thanks in advance. I have a JomSocial site with 1,000 users. It is highly customized, and as a result of the customization some of the pages have 5 or more different URLs pointing to the same page. Google has indexed 16,000 links already, and the crawl report shows a lot of duplicate content. These links are important for some of the functionality, are dynamically created, and will continue growing. My developers offered to create rules in the robots file so that a big part of these links don't get indexed, but a Google Webmaster Tools post says the following: "Google no longer recommends blocking crawler access to duplicate content on your website, whether with a robots.txt file or other methods. If search engines can't crawl pages with duplicate content, they can't automatically detect that these URLs point to the same content and will therefore effectively have to treat them as separate, unique pages. A better solution is to allow search engines to crawl these URLs, but mark them as duplicates by using the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. In cases where duplicate content leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools." Here is an example of the links:
http://anxietysocialnet.com/profile/edit-profile/salocharly
http://anxietysocialnet.com/salocharly/profile
http://anxietysocialnet.com/profile/preferences/salocharly
http://anxietysocialnet.com/profile/salocharly
http://anxietysocialnet.com/profile/privacy/salocharly
http://anxietysocialnet.com/profile/edit-details/salocharly
http://anxietysocialnet.com/profile/change-profile-picture/salocharly
So the question is: is this really that bad? What are my options? Is it really a good solution to set rules in robots.txt so that big chunks of the site don't get indexed? Is there any other way I can resolve this? Thanks again! Salo
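Following Google's quoted advice, one route would be a canonical on every profile-related variant pointing at the preferred profile URL; a sketch assuming /profile/salocharly is the preferred version:

<!-- in the <head> of e.g. /profile/preferences/salocharly -->
<link rel="canonical" href="http://anxietysocialnet.com/profile/salocharly">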
Technical SEO | Salocharly