Removing Duplicate Page Content
-
Since joining SEOmoz four weeks ago I've been busy tweaking our site, a Magento eCommerce store, and have successfully removed a significant portion of the errors.
Now I need to remove/hide duplicate pages from the search engines and I'm wondering what is the best way to attack this?
Can I solve this in one central location, or do I need to do something in the Google & Bing webmaster tools?
Here is a list of the duplicate content:
http://www.unitedbmwonline.com/?dir=asc&mode=grid&order=name
http://www.unitedbmwonline.com/?dir=asc&mode=list&order=name
http://www.unitedbmwonline.com/?dir=asc&order=name
http://www.unitedbmwonline.com/?dir=desc&mode=grid&order=name
http://www.unitedbmwonline.com/?dir=desc&mode=list&order=name
http://www.unitedbmwonline.com/?dir=desc&order=name
http://www.unitedbmwonline.com/?mode=grid
http://www.unitedbmwonline.com/?mode=list
Thanks in advance,
Steve
-
Thank you, Cyrus. I will certainly read the blog post and consider noindex, nofollow on content whose canonical tag differs from the currently served page's URI.
I am still a little confused as to why the SEOmoz crawl is highlighting duplicate pages when the canonical tag is present and pointing to the primary content.
Take the following page as an example:
http://www.planksclothing.com/planks-classic-t-shirt-black-multi.html
Firstly, the page has a canonical tag. There is no search on the site, and products are viewed at root level without a directory structure, which in a Magento instance is the common source of duplicate content...
At the time of writing, SEOmoz is updating my duplicate report, so I can't find out what the duplicate content is. Maybe it is updating to say it is not a duplicate.
Thanks
Amendment: After reading the supplied blog post (http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world) I have learnt that the above page is simply not different enough and probably falls into the area of "Thin Content".
-
There are many, many different types of duplicate content, and how you handle it depends on the specific type of duplicate content and your needs.
If you haven't already, I highly suggest you read Dr. Pete's excellent post on dupe content here: http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
In your specific case it looks like you have multiple parameters serving the same basic content as your homepage. Is this correct?
In this case, you should set a canonical on every page pointing to the homepage. This also has the benefit of solving the errors in the SEOmoz PRO app.
It also sounds like you've addressed the issue in Google's Webmaster Tools. Unfortunately, Google doesn't let SEOmoz sync with Webmaster Tools, so anything you set there won't show up in the Web App.
Finally, don't forget about Bing Webmaster. They have similar parameter settings you can submit.
By the way, some SEOs would suggest putting meta robots "NOINDEX, FOLLOW" tags on those duplicate pages. While this may send conflicting signals when coupled with the canonical tag, it is a potentially valid approach.
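Taken together, these suggestions boil down to one rule: strip the presentation-only parameters to find the page that should be indexed. A minimal Python sketch of that logic (the parameter list dir/mode/order is an assumption taken from the URLs in the question, not a Magento setting):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Assumption from the question's URLs: these parameters only change
# presentation (sort direction, grid/list view, sort key), not content.
PRESENTATION_PARAMS = {"dir", "mode", "order"}

def canonical_url(url):
    """Strip presentation-only query parameters to get the canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in PRESENTATION_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

def canonical_tag(url):
    """Render the <link rel="canonical"> tag a duplicate page should carry."""
    return '<link rel="canonical" href="%s" />' % canonical_url(url)

print(canonical_tag("http://www.unitedbmwonline.com/?dir=asc&mode=grid&order=name"))
# -> <link rel="canonical" href="http://www.unitedbmwonline.com/" />
```

In practice Magento (1.4+) ships a "Use Canonical Link Meta Tag" option in its catalog SEO settings, so you would normally enable that rather than hand-roll anything; the sketch only shows the logic the tag encodes.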
Hope this helps! Best of luck with your SEO.
-
This is exactly my current situation...
As a result of the SEOmoz duplicate content report I set about resolving these issues...
In the first instance I configured URL parameters via Google Webmaster Tools. It instantly occurred to me that whilst this fixes the potential duplicate content in Google, the configuration does not affect other search engines, and the work is unlikely to be reflected in future SEOmoz crawls of the site.
I'm interested in creating an overarching method of removing the potential duplication caused by the URL parameters required to paginate, sort and filter content. The majority of these URL parameters are standardized across web applications. But is it actually required?
In my case each Magento store uses the canonical tag correctly and has an updated robots.txt to restrict the crawling of areas of the site that should be excluded... In a sense this is the overarching method of removing potential duplicate content. So why is SEOmoz reporting duplicate content?
I suppose the big question is... is SEOmoz crawling the site correctly? Do these results reflect robots.txt and canonical tags?
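Incidentally, whether a given robots.txt rule really blocks a given URL can be sanity-checked locally with Python's standard-library robotparser before trusting any crawler's report. A sketch with hypothetical rules (not the store's real file); note that robotparser implements the classic exclusion standard, i.e. plain prefix matching with no Googlebot-style wildcards:

```python
from urllib import robotparser

# Hypothetical rules for illustration only; a real store's file will differ.
rules = """\
User-agent: *
Disallow: /checkout
Disallow: /?dir=
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Prefix matching: the first URL starts with the "/?dir=" pattern, the
# second does not, so only the first is blocked.
print(rp.can_fetch("*", "http://www.unitedbmwonline.com/?dir=asc&mode=grid"))  # False
print(rp.can_fetch("*", "http://www.unitedbmwonline.com/?mode=grid"))          # True
print(rp.can_fetch("*", "http://www.unitedbmwonline.com/"))                    # True
```

A quick check like this makes it easier to tell whether an "overarching" robots.txt rule actually covers all the parameter combinations you care about.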
-
Thank you for your thoughts.
As mentioned in my above response, canonical tags have already been configured for the site, it's just this home page that remains the issue.
-
Thanks for your response.
I looked in URL Parameters and see dir & mode are already defined.
Then I searched the http://www.unitedbmwonline.com page source for canonical links and none are defined, though I do have canonical tags set up for the rest of the site.
Any other thoughts of how to remove these duplicates?
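One way to double-check this, rather than eyeballing the page source: a short script with Python's standard-library HTML parser that lists any canonical links in a page. The sample markup below is made up for illustration; you would feed it the real homepage HTML:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of every <link rel="canonical"> in a page."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonicals.append(attrs.get("href"))

def find_canonicals(html):
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonicals

# Made-up markup standing in for a fetched homepage:
page = '<html><head><link rel="canonical" href="http://www.example.com/" /></head><body></body></html>'
print(find_canonicals(page))  # -> ['http://www.example.com/']
```

An empty result for the homepage, but not for other pages, would confirm the hand check and point at the homepage template differing from the rest of the site.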
-
You can also tell Google to ignore certain query string variables through Webmaster Tools.
For instance, indicate that "dir" and "mode" have no impact on content.
Other SEs have similar controls.
-
This is why the canonical tag was invented: to solve duplicate content issues when URL parameters are involved. Set a canonical tag on all of these pages pointing to the version of the page you want to appear in search results. As long as the pages are identical, or close to it, the search engines will (most likely) respect the canonical tag and pass the duplicate versions' link juice along to the page you're pointing to.
Here's some info: http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html. If you Google "canonical tag", you'll find lots more!
Related Questions
-
International SEO and duplicate content: what should I do when hreflangs are not enough?
Hi, a follow-up question from another one I had a couple of months ago. My hreflangs have now been in place for almost 2 months. Google recognises them well and GSC is clean (no hreflang errors). Though I've seen some positive changes, I'm quite far from sorting that duplicate content issue completely, and some entire sub-folders remain hidden from the SERPs.
Intermediate & Advanced SEO | | GhillC
I believe it happens for two reasons: 1. Fully mirrored content: as per the link to my previous question above, some parts of the site I'm working on are 100% similar. Quite a "gravity issue" here, as there is nothing I can do to fix the site architecture or to get bespoke content in place. 2. Sub-folder "authority": I'm guessing that Google prefers some sub-folders over others due to their legacy traffic/history, meaning that even with hreflangs in place, the older sub-folder ranks over the right one because Google believes it provides better results to its users. Two questions from these reasons:
1. Is the latter correct? Am I guessing correctly regarding "sub-folder authority" (if such a thing exists), or am I simply wrong? 2. Can I solve this using canonical tags?
Instead of trying to fix and "promote" hidden sub-folders, I'm thinking of actually reinforcing the results I'm getting from the stronger sub-folders.
I.e. if a user based in Belgium Googles something relating to my site, the site.com/fr/ sub-folder shows up instead of the site.com/be/fr/ sub-sub-folder.
Or if someone based in Belgium is using Dutch, they would get site.com/nl/ results instead of the site.com/be/nl/ sub-sub-folder. I could therefore canonicalise /be/fr/ to /fr/ and do something similar for the second case. I'd prefer traffic to come to the right part of the site for tracking and analytics reasons. However, instead of trying to move mountains by changing Google's behaviour (if I even could), I'm thinking of encouraging the current flow (also because it's not completely wrong, as it brings traffic to pages featuring the correct language no matter what). That second question is the main reason I'm looking for the Moz community's advice: am I going to damage the site badly by using canonical tags that way? Thank you so much!
-
Duplicate WordPress page titles
Hi Moz fans, I have a few questions that are confusing me at the moment. My website uses WordPress for its blog,
Intermediate & Advanced SEO | | ASKHANUMANTHAILAND
but I have now got duplicate page titles from the WordPress blog, and it's very weird, because these duplicate page titles look like the term "Lamborghini ASK Hanuman Club". The title is duplicated between pages like these: articles/13-วิธีง่ายๆ-เป็น-คนรวย-ได้/lamborghini-2/ and /articles/13-วิธีง่ายๆ-เป็น-คนรวย-ได้/lamborghini/. I don't know why WordPress created a new page that only has one picture, so I think it might be the effect of the WordPress PHP code or a wrong setting. Can anyone suggest what we should do about these issues? I have many duplicate page titles from this case.
-
What Makes Good Content for Category Pages
Hello, we're putting roughly 500 words (depending on the category) under the products or categories on category pages. We're having a writer do these who has to learn the products from scratch. With over 100 categories, it's not possible for the client to write 500 words for each one. We're wondering: 1. What should go into a category description? 2. How do you prep a writer to write these, and is it possible to do so and get good content? I'm afraid that we're writing just words for long tails, and I know that on the product pages, home page, and articles it has to be the best content, probably written by the client himself if he is knowledgeable enough. Open to your suggestions on what should be in these, how long they should be, and who should write them.
Intermediate & Advanced SEO | | BobGW
-
Using robots.txt to resolve duplicate content
I have trouble with duplicate content and titles. I have tried many ways to resolve them, but because of the web code I am still stuck, so I have decided to use robots.txt to block the duplicate content. The first question: how do I use a command in robots.txt to block all URLs like these:
http://vietnamfoodtour.com/foodcourses/Cooking-School/
Intermediate & Advanced SEO | | magician
http://vietnamfoodtour.com/foodcourses/Cooking-Class/
.......
User-agent: *
Disallow: /foodcourses
(Is that right?)
And the parameter URLs:
http://vietnamfoodtour.com/?mod=vietnamfood&page=2
http://vietnamfoodtour.com/?mod=vietnamfood&page=3
http://vietnamfoodtour.com/?mod=vietnamfood&page=4
User-agent: *
Disallow: /?mod=vietnamfood
(Is that right? I have a folder containing the module; could I use Disallow: /module/* ?)
The second question is: which takes priority, robots.txt or the meta robots tag? If I use robots.txt to block a URL, but in that URL my meta robots tag is "index, follow"?
-
How to Best Establish Ownership when Content is Duplicated?
A client (Website A) has allowed one of their franchisees to use some of the content from their site on the franchisee site (Website B). This franchisee lifted the content word for word, so - my question is how to best establish that Website A is the original author? Since there is a business relationship between the two sites, I'm thinking of requiring Website B to add a rel=canonical tag to each page using the duplicated content and referencing the original URL on site A. Will that work, or is there a better solution? This content is primarily informational product content (not blog posts or articles), so I'm thinking rel=author may not be appropriate.
Intermediate & Advanced SEO | | Allie_Williams
-
Duplicate page content for the same page with different extensions?
I have added a campaign ("Bannerbuzz") to my SEOmoz Pro account, and 2 or 3 days ago I got errors related to duplicate page content. It is showing me the same page with different extensions, as mentioned below:
http://www.bannerbuzz.com/outdoor-vinyl-banners.html
Intermediate & Advanced SEO | | CommercePundit
&
http://www.bannerbuzz.com/outdoor_vinyl_banner.php
We checked our source files, but we didn't define PHP-related URLs in our source code; we want to serve only our .html URLs. Can you please guide us to solve this issue? Thanks
-
BEING PROACTIVE ABOUT CONTENT DUPLICATION...
So we all know that duplicate content is bad for SEO. I was just thinking... whenever I post new content to a blog, website page, etc., there should be something I can do to tell Google (in fact, all search engines) that I just created and posted this content to the web, and that I am the original source, so if anyone else copies it they get penalised and not me... Would appreciate your answers... 🙂 regards,
Intermediate & Advanced SEO | | TopGearMedia
-
Load balancing - duplicate content?
Our site switches between www1 and www2 depending on the server load, so (the way I understand it at least) we have two versions of the site. My question is whether the search engines will consider this as duplicate content, and if so, what sort of impact can this have on our SEO efforts? I don't think we've been penalised, (we're still ranking) but our rankings probably aren't as strong as they should be. The SERPs show a mixture of www1 and www2 content when I do a branded search. Also, when I try to use any SEO tools that involve a site crawl I usually encounter problems. Any help is much appreciated!
Intermediate & Advanced SEO | | ChrisHillfd