How to get rid of duplicate content
-
I have duplicate content at URLs that look like http://deceptionbytes.com/component/mailto/?tmpl=component&link=932fea0640143bf08fe157d3570792a56dcc1284 - however, I have 50 of these, all with different numbers on the end. Does this affect search engine optimization, and how can I disallow this in my robots.txt file?
-
Hi Michelle,
In addition to what Alan said, I might take a couple more actions on this page. Since it sounds like you're a beginner, don't worry if you don't understand all this stuff, but I wanted to include it for anyone else reading this question.
I've also tried to include links to relevant sources where you can learn about each topic addressed.
1. Yes, add the canonical. This basically tells search engines that even though these pages all have different URL addresses, they are meant to be the same page.
http://www.seomoz.org/learn-seo/canonicalization
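For anyone wondering what that looks like in the HTML, here's a minimal sketch (assuming the parameter-free mailto URL is the version you want credited - swap in whichever address you actually want to rank):

```html
<!-- Placed in the <head> of every parameterized variant.
     The href is the one "real" URL search engines should credit
     (the parameter-free address is assumed here for illustration). -->
<link rel="canonical" href="http://deceptionbytes.com/component/mailto/" />
```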
2. The "numbers at the end" are called URL parameters, and there is a setting in Google Webmaster Tools you can use to tell Google to ignore specific parameters. This is advanced stuff, and Google does a pretty good job these days of figuring parameters out on its own, so it's best not to adjust these settings unless you're comfortable doing so.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1235687
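To see why these count as duplicates, here's a quick illustrative sketch (plain Python, with one hypothetical second URL): stripping the query string collapses all 50 variants down to a single address, which is essentially what the parameter-handling setting tells Google to do:

```python
from urllib.parse import urlparse, urlunparse

def strip_params(url):
    """Drop the query string so parameter variants collapse to one URL."""
    p = urlparse(url)
    return urlunparse((p.scheme, p.netloc, p.path, "", "", ""))

variants = [
    "http://deceptionbytes.com/component/mailto/?tmpl=component&link=932fea0640143bf08fe157d3570792a56dcc1284",
    # Hypothetical second variant with a different number on the end
    "http://deceptionbytes.com/component/mailto/?tmpl=component&link=0123456789abcdef",
]

# Both variants reduce to the same parameter-free address.
print({strip_params(u) for u in variants})
# → {'http://deceptionbytes.com/component/mailto/'}
```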
3. Honestly, there's no reason for this page to appear in search results, or to waste search engine resources crawling it. So, if possible, I'd add a meta robots "NOINDEX, FOLLOW" tag to the head element of the HTML.
http://www.robotstxt.org/meta.html
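In case it helps, the tag itself is a one-liner in the page's head (sketch below): noindex keeps the page out of results, while follow still lets link juice flow through its links.

```html
<!-- In the <head> of the mailto pages: keep them out of the index,
     but still let crawlers follow the links on them. -->
<meta name="robots" content="noindex, follow" />
```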
4. Additionally, I'd slap a nofollow on any links pointing to these pages, and/or block crawling of them via robots.txt, because there is no reason to waste your search engine crawl allowance on these pages.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=96569
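A nofollowed link would look something like this (the anchor text and truncated href are purely illustrative):

```html
<!-- rel="nofollow" asks engines not to spend crawl budget or credit on this link -->
<a href="/component/mailto/?tmpl=component&link=932f..." rel="nofollow">Email this page</a>
```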
5. And finally, I think it's perfectly legitimate to block these throwaway pages using robots.txt. Alan has a good point about link juice - it's usually best not to block pages using robots.txt, but in this particular case I think it would be fine.
http://www.seomoz.org/learn-seo/robotstxt
Honestly, addressing all of these issues in this particular case probably won't make a huge impact on your SEO. But as you can see, there are multiple ways of dealing with the problem that touch on many of the fundamental techniques of Search Engine Optimization.
Finally, to answer your question in a straightforward way: to disallow this directory in robots.txt, your file would look something like this.
User-agent: *
Disallow: *mailto/
This will block anything in the /mailto/ directory. (The * wildcard is supported by Google, though it isn't part of the original robots.txt standard.)
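If you want to sanity-check a rule before deploying it, Python's standard-library robots.txt parser can simulate a crawler. One caveat: the stdlib parser implements the original robots.txt protocol without wildcards, so the sketch below writes the rule as a plain path prefix rather than the *mailto/ wildcard form (which Google does understand):

```python
from urllib import robotparser

# Hypothetical robots.txt equivalent to the rule above, written as a
# path prefix because the stdlib parser doesn't support * wildcards.
rules = """\
User-agent: *
Disallow: /component/mailto/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

blocked = "http://deceptionbytes.com/component/mailto/?tmpl=component&link=932fea0640143bf08fe157d3570792a56dcc1284"
allowed = "http://deceptionbytes.com/blog/"

print(rp.can_fetch("*", blocked))  # → False (the mailto pages are blocked)
print(rp.can_fetch("*", allowed))  # → True  (everything else stays crawlable)
```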
Hope this helps. Best of luck with your SEO!
-
Michelle,
I agree with Alan. If you're confused by the rel=canonical tag, I recommend you read the SEOmoz Beginner's Guide to SEO, more specifically this page: http://www.seomoz.org/beginners-guide-to-seo/search-engine-tools-and-services. The whole book/guide goes through a lot of best practices, and even advanced SEOs can use it as a kind of "bible".
Hope this helps
-
100% best move forward
-
Link juice flows through links only if the linked page is in the index; if not, the link juice just goes up in smoke and is wasted, so you don't want to link to a page that is not indexed.
A canonical tag tells the search engine to give the credit to the page named in the canonical tag.
So a canonical tag pointing to page.html from page.html?id=5 will tell the search engine they are the same page, and to give credit to the canonical.
This is how to create a canonical tag:
<link rel="canonical" href="http://mycanonialpage.com/page.html" />
-
Link juice leaks?? Canonical tag? Ummmmm, I thought I was fairly smart until just this minute - I have NO idea what you are talking about.
-
Don't use robots.txt.
You will cause link juice leaks for each link that points to a page behind a robots.txt exclude.
The best thing to do is use a canonical tag pointing to http://deceptionbytes.com/component/mailto