Duplicate Page content | What to do?
-
Hello Guys,
I have some duplicate pages detected by Moz. Most of the URLs are from a registration process for users, so the URLs all look like this:
www.exemple.com/user/login?destination=node/125%23comment-form
What should I do? Add this to robots.txt? If so, how? What's the command to add in Google Webmaster Tools?
Thanks in advance!
Pedro Pereira
-
Hi Carly,
It needs to be done on each of the pages, but in most cases this is just a minor change to a single page template. Someone might tell you that adding an entry to robots.txt will solve the problem, but that won't remove the pages from the index.
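For example, if all of the member pages are generated from one template, a single line in that template's <head> covers every page at once (a minimal sketch; the exact template file depends on your CMS):
<!-- added once, in the shared member-page template -->
<meta name="robots" content="noindex">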
Looking at the links you provided, though, I'm not convinced you should deindex them all, as these are member profile pages which might have some value in driving organic traffic and carrying unique content. That said, I'm not party to how your site works, so this is just an observation.
Hope that helps,
George
-
Hi George,
I am having a similar issue with my site, and was looking for a quick clarification.
We have several "member" pages that were created as part of registration (thousands of them), and they are appearing as duplicate content. When you say add noindex and a canonical, is this something that needs to be done to every individual page, or is there something that can be done that would apply to the thousands of pages at once?
Here are a couple of examples of what the pages look like:
http://loyalty360.org/me/members/8003
http://loyalty360.org/me/members/4641
Thank you!
-
1. If you add just noindex, Google will crawl the page and drop it from the index, but it will also crawl the links on that page and potentially index them too. The page still passes equity to the links on it.
2. If you add nofollow, noindex, Google will crawl the page and drop it from the index, but it will not crawl the links on that page, so no equity will be passed to them. As already established, Google may still put those links in the index, but it will display the standard "blocked" message for the page description.
If the links are internal, there's no harm in them being followed unless you're opening up the crawl to expose tons of duplicate content that isn't canonicalised.
noindex is often used with nofollow, but sometimes this is simply due to a misunderstanding of what impact they each have.
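For reference, this is what the two options look like as meta tags in the page's <head> (a minimal sketch):
<!-- option 1: deindex the page, still follow its links -->
<meta name="robots" content="noindex">
<!-- option 2: deindex the page and don't follow its links -->
<meta name="robots" content="noindex, nofollow">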
George
-
Hello,
Thanks for your response. I have learned more, which is great!
My question is: should I add noindex only to that page, or noindex, nofollow?
Thanks!
-
Yes, it's the worst possible scenario: they basically get trapped in the SERPs. Google won't crawl them again until you allow crawling, then set noindex (to remove them from the SERPs), and then add nofollow, noindex back on to keep them out of the SERPs and to stop Google following any links on them.
Configuring URL parameters, again, is just a directive regarding the crawl and doesn't affect indexing status, to the best of my knowledge.
In my experience, noindex is bulletproof but nofollow / robots.txt is very often misunderstood and can lead to a lot of problems as a result. Some SEOs think they can be clever in crafting the flow of PageRank through a site. The unsurprising reality is that Google just does what it wants.
George
-
Hi George,
Thanks for this, it's very interesting... the URLs do appear in search results, but their descriptions are blocked(!)
Did you try configuring URL parameters in WMT as a solution?
-
Hi Rafal,
The key part of that statement is "we might still find and index information about disallowed URLs...". If you read the next sentence, it says: "As a result, the URL address and, potentially, other publicly available information such as anchor text in links to the site can still appear in Google search results".
If you look at moz.com/robots.txt you'll see an entry for:
Disallow: /pages/search_results*
But if you search this on Google:
site:moz.com/pages/search_results
You'll find there are 20 results in the index.
I used to agree with you, until I found out the hard way that if Google finds a link, regardless of whether it's blocked in robots.txt, it can put it in the index, and it will remain there until you remove the nofollow restriction and noindex it, or remove it from the index using Webmaster Tools.
George
-
George,
I went to check with Google to make sure I am correct, and I am!
"While Google won't crawl or index the content blocked by
robots.txt
, we might still find and index information about disallowed URLs from other places on the web." Source: https://support.google.com/webmasters/answer/6062608?hl=enYes, he can fix these problems on page but disallowing it in robots will work fine too!
-
Just adding this to robots.txt will not stop the pages being indexed:
Disallow: /*login?
It just means Google won't crawl the page, and so won't follow the links on it.
I would do one of the following:
1. Add noindex to the page. PR will still be passed to the page, but it will no longer appear in SERPs.
2. Add a canonical on the page pointing to: "www.exemple.com/user/login" (see the sketch below).
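A minimal sketch of what either option looks like in the page's <head>, using the example domain from the question:
<!-- option 1: keep the page out of the index -->
<meta name="robots" content="noindex">
<!-- option 2: consolidate the parameterised URLs onto the clean login URL -->
<link rel="canonical" href="http://www.exemple.com/user/login">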
You're never going to try to get these pages to rank, so although it's worth fixing, I wouldn't lose too much sleep over the impact of having duplicate content on registration pages (unless there are hundreds of them!).
Regards,
George
-
In GWT: Crawl => URL Parameters => Configure URL Parameters => Add Parameter.
Make sure you know what you are doing, as it's easy to mess up and cause BIG issues.
-
Add this line to your robots.txt to prevent Google from indexing these pages:
Disallow: /*login?