Duplicate Page content | What to do?
-
Hello Guys,
I have some duplicate pages detected by Moz. Most of the URLs come from a registration process for users, so the URLs all look like this:
www.exemple.com/user/login?destination=node/125%23comment-form
What should I do? Add this to robots.txt? If so, how? What's the command to add in Google Webmaster Tools?
Thanks in advance!
Pedro Pereira
-
Hi Carly,
It needs to be done to each of the pages. In most cases, this is just a minor change to a single page template. Someone might tell you that you can add an entry to robots.txt to solve the problem, but that won't remove them from the index.
Looking at the links you provided, I'm not convinced you should deindex them all, as these are member profile pages which might have some value in terms of driving organic traffic and having unique content on them. That said, I'm not party to how your site works, so this is just an observation.
Hope that helps,
George
-
Hi George,
I am having a similar issue with my site, and was looking for a quick clarification.
We have several "member" pages that have been created as part of registration (thousands), and they are appearing as duplicate content. When you say add noindex and a canonical, is this something that needs to be done to every individual page, or is there something that can be done that would apply to the thousands of pages at once?
Here are a couple of examples of what the pages look like:
http://loyalty360.org/me/members/8003
http://loyalty360.org/me/members/4641
Thank you!
-
1. If you add just noindex, Google will crawl the page and drop it from the index, but it will also crawl the links on that page and potentially index them too. It basically still passes equity to the links on the page.
2. If you add noindex, nofollow, Google will crawl the page and drop it from the index, but it will not crawl the links on that page, so no equity will be passed to them. As already established, Google may still put those links in the index, but it will display the standard "blocked" message for the page description.
If the links are internal, there's no harm in them being followed unless you're opening up the crawl to expose tons of duplicate content that isn't canonicalised.
noindex is often used with nofollow, but sometimes this is simply due to a misunderstanding of what impact they each have.
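To make the two variants George describes concrete, here is a minimal sketch of how each would look in a page's head (standard robots meta tag syntax, not code from the site in question):

```html
<!-- Variant 1: drop the page from the index, but still let Google
     crawl the links on it and pass equity to them -->
<meta name="robots" content="noindex">

<!-- Variant 2: drop the page from the index AND stop Google
     following the links on it (no equity passed) -->
<meta name="robots" content="noindex, nofollow">
```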
George
-
Hello,
Thanks for your response. I have learned more, which is great.
My question is: should I add a noindex only to that page, or a noindex, nofollow?
Thanks!
-
Yes, it's the worst possible scenario: they basically get trapped in the SERPs. Google won't crawl them again until you allow the crawling, then set noindex (to remove them from the SERPs), and then add nofollow,noindex back on to keep them out of the SERPs and to stop Google following any links on them.
Configuring URL parameters again is just a directive regarding the crawl and doesn't affect indexing status to the best of my knowledge.
In my experience, noindex is bulletproof but nofollow / robots.txt is very often misunderstood and can lead to a lot of problems as a result. Some SEOs think they can be clever in crafting the flow of PageRank through a site. The unsurprising reality is that Google just does what it wants.
George
-
Hi George,
Thanks for this, it's very interesting... the URLs do appear in search results, but their descriptions are blocked(!)
Did you try configuring URL parameters in WMT as a solution?
-
Hi Rafal,
The key part of that statement is "we might still find and index information about disallowed URLs...". If you read the next sentence it says: "As a result, the URL address and, potentially, other publicly available information such as anchor text in links to the site can still appear in Google search results".
If you look at moz.com/robots.txt you'll see an entry for:
Disallow: /pages/search_results*
But if you search this on Google:
site:moz.com/pages/search_results
You'll find there are 20 results in the index.
I used to agree with you, until I found out the hard way that if Google finds a link, regardless of whether it's in robots.txt or not it can put it in the index and it will remain there until you remove the nofollow restriction and noindex it, or remove it from the index using webmaster tools.
George
-
George,
I went to check with Google to make sure I am correct and I am!
"While Google won't crawl or index the content blocked by robots.txt, we might still find and index information about disallowed URLs from other places on the web." Source: https://support.google.com/webmasters/answer/6062608?hl=en
Yes, he can fix these problems on-page, but disallowing it in robots.txt will work fine too!
-
Just adding this to robots.txt will not stop the pages from being indexed:
Disallow: /*login?
It just means Google won't crawl the links on that page.
I would do one of the following:
1. Add noindex to the page. PR will still be passed to the page, but it will no longer appear in SERPs.
2. Add a canonical on the page pointing to "www.exemple.com/user/login".
You're never going to try to get these pages to rank, so although it's worth fixing, I wouldn't lose too much sleep over the impact of having duplicate content on registration pages (unless there are hundreds of them!).
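As a sketch of the two options for a parameterised login URL like the one in the question (the canonical URL here is taken from the original post, not from a live site):

```html
<!-- Option 1: keep the login page out of the SERPs entirely -->
<meta name="robots" content="noindex">

<!-- Option 2: consolidate all ?destination=... variants
     onto one canonical login URL -->
<link rel="canonical" href="http://www.exemple.com/user/login">
```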
Regards,
George
-
In GWT: Crawl => URL Parameters => Configure URL Parameters => Add Parameter.
Make sure you know what you are doing, as it's easy to mess up and cause BIG issues.
-
Add this line to your robots.txt to prevent Google from indexing these pages:
Disallow: /*login?