Duplicate Page content | What to do?
-
Hello Guys,
I have some duplicate pages detected by Moz. Most of the URLs are from a registration process for users, so the URLs all look like this:
www.exemple.com/user/login?destination=node/125%23comment-form
What should I do? Should I add this to robots.txt? If so, how? What's the command to add in Google Webmaster Tools?
Thanks in advance!
Pedro Pereira
-
Hi Carly,
It needs to be done to each of the pages. In most cases, this is just a minor change to a single page template. Someone might tell you that you can add an entry to robots.txt to solve the problem, but that won't remove them from the index.
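To illustrate, here's a minimal sketch of what that single-template change could look like, assuming the member pages are all rendered from one shared profile template (the exact file and syntax depend on your CMS):

```html
<!-- In the <head> of the shared member-profile template -->
<!-- One edit here applies to every page rendered from this template -->
<meta name="robots" content="noindex">
```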
Looking at the links you provided, I'm not convinced you should deindex them all - as these are member profile pages which might have some value in terms of driving organic traffic and having unique content on them. That said I'm not party to how your site works, so this is just an observation.
Hope that helps,
George
-
Hi George,
I am having a similar issue with my site, and was looking for a quick clarification.
We have several "member" pages that have been created as part of registration (thousands) and they are appearing as duplicate content. When you say add noindex and a canonical, is this something that needs to be done to every individual page, or is there something that can be done that would apply to the thousands of pages at once?
Here are a couple of examples of what the pages look like:
http://loyalty360.org/me/members/8003
http://loyalty360.org/me/members/4641
Thank you!
-
1. If you add just noindex, Google will crawl the page, drop it from the index but it will also crawl the links on that page and potentially index them too. It basically passes equity to links on the page.
2. If you add nofollow, noindex, Google will crawl the page, drop it from the index but it will not crawl the links on that page. So no equity will be passed to them. As already established, Google may still put these links in the index, but it will display the standard "blocked" message for the page description.
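For reference, the two options above correspond to these meta tags in the page's head (a sketch; you'd place whichever one you choose in the page template):

```html
<!-- Option 1: drop the page from the index, but still pass equity through its links -->
<meta name="robots" content="noindex">

<!-- Option 2: drop the page from the index and don't follow its links -->
<meta name="robots" content="noindex, nofollow">
```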
If the links are internal, there's no harm in them being followed unless you're opening up the crawl to expose tons of duplicate content that isn't canonicalised.
noindex is often used with nofollow, but sometimes this is simply due to a misunderstanding of what impact they each have.
George
-
Hello,
Thanks for your response. I have learned more, which is great!
My question is: should I add a noindex only to that page, or a noindex, nofollow?
Thanks!
-
Yes, it's the worst possible scenario: they basically get trapped in the SERPs. Google won't crawl them again until you allow crawling, then set noindex (to remove them from the SERPs), and then add nofollow, noindex back on to keep them out of the SERPs and to stop Google following any links on them.
Configuring URL parameters again is just a directive regarding the crawl and doesn't affect indexing status to the best of my knowledge.
In my experience, noindex is bulletproof but nofollow / robots.txt is very often misunderstood and can lead to a lot of problems as a result. Some SEOs think they can be clever in crafting the flow of PageRank through a site. The unsurprising reality is that Google just does what it wants.
George
-
Hi George,
Thanks for this, it's very interesting... the URLs do appear in search results but their descriptions are blocked (!)
Did you try configuring URL parameters in WMT as a solution?
-
Hi Rafal,
The key part of that statement is "we might still find and index information about disallowed URLs...". If you read the next sentence it says: "As a result, the URL address and, potentially, other publicly available information such as anchor text in links to the site can still appear in Google search results".
If you look at moz.com/robots.txt you'll see an entry for:
Disallow: /pages/search_results*
But if you search this on Google:
site:moz.com/pages/search_results
You'll find there are 20 results in the index.
I used to agree with you, until I found out the hard way that if Google finds a link, regardless of whether it's in robots.txt or not it can put it in the index and it will remain there until you remove the nofollow restriction and noindex it, or remove it from the index using webmaster tools.
George
-
George,
I went to check with Google to make sure I am correct and I am!
"While Google won't crawl or index the content blocked by robots.txt, we might still find and index information about disallowed URLs from other places on the web." Source: https://support.google.com/webmasters/answer/6062608?hl=en
Yes, he can fix these problems on-page, but disallowing it in robots.txt will work fine too!
-
Just adding this to robots.txt will not stop the pages being indexed:
Disallow: /*login?
It just means Google won't crawl the links on that page.
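To see which URLs that Disallow pattern actually covers, here's a rough sketch of Google-style robots.txt wildcard matching (a toy matcher for illustration, not Google's real implementation; note that as far as I know Python's built-in urllib.robotparser doesn't handle the * wildcard, which is why this does its own matching):

```python
import re

def googlebot_blocked(path: str, disallow_pattern: str) -> bool:
    """Toy approximation of Google's robots.txt matching:
    '*' matches any run of characters, '$' anchors the end of the URL,
    and a rule otherwise matches as a prefix of the path."""
    regex = re.escape(disallow_pattern).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(regex, path) is not None

# The login URLs from the question match the proposed rule, so Google
# would stop *crawling* them -- but that alone doesn't deindex them.
print(googlebot_blocked("/user/login?destination=node/125%23comment-form", "/*login?"))  # True
print(googlebot_blocked("/user/register", "/*login?"))  # False
```

The point stands either way: matching a Disallow rule only stops crawling, while indexing is governed separately by noindex.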
I would do one of the following:
1. Add noindex to the page. PR will still be passed to the page but they will no longer appear in SERPs.
2. Add a canonical on the page to: "www.exemple.com/user/login"
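As a sketch, option 2 would mean putting something like this in the head of each parameterised login page (assuming the plain login URL is the version you'd want treated as canonical):

```html
<!-- Points all the ?destination=... variants at one canonical login URL -->
<link rel="canonical" href="http://www.exemple.com/user/login">
```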
You're never going to try and get these pages to rank, so although it's worth fixing I wouldn't lose too much sleep on the impact of having duplicate content on registration pages (unless there are hundreds of them!).
Regards,
George
-
In GWT: Crawl=> URL Parameters => Configure URL Parameters => Add Parameter
Make sure you know what you are doing as it's easy to mess up and have BIG issues.
-
Add this line to your robots.txt to prevent Google from indexing these pages:
Disallow: /*login?