Duplicate Page content | What to do?
-
Hello Guys,
I have some duplicate pages detected by Moz. Most of the URLs are from a registration process for users, so the URLs all look like this:
www.exemple.com/user/login?destination=node/125%23comment-form
What should I do? Add them to robots.txt? If so, how? What's the command to add in Google Webmaster Tools?
Thanks in advance!
Pedro Pereira
-
Hi Carly,
It needs to be done to each of the pages. In most cases, this is just a minor change to a single page template. Someone might tell you that you can add an entry to robots.txt to solve the problem, but that won't remove them from the index.
Looking at the links you provided, I'm not convinced you should deindex them all, as these are member profile pages which might have some value in terms of driving organic traffic and having unique content on them. That said, I'm not party to how your site works, so this is just an observation.
Hope that helps,
George
-
Hi George,
I am having a similar issue with my site, and was looking for a quick clarification.
We have several "member" pages that have been created as part of registration (thousands), and they are appearing as duplicate content. When you say add noindex and a canonical, is this something that needs to be done to every individual page, or is there something that can be done that would apply to the thousands of pages at once?
Here are a couple of examples of what the pages look like:
http://loyalty360.org/me/members/8003
http://loyalty360.org/me/members/4641
Thank you!
-
1. If you add just noindex, Google will crawl the page, drop it from the index but it will also crawl the links on that page and potentially index them too. It basically passes equity to links on the page.
2. If you add nofollow, noindex, Google will crawl the page, drop it from the index but it will not crawl the links on that page. So no equity will be passed to them. As already established, Google may still put these links in the index, but it will display the standard "blocked" message for the page description.
If the links are internal, there's no harm in them being followed unless you're opening up the crawl to expose tons of duplicate content that isn't canonicalised.
noindex is often used with nofollow, but sometimes this is simply due to a misunderstanding of what impact they each have.
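For reference, the two variants map onto standard robots meta tags in the page's head section (a sketch; where this goes depends on your page template or CMS):

```html
<!-- 1. Drop the page from the index, but still crawl its links
        and pass equity through them -->
<meta name="robots" content="noindex">

<!-- 2. Drop the page from the index and don't follow its links -->
<meta name="robots" content="noindex, nofollow">
```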
George
-
Hello,
Thanks for your response. I have learned more, which is great.
My question is: should I add a noindex only to that page, or noindex, nofollow?
Thanks!
-
Yes, it's the worst possible scenario: they basically get trapped in the SERPs. Google won't crawl them again until you allow crawling, then set noindex (to remove them from the SERPs), and then add nofollow,noindex back on to keep them out of the SERPs and to stop Google following any links on them.
Configuring URL parameters again is just a directive regarding the crawl and doesn't affect indexing status to the best of my knowledge.
In my experience, noindex is bulletproof but nofollow / robots.txt is very often misunderstood and can lead to a lot of problems as a result. Some SEOs think they can be clever in crafting the flow of PageRank through a site. The unsurprising reality is that Google just does what it wants.
George
-
Hi George,
Thanks for this, it's very interesting... the URLs do appear in search results, but their descriptions are blocked(!)
Did you try configuring URL parameters in WMT as a solution?
-
Hi Rafal,
The key part of that statement is "we might still find and index information about disallowed URLs...". If you read the next sentence it says: "As a result, the URL address and, potentially, other publicly available information such as anchor text in links to the site can still appear in Google search results".
If you look at moz.com/robots.txt you'll see an entry for:
Disallow: /pages/search_results*
But if you search this on Google:
site:moz.com/pages/search_results
You'll find there are 20 results in the index.
I used to agree with you, until I found out the hard way that if Google finds a link, it can put it in the index regardless of whether it's blocked in robots.txt, and it will remain there until you remove the crawl restriction and noindex it, or remove it from the index using Webmaster Tools.
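The crawl/index distinction here can be seen with Python's standard `urllib.robotparser`. A minimal sketch — the rule is adapted from the moz.com entry above, minus the trailing wildcard, which the stdlib parser doesn't support (prefix matching gives the same effect for this path):

```python
from urllib.robotparser import RobotFileParser

# Rules modelled on the moz.com example discussed above.
parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /pages/search_results",
])

# robots.txt governs crawling only: a disallowed URL can still end up in
# the index if Google discovers it via links elsewhere on the web.
print(parser.can_fetch("*", "https://moz.com/pages/search_results?q=seo"))  # False
print(parser.can_fetch("*", "https://moz.com/blog"))                        # True
```

So the parser can tell you a URL is blocked from crawling, but that says nothing about whether it is (or will stay) indexed.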
George
-
George,
I went to check with Google to make sure I am correct and I am!
"While Google won't crawl or index the content blocked by robots.txt, we might still find and index information about disallowed URLs from other places on the web." Source: https://support.google.com/webmasters/answer/6062608?hl=en
Yes, he can fix these problems on-page, but disallowing it in robots.txt will work fine too!
-
Just adding this to robots.txt will not stop the pages being indexed:
Disallow: /*login?
It just means Google won't crawl the page (or follow the links on it).
I would do one of the following:
1. Add noindex to the pages. PR will still be passed to them, but they will no longer appear in SERPs.
2. Add a canonical on the page to: "www.exemple.com/user/login"
You're never going to try to get these pages to rank, so although it's worth fixing, I wouldn't lose too much sleep over the impact of having duplicate content on registration pages (unless there are hundreds of them!).
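In template terms, the two options are a robots meta tag or a canonical link element in the head of the login pages — a sketch using the example URL from the question:

```html
<!-- Option 1: keep the parameterised login URLs out of the SERPs -->
<meta name="robots" content="noindex">

<!-- Option 2: consolidate every ?destination=... variant onto the clean URL -->
<link rel="canonical" href="http://www.exemple.com/user/login">
```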
Regards,
George
-
In GWT: Crawl => URL Parameters => Configure URL Parameters => Add Parameter
Make sure you know what you're doing, as it's easy to mess up and cause BIG issues.
-
Add this line to your robots.txt to prevent Google from indexing these pages:
Disallow: /*login?