Duplicate Content Reports
-
Hi
Duplicate content reports for a new client are showing very high numbers (8,000+). Many of them seem to be for sign-in, register, and login type pages. Is this a scenario where the best course of action is likely to be the parameter handling tool in GWT?
Cheers
Dan
-
Cool - many thanks, Kurt!
All Best
Dan
-
You don't absolutely have to do both, but by doing the parameter handling you are sending another signal to Google about what you want them to do (keep in mind that Google treats both canonical tags and parameter settings only as suggestions). It's pretty simple to set up the parameter handling, so if you are really concerned about the duplicate content issues, why not do both?
Also, technically, the canonical tag tells Google which of the URLs they've already crawled should be given prominence when those URLs are duplicates of one another, whereas my understanding is that parameter handling (when Google follows your suggestions) actually prevents Google from even crawling URLs with those parameters. In other words, canonical tags tell Google what to do with URLs they've crawled, and parameter handling tells Google which URLs not to crawl at all.
-
Thanks Kurt
And what about the parameter handling tool? If the canonical tag method you mention will deal with this, is there any need to do anything with the parameter handling tool?
Cheers
Dan
-
I would answer the same as Kurt for the install. You put the noindex tag in the head of the core page, and so when all the other pages are generated with the parameters, it is carried onto those pages automatically. Once you get the pages out of the index, I would then nofollow the links to those pages, or block them with robots.txt, to keep the bots out of them in the first place.
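To make that concrete - just a rough sketch, assuming the core page is the login.php from Kurt's example further down, and that the sign-in/register URLs you want out of the index live at paths like /login.php, /signin.php and /register.php (swap in your real paths) - the tag only needs to go in the head of the core page once:

  <!-- Carried onto login.php?visitor=123 etc. automatically, since they all render from the same template -->
  <meta name="robots" content="noindex, follow" />

Then, once those pages have dropped out of the index, a robots.txt block along these lines keeps the bots away from them going forward:

  # Keep crawlers out of the sign-in/register pages (already noindexed above)
  User-agent: *
  Disallow: /login.php
  Disallow: /signin.php
  Disallow: /register.php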
-
Hi Dan,
I mean both. The canonical tag will help with duplicate content issues and the parameter handling will help with indexing.
Setting up the canonical tag shouldn't be an issue. If the same page content is being displayed and the only difference is that the URL has some parameters in it, then the canonical tag should naturally be included with the rest of the page's code. Since the canonical tag doesn't change, it should work perfectly.
For example, if you have a page, login.php, and that page always has a parameter, ?visitor=### (where ### is a random number), then you simply put the canonical tag in the head of the login.php page, pointing back to login.php itself. That canonical tag will always be in the login.php page, no matter whether the URL is login.php?visitor=123 or login.php?visitor=56, etc. It will always tell the search engines that the original page is login.php.
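To illustrate - a minimal sketch, assuming a made-up domain of www.example.com - the head of login.php could look like this, and it stays exactly the same whatever ?visitor= value is tacked onto the URL:

  <head>
    <title>Sign in</title>
    <!-- Always points at the parameter-free URL, whether the request was login.php?visitor=123 or ?visitor=56 -->
    <link rel="canonical" href="http://www.example.com/login.php" />
  </head>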
-
Thanks Clever PHD
So is there a way of setting a general rule to apply noindex to all of these duplicates? Or do you mean adding it to the main sign-in/login pages themselves, so that it automatically applies to all of the new, session-specific, duplicate versions of those pages when they are generated?
Cheers
Dan
-
Hi Kurt
Do you mean both, or one or the other?
Isn't setting up canonical tags on all the possible dynamically generated login, sign-up and registration type pages impossible? Or can you set up some sort of rule that applies to those unpredictable pages (since we don't know what they are until they are generated by a user session, etc.)?
Cheers
Dan
-
You can also noindex those pages to simply take them out of the index, and then later nofollow the links to them.
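As a quick illustration, assuming a login.php sign-in page, the nofollow part is just a rel attribute on the internal links pointing at it:

  <!-- Internal link that passes no link equity to the noindexed sign-in page -->
  <a href="/login.php" rel="nofollow">Sign in</a>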
-
You can use the parameter handling and set up canonical tags on the pages.