Duplicate Content Reports
-
Hi
Dupe content reports for a new client are showing very high numbers (8000+). Most of them seem to be for sign-in, register, and login type pages. Is this a scenario where the best course of action is likely to be the parameter handling tool in GWT?
Cheers
Dan
-
Cool - many thanks, Kurt!
All Best
Dan
-
You don't absolutely have to do both, but by doing the parameter handling you are sending another signal to Google about what you want them to do (keep in mind that both the canonical tag and parameter handling are only treated as suggestions by Google). It's pretty simple to set up the parameter handling, so if you are really concerned about the duplicate content issues, why not do both?
Also, technically, the canonical tag tells Google which of the URLs they've crawled to give prominence to when there is duplicate content, whereas my understanding is that parameter handling (when Google follows your suggestions) actually prevents Google from even crawling URLs with those parameters. In other words, canonical tags tell Google what to do with URLs they've already crawled, and parameter handling tells Google which URLs not to crawl at all.
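As a rough sketch of that "don't even crawl it" idea, a robots.txt wildcard rule has a similar effect to parameter handling (the ?visitor= parameter below is purely an example - use whatever parameter actually appears in your duplicate URLs):

    # Sketch only: block crawling of any URL carrying the visitor parameter,
    # whether it is the first or a later query-string parameter
    User-agent: *
    Disallow: /*?visitor=
    Disallow: /*&visitor=

The trade-off is that once a URL is blocked from crawling, Google can't see a canonical or noindex tag on it, so treat this as an alternative to the tag-based approaches in this thread rather than something to layer on top of them.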
-
Thanks Kurt
And what about the parameter handling tool? If the canonical tag method you mention will deal with this, then is there any need to do anything with the parameter handling tool?
cheers
dan
-
I would answer the same as Kurt for the install. You put the noindex tag in the head of the core page, so when all the other pages are generated with the parameters, it will be added to those pages automatically. Once you get the pages out of the index, I would then nofollow links to those pages or use robots.txt to keep the bots out in the first place.
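To make that concrete, here is a minimal sketch using the login.php example from elsewhere in this thread; the tag goes in the head of the core template, so every parameterised URL generated from it carries the tag automatically:

    <!-- In the <head> of the core login.php template (sketch only) -->
    <!-- login.php?visitor=123, login.php?visitor=56 etc. all inherit this tag -->
    <meta name="robots" content="noindex, follow">

The "follow" value keeps Google passing link value through the page while dropping the page itself from the index; only move on to nofollowing links or robots.txt once the pages have actually dropped out.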
-
Hi Dan,
I mean both. The canonical tag will help with duplicate content issues and the parameter handling will help with indexing.
Setting up the canonical tag shouldn't be an issue. If the same page content is being displayed and the only difference is that the URL has some parameters in it, then the canonical tag should naturally be included with the rest of the page's code. Since the canonical tag doesn't change, it should work perfectly.
For example, if you have a page, login.php, and that page always has a parameter, ?visitor=### (where ### is a random number), then you simply put the canonical tag in the head of the login.php page (see the sketch below). That canonical tag will always be in the login.php page no matter whether the URL is login.php?visitor=123 or login.php?visitor=56, etc. It will always tell the search engines that the original page is login.php.
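A sketch of what that tag would look like in the head of login.php (example.com is a placeholder - swap in the real domain):

    <!-- In the <head> of login.php; identical on every parameterised version -->
    <link rel="canonical" href="http://www.example.com/login.php" />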
-
Thanks Clever PHD
So is there a way of setting a general rule to apply noindex to all of these duplicates, or do you mean applying it to the main sign-in/login pages themselves, so that it carries over to all the new, session-specific, duplicate versions when they are generated?
Cheers
Dan
-
Hi Kurt
Do you mean both, or one or the other?
Isn't setting up canonical tags on all the possible dynamically generated login, sign-up and registration type pages impossible? Or can you set up some sort of rule that applies to those unpredictable pages (since we don't know what they are until they are generated by a user session, etc.)?
Cheers
Dan
-
You can also noindex those pages to simply take them out of the index and then later nofollow links to them.
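For the second step, nofollowing the internal links would look something like this (the path and anchor text are hypothetical):

    <!-- Hypothetical internal link to the sign-in page, marked nofollow -->
    <a href="/login.php" rel="nofollow">Sign in</a>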
-
You can use the parameter handling and set up canonical tags on the pages.