Should we use the rel-canonical tag?
-
We have a secure version of our site, as we often gather sensitive business information from our clients.
Our https pages have been indexed as well as our http version.
-
Could it still be a problem to have an http and an https version of our site indexed by Google? Is this seen as being a duplicate site?
-
If so, can this be resolved with a rel=canonical tag pointing to the http version?
Thanks
-
-
Agreed - this is generally an issue with relative paths, and job one is to fix it. In most cases, you really don't want these crawled at all. I do think rel=canonical is a good bet here - 301 redirects can get really tricky with http/https, and you can end up creating loops. It can be done right, but it's also easy to screw up, in my experience.
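To illustrate the loop risk mentioned above, here's a sketch of what it can look like in an Apache config (mod_rewrite assumed; example.com and the /secure/ path are placeholders, not your actual setup):

```apache
RewriteEngine On

# A naive rule like this, if it also runs for https requests while
# another rule (or plugin) forces visitors back to https, will bounce
# the browser between schemes forever:
#   RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# Safer sketch: only redirect when the request is actually on https,
# and carve out the section that genuinely needs to stay secure.
RewriteCond %{HTTPS} on
RewriteCond %{REQUEST_URI} !^/secure/
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

This is only a sketch of the failure mode, not a drop-in fix; test any redirect rules on a staging copy first.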
-
-
Yes, having two versions of the same content can be seen as duplicate content and could cause issues.
-
Yes, include a canonical tag in the &lt;head&gt; (assuming the http & https pages are close to identical). This will help Google's crawler figure out which version of the page to show in the search results.
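For reference, the tag itself is a one-liner in the &lt;head&gt; of the https duplicate (the URL here is a placeholder, swap in your real page):

```html
<!-- On https://www.example.com/page, point the canonical
     at the http version you want indexed: -->
<link rel="canonical" href="http://www.example.com/page" />
```

Note the href should be an absolute URL; a relative canonical would inherit the https scheme and defeat the purpose.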
-
-
Yes, I would suggest the canonical tag as the easiest resolution.
And Irving is right, PDFs are most definitely indexed. I am not sure how they are interpreted, or whether they would specifically count as duplicate content, but this is not an idea I would ever suggest, as it seems to have lots of negative repercussions.
I would also agree that relative links are probably your issue. If you add canonical tags, remove the inline relative links, and make them absolute http links, this should resolve itself in a month or so.
-
I disagree:
a) PDFs are both indexed AND read by crawlers.
b) Even if you have no navigation to the file, Google can sometimes find it if it sits in a folder you are not blocking in robots.txt.
c) If someone links to it once on the web, it's getting crawled and indexed.
If you have an https section, that content should be behind a login and not accessible to the engines. Your problem sounds like your https pages have relative links on them: Google crawls an https page, follows the relative links, and stays on https. Fix that, and it will stop your https pages getting indexed as duplicates of the http versions.
Absolute http canonical tags will help, but they are not the solution; you need to fix the https leaking on your secure pages.
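The leak described above comes down to how relative links resolve. A minimal illustration (example.com and the path are placeholders):

```html
<!-- A relative link inherits whatever scheme the page was crawled on: -->
<a href="/products/widgets">Widgets</a>
<!-- Crawled via https, that resolves to
     https://www.example.com/products/widgets, keeping the crawler on
     the secure version. An absolute http link pins the scheme: -->
<a href="http://www.example.com/products/widgets">Widgets</a>
```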
-
You can "noindex" them within the HTML. But if you want a fun trick for when you can't get around a mass amount of duplicated content and it isn't there for the sake of rankings (MLS listings, for example):
Change the content into a PDF or similar file format, so it can't be crawled.
Once again, in my experience it will NOT be crawled, so don't go doing this to an entire site.
But maybe your clients' confidential data can be submitted this way and it will not get indexed, except for the subpage, and you can noindex that subpage.
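The noindex part is just a robots meta tag in the page's &lt;head&gt;, for example:

```html
<!-- Keeps this page out of the index while still letting
     crawlers follow the links on it: -->
<meta name="robots" content="noindex, follow">
```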
Hope this helps.
Your pal
Chenzo