Duplicate content and http and https
-
Within my Moz crawl report, I have a ton of duplicate content caused by identical pages being served at both http and https URLs.
For example:
http://www.bigcompany.com/accomodations
https://www.bigcompany.com/accomodations
The strange thing is that 99% of these URLs are not sensitive in nature and do not require any security features: no credit card information, bookings, or carts. The web developer cannot explain where these extra URLs came from or provide any further information.
Advice or suggestions are welcome! How do I solve this issue?
THANKS MOZZERS
-
Hard to tell without knowing the site, but it's possible there are external links to "https" versions of the pages. At this point, Google is going to increase the pressure to secure sites, and later this year Chrome will start warning users about all non-secure pages, so it may be worth making the move.
-
I'm reading this response and this is happening on my site as well. How did this happen in the first place? I have duplicate content because of https and http copies of all my web pages. If I type https://www.mywebsite.com I can't get to my site. Could this be coming from my hosting company? I've set up my site to simply be http://www.mywebsite.com. I'm a little worried about changing my robots.txt, and I would love to know what caused this.
-
If Google detects both http: and https: versions, they've started to automatically pick the https: version, but that's not consistent yet. In general, I think it's still important to set strong canonicalization signals. Google still separates your http: and https: sites in Google Search Console, too, so even they haven't quite made up their minds.
In general, Google is pushing sites toward https:, but that's a somewhat complex decision that depends on more than just SEO. If you're using https: and the https: URLs are indexed, then you should treat those as canonical and suppress the http: URLs, in most cases.
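If you do settle on https: as canonical, the usual way to suppress the http: URLs is a sitewide 301 in Apache's .htaccess, something like this (a sketch, assuming mod_rewrite is enabled; www.example.com is a placeholder for your own domain):

```apache
RewriteEngine On
# Send every http: request to the same path on the https: version
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

The reverse (https: to http:) is the same pattern with the condition flipped to `RewriteCond %{HTTPS} on`.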
-
Hate to respond to a 3-year-old thread, but does this solution need to be updated?
Has the answer changed now that Google favors https for most pages? Does Google still consider http and https as two different sites? If so, which one should be suppressed: http or https?
Aji
-
Hi,
I'm still having problems with redirecting. I only have one duplicate page with https and http that I want to redirect, but it's the homepage.
I want to redirect https://www.domain.com to http://www.domain.com
But keep the rest of the pages the same (half http and the other half https).
How do I do this?
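One way to do this in Apache, if mod_rewrite is available, is to match only the empty path so the rule fires just for the homepage (a sketch; swap in your real domain for www.domain.com):

```apache
RewriteEngine On
# Redirect only the https: homepage to http:; all other URLs are left alone
RewriteCond %{HTTPS} on
RewriteRule ^$ http://www.domain.com/ [R=301,L]
```

In .htaccess context the leading directory prefix is stripped before matching, so `^$` matches only the site root.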
-
Anytime Rand! I only have two simple rules:
1. Talking business on ski days is not allowed
2. Entry into Vermont requires a pound of Seattle's best french roast coffee. In return, you receive some fantastic Vermont maple syrup.
Simple rules to live by LOL
Thanks again for all of your help...
Peter
-
Thanks dude! If I make it to Vermont, I might look you up
-
Thanks James..
Sorry, I was using Big Company as an example and just being generic.
The real URL if interested is www.hawkresort.com
-
I would personally like to thank everyone that responded with an answer. Man O Man, the best part of belonging to SEOMOZ is the community forum. It's incredibly valuable, being able to ask a question and reach out to such talent as all of you.
If anyone ever gets up to Killington or Okemo skiing, the beer is on me! I live right between both ski areas, about 8 miles to either mountain..
Thanks again.
-
I think Harald and James covered the bases here, but a couple of comments on Harald's reply:
(1) Definitely check this. A common cause of indexed https: pages is that a secure section of your site (like a shopping cart) is being crawled, and you're using relative navigation links (like <a href="/contact.php">) - when a crawler or visitor hits the nav link from a secure page, the relative link keeps the https: protocol. In most cases, you may want to NOINDEX secure pages. Shopping carts and checkout pages have no business in the search index, IMO.
(2)-(5) I believe this does work, but it's very tricky, so please be careful. If anyone has linked to the https: pages, you'll lose the link-juice this way (you'll just cut those pages off). I honestly don't think it's a good choice for most sites.
(8) I actually believe the 301-redirect is simpler in most cases.
As James said, sitewide canonical tags (or on the affected pages, if they're isolated) will also work.
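For anyone looking for the tag itself, NOINDEXing a cart or checkout page is one line in its <head> (a generic snippet, not specific to any CMS):

```html
<!-- Keeps secure pages like checkout out of the search index -->
<meta name="robots" content="noindex, nofollow">
```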
-
Hi Serge, I came to know about the "robots_ssl.txt" from the website http://www.seoworkers.com/seo-articles-tutorials/robots-and-https.html
-
I would check your server for an https folder and add a robots.txt file in the root of the https folder:
User-agent: *
Disallow: /
My guess is that the spider is following a link somewhere within your site that points to an https:// URL. The spider is then re-indexing the entire site using https://.
My 2 cents, for what it's worth.
-
Harald, " robots_ssl.txt " where did you get that?
-
Hello Hawkvt1, First of all I want to tell you that the protocols (http/https) are different; they are considered two separate sites, so there's a good chance of getting penalized for duplicate content. If the search engine discovers two identical pages, generally it will take the page it saw first and ignore the others. The solutions are described below:
Solutions:
- Be smart about the site structure: to keep the engines from crawling and indexing HTTPS pages, structure the website so that HTTPS pages are only accessible through a form submission (log-in, sign-up, or payment pages). The common mistake is making these pages available via a standard link (which happens when you aren't aware that the secure version of the site is being crawled and indexed).
- Use Robots.txt file to control which pages will be crawled and indexed
- Use an .htaccess file. Here's how to do this:
- Create a file named robots_ssl.txt in your root.
- Add the following code to your .htaccess:
  RewriteCond %{SERVER_PORT} 443 [NC]
  RewriteRule ^robots.txt$ robots_ssl.txt [L]
- Remove yourdomain.com:443 from the webmaster tools if the pages have already been crawled
- For dynamic pages like PHP, try:
  <?php
  if ($_SERVER["SERVER_PORT"] == 443) {
      echo '<meta name="robots" content="noindex,nofollow">';
  }
  ?>
- Dramatic solution (may not always be possible): 301 redirect the HTTPS pages to the HTTP pages – with hopes that the link juice will transfer over.
For more information please refer to this link :
http://www.seomoz.org/ugc/solving-duplicate-content-issues-with-http-and-https
I'm sure this will solve your problem.
-
You could implement the canonical tag onto the HTTP version of the website.
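For reference, that's a single line in the <head> of each page; both the http: and https: copies should point at the preferred URL (using the accommodations page from the original question as the example):

```html
<!-- Served on both the http: and https: versions of the page -->
<link rel="canonical" href="http://www.bigcompany.com/accomodations" />
```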
Another problem, from a quick look at this website, is that all your title tags are the same, with the brand term at the front. This is not advisable; you want to put the brand term at the end of the title and your generic terms first.
I would look at getting an SEO audit done to fix the issues with the website.