HTTPS Duplicate Content
-
My previous host was using shared SSL, so my site was also reachable over https, which I hadn't noticed before.
Now I have moved to a new server where I don't have any SSL, and my websites no longer work over https.
The problem is that Google has indexed one of my blogs, http://www.codefear.com, in its https version too. My blog traffic is dropping continuously, I think because of this duplicate content. There are now two results: one for the http version and another for the https version.
I searched the internet and found three possible solutions:
1. Noindex the https version
2. Use rel=canonical
3. Redirect the https version with a 301 redirect
Now I don't know which solution is best for me, since the https version no longer works.
One more thing: I don't know how to implement any of these solutions. My blog runs on WordPress.
Please help me overcome this problem. Also, after I fix this duplicate content issue, do I need to file a reconsideration request with Google?
Thank you
-
Unfortunately, I think the 301 redirect may not only be the best option, but the only option. You can't NOINDEX or use rel=canonical if the https pages aren't resolving, so you would have to do the redirects in .htaccess.
Messing with .htaccess for anything sitewide can get pretty tricky, and a little dangerous. I believe the code in this thread should work, but I'm not sure about any quirks in WordPress or the specifics of your site's implementation:
http://www.highrankings.com/forum/index.php/topic/35833-how-do-i-redirect-https-to-http-on-my-site/
You may want a WordPress specialist to take a look.
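For reference, a minimal sketch of the kind of rule that thread describes (not the exact code there), assuming Apache with mod_rewrite enabled and that it sits in the root WordPress .htaccess above the standard WordPress rewrite block:

```apache
# Hedged sketch: 301 every https request to the matching http URL.
# Assumes Apache with mod_rewrite; the domain is taken from the question above.
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://www.codefear.com/$1 [R=301,L]
```

Note that the rule can only fire if https requests actually reach Apache on the new server, so test on a staging copy before touching the live file.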
-
A redirect is the best option for you, I believe; you will pass the majority of link juice if anybody has backlinked to the https version.
You will need to contact the host where your server is located to get this implemented. Alternatively, you can resubmit your http site for indexing in Google via Google Webmaster Tools and see how that goes.
Related Questions
-
Home page duplicate content...
Hello all! I've just downloaded my first Moz crawl CSV and noticed that the home page appears twice, once without and once with a trailing forward slash: http://www.example.com and http://www.example.com/. For any of my product and category pages that run into this problem, it's automatically resolved with a canonical tag. Should I create the same canonical tag for my home page? <link rel="canonical" href="http://www.example.com" />
Technical SEO | | LiamMcArthur
-
Http v https Duplicate Issues
Hello, I noticed an issue on my site earlier. http://mysite.com and https://mysite.com both had canonical links pointing to themselves, in effect creating duplicate content. I have now taken steps to ensure the https version has a canonical that points to the http version, but I was wondering what other steps people would recommend. Is it safe to NOINDEX the https pages? Or block them via robots.txt, or both? We are not quite ready to go fully HTTPS with our site yet (I know Google now prefers this). Any thoughts would be very much appreciated.
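One hedged option for the "is it safe to NOINDEX the https pages?" part, assuming the site runs on Apache 2.4+ with mod_headers and both protocols share the same document root, is to send a noindex header only on https responses. (A robots.txt block would stop Google from crawling those pages at all, so it would never see the canonical or the noindex.) A sketch, not a tested config:

```apache
# Hedged sketch: mark only https responses as noindex, leaving the http
# pages indexable. Assumes Apache 2.4+ with mod_headers enabled.
<If "%{HTTPS} == 'on'">
    Header set X-Robots-Tag "noindex, follow"
</If>
```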
Technical SEO | | niallfred
-
Development Website Duplicate Content Issue
Hi, we launched a client's website around 7th January 2013 (http://rollerbannerscheap.co.uk). We originally constructed the website on a development domain (http://dev.rollerbannerscheap.co.uk), which was active for around 6-8 months (the dev site was unblocked from search engines for the first 3-4 months, then blocked again) before we migrated dev --> live. In late January 2013 we changed the robots.txt file to allow search engines to index the live website. A week later I accidentally logged into the DEV website and changed its robots.txt file to allow search engines to index it too. This obviously caused a duplicate content issue, as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from search engines with the robots.txt file.

Most of the dev site's pages were de-indexed from Google apart from three: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google, so I thought the last three dev pages would disappear after a few weeks. I checked back in late February and the three dev site pages were still indexed in Google. I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and ignore the dev site content. I also checked the robots.txt file on the dev site, and it was still blocking search engines. But the dev site is still being found in Google wherever the live site should be found. When I do find the dev site in Google, the result displays as "Roller Banners Cheap » admin" (dev.rollerbannerscheap.co.uk/) with the note "A description for this result is not available because of this site's robots.txt".

This is really affecting our client's SEO plan, and we can't seem to remove the dev site or rank the live site in Google. Please can anyone help?
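For what it's worth, a hedged sketch of the host-level 301 described above, assuming the dev site runs on Apache with mod_rewrite; Google can only follow the redirect if the dev robots.txt stops blocking crawling:

```apache
# Hedged sketch for the dev site's .htaccess: 301 every dev URL to the
# matching live URL. Assumes Apache with mod_rewrite; a robots.txt Disallow
# on dev would keep Googlebot from ever seeing these redirects.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^dev\.rollerbannerscheap\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://rollerbannerscheap.co.uk/$1 [R=301,L]
```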
Technical SEO | | SO_UK
-
Duplicate content issue with trailing / ?
Hi, I did an SEOmoz Crawl Test and found that most pages show up twice, for example: A: www.website.com/index.php/dog/walk and B: www.website.com/index.php/dog/walk/. I've checked Google Analytics and 90% of organic search traffic arrives on the URLs with the trailing slash (B). Question 1: Can I assume I have a duplicate content problem? Question 2: Is it best to do 301 redirects from the non-trailing-slash pages to the trailing-slash pages? Question 3: For some reason every web page has '/index.php' in it (see A and B above), and I have no idea why. Should it be an SEO concern? Kind regards and thank you in advance, Nigel
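A hedged sketch of what the redirect in Question 2 could look like, assuming the site runs on Apache with mod_rewrite and that real files such as images and CSS should be left alone:

```apache
# Hedged sketch: 301 any URL that does not end in a slash to the version that
# does, matching where most organic traffic already lands. Assumes Apache with
# mod_rewrite; the filename check skips real files such as images and CSS.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [R=301,L]
```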
Technical SEO | | Richard555
-
Duplicate Page Content Report
In the Crawl Diagnostics Summary, I have 2,000 duplicate page content issues. When I click the link, my WordPress site returns "page not found", I see the page is not indexed by Google, and I could not find the issue in Google Webmaster Tools. So where does this link come from?
Technical SEO | | smallwebsite
-
Noindex duplicate content penalty?
We know that Google can now penalise a whole site if it finds content it doesn't like or duplicate content, but has anyone experienced a penalty from having duplicate content on their site that they have noindexed? Would Google still apply the penalty to the overall quality of the site even though it has been told to basically ignore the duplicate part? The reason for asking is that I am looking to add a forum to one of my websites, and no one likes a new forum. I have a script which can populate it with thousands of questions and answers pulled directly from Yahoo Answers. Obviously the forum will be 100% duplicate content, but I do not want it to rank anyway, so if I noindex the forum pages, hopefully it will not damage the rest of the site. In time, as the forum grows, all the duplicate posts will be deleted, but it's hard to get people to use an empty forum, so I need to 'trick' them into thinking the section is very busy.
Technical SEO | | Grumpy_Carl
-
How to Solve Duplicate Page Content Issue?
I have created a campaign in the SEOmoz tools for my website, and the report shows 89 duplicate page content issues. Please look into the Duplicate Page Content issue. I am quite confused about how to resolve it. Can anyone suggest the best solution?
Technical SEO | | CommercePundit
-
Mapping Internal Links (Which are causing duplicate content)
I'm working on a site that is throwing off a -lot- of duplicate content for its size. A lot of it appears to be coming from bad links within the site itself, which were created when it was ported over from static HTML to Expression Engine (by someone else). I'm finding EE an incredibly frustrating platform to work with, as it appears to redirect 404s on sub-pages to the page directly above that sub-page without actually returning a 404 response. It's very weird. Does anyone have any recommendations for software that can clearly map out a site's internal link structure, so I can find which bad links are pointing to the wrong pages?
Technical SEO | | BedeFahey