HTTPS pages still in the SERPs
-
Hi all,
My problem is the following: our self-developed CMS produces HTTPS versions of our "normal" web pages, which means duplicate content.
Our IT department put a noindex, nofollow meta tag on the HTTPS pages about six weeks ago.
I check the number of indexed pages once a week and still see a lot of these HTTPS pages in the Google index. I know that I may hit different data centers and that these numbers aren't 100% reliable, but still... sometimes the number of indexed HTTPS pages even goes up.
Any ideas/suggestions? Should I wait longer? Or take the time to remove them via Webmaster Tools?
Another question: for one nice query, an HTTPS page ranks No. 1. If I kick that page out of the index, do you think the HTTP page will take over the No. 1 position, or will the ranking be lost? (It sends some nice traffic :-))
Thanks in advance!
-
Hi Stefan,
If Google is finding those HTTPS pages, then instead of a noindex, nofollow tag, I'd try one of the following:
- Redirect https pages to http via 301s (preferred)
- Add a canonical tag pointing to the HTTP version (as Malcolm suggested)
By using these methods, you have the best chance of preserving your rankings for any of the HTTPS pages that appear in the SERPs, and you also preserve any link equity flowing through them. If Google is finding HTTPS pages of your site, it's possible that some link juice is currently flowing through them.
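For example, if the site runs on Apache with mod_rewrite (an assumption — the CMS is self-developed, so your server setup may differ), a site-wide HTTPS-to-HTTP 301 could be sketched in .htaccess roughly like this:

```apache
# Redirect every HTTPS request to its HTTP counterpart with a 301
# (permanent) redirect, so rankings and link equity are consolidated.
# Assumes Apache with mod_rewrite enabled; adjust for your server.
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]
```

Note that this direction (HTTPS to HTTP) matches the poster's setup; as mentioned below, entire sites run on HTTPS today, in which case the redirect would normally go the other way.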
This also solves the problem of visitors accidentally landing on HTTPS pages you don't want them on. Although in reality there's nothing wrong with that: today, entire sites are HTTPS and rank quite well.
It can take a long, long time for Google to remove URLs from its results. Before you can request removal, the URL either has to return a 404 or 410 status code, or be blocked by robots.txt. Since neither of these is a good option for you, I'd stick with the 301 or the canonical solution.
Best of luck with your SEO!
-
I think I answered the same question you posted before, but did you try using canonical tags? As I said, noindex and nofollow are probably not the best option, since they don't tell Google that it should show the http version instead of https.
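For reference, the canonical tag goes in the head of each HTTPS page and points at the corresponding HTTP URL (the URL here is a placeholder, not the poster's actual site):

```html
<!-- In the <head> of https://www.example.com/some-page/ -->
<!-- Tells Google that the HTTP version is the preferred URL to index. -->
<link rel="canonical" href="http://www.example.com/some-page/" />
```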
Have you tried searching for the http versions of your pages? Are any plain http pages listed?