Website Displayed by Google as HTTPS When All Secure Content Is Blocked, Causing an Indexing Problem
-
Basically, I have no inbound links going to https://www.mysite.com, but Google is indexing the homepage (and only the homepage) as https://www.mysite.com
In June, I was re-included in the Google index after receiving a penalty. Most of my site's pages recovered fairly well, but my homepage did not recover for its top keywords.
Today I noticed that when I search for my site, it's displayed as https://
Robots.txt blocks all content on any secure page, which leaves me fairly clueless about what I need to do to fix this. Not only is it a problem for some users who click through, but I think it's causing the homepage's indexing problem.
Any ideas? Should I redirect Googlebot only? Will a canonical tag fix this?
Thanks
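(For context: "blocks all content going to any secure page" presumably means a separate robots.txt served on the https host, along the lines of the sketch below. The exact setup is an assumption, not something confirmed in the thread.)

```
# Hypothetical robots.txt served only at https://www.mysite.com/robots.txt
# (the http version of the site would serve its own, permissive robots.txt)
User-agent: *
Disallow: /
```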
-
Yeah, I have all of that in place. I found one external link from an https page, and one on my blog that was just an error one of my employees made. Two links total, at least that's what I found. Robots.txt is blocking everything you mentioned, and my header uses absolute paths.
I do agree with you on one thing: once you're kicked out, the little things that may not have mattered over the past 15 years suddenly pop up as problems. At the same time, I have heard the complete opposite: people get kicked out and are right back where they used to be a few weeks after being re-included.
Competitive sabotage is definitely happening, unless some random person who happens to live in the same city as my competitor just went AWOL and decided to spam my off-site forums, attempt to hack the website multiple times, and add me to a spam link ring.
Anyway, a webmaster says he has changed the canonical on their end to http, although it hasn't changed yet. I'm sure this could take a few days or longer to take effect. Hopefully that's the fix; we'll see, though. Thanks for the advice!
-
Someone could probably have an answer for you within minutes if they had the domain URL available.
RE: Competitive sabotage, I very highly doubt it.
RE: Having just occurred: that is often a sticking point for no good reason. Don't be so concerned with why it wasn't an issue before; focus on how to fix it now. Google's algorithm changes all the time, and your standing in the algorithm changes all the time. Trust can be lost if you get a penalty, even if you get out of it. One external link too many going to https, or one change in the crawl path so Googlebot ends up on the https site via a relative-path link... Things can suddenly change for a variety of reasons. However, if you do what is being suggested, you are very likely to put this issue behind you.
Here is what I do with eCommerce sites, typically:
- Add a rel=canonical tag on both versions, pointing to the http version
- Add a robots.txt block and robots meta noindex tag to shopping cart pages
- Use absolute paths, if possible (e.g. http://www.domain.com/file/ instead of .../file/), especially in your primary navigation and footer links.
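As a rough sketch, the first two points might look like this in the page head (the domain and the idea of a dedicated cart template are placeholders from the thread, not real values):

```html
<!-- On BOTH the http and https copies of a page: canonical to the http URL -->
<link rel="canonical" href="http://www.mysite.com/" />

<!-- On shopping-cart pages only: keep them out of the index -->
<meta name="robots" content="noindex" />
```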
If that doesn't work please let us know and we can evaluate the site for you.
Good luck!
-
Hmm, no major changes have been made to the cart. The website has ranked for 15 years, so the https thing just popped up after the penalty/re-inclusion.
I'm wondering, since the canonical tag was added fairly recently: do you think I should just fetch the homepage and submit it again? Or even add a new page and fetch/crawl/submit that?
Just to get a fresh crawl? Crawl stats show about 2,250 pages crawled daily on average, so I was expecting this https thing to be gone by now, regardless of why they chose to index it over my normal link.
Thanks for the input
-
How about changing all of your links from relative to absolute in the HTML? If crawlers are truly only getting there via internal navigation after visiting the shopping cart, this would solve that, yes? Just a thought.
-
If that is the case, then your shopping cart is not behaving correctly: https will exist for every page on your site, and it shouldn't. What cart are you using? I would redirect everything outside of the payment, cart, and contact pages to non-secure. There is a disconnect between what robots directives actually do and what people think they do. They are a suggestion: noindex means don't add the page to the index, but it does not mean don't visit the page. I see spiders on pages that are blocked from them all the time.
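A hedged sketch of that selective redirect with Apache mod_rewrite (the /cart/, /checkout/, and /contact/ paths here are made up; they would need to match the cart's actual URLs):

```apache
RewriteEngine On
# If the request came in over HTTPS and is NOT a payment/cart/contact page,
# send it to the plain-http equivalent with a permanent redirect.
RewriteCond %{HTTPS} =on
RewriteCond %{REQUEST_URI} !^/(cart|checkout|contact)/ [NC]
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]
```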
-
My only concern with doing a redirect is this: the shopping cart is https, so if you start the checkout process you will enter https.
If a person decides to continue shopping, they will stay on https. But since the checkout page is restricted to bots, essentially https shouldn't exist and shouldn't show in any searches.
The sitemap is clean, and a canonical is in place...
I have been having some issues with a competitor. Is it possible they submitted the https://www.mysite.com/ version of my website, knowing that Google would prefer this version?
Thanks for the advice
-
I would redirect the https version to http. Then I would make sure that there is a canonical tag in place. Next, I would go over my sitemap and make sure that there isn't a link to the https page in there. After that you should be set. I wouldn't put it in the robots.txt, though.
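For the sitemap check, a quick way to spot stray https URLs (a minimal sketch; it assumes the sitemap is a local file named sitemap.xml, and the example entries are made up):

```shell
# Build a tiny example sitemap for illustration, then scan it.
printf '<url><loc>http://www.mysite.com/</loc></url>\n<url><loc>https://www.mysite.com/cart/</loc></url>\n' > sitemap.xml
# Print any https URLs that slipped into the sitemap (here: the cart page)
grep -o 'https://[^<]*' sitemap.xml
```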