Moving from http to https: image duplicate issue?
-
Hello everyone,
We recently moved our entire website, virtualsheetmusic.com, from http:// to https://, and now we are facing a question about images.
Here is the deal: all webpage URLs are properly redirected to their corresponding https:// versions when requested over their former http:// links. However, due to compatibility issues, all image URLs can be requested via either http or https, so both of the following URLs work without any redirect:
http://www.virtualsheetmusic.com/images/icons/ResponsiveLogo.png
https://www.virtualsheetmusic.com/images/icons/ResponsiveLogo.png
Please note though that all internal links are relative and not absolute.
So, my question is: can that be a problem from the SEO standpoint? In particular: we have thousands of images indexed on Google, mostly preview images of our digital sheet music files, and many of them rank pretty well in the image pack search results. Could this change be detrimental in some way, or does it make no difference in the eyes of Google? As I wrote above, all internal links are relative, so an image tag like this one:
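<img src="/images/icons/ResponsiveLogo.png" />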
hasn't changed at all; it is simply loaded in an https context.
I'll wait for your thoughts on this. Thank you in advance!
-
No problem
-
Great! Glad to know that. Thank you, Dimitrii, I appreciate your help very much!
-
Oh, I see. Yeah, there shouldn't be any problems if someone else links to your images with http. And yes, your assumption is correct.
-
Thank you, Dimitrii, for clarifying. Actually, all our webpages now load images only via https://, but since many external websites are hard-linking to many of our images via the regular http:// protocol, I was thinking of continuing to allow them to be linked the "insecure" way when requested. Do you see my point? So, to better clarify my initial question, let's say Google is spidering one of those external affiliates and finds an image tag like this:
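<img src="http://www.virtualsheetmusic.com/image.jpg" />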
Will Google consider the image found at:
http://www.virtualsheetmusic.com/image.jpg
a duplicate of:
https://www.virtualsheetmusic.com/image.jpg
?? This was my original question...
In any case, I ran some tests today, and I was able to permanently redirect (301) all images to https:// via .htaccess. It looks like even if an image is requested over http:// from the browser, it still shows up correctly, because the web browser handles redirects for images the same way it handles them for the web page itself.
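For reference, a rule of that kind might look something like this (a minimal sketch, assuming Apache with mod_rewrite; the exact file extensions and conditions would need adjusting to the actual setup):

# Send any image requested over plain http to its https equivalent
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*\.(?:jpe?g|png|gif))$ https://%{HTTP_HOST}/$1 [NC,R=301,L]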
So... my concern should be solved this way. But if, for any reason, I need to be able to serve the same image over both protocols (http or https), it is my understanding that it shouldn't be an issue anyway. Is my assumption correct?
Thanks again.
-
I did a quick search, and there are lots of good articles about why images are not duplicate content: http://bfy.tw/9Qy4
-
So, the reason I recommend loading images through only one resource is the "insecurity" of an https connection when any resources on the page are loaded over plain http. You might have seen that sometimes, instead of a green lock in the browser bar, it shows a yellow exclamation mark; that's one of the reasons. And also, it's just cleaner if everything is loaded the same way.
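To illustrate (a generic example, not taken from your site): on a page served over https://, an image tag like <img src="http://example.com/photo.jpg"> counts as mixed content and triggers that warning, whereas <img src="https://example.com/photo.jpg"> or a relative URL like <img src="/photo.jpg"> does not.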
Here is a link to resource about mixed content: https://developers.google.com/web/fundamentals/security/prevent-mixed-content/fixing-mixed-content
-
Thank you, Dimitrii, for your reply.
Well, your two statements above contradict each other, in my opinion. You see, what really concerns me is your last suggestion:
"it's better to make sure that images (and all the other resources) are available only through one protocol - http or https."
And hence my original concern: why should we make sure that images are available only through one protocol if you first say that there is no such thing as duplicate content for images? Why should we be concerned about that, then?
Sorry for the further request for clarification. I really appreciate your help!
-
Howdy.
As far as I understand, there is no such thing as duplicate content just for images; duplicate content applies more to the page as a whole. Especially since you guys redirected all the links, you shouldn't have any problems, since Google will simply "realize" the change.
Now, it's better to make sure that images (and all the other resources) are available only through one protocol - http or https.
Hope this helps
Related Questions
-
Post migration issues - #11 + configuration issue
Hello Moz community. I'm keen to hear your experiences on the following. First: have you ever experienced a migration whereby a large percentage of keywords are stuck in position #11 post-migration? The keywords do not move up or down (whilst competitors jump from 13 to 9 and vice versa) over a three-month period. Please see the % difference in the attachment (sample of 1,000 keyword terms). Question: has anyone ever experienced this type of phenomenon before? If so, what was the root cause, did it happen post-migration, and what solution did you use to rectify it? Second: have you ever seen a cross-indexing issue between two domains (each serving a different purpose) post-migration, which impacts the performance of the main brand domain? I will explain a little further: say you have www.example.com (brand site) and www.example-help.com (customer service site), and the day the brand website is migrated (same domain, just a different file structure), www.example-help.com points to the same server that www.example.com is on (with a different file structure) and starts to inherit the legacy file structure. For example, the following is implemented on migration day: www.example.com/fr/widgets-purple 301s to www.example.com/fr/widgets/purple. But www.example-help.com now points to the same server where the customer service content is hosted. So although the following is rendered correctly: www.example-help.com/how-can-we-help, we also have the following indexed in Google.fr, competing for the same keyword terms, and the main brand website has dropped in rankings: www.example-help.com/fr/widgets-purple [legacy content from the main brand website]. Even when legacy content is 301 redirected from www.example-help.com to www.example.com, the authority isn't passed across, and www.example.com is now (as per Q1) a lot lower in Google than pre-migration. Question: have you ever experienced a cross-indexing issue like the above whereby Google potentially isn't passing authority across from the legacy to the new setup? I'm very keen to hear your experiences on these two subjects and whether you have had similar problems on some of your domains.
Intermediate & Advanced SEO | SMVSEO
-
Pagination & duplicate meta
Hi, I have a few pages flagged for duplicate meta, e.g.: http://www.key.co.uk/en/key/workbenches?page=2 and http://www.key.co.uk/en/key/workbenches. I can't see anything wrong with the pagination, and other pages have the same code but aren't flagged as duplicates: http://www.key.co.uk/en/key/coshh-cabinets and http://www.key.co.uk/en/key/coshh-cabinets?page=2. I can't seem to find the issue - any ideas? Becky
Intermediate & Advanced SEO | BeckyKey
-
Faulty title, meta description and version (https instead of http) on homepage
Hi there, I am working on a client (http://minibusshuttle.com/) whose homepage is not indexed correctly by Google. In detail, the title and meta description are taken from another website (http://planet55.co.uk/). In addition, the homepage is indexed as https instead of http. The rest of the URIs are correctly indexed (titles, meta descriptions, http, etc.). planet55.co.uk used to be hosted on the same server as minibusshuttle.com, and an SSL certificate was activated for that domain. I have tried several times to manually "Fetch as Google" the homepage, to no avail. The rest of the pages are indexed/refreshed normally, and Google responds very fast whenever I make any changes there. Any suggestions would be highly appreciated. Kind regards, George
Intermediate & Advanced SEO | gpapatheodorou
-
Duplicated privacy policy pages
I work for a small web agency, and I noticed that many of the sites we build have been using the same privacy policy. Obviously it can be a bit of a nightmare to write a unique privacy policy for each client, so is Google likely to class this as duplicate content and apply a penalty? They must realise that privacy policies are likely to be the same or very similar, as most legal writing tends to be! I can block the content in robots.txt or meta noindex it if necessary, but I just wanted to get some feedback to see whether this is needed.
Intermediate & Advanced SEO | Jamie.Stevens
-
Penalized for Duplicate Page Content?
I have some high-priority notices regarding duplicate page content on my website, www.3000doorhangers.com. Most of the pages listed are our sample pages: http://www.3000doorhangers.com/home/door-hanger-pricing/door-hanger-design-samples/ On the left side of the page you can go through the different categories. Most of the category pages have similar text; we mainly just changed the industry on each page. Is this something Google would penalize us for? Should I go through all the pages and use completely unique text for each one? Any suggestions would be helpful. Thanks! Andrea
Intermediate & Advanced SEO | JimDirectMailCoach
-
Duplicate Content
http://www.pensacolarealestate.com/JAABA/jsp/HomeAdvice/answers.jsp?TopicId=Buy&SubtopicId=Affordability&Subtopicname=What%20You%20Can%20Afford http://www.pensacolarealestate.com/content/answers.html?Topic=Buy&Subtopic=Affordability I have no idea how the first address exists at all... I ran the SEOmoz tool and got 600-ish duplicate content errors! I also have errors on content, titles, etc. How do I get rid of all the content being generated from this JAABA/JSP "gibberish"? Please ask questions that will help you help me. I have always been first on Google local, and I have a business that is starting to hurt very seriously from being number three 😞
Intermediate & Advanced SEO | JML1179
-
Could this URL issue be affecting our rankings?
Hi everyone, I have been building links to a site for a while now, and we're struggling to get page 1 results for its desired keywords. We're wondering if a web development / URL structure issue could be to blame for holding it back. The way the site has been built means that there's a "false" first level in the URL structure. We're building deep links to the following page: www.example.com/blue-widgets/blue-widget-overview However, if you chop off the second level, you're not given a category page; it's a 404: www.example.com/blue-widgets/ [brings up a 404] I'm assuming the web developer built the site and URL structure this way just for the purposes of getting additional keywords into the URL. What's worse is that there is very little consistency across other products/services. Other pages/URLs include: www.example.com/green-widgets/widgets-in-green www.example.com/red-widgets/red-widget-intro-page www.example.com/yellow-widgets/yellow-widgets I'm wondering if Google is aware of these "false" pages and, if so, whether we should advise the client to change the URLs and therefore the URL structure of the website. This is bearing in mind that these pages haven't been linked to (because they don't exist) and therefore aren't being indexed by Google. I'm just wondering if Google can determine good/bad URL etiquette based on other parts of the URL, i.e. the fact that the middle bit doesn't exist. As a matter of fact, my colleague Steve asked this question on a blog post that Dr. Pete had written. Here's a link to Steve's comment - there are 2 replies below, one of which argues that this has no implication whatsoever. However, 5 months on, it's still an issue for us, so it has me wondering... Many thanks!
Intermediate & Advanced SEO | Gmorgan
-
Duplicate Content http://www.website.com and http://website.com
I'm getting duplicate content warnings for my site because the same pages are getting crawled twice: once as http://www.website.com and once as http://website.com. I'm assuming this is a .htaccess problem, so I'll post what mine looks like. I think installing WordPress in the root domain changed some of the settings I had before. My main site is primarily in HTML, with a blog at http://www.website.com/blog/post-name

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

Intermediate & Advanced SEO | thirdseo