High resolution (retina) images vs load time
-
I have an ecommerce website and have a product slider with 3 images.
Currently, I serve them at the native size when viewed on a desktop browser (374x374).
I would like to serve them using retina image quality (748px).
However, how will this affect my ranking, given the extra load time?
Does Google take image load times into account even though these are loaded asynchronously? Also, as it's a slider, it's only the first image that needs to load. Do the other images contribute at all to the page load time?
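For context, here is a minimal sketch of one way to offer both sizes at once via srcset, so only high-density screens download the 748px file while everyone else keeps the 374px version (the selector, file names, and paths are hypothetical):

```typescript
// Minimal sketch: offer both resolutions and let the browser pick.
// The ".product-slider" selector and "-374.jpg" / "-748.jpg" naming are assumptions.
function buildRetinaImage(basePath: string, alt: string): HTMLImageElement {
  const img = document.createElement("img");
  img.src = `${basePath}-374.jpg`; // fallback for browsers that ignore srcset
  img.srcset = `${basePath}-374.jpg 1x, ${basePath}-748.jpg 2x`; // retina screens fetch the 748px file
  img.width = 374; // rendered size stays 374x374 either way
  img.height = 374;
  img.alt = alt;
  return img;
}

// Example usage: add the first slide to the slider container.
document.querySelector(".product-slider")?.appendChild(
  buildRetinaImage("/images/products/example-product", "Example product")
);
```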
-
"Large pictures tend to be bad for user experience."
I disagree. I think what you mean is that slower loading is bad for the user experience. Higher-quality pictures are better for the user experience.
I've been looking into deferring loading of the additional slider images. That should definitely improve load time as all the bandwidth can be used to download the first slider image.
Also, if you use a progressive format for the first slider image, it should show something quickly and then sharpen as the rest of the file arrives.
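As a rough illustration of the deferral idea (not tied to any particular slider), something like this would let the first image load normally and only fetch the other slides after the page's load event, so they don't compete for bandwidth. The data-src convention and selector are assumptions:

```typescript
// Rough sketch: defer the 2nd and 3rd slides until the page has finished loading.
// Assumes those <img> tags carry a data-src attribute instead of src (hypothetical markup).
window.addEventListener("load", () => {
  const deferred = document.querySelectorAll<HTMLImageElement>(".product-slider img[data-src]");
  deferred.forEach((img) => {
    img.src = img.dataset.src ?? ""; // start the download now that the first slide is done
    img.removeAttribute("data-src");
  });
});
```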
-
You also have to keep in mind that users will access your site from mobile devices, and the larger the page, the longer it takes to load fully. You may lose some people during the time it takes the page to load. My website used to have a slider with three images; I removed the slider and replaced it with one static image. Large pictures tend to be bad for user experience.
-
Hey Dwayne
They are big images, but in around 15 years I have never seen a meaningful impact from this kind of change. Maybe work on optimising the images themselves to bring the overall size down as far as possible. Sure, if your site is a slow-loading nightmare and this is just the final straw, then it may be an issue, but by the sounds of it you are already taking that into consideration, your site is well hosted, and it performs better than most of what is out there.
But, as ever in this game, my advice would be to be aware of the possible implications, weigh up the pros and cons, and then test extensively. If you see an impact on your loading time and search results (and, more importantly, on user interaction, bounce rate, etc.) after changing this one factor, then you know you can roll it back.
Hope that helps
Marcus
-
Hi,
It's not that small a change: the size of each image will quadruple from around 10kb to 40kb. As there are three images, that's 90kb more data, which is around 20% of the total page size.
That's interesting, what you mention about time to first byte. I would have thought that was overly simplistic and would have assumed Google was more concerned with how long the page actually takes "to load" (e.g. using their PageSpeed metrics).
I've optimized my site extensively, have a PageSpeed score of 95, and I host on Amazon AWS servers.
I agree with your point about doing what's right for my users. But if Google includes the image load time, then my site will rank poorly and I won't have any users!
In summary, I think this question really comes down to: how does Google calculate page load times, does that include image load time, and does it include the load time for all images (even ones that aren't currently rendered in the slider)?
Thanks,
Dwayne
-
Hey
I think this is such a small issue overall that you should not worry about a slight increase in image sizes damaging your SEO (assuming everything else is in place).
I would ask myself these questions:
- Is this better for my site users?
- Does this seriously impact load times (and therefore usability / user experience)?
If you believe it creates a better experience and does not impact loading times in a meaningful way, then go for it and don't worry — any impact is likely to be negligible.
A few things I would do:
- test average loading times with a tool like Pingdom: http://tools.pingdom.com/fpt/
- replace your images and test again
- look at other areas where you can speed up loading times
- make sure your hosting does not suck
For reference, there was a post here a while back on the whole loading times / SEO angle which determined it was time to first byte (response time), rather than total loading time, that had the impact. That would make total loading time academic from a pure SEO perspective, but... it's really not about SEO. It's about your site users and whether this makes things better (improved images) or worse (slow loading) for them.
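If you want to see those two numbers side by side on your own pages, here is a quick sketch using the browser's Navigation Timing API (assumes a reasonably modern browser; just one way to measure it):

```typescript
// Quick sketch: compare time-to-first-byte with total page load time.
window.addEventListener("load", () => {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return;

  const ttfb = nav.responseStart - nav.requestStart;    // server response time
  const totalLoad = nav.loadEventStart - nav.startTime; // everything, images included

  console.log(`TTFB: ${ttfb.toFixed(0)} ms, total load: ${totalLoad.toFixed(0)} ms`);
});
```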
Seriously - don't worry about this small change too much from an SEO perspective. Use it as an excuse to improve loading time, as that is a good exercise for lots of reasons, but go with what is right for your users.
Hope that helps
Marcus
Ref:
http://moz.com/blog/how-website-speed-actually-impacts-search-ranking
http://moz.com/blog/improving-search-rank-by-optimizing-your-time-to-first-byte