High resolution (retina) images vs load time
-
I have an ecommerce website and have a product slider with 3 images.
Currently, I serve them at the native size when viewed on a desktop browser (374x374).
I would like to serve them at retina quality (748px) instead.
However, how will this affect my ranking, given the extra load time?
Does Google take into account image load times even though they load asynchronously? Also, as it's a slider, only the first image needs to load up front. Do the other images contribute at all to the page load time?
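For reference, one way to offer both sizes without making every visitor download the large file is the srcset attribute, which lets high-density (retina) screens fetch the 748px version while standard screens stick with the 374px one. A minimal sketch, with illustrative file names:

<img src="product-374.jpg"
     srcset="product-374.jpg 1x, product-748.jpg 2x"
     width="374" height="374"
     alt="Product photo">
<!-- 1x screens download only product-374.jpg; 2x (retina) screens download only product-748.jpg -->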
-
"Large pictures tend to be bad for user experience."
I disagree. I think what you mean is that slower loading is bad for the user experience. Higher quality pictures are better for the user experience.
I've been looking into deferring the loading of the additional slider images. That should definitely improve load time, as all the bandwidth can be used to download the first slider image.
Also, if the first slider image uses a progressive format, it should show something quickly and then sharpen as the rest of the data arrives.
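A minimal sketch of that deferral, with placeholder file names: only the first slide carries a real src, the rest sit in data-src and are swapped in once the page (including the first slide) has finished loading.

<div class="slider">
  <img src="slide-1.jpg" alt="Slide 1">
  <img data-src="slide-2.jpg" alt="Slide 2">
  <img data-src="slide-3.jpg" alt="Slide 3">
</div>
<script>
// After the load event, copy each deferred URL into src to start the download.
window.addEventListener('load', function () {
  var deferred = document.querySelectorAll('.slider img[data-src]');
  for (var i = 0; i < deferred.length; i++) {
    deferred[i].src = deferred[i].getAttribute('data-src');
  }
});
</script>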
-
You also have to keep in mind that users will access your site from mobile devices, and that the larger the page, the longer it takes to load fully. You may lose some people during the time it takes the page to load. My website used to have a slider with three images. I removed the slider and replaced it with one static image. Large pictures tend to be bad for user experience.
-
Hey Dwayne
They are big images, but from experience (around 15 years of it) I have never seen a meaningful impact from these kinds of changes. Maybe work on optimising the images themselves as best as possible to bring the overall size down. Sure, if your site is a slow-loading nightmare and this is just the final straw, then it may be an issue, but by the sounds of it you are already taking that into consideration, and your site is well hosted and performs better than most of what's out there.
But, as ever in this game, my advice would be to be aware of the possible implications, weigh up the pros and cons, and then test extensively. If you see an impact on your loading time and search results (and, more importantly, on user interaction, bounce rate etc.) after changing this one factor, then you know you can roll it back.
Hope that helps
Marcus
-
Hi,
It's not that small a change... the size of each image will quadruple from around 10kb to 40kb. As there are three images, that's 90kb more data, which is around 20% of the total page size.
That's interesting, what you mention about time to first byte. I would have thought that was overly simplistic and would have assumed Google is more concerned with how long it takes for the page "to load" (e.g. using their PageSpeed metrics).
I've optimized my site extensively, have a PageSpeed score of 95, and host on Amazon AWS servers.
I agree with your idea about doing what's right for my users. But if Google includes the image load time, then my site will rank poorly and I won't have any users!
In summary, I think what this question really comes down to is: how does Google calculate page load times, does that include image load time, and does it include the load time for all images (even ones which aren't yet being rendered in the slider)?
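One way to check which images actually contribute to load time is the browser's Resource Timing API; a quick sketch to paste into the dev-tools console (assuming a reasonably modern browser):

// List every image the page fetched, with its download duration in milliseconds.
performance.getEntriesByType('resource')
  .filter(function (r) { return r.initiatorType === 'img'; })
  .forEach(function (r) {
    console.log(r.name + ': ' + Math.round(r.duration) + ' ms');
  });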
Thanks,
Dwayne
-
Hey
I think this is such a small issue overall that you should not worry about a slight increase in image sizes damaging your SEO (assuming everything else is in place).
I would ask myself the questions:
- Is this better for my site users?
- Does this seriously impact load times (and therefore usability / user experience)?
If you believe it creates a better experience and does not impact loading times in a meaningful way then go for it and don't worry about a likely negligible impact on loading times.
A few things I would do:
- test average loading times with a tool like pingdom: http://tools.pingdom.com/fpt/
- replace your images and test again
- look at other areas where you can speed up loading times
- make sure your hosting does not suck
For reference, there was a post here a while back on the whole loading times / SEO angle that determined it was time to first byte (response time), rather than total loading time, that had the impact. That would make total loading time academic from a pure SEO perspective, but... it's really not about SEO; it's about your site users and whether this change makes things better (improved images) or worse (slower loading) for them.
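If you want to sanity-check your own time to first byte, the browser's Navigation Timing API exposes it; a minimal console sketch:

// Time between sending the request and receiving the first byte of the response.
var t = performance.timing;
console.log('TTFB: ' + (t.responseStart - t.requestStart) + ' ms');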
Seriously - don't worry about this small change too much from an SEO perspective. Use it as an excuse to improve loading time as that is a good exercise for lots of reasons but go with what is right for your users.
Hope that helps
Marcus
Refs:
http://moz.com/blog/how-website-speed-actually-impacts-search-ranking
http://moz.com/blog/improving-search-rank-by-optimizing-your-time-to-first-byte
Related Questions
-
Removing Toxic Back Links Targeting Obscure URL or Image
There are 2 or 3 URLs and one image file that dozens of toxic domains are linking to on our website. Some of these pages have hundreds of links from 4-5 domains. Rather than disavowing these links, would it make sense to simply break them: change the URLs they point to and not create redirects? It seems like this would be a surefire way to get rid of these links. Any downside to this approach? Thanks,
Intermediate & Advanced SEO | Kingalan1
Alan
-
Move domain to new domain: for how long should I keep forwarding?
I'm not sure, but my website doesn't seem to be getting the juice it's supposed to. As we already know, Google prefers https sites, and that's what happened to mine: it was being crawled as https. But when the time came to move my domain to the new domain, I used a 301 / domain forwarding service. Unfortunately, they didn't have a way to forward from https to the new https, only regular http to https, so when users clicked my old domain in a Google search, the result was "site does not exist". I used hreflang so that at least Google would detect my new domain being forwarded, and yes, it worked. But now I'm wondering: for how long should I keep forwarding the old domain to the new one? My site doesn't seem to be going up, and I have already changed all the external links. Any help would be appreciated. Thanks!
Intermediate & Advanced SEO | Fulanito
-
Lazy Loading of products on an E-Commerce Website - Options Needed
Hi Moz Fans. We are in the process of re-designing our product pages, and we need to improve the page load speed. Our developers have suggested that we load the associated products on the page using Lazy Loading. While I understand this will certainly have a positive impact on the page load speed, I am concerned about the SEO impact. We can have upwards of 50 associated products on a page, so we need a solution. So far I have found the following solution online, which uses Lazy Loading and Escaped Fragments. The concern here is with serving an alternate version to search engines. The solution was developed by Google, not only for lazy loading but for indexing AJAX content in general.
Intermediate & Advanced SEO | JBGlobalSEO
Here's the official page: Making AJAX Applications Crawlable. The documentation is simple and clear, but in a few words the solution is to use slightly modified URL fragments.
A fragment is the last part of the URL, prefixed by #. Fragments are not propagated to the server; they are used only on the client side to tell the browser to show something, usually to jump to an in-page bookmark.
If instead of using # as the prefix you use #!, this instructs Google to ask the server for a special version of your page using an "ugly" URL. When the server receives this ugly request, it's your responsibility to send back a static version of the page that renders an HTML snapshot (the not-indexed image in our case). It seems complicated, but it is not; let's use our gallery as an example. Every gallery thumbnail has to have a hyperlink like:
http://www.idea-r.it/...#!blogimage=<image-number>
When the crawler finds this markup, it will change it to:
http://www.idea-r.it/...?_escaped_fragment_=blogimage=<image-number>
Let's take a look at what you have to answer on the server side to provide a valid HTML snapshot.
My implementation uses ASP.NET, but any server technology will be good.

var fragment = Request.QueryString["_escaped_fragment_"];
if (!String.IsNullOrEmpty(fragment))
{
    var escapedParams = fragment.Split(new[] { '=' });
    if (escapedParams.Length == 2)
    {
        var imageToDisplay = escapedParams[1];
        // Render the page with the gallery showing
        // the requested image (statically!)
        ...
    }
}

What's rendered is an HTML snapshot, that is, a static version of the gallery already positioned on the requested image (server side).
To make it perfect we have to give the user a chance to bookmark the current gallery image.
90% comes for free; we only have to parse the fragment on the client side and show the requested image:

if (window.location.hash)
{
    // NOTE: remove initial #
    var fragmentParams = window.location.hash.substring(1).split('=');
    var imageToDisplay = fragmentParams[1];
    // Render the page with the gallery showing the requested image (dynamically!)
    ...
}

The other option would be to look at a recommendation engine to show a small selection of related products instead. This would cut the total number of related products down. The concern with this one is that we would be removing a massive chunk of content from the existing pages. Some of it is not the most relevant, but it's content. Any advice and discussion welcome 🙂
Images with a token in the URL, in Drupal. How does it affect SEO?
Hi everyone! I am checking a website that runs on Drupal, and I found that images have URLs like this:
http://www.brandname.com/sites/default/files/styles/directory_xyz/public/name-of-the-picture.png?itok=T89RpzrK
I was wondering how a URL like that, with the token at the end, can affect SEO. I couldn't find anything. Does anyone know? Thank you!
Intermediate & Advanced SEO | teconsite
-
H2 vs. H3 Tags for Category Navigation
Hey, all. I have a client that uses H2 tags in the navigation for its blog. For example, H2 tags might appear around "Library," "Recent Posts," etc. This is handled through their WordPress theme. This seems fairly standard, but I wonder whether H2 tags are semantically appropriate there. Since each blog post is fairly lengthy (about 500-1000 words) with multiple H2 tags of its own, would it be more appropriate to use H3 tags for this menu navigation? Are we cutting into the effectiveness of our H2 tags by using them for menu navigation? The navigation is certainly an important page element, and it structures content, so it seems that it should use some header tag. Anyways, your thoughts are greatly appreciated. I'm a content creator, not an SEO, so this is a bit out of my skillset.
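For illustration, a minimal sketch of the two options (the markup and labels are hypothetical, not the client's actual theme):

<!-- Option A: menu label as an H2, competing with the post's own H2 sections -->
<h2>Recent Posts</h2>

<!-- Option B: menu label demoted to an H3, leaving H2 for post content -->
<nav>
  <h3>Recent Posts</h3>
  <ul>
    <li><a href="/library">Library</a></li>
  </ul>
</nav>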
Intermediate & Advanced SEO | Ask4443523
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
Intermediate & Advanced SEO | browndoginteractive
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo

The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day.

We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.

We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.

Robots.txt advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.

Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would leave 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?)

Noindex advantages:
- Does prevent vehicle details pages from being indexed
- Allows ALL pages to be crawled (advantage?)

Noindex disadvantages:
- Difficult to implement (vehicle details pages are served using Ajax, so they have no <head> tag to hold a noindex meta tag). The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex tag based on querystring variables, similar to this StackOverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.

Hash (#) URL advantages:
- By using hash (#) hrefs for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: the crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?)
- Does not require complex Apache stuff

Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?

Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.

If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO.

My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and it conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these ().

Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
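For what it's worth, a minimal sketch of the hash-link approach described above; the class name, data attribute, and openVehicleDialog() helper are all hypothetical:

<!-- No crawlable URL in the href; JavaScript opens the Ajax dialog instead. -->
<a href="#" class="vehicle-detail-link" data-vehicle-id="12345">Contact Seller</a>
<script>
document.addEventListener('click', function (e) {
  if (e.target.matches('.vehicle-detail-link')) {
    e.preventDefault();
    // openVehicleDialog() is a hypothetical function that loads the
    // vehicle details via Ajax and shows them in a dialog box.
    openVehicleDialog(e.target.getAttribute('data-vehicle-id'));
  }
});
</script>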
Is there a limit to image file names?
Hi, I have an eCommerce site with hundreds of product images. For management reasons, files are named at length to include the product details in them.
Intermediate & Advanced SEO | BeytzNet
Is there a limit to filename length before it is considered ambiguous, spammy, etc.?
(It usually ranges 50-70 chars.) Thanks
How to See Image Metadata?
We sell 1000s of audiobooks and get our cover images and descriptions from the publishers' sites. When I download a cover image, such as this one (http://www.audiobooksonline.com/media/Alex-Cross-Run-James-Patterson.jpg),
Intermediate & Advanced SEO | lbohen
I always rename and re-size it before installing it at our Web store. Would this process leave any of the publisher's metadata in the image we use at our Web store, and/or anything else Google would not like?
Is there an online utility that would allow me to see the metadata in our images?