Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Image height/width attributes: how important are they, and should a best-practice site include them as standard?
-
Hi
How important are image height/width attributes, and would you expect a best-practice site to include them?
I hear that not having them can slow down page load time. Is that correct?
Are there any other issues caused by not having them?
I know of some around social sharing (Buffer, for example, prefers images with height/width attributes when it pulls in the image options you can attach to a post).
Most importantly, though, would you expect them to be standard on sites designed according to best-practice guidelines?
Thanks
-
Thanks for confirming that, Paul!
I've also noticed that when using services like Buffer to post socially, the article's image isn't offered as one of the images to choose from for the post. Instead it only shows options like the site logo, which we don't want. I asked Buffer tech support and they said that if the images had height/width attributes, they should then be presented as image options to accompany the post.
All Best
Dan
-
Image h x w attributes don't affect the actual speed of your page load much, Dan. They do strongly affect the perceived speed to the user.
If the size attributes are included, the browser can leave a correctly-sized space for each image as the page gets rendered, even if the images haven't started to download yet. Then the rest of the page content flows in around the image "placeholders". (Images are always slower than text.)
If no size attributes are present, the browser can't allow for the images' space until the files actually download, and then has to redraw the whole page to fit them in.
This redrawing for the images means that text and other elements will move around on the page until all the images have downloaded and it has finished rendering. This gives the user an impression of a much slower page, since they can't start to read the content until it has stopped moving around. Done properly, the visitor can start reading the top of the page even while all the images lower on the page are still downloading.
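To make that concrete, a minimal example looks something like this (the filename and dimensions are just placeholders, not anything from your site):
<img src="/images/hero-photo.jpg" alt="Product photo" width="800" height="600">
With the width and height declared, the browser can reserve an 800 x 600 slot in the layout before the file has even started downloading, so the surrounding text doesn't jump when the image finally arrives.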
So yes, obviously including height and width attributes for images is standard best practice for designing an effective on-page user experience.
Hope that helps?
Paul
P.S. As proof, Google thinks they're such a standard requirement that they have included a check for them as part of the scoring algorithm of their Google Page Speed tool.
-
"How important are the image height/width attributes and would you expect a best practice site to have them included ?"
This is not the most important SEO thing in the world BUT according to your 2nd question
"I hear not having them can slow down a page load time is that correct ?"
That`s the point! The question related to this issue is how relevant the whole thing might be?
Modern browsers and broadband connections seem to make this insignificant but just in case there are some visitors which are not using the right settings they might get pictures unscaled and your whole site will be shown non-responsive... by the way, use responsive designs if you can to avoid that...
I've always been told to use these attributes anyway. Even if you don't strictly need them, they make your code that little bit more polished.
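On the responsive point above, one common pattern (just a generic sketch, not taken from any particular site) is to keep the width/height attributes in the HTML and let CSS scale the image down on narrow screens:
<style>
  /* let images shrink to fit their container while keeping their proportions */
  img { max-width: 100%; height: auto; }
</style>
<img src="/images/banner.jpg" alt="Banner image" width="1200" height="400">
The attributes still describe the image's natural size, while the CSS keeps it from overflowing small viewports, so declaring the dimensions doesn't have to fight a responsive layout.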