Null Alt Image Tags vs Missing Alt Image Tags
-
Hi,
Would it be better for organic search to have a null alt image tag programmatically added to thousands of images that are missing alt tags, or to just leave them as is?
Adding tailored alt image tags to thousands of images is not an option.
Is having sitewide alt image tags really important to organic search overall, or what? Right now, probably 10% of the site's images have alt image tags. A huge number of those images are on pages that aren
Thanks!
-
Thanks, guys.
I've adjusted alt image tags on pages that really matter to me for organic. The tens of thousands of other images/pages are just going to have to chillax.
-
No problem at all. To be honest, it's really not a huge deal and probably not worth the dev budget or man-hours required.
In most cases with a site like this, I'd be more inclined to add good alt text for all images on the most popular pages, then update the alt text on other pages as you work through them over the life of the campaign.
If you're already updating the page title or content on a page, it's not that much extra effort to do the alt text while you're there.
-
Hi Eric & Chris,
Thanks for the help. Given the size of the site (tens of thousands of pages, with more than one image per page on average), I guess my real question is: how much trouble is this worth? I don't think the image file names will reliably yield alt text. So about the most one could do is a site-wide empty tag. Is this really worth it for organic search? It seems like kind of a phony manipulation to appeal to a search algorithm in maybe some microscopic way. But I could be wrong, so that is why I'm asking here. If it really matters, we'll do it. But if it doesn't, I'd rather not. Especially when you consider that the next thing will be that having empty alt tags will someday count as a small negative, right? That would be so Google of them.
-
Is it possible to use a script to write them? An alternative option is to run a Screaming Frog crawl looking for all images, export the list to Excel, and use the image file names to help create the tags. That's assuming you've named the images with something specific instead of leaving the defaults (e.g. image4893054893.jpg). Ideally you would want to include image alt tags, and many platforms can make it easy. Could you give a little more information about your situation? There might be a pattern you can use to update on a large scale. I would not apply the same tag to all images, because that really doesn't help search engines understand the photos and wouldn't be useful to users with vision impairments. If you don't have the time to do it yourself, hire someone (a virtual assistant) to assign the alt tags. Screaming Frog will make it really easy to find all the image files.
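As a rough sketch of what that kind of script could look like, here is one possible approach in Python, assuming you've saved the image list from a Screaming Frog crawl as a CSV. The "Address" column name and the filename_to_alt helper are assumptions for illustration, not anything Screaming Frog provides; the idea is to suggest alt text only where the filename looks descriptive and leave auto-generated names blank for human review.

import csv
from pathlib import PurePosixPath
from urllib.parse import urlparse

def filename_to_alt(image_url: str) -> str:
    """Hypothetical helper: turn a descriptive filename into rough alt text.
    e.g. /img/red-leather-office-chair.jpg -> "red leather office chair".
    Returns "" for names that look auto-generated (e.g. image4893054893.jpg)."""
    stem = PurePosixPath(urlparse(image_url).path).stem
    words = [w for w in stem.replace("_", "-").split("-") if w]
    if len(words) < 2 or any(w.isdigit() for w in words):
        return ""  # leave blank for a human to write
    return " ".join(words)

with open("images_export.csv", newline="") as src, \
     open("alt_suggestions.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.writer(dst)
    writer.writerow(["image_url", "suggested_alt"])
    for row in reader:
        url = row["Address"]  # column name assumed from the crawl export
        writer.writerow([url, filename_to_alt(url)])

From there you could review the suggestions in a spreadsheet before anything is applied to the site.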
-
Naturally, in a perfect world, meaningful alt attributes should be added. Assuming you're a mere mortal with a limited number of hours in the day... the best short-term solution is going to be having the alt attribute applied but empty.
To my knowledge (happy to be pointed towards data showing otherwise), there's no real ranking difference between these two options. The reason I prefer an empty alt in this instance is that assistive technologies (like screen readers for vision-impaired users) are going to have a much better experience on your site this way.
If you have a blank alt, screen readers will essentially ignore the image, since alt="" tells them there is nothing to announce. On the other hand, if the <img> tag has no alt attribute at all, many screen readers will read out the src instead. Even a short img src is going to be cumbersome, especially if you have an image-heavy site!
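For anyone wanting to apply the empty-alt approach in bulk, here is a minimal sketch using the third-party BeautifulSoup library. The function name and sample markup are made up for illustration; the point is simply that alt="" is only added where the attribute is missing entirely, so existing alt text is never overwritten.

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

def add_empty_alts(html: str) -> str:
    """Add alt="" to any <img> that has no alt attribute at all.
    Images that already carry alt text are left untouched."""
    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):
        if not img.has_attr("alt"):
            img["alt"] = ""
    return str(soup)

# The first image gains alt="" (so screen readers skip it);
# the second keeps its existing, descriptive alt text.
print(add_empty_alts('<img src="chair.jpg"><img src="logo.png" alt="Acme logo">'))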