Null Alt Image Tags vs Missing Alt Image Tags
-
Hi,
Would it be better for organic search to have a null alt tag programmatically added to the thousands of images that have no alt text, or to just leave them as is?
Adding tailored alt text to thousands of images is not an option.
Is having sitewide alt tags really important to organic search overall, or what? Right now, probably 10% of the site's images have alt tags. A huge number of those images are on pages that aren
Thanks!
-
Thanks, guys.
I've adjusted the alt tags on pages that really matter to me for organic. The tens of thousands of other images/pages are just going to have to chillax.
-
No problem at all. To be honest, it's really not a huge deal and probably not worth the dev budget or man-hours required.
In most cases with a site like this, I'd be more inclined to add good alt text for all images on the most popular pages, then, as you work through other pages over the life of the campaign, update the alt text while you're at it.
If you're already updating the page title or content on a page, it's not that much extra effort to do the alt text while you're there.
-
Hi Eric & Chris,
Thanks for the help. Given the size of the site (tens of thousands of pages, with more than one image on the average page), I guess my real question is how much trouble this is worth. I don't think the image file names will reliably yield useful alt text, so about the most we could do is a site-wide empty alt attribute. Is that really worth it for organic search? It seems like kind of a phony manipulation to appeal to a search algorithm in maybe some microscopic way. But I could be wrong, which is why I'm asking here. If it really matters, we'll do it; if it doesn't, I'd rather not. Especially when you consider that the next thing will be empty alt tags someday counting as a small negative, right? That would be so Google of them.
-
Is it possible to use a script to write them? An alternative option is to run a Screaming Frog crawl looking for all images, download the results into Excel, and use the image file names to help create the alt text. That's assuming you've named the images with something specific instead of leaving the default (e.g., image4893054893.jpg). Ideally you would want to include image alt text, and many platforms can make it easy.
Could you give a little more information about your situation? There might be a pattern you can use to update at scale. I would not apply the same alt text to every image, because that really doesn't help search engines understand the photo and wouldn't be useful to users who have vision impairments. If you don't have the time to do it yourself, then hire someone (a virtual assistant) to assign the alt text. Screaming Frog will make it really easy to find all the image files.
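For what it's worth, here is a minimal Python sketch of that filename-to-alt-text idea, assuming you've exported the image URLs from a crawl to a CSV. The column name and the "auto-generated name" heuristic are my own assumptions, not Screaming Frog specifics, so adjust both to your export.

```python
import csv
import re
from pathlib import PurePosixPath
from urllib.parse import urlparse

def filename_to_alt(image_url: str) -> str | None:
    """Turn a descriptive file name into a rough alt-text suggestion.

    Returns None when the name looks auto-generated (e.g. image4893054893.jpg),
    since meaningless alt text is worse than leaving the image for manual review.
    """
    stem = PurePosixPath(urlparse(image_url).path).stem
    words = re.sub(r"[-_]+", " ", stem).strip()
    # Heuristic: skip names that are just digits or generic camera/CMS placeholders.
    if not words or re.fullmatch(r"(img|image|dsc|photo)?\s*\d{3,}", words, re.IGNORECASE):
        return None
    return words.lower()

def suggest_alts(crawl_csv: str, out_csv: str, url_column: str = "Address") -> None:
    """Read image URLs from a crawl export and write alt-text suggestions."""
    with open(crawl_csv, newline="", encoding="utf-8") as src, \
         open(out_csv, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        writer.writerow(["image_url", "suggested_alt"])
        for row in csv.DictReader(src):
            url = row[url_column]
            writer.writerow([url, filename_to_alt(url) or ""])

if __name__ == "__main__":
    # Hypothetical file names; point these at your own crawl export.
    suggest_alts("images_crawl.csv", "alt_suggestions.csv")
```

A human should still sanity-check the output, but this turns "tens of thousands of images" into a spreadsheet someone can review rather than a page-by-page slog.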
-
Naturally, in a perfect world, meaningful alt attributes would be added everywhere. Assuming you're a mere mortal with a limited number of hours in the day... the best short-term solution is going to be applying the alt attribute but leaving it empty.
To my knowledge (happy to be pointed towards data showing otherwise), there's no real ranking difference between these two options. The reason I prefer adding a blank alt in this instance is that assistive technology (like screen readers for vision-impaired users) will have a much better experience on your site that way.
If you have a blank alt, screen readers will essentially ignore the image, since all there is to read is an empty string. On the other hand, if you don't have an alt attribute on the <img> tag at all, many screen readers will read out the image source instead. Even a short img src is cumbersome to listen to, especially if you have an image-heavy site!
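To make that concrete, here is a rough sketch of what programmatically adding a null alt could look like as a post-processing step, assuming the stored HTML can be run through Python and BeautifulSoup. The function name and the choice to leave existing alt text untouched are my own assumptions rather than anything prescribed in this thread.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def add_empty_alts(html: str) -> str:
    """Add alt="" to any <img> that has no alt attribute at all.

    Images that already carry alt text (even imperfect text) are left alone,
    so hand-written descriptions on the pages that matter aren't overwritten.
    """
    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):
        if not img.has_attr("alt"):
            img["alt"] = ""  # empty/null alt: screen readers skip the image
    return str(soup)

if __name__ == "__main__":
    sample = '<p><img src="/media/photo4893054893.jpg"> <img src="/dog.jpg" alt="golden retriever"></p>'
    print(add_empty_alts(sample))  # only the first image gains alt=""
```

On a real site you'd more likely make this change in the template layer or as a one-off content migration, but the logic is the same either way: only touch images with no alt attribute, and set it to an empty string rather than to filler text.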
-
Related Questions
-
Image upload SEO tips?
Is there anything I can do to my images other than naming them correctly to help with SEO? Size? File type? Maybe adding text on top of them to pick up OCR? Thank you,
Intermediate & Advanced SEO | Jamesmcd030 -
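As an aside on the "Size? File type?" part of that question, below is an illustrative Python/Pillow sketch for shrinking oversized originals into lighter web copies. The 1600px ceiling and quality setting are arbitrary assumptions, not recommendations from this thread.

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

MAX_SIZE = (1600, 1600)   # assumed ceiling for on-page images
JPEG_QUALITY = 82         # assumed quality/size trade-off

def optimize_image(src: Path, dst_dir: Path) -> Path:
    """Downscale an image to fit MAX_SIZE and save a compressed JPEG copy."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    with Image.open(src) as img:
        img = img.convert("RGB")   # JPEG has no alpha channel
        img.thumbnail(MAX_SIZE)    # preserves aspect ratio; never upscales
        out = dst_dir / (src.stem + ".jpg")
        img.save(out, "JPEG", quality=JPEG_QUALITY, optimize=True, progressive=True)
    return out

if __name__ == "__main__":
    # Hypothetical folder names; point them at your own files.
    for original in Path("originals").glob("*.png"):
        print(optimize_image(original, Path("web_ready")))
```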
How important is the file extension in the URL for images?
I know that descriptive image file names are important for SEO. But how important is it to include .png, .jpg, .gif (or whatever file extension) in the URL path? i.e. https://example.com/images/golden-retriever vs. https://example.com/images/golden-retriever.jpg. Furthermore, since you can set the filename in the Content-Disposition response header, is there any need to include the descriptive filename in the URL path? Since I'm pulling most of our images from a database, it'd be much simpler to not care about simulating a filename and just reference an image id in my templates. Example:
1. Browser requests GET /images/123456
2. Server responds with the image, setting both Content-Disposition and Link (canonical) headers:
Content-Disposition: inline; filename="golden-retriever"
Link: <https://example.com/images/123456>; rel="canonical"
Intermediate & Advanced SEO | dsbud -
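Purely to illustrate the pattern that question describes (an id-based image URL with a descriptive Content-Disposition filename and a Link canonical header), here is a hypothetical Flask-style sketch; the in-memory store stands in for the real database and every name in it is made up.

```python
from flask import Flask, Response, abort  # pip install flask

app = Flask(__name__)

# Stand-in for the real database: id -> (descriptive name, MIME type, bytes).
FAKE_IMAGE_STORE = {
    123456: ("golden-retriever", "image/jpeg", b"\xff\xd8\xff\xe0"),  # placeholder bytes
}

@app.route("/images/<int:image_id>")
def serve_image(image_id: int) -> Response:
    record = FAKE_IMAGE_STORE.get(image_id)
    if record is None:
        abort(404)
    name, mime, data = record
    resp = Response(data, mimetype=mime)
    # Descriptive filename without putting it in the URL path.
    resp.headers["Content-Disposition"] = f'inline; filename="{name}"'
    # Canonical hint for the extensionless, id-based URL.
    resp.headers["Link"] = f'<https://example.com/images/{image_id}>; rel="canonical"'
    return resp

if __name__ == "__main__":
    app.run(debug=True)
```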
SEO Friendly Files Redirected From Images
I have product images (.jpg files) that, when clicked, redirect you to .pdf files containing all of the products' specs, patterns, colors, etc. These are 302 redirects that open in a different window when clicked. Is there a way to keep these redirects and maintain SEO? Any advice is appreciated.
Intermediate & Advanced SEO | SuperiorPavers0 -
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similar to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: the page where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
We do want the Vehicle Listings pages (#1) indexed and ranking. These pages have additional content besides the vehicle listings themselves, those results are randomized or sliced/diced in different and unique ways, and they're updated twice per day.
We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and they don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results (example Google query). We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right.
Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
- Super easy to implement.
- Conserves crawl budget for large sites and ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would mean 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex advantages:
- Does prevent vehicle details pages from being indexed.
- Allows ALL pages to be crawled (advantage?).
Noindex disadvantages:
- Difficult to implement: the vehicle details pages are served via Ajax, so they have no <head> of their own. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex based on querystring variables, similar to this Stack Overflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required; the crawler could get stuck or lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it's blocked by robots.txt.
Hash (#) URL advantages:
- By using hash URLs for the links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that got robots.txt-disallowed pages indexed are gone.
- Accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?).
- Does not require complex Apache stuff.
Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed; it could easily get stuck or lost, it seems like a waste of resources, and in some shadowy way it feels bad for SEO.
My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and it conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like this.
Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive -
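One small aside on the X-Robots-Tag option described in that question: the poster frames it as an Apache rewrite, but the idea of sending the header conditionally is easy to illustrate. The sketch below is a hypothetical Flask-style example with a made-up query parameter, not the plugin's actual implementation.

```python
from flask import Flask, Response, request

app = Flask(__name__)

@app.after_request
def noindex_vehicle_details(response: Response) -> Response:
    """Send 'X-Robots-Tag: noindex' for Ajax-served vehicle detail views.

    Assumes detail views are requested with a query parameter like
    ?view=vehicle_detail; adjust the test to match the real plugin's URLs.
    """
    if request.args.get("view") == "vehicle_detail":
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response

@app.route("/listings")
def listings() -> str:
    # Normal, indexable listings page (no extra header added).
    return "<html><body>Vehicle listings...</body></html>"

if __name__ == "__main__":
    app.run(debug=True)
```

The caveat from the question still applies: the crawler only ever sees this header if the same URLs are not also blocked in robots.txt.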
Multi-Location SEO: Sites vs Pages
I just started with a new company that requires multi-location SEO for its niche product/service. Currently, we have a main corporate website as well as 40+ individual dealer websites (we host all of them). Keep in mind each of these dealers consists of only 1-2 people, so at corporate I will be managing the site or sites and the content strategy. Many of the individual dealer sites actually rank very well (#1-#3) in their areas for our targeted keywords, but they all use the same duplicate content. Also, there are many dealer sites that have dropped off the radar in the last year, which is probably because of the duplicate and static content. So I'm at a crossroads:
1. Attempt to redo all of these location sites with unique and local content for each, or
2. Create optimized, unique pages for each of them on our main site and redirect their current local domains to their page on our site.
Any advice regarding which direction to go in, and why? Why is very important: it will be very difficult to convince a dealer that is #1 with his local site that we are redirecting to our main site, so I need some good ammo and reasoning. Also, any tips toward achieving local SEO success will be greatly appreciated, too! Thank you!
Intermediate & Advanced SEO | the-coopersmith0 -
Should I block WordPress archive and tag pages?
I use WordPress and WordPress SEO by Yoast. I've set it up to add a noindex meta tag on all archive and tag pages. I don't think it's useful to include those pages in search results because there are quite a few, especially the tag archives. Should I consider anything else or change my mind? What do you think? Thanks
Intermediate & Advanced SEO | Akeif0 -
Shared Hosting vs. VPS Hosting
From an SEO perspective, what are the advantages of VPS hosting vs. shared hosting for a local website that has fewer than 200 pages and gets at most 2,000 hits per month? Is VPS hosting worth the extra expense for a local real estate website?
Intermediate & Advanced SEO | bronxpad0 -
Disabled/Accessibility vs SEO?
Can anyone point me to resources that help website owners balance these two issues? Or how to SEO a site meant for disabled users? Or how to make an SEO'd site more accessible? Thanks!
Intermediate & Advanced SEO | mjcarrjr0