This isn't something I've ever heard discussed, and it's probably still a bit too esoteric for the present day, but I've always been one to be guided by where I see Google headed rather than trying to game the system as it exists now. So think about it:
- Most stock and public domain photos are used repeatedly throughout the internet.
- Google's reverse image search proves that Google can recognize when the same photo is used across dozens of sites (a rough sketch of how that kind of duplicate detection might work follows this list).
- Many of those photos will have alt and/or title text that Google has also crawled. If not, it has the content of the page the photo appears on to consider for context.
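Just to show the duplicate-detection piece isn't exotic: here's a minimal sketch in Python using a toy "average hash" with Pillow. This is purely illustrative, not Google's actual method, and the file names are made up.

```python
from PIL import Image  # Pillow

def average_hash(path, size=8):
    """Toy perceptual hash: shrink to 8x8 grayscale, threshold at the mean.

    Near-identical copies of a photo (resized, recompressed, lightly cropped)
    tend to produce the same or nearly the same 64-bit fingerprint.
    """
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)

def hamming(a, b):
    """Number of differing bits between two hashes; small = likely the same photo."""
    return bin(a ^ b).count("1")

# Hypothetical usage: two sites' copies of the same stock photo usually land
# within a few bits of each other, while unrelated photos do not.
# if hamming(average_hash("site_a_widget.jpg"), average_hash("site_b_widget.jpg")) <= 5:
#     print("probably the same underlying photo")
```

The point is just that matching the same image across dozens of sites is a solved problem, so the interesting question is what a search engine does with that match.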
So if Google has a TON of clues about what a photo is likely to be about, and can in theory aggregate those clues about a single photo from the dozens of sites using it, how might Google treat a site that mislabels it, old school "one of these things is not like the others" style?
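To make the "one of these things is not like the others" idea concrete, here's a rough sketch in plain Python with made-up data: pool the alt/title text from every site known to use a photo, find the consensus topic, and see which site's label sticks out. The sites, labels, and keyword trick are all hypothetical.

```python
from collections import Counter

# Hypothetical alt/title text gathered from pages known to use the same stock photo.
labels_by_site = {
    "site01.example": "blue widget on white background",
    "site02.example": "industrial widget",
    "site03.example": "widget close-up",
    "site04.example": "best widgets of 2012",
    "site11.example": "miracle weight loss pill",  # the mislabel
}

def keywords(label):
    """Crude tokenizer: lowercase words of 4+ letters, trailing 's' stripped."""
    return {w.rstrip("s") for w in label.lower().split() if len(w) >= 4}

# Pool the clues from every site using the photo.
pooled = Counter()
for label in labels_by_site.values():
    pooled.update(keywords(label))

consensus = pooled.most_common(1)[0][0]  # e.g. 'widget'

# Flag any site whose label shares nothing with the consensus term.
for site, label in labels_by_site.items():
    if consensus not in keywords(label):
        print(f"{site}: label {label!r} disagrees with consensus {consensus!r}")
```

Whether Google actually does anything like this is the open question; the mechanics of spotting the odd one out are trivial once the duplicates are matched.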
Would a single site hosting that photo be bolstered by the additional context the known, repeated photo brings with it, essentially borrowed from the other sites using it?
If 10 sites about widgets are using the same widget photo, but the 11th uses an entirely new, never-before-published photo, would the 11th site then be rated better for bringing something new to the table? (I think this is almost certainly true, which drives home the importance of creating your own graphics content.)
Anyway, like I said, this is all theoretical and philosophical and probably not currently in play, especially since an image can be used in so many different contexts. But it's New Year's, things are slow, and my brain is running, so I'm curious what other folks think about this as the future of image optimization.