If robots.txt has blocked an image (image URL) but another page which can be indexed uses this image, how is the image treated?
-
Hi MOZers,
This is probably a dumb question, but I have a case where robots.txt has an image URL blocked, yet this image is used on a page (let's call it Page A) which can be indexed. If the image on Page A has an alt tag, how is this information digested by crawlers?
A) Would Google totally ignore the image and the alt tag information? OR
B) Would Google consider the alt tag information?
I am asking this because all the images on the website are blocked by robots.txt at the moment, but I would really like crawlers to pick up the alt tag information. Chances are I will ask the webmaster to allow indexing of the images too, but I would like to understand what's happening currently.
Looking forward to all your responses
Malika
-
May I ask why you/your webmaster would have noindexed your images in the first place?
-
Hi Malika,
Blocking image directories or images themselves in robots.txt only prevents the image from being added to "image" search results. You will still get the full benefit of the alt text on the page; the image just won't appear in the image results.
How this actually works: the crawler crawls the site and indexes all the text and its weighting (h1, h2, alt, etc.). Then, when it moves to add the image to the search cache, it finds it can't access it due to robots.txt, so it simply ignores the image and moves on. This leaves your original text indexed as a search result, and nothing in the image results.
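For reference, the kind of robots.txt rule being described usually looks something like this (the /images/ path is just an assumption; the same idea applies to blocking individual image URLs):

```
# Block all crawlers from fetching anything under /images/
User-agent: *
Disallow: /images/
```

With a rule like this in place, the page's text and alt attributes are still crawled and indexed as normal; only requests for the image files themselves are refused.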
If you are using Apache, you may not want to use robots.txt as the method of blocking images. I would recommend using the .htaccess file with a rule like this (note that the Header directive requires Apache's mod_headers module):
```
<FilesMatch "\.(bmp|gif|jpg|png|tif)$">
Header set X-Robots-Tag "noindex"
</FilesMatch>
```
This is a blanket declaration and would prevent indexing of any images with the noted extensions on your site. This is particularly useful if you have multiple image directories. Furthermore, if there are a few images you do want indexed, you could pick a particular extension such as .jpeg (note jpeg, not jpg), then just convert those few images and know they will be indexed, as that extension is not in the exclusion list.
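If you go this route, a quick way to sanity-check that the header is actually being served is a HEAD request with curl (the URL below is hypothetical; substitute one of your own blocked images):

```
# Inspect the response headers for a blocked image; if the rule is
# active, the output should include: X-Robots-Tag: noindex
curl -sI https://www.example.com/images/photo.jpg | grep -i "x-robots-tag"
```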
Another benefit of handling it this way: if you already have images that are indexed, using the noindex tag will get them out of the image index much faster than blocking them. The reason is that you are giving Google a new directive, "noindex"; otherwise Google will just treat the images as inaccessible and move on, leaving any cached version to appear in the image results for some time.
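One caveat worth flagging: for Google to see that X-Robots-Tag header at all, the images must actually be crawlable. A URL that stays disallowed in robots.txt never gets requested, so the header is never seen. In other words, when you switch to the .htaccess method, the old robots.txt block should come out, along these lines (path again hypothetical):

```
User-agent: *
# The old "Disallow: /images/" line is removed so crawlers can
# re-fetch each image, see the noindex header, and drop it from
# the image index
```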
Hope that makes sense and helps,
Don
Related Questions
-
Pages blocked by robots
**It was a mistake made during the development process.** How can I solve the problem quickly? Please help me. [XTRjH](https://imgur.com/a/XTRjH)
Intermediate & Advanced SEO | mihoreis
-
How do we decide which pages to index/de-index? Help for a 250k page site
At Siftery (siftery.com) we have about 250k pages, most of them reflected in our sitemap. Though after submitting a sitemap we started seeing an increase in the number of pages Google indexed, in the past few weeks progress has slowed to a crawl at about 80k pages, and in fact has been coming down very marginally. Due to the nature of the site, a lot of the pages likely look very similar to search engines. We've also broken our sitemap down into an index, so we know that most of the indexation problems are coming from a particular type of page (company profiles). Given these facts, what do you recommend we do? Should we de-index all of the pages that are not being picked up by the Google index (and are therefore likely seen as low quality)? There seems to be a school of thought that de-indexing "thin" pages improves the ranking potential of the indexed pages. We have plans for enriching and differentiating the pages that are being picked up as thin (Moz itself picks them up as 'duplicate' pages even though they're not). Thanks for sharing your thoughts and experiences!
Intermediate & Advanced SEO | ggiaco-siftery
-
Only 285 of 2,266 Images Indexed by Google
Only 285 of 2,266 images indexed by Google. Images for our site are hosted on Amazon's cloud-based CDN hosting service. Our WordPress site is on a virtual private server and has its own IP address. The number of indexed images has dropped substantially in the last year. Our site is for a real estate brokerage firm. There are about 250 listing pages set to "no-index". Perhaps these contain 400 photos, so they do not account for why so few photos have been indexed. The concern is that the low number of indexed images could be affecting overall ranking. The site URL is www.nyc-officespace-leader.com. Is this issue something that we should be concerned about? Thanks, Alan
Intermediate & Advanced SEO | Kingalan1
-
Index or not index Categories
We are using the Yoast SEO plugin. In the main menu we have only categories, which consist of posts and one page: a category for villas, a category for villa hotels, etc. Initially we set posts to be indexed and included in the sitemap and excluded the categories, but I guess that was not correct. Would it be better to index and include the categories in the sitemap and exclude the posts, in order to avoid duplication? It somehow does not make sense to me: if the posts are excluded and the categories included, won't the categories then be empty for Google? I am going crazy over this. Does anybody have more experience with this?
Intermediate & Advanced SEO | Rebeca1
-
I have two sitemaps which partly duplicate each other - one is blocked by robots.txt but I can't figure out why!
Hi, I've just found two sitemaps - one of them is .php and represents part of the site structure on the website. The second is a .txt file which lists every page on the website. The .txt file is blocked via robots exclusion protocol (which doesn't appear to be very logical as it's the only full sitemap). Any ideas why a developer might have done that?
Intermediate & Advanced SEO | McTaggart
-
Should all pages on a site be included in either your sitemap or robots.txt?
I don't have any specific scenario here, but I'm curious, as I fairly often come across sites that have, for example, 20,000 pages but only 1,000 in their sitemap. If they only want 1,000 of their URLs included in their sitemap and indexed, should the others be excluded using robots.txt or a page-level exclusion? Is there a point to having pages that are included in neither, leaving it up to Google to decide?
Intermediate & Advanced SEO | RossFruin
-
How long does a robots.txt change take to take effect?
On my website there is also a demo site, which is indexed in Google but is no longer needed, so I created a robots.txt file and uploaded it to the server yesterday. In the demo folder there are some HTML files, and I want to remove all of them from Google, but Webmaster Tools still shows them. The robots.txt reads:

```
User-agent: *
Disallow: /demo/
```

How long will this take to be removed from Google? And are there any alternative ways of doing that?
Intermediate & Advanced SEO | innofidelity
-
Is it allowed to have different alt text on the same image on different pages?
Hi, I have images that match several different keywords, and I wondered if I can give them different alt text based on the page on which they are displayed, or will Google be angry with me? Thanks
Intermediate & Advanced SEO | BeytzNet