Google doesn't index slideshow images
-
Hi,
My articles are indexed, and so are the full-size images referenced via a meta tag in the body. But the images in the slideshow are not indexed. Do you have any idea why? Could it be a problem with the JS?
Example: http://www.parismatch.com/People/Television/Sport-a-la-tele-les-femmes-a-l-abordage-962989
Thank you in advance
Julien
-
You can do a "site:" search directly in Google; here's what I currently see: http://screencast.com/t/ZVqq5iumQ. You can run a site: search on the whole domain, a subfolder, or a specific page, etc.
-
OK, what method do you recommend for verifying image indexation directly in Google?
I'll post a message explaining the change after updating the sitemaps.
Thanks for everything
-
Thanks! OK, yes I'd make your Sitemap and HTML image URLs the same.
Also, that's a LOT of images, so I'm not surprised Google is taking time to index them.
Also, there can sometimes be a delay in Search Console data. You can always check Google itself to see which files are indexed.
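Dan's suggestion of keeping sitemap and HTML image URLs aligned can be checked programmatically: parse the image sitemap, collect every <image:loc>, and compare against the image URLs the article HTML actually references. A minimal sketch in Python, using hypothetical example.com URLs in place of the live sitemap and page (in practice you would fetch both over HTTP):

```python
import re
import xml.etree.ElementTree as ET

IMG_NS = "{http://www.google.com/schemas/sitemap-image/1.1}"

def sitemap_image_urls(sitemap_xml):
    """Collect every <image:loc> URL from an image sitemap document."""
    root = ET.fromstring(sitemap_xml)
    return {loc.text.strip() for loc in root.iter(IMG_NS + "loc") if loc.text}

def html_image_urls(html):
    """Crude scrape of image URLs from src= and content= attributes."""
    return set(re.findall(r'(?:src|content)="(https?://[^"]+\.jpe?g)"', html))

# Hypothetical stand-ins for the fetched sitemap and article page.
sitemap_xml = """\
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://www.example.com/article-1</loc>
    <image:image>
      <image:loc>http://cdn-old.example.com/images/photo-full.jpg</image:loc>
    </image:image>
  </url>
</urlset>"""

html = ('<img src="http://cdn.example.com/images/photo-640.jpg">'
        '<meta itemprop="image" content="http://cdn.example.com/images/photo-full.jpg">')

# Sitemap URLs that the page itself never references -- e.g. an old
# CDN host left behind in the sitemap after a migration.
only_in_sitemap = sitemap_image_urls(sitemap_xml) - html_image_urls(html)
print(only_in_sitemap)
```

Any URL printed here is declared in the sitemap but absent from the page, which is exactly the sitemap/HTML mismatch worth aligning.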
-
Not really, it seems to be OK.
-
Thanks! Hmm, did it clear Search Console without any errors? I see an error in my browser: http://screencast.com/t/VLWhg8EyR3Dd
-
The images are here:
http://www.parismatch.com/var/exports/sitemaps/sitemap_images_parismatch-10.xml
-
Is this your current sitemap?
http://www.parismatch.com/var/exports/sitemaps/sitemap_parismatch-index.xml
What is the direct address of the image sitemap(s)?
Thanks!
-
Thanks Dan. Unfortunately, we have changed the image host; the images are now on a different CDN...
Before the redesign, we used exactly this configuration, visible on this page (it's just an article; we don't have a slideshow example):
http://www.parismatch.com/Chroniques/Art-de-vivre/Lodge-Story-925785
We may have a problem with the image sitemaps, because in Google Sitemaps we have:
<image:loc>http://cdn-parismatch.ladmedia.fr/var/news/storage/images/paris-match/culture/cinema/le-fils-de-saul-la-critique-763334/8067828-1-fre-FR/Le-Fils-de-Saul-la-critique.jpg</image:loc>
and in the HTML source:
Perhaps the sitemap should use the same image URLs as those used in the HTML?
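For reference, Google's image sitemap format nests each image URL in an <image:image> element under the page's <url> entry, with the image namespace declared on <urlset>. A sketch using the CDN image URL from the example above (the article <loc> is a hypothetical placeholder; use the article's canonical URL there):

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <!-- Hypothetical placeholder for the article's canonical URL -->
    <loc>http://www.parismatch.com/example-article</loc>
    <image:image>
      <image:loc>http://cdn-parismatch.ladmedia.fr/var/news/storage/images/paris-match/culture/cinema/le-fils-de-saul-la-critique-763334/8067828-1-fre-FR/Le-Fils-de-Saul-la-critique.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```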
Many thanks for your help!
-
I see, thanks. Hmmm... did anything else change besides the re-design? Did the image URLs change, or did the hosting location change?
The current implementation doesn't show any issues, but I wonder if things were properly done in moving to the new design. Did you always have a slideshow format? Did the code change or just the design?
-
Thanks Dan!
I agree with you. It's problematic because, since the website redesign, we have recorded a drop in image traffic from Google.
-
Hi There
There do not appear to be any accessibility issues. I can crawl and access the images just fine with my crawler.
My guess is that since the images are duplicates that also exist on other websites, Google may be avoiding indexing them: they are already indexed elsewhere, and they are technically not being linked to with a normal image tag.
Is this causing a particular issue for the site? Or is it just a pesky technical bug?
-
The display image is resized and indexed:
and the full-size image is in a META tag but not indexed:
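For context, the pattern being described looks roughly like this (hypothetical URLs): a resized display image in a normal <img> tag, and the full-size original exposed only through a meta tag, which gives Google a much weaker signal to index it:

```html
<!-- Resized display image: linked normally, and this is what gets indexed -->
<img src="http://cdn.example.com/images/photo-640x360.jpg" alt="Photo caption">
<!-- Full-size original, exposed only via a meta tag: not indexed -->
<meta itemprop="image" content="http://cdn.example.com/images/photo-full.jpg">
```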
-
How are your images being fed into the site? Are you using a CDN?
-Andy
-
The robots.txt file doesn't block the images; I checked it. The website runs on eZ Publish.
-
Hi Julien,
I always start with robots.txt in these cases, but that looks OK.
Is anything being blocked by JS? Something else to look at: if you are using something like WordPress, there are plugins that can block access to these without you realising.
Looking at the image URL, it appears to be hosted on a third-party site?
-Andy
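To rule robots.txt in or out quickly, Python's standard library can evaluate a rule set against a specific image URL offline. A sketch with a hypothetical robots.txt body; the real check would use the file fetched from the CDN host, since images served from cdn-parismatch.ladmedia.fr are governed by that host's own robots.txt, not www.parismatch.com's:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents; in practice, fetch
# http://cdn-parismatch.ladmedia.fr/robots.txt and test against that.
robots_txt = """\
User-agent: *
Disallow: /var/exports/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

image_url = ("http://cdn-parismatch.ladmedia.fr/var/news/storage/images/"
             "paris-match/culture/cinema/le-fils-de-saul-la-critique-763334/"
             "8067828-1-fre-FR/Le-Fils-de-Saul-la-critique.jpg")

# The image lives under /var/news/..., not /var/exports/, so these
# hypothetical rules would not block Googlebot-Image from fetching it.
print(parser.can_fetch("Googlebot-Image", image_url))
```

A True result means nothing in that robots.txt blocks the image crawler for that URL; repeat for each host that serves images.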