Is the HTML content inside an image slideshow of a website crawled by Google?
-
I am building a website for a client and I am torn between using an image slideshow with HTML content on the slides or a static full-size image on the homepage. My concern is that the HTML content in the slideshow may not get crawled by Google and hence may not be SEO-friendly.
-
This is actually really easy to test. Set up a basic version of each and run the URL through SEO-Browser, which shows you how your website is seen by a search engine bot. I have used it on tons of sites and it has never failed me when I needed to check whether something had to be changed. Once you paste your URL in, click the "simple" button. You can also sign up (it's free) to get more in-depth results.
As long as your slideshow contains live text (meaning readable text in the HTML, not text baked into an image) that is crawlable, you should be fine. Try it, and test it using the method above. Best of luck!
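To make the distinction concrete, here is a minimal sketch of the kind of markup in question; the class names, image paths, and slide copy are hypothetical, not taken from the asker's site. The point is simply that the headings and paragraphs exist as real text in the HTML a bot fetches:

```html
<!-- Hypothetical slideshow: the headline and paragraph on each slide are
     plain HTML text, so they appear in the page source a crawler downloads. -->
<div class="slideshow">
  <div class="slide">
    <img src="/images/slide-1.jpg" alt="Spring collection banner">
    <h2>Spring Collection</h2>
    <p>Browse our new range of handmade furniture.</p>
  </div>
  <div class="slide">
    <img src="/images/slide-2.jpg" alt="Free delivery banner">
    <h2>Free Delivery on Orders over $50</h2>
    <p>Delivered to your door within three working days.</p>
  </div>
</div>

<!-- By contrast, a slide like this exposes no text to crawlers at all:
     any wording is baked into the image file itself. -->
<div class="slide">
  <img src="/images/slide-3.jpg" alt="Summer sale banner">
</div>
```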
-
Hi,
Google's crawler fetches the source code. If the content in the slider is visible in the source code, then the content is visible to Google. There are a few "extra" factors related to the "real estate" the content occupies on the page that come into play, but the bottom line is: if it's in the source code, Google can see it.
Hope it helps.
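One caveat worth illustrating: text that only appears after JavaScript runs is not in the initial source code the crawler fetches, even though a visitor sees it on screen. A hedged sketch of the two cases (element IDs and wording are hypothetical):

```html
<!-- Case 1: the slide text is in the HTML the server sends, so it is in the
     source code Google fetches. -->
<div class="slide">
  <h2>Handmade Oak Tables</h2>
</div>

<!-- Case 2: the slide starts empty and its text is injected by JavaScript
     after the page loads. It is NOT in the initial source code, so visibility
     depends on Google rendering the page rather than just fetching it. -->
<div class="slide" id="late-slide"></div>
<script>
  document.getElementById("late-slide").innerHTML = "<h2>Handmade Oak Tables</h2>";
</script>
```

A quick way to check which case applies is to use "view source" in the browser and search for the slide text; if it is there, it is in what the crawler downloads.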
Related Questions
-
Google ranking content for phrases that don't exist on-page
I am experiencing an issue with negative keywords, but the "negative" keyword in question isn't truly negative and is required within the content – the problem is that Google is ranking pages for inaccurate phrases that don't exist on the page.

To explain, this product page (as one of many examples) - https://www.scamblermusic.com/albums/royalty-free-rock-music/ - is optimised for "Royalty free rock music" and it gets a Moz grade of 100. "Royalty free" is the most accurate description of the music (I optimised for "royalty free" instead of "royalty-free" (including a hyphen) because of improved search volume), and there is just one reference to the term "copyrighted" towards the foot of the page – this term is relevant because I need to make the point that the music is licensed, not sold, and the licensee pays for the right to use the music but does not own it (as it remains copyrighted).

It turns out, however, that I appear to need to treat "copyrighted" almost as a negative term because Google isn't accurately ranking the content. Despite excellent optimisation for "Royalty free rock music" and only a single reference to "copyrighted" within the copy, I am seeing this page (and other album genres) wrongly rank for the following search terms:

"free rock music"
"Copyright free rock music"
"Uncopyrighted rock music"
"Non copyrighted rock music"

I understand that pages might rank for "free rock music" because it is part of the "Royalty free rock music" optimisation; what I can't get my head around is why the page (and similar product pages) are ranking for "Copyright free", "Uncopyrighted music" and "Non copyrighted music". "Uncopyrighted" and "Non copyrighted" don't exist anywhere within the copy or source code – why would Google consider it helpful to rank a page for a search term that doesn't exist as a complete phrase within the content? By the same logic the page should also wrongly rank for "Skylark rock music" or "Pretzel rock music", as the words "Skylark" and "Pretzel" also feature just once within the content and should therefore generate completely inaccurate results too.

To me this demonstrates just how poor Google is when it comes to understanding relevant content and optimisation: it takes part of an optimised term, combines it with just one other single-use word, and then inappropriately ranks the page for that completely made-up phrase. It's one thing to misinterpret one reference to the term "copyrighted" and something else entirely to rank a page for completely made-up terms such as "Uncopyrighted" and "Non copyrighted". It almost makes me think that I've got a better chance of accurately ranking content if I buy a goat, shove a cigar up its backside, and sacrifice it in the name of the great god Google! Any advice (about wrongly attributed negative keywords, not goat sacrifice) would be most welcome.

On-Page Optimization | JCN-SBWD
-
HTML Site SEO (NO CMS)
I have a client site which is dated (2007) and has not yet been moved to any recognised CMS; it is plain HTML. Is it possible to do SEO on such a site? Is it even worth it? If it is possible, any suggestions would be highly appreciated. Thank you.
On-Page Optimization | ArthurRadtke
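For what it's worth, none of the usual on-page elements require a CMS; they can be edited by hand in static files. A minimal, hypothetical sketch (domain, titles, and copy are placeholders):

```html
<!-- Hypothetical static page: title, meta description, canonical URL, and
     headings all live directly in the .html file, no CMS involved. -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Hand-Built Garden Sheds | Example Joinery</title>
    <meta name="description" content="Custom garden sheds built to order, delivered and installed across the region.">
    <link rel="canonical" href="https://www.example.com/garden-sheds.html">
  </head>
  <body>
    <h1>Hand-Built Garden Sheds</h1>
    <p>Every shed is made to measure from pressure-treated timber.</p>
  </body>
</html>
```
-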
How do I know whether my website's content quality is good or bad?
According to Google's updates, content is the main factor in website ranking, so how do I know about my website's content quality? If you have any kind of tool for checking website content quality, please point me to it.
On-Page Optimization | renukishor
-
Blocking Subdomain from Google Crawl and Index
Hey everybody, how is it going? I have a simple question that I need answered. I have a main domain, let's call it domain.com. Our company will soon launch a series of promotions for which we will use CNAME subdomains, such as try.domain.com or buy.domain.com. They will serve a commercial objective, nothing more. What is the best way to block such subdomains from being indexed in Google, and also from counting as subdomains of domain.com? Robots.txt, nofollow, something else? Hope to hear from you. Best regards,
On-Page Optimization | JesusD
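A minimal sketch of the approach usually suggested for this kind of setup, assuming you can add markup (or headers) to the promotional pages themselves; the host names come from the question above, everything else is generic:

```html
<!-- Placed in the <head> of every page served on try.domain.com and
     buy.domain.com (the hypothetical promo hosts from the question).
     "noindex" asks search engines to keep the page out of the index;
     "nofollow" asks them not to follow its links. The same directive can
     also be sent as an HTTP response header: X-Robots-Tag: noindex, nofollow -->
<meta name="robots" content="noindex, nofollow">
```

The usual caveat is that a robots.txt Disallow on those subdomains only blocks crawling, not indexing, and a crawler that is disallowed never gets to see the noindex tag, so the two are best not combined for this purpose.
-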
How does Google detect which keywords my website should show up for in the search engines?
When I checked my Google Webmaster Tools I found that my website is showing up for keywords that I didn't optimize for. For example, I optimized my website for "funny pictures with captions", and the website is showing up for "funny images with captions". I know that this is good, but the keyword is dancing all around: sometimes I search for "funny pictures with captions" and I show up on the 7th page, and sometimes I don't show up at all, and the same goes for the other keyword. Of course I am optimizing for more than two keywords, but the results are not consistent. My question is: how does Google decide which keywords your website should show up for? Is it the on-page keywords, or is it the off-page anchor text keywords? Thank you in advance.
On-Page Optimization | FarrisFahad
-
Schema.org for news websites?
So as of late I have been on something of a mission to mark up my news website with as much accurate and detailed Schema and Open Graph data as possible, in order to not only allow the search engines to understand my content properly, but also to ensure everything appears in the most ideal fashion when linked to from Facebook, Google+, etc. Here is an example of a typical article page: http://www.nerdscoop.net/technology/video-games-459 As you'll see I currently have news posts marked up as article because that is essentially exactly what they are, but is there a better way to emphasise that they are news rather than just generic articles? My second question is regarding the category pages and the home page. How would be best to mark these up? With OG the task is fairly simple, because I can specify the homepage as being a website, but not so with Schema from what I can see. Either way, this is an interesting subject to me and I look forward to any discussion as a result. Thanks for looking.
On-Page Optimization | HalogenDigital
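For reference, schema.org does define a NewsArticle type (a subtype of Article) that is normally used to signal news content specifically, and WebSite and CollectionPage are the usual candidates for a home page and category pages. A minimal, hypothetical JSON-LD sketch, not taken from the site linked above:

```html
<!-- Hypothetical JSON-LD for a news post; headline, dates, and names are
     placeholders, not values from nerdscoop.net. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example headline for a news post",
  "datePublished": "2014-01-15",
  "dateModified": "2014-01-16",
  "author": { "@type": "Person", "name": "Example Author" },
  "publisher": { "@type": "Organization", "name": "Example Publisher" }
}
</script>
```
-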
Should I let Google index tags?
Should I let Google index tags? Positive or negative? Right now Google indexes every page, including tags... it looks like I am risking duplicate content errors. If that's true, should I just block /tag in robots.txt? Also, is it better to have as many pages as possible indexed by Google, or should it be as few as possible and as specific to the content as possible? Cheers
On-Page Optimization | DiamondJewelryEmpire
-
Percentage of duplicate content allowable
Can you have ANY duplicate content on a page, or will the page get penalized by Google? For example, if you used a paragraph of Wikipedia content for the definition/description of a medical term but wrapped it in unique content, is that OK, or will that land you in the Google/Panda doghouse? If some level of duplicate content is allowable, is there a general rule-of-thumb ratio of unique to duplicate content? Thanks!
On-Page Optimization | sportstvjobs