Does Google bot read embedded content?
-
Is embedded content "really" on my page?
There are many add-ons nowadays that are placed on a page via an embed code snippet and only load their text after the page has loaded.
For example - embedded surveys.
Are these read by Googlebot, or do they in fact act like iframes, with the content not physically on my page?
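To make it concrete, here is a rough sketch of what these embed snippets typically do once they run - the placeholder id, vendor domain, and survey URL below are made up for illustration:

(function () {
  // Hypothetical example of a third-party survey embed script: it runs after the
  // page loads, finds a placeholder element, and injects an iframe pointing at the
  // vendor's domain, so the survey text never appears in the page's original HTML.
  var container = document.getElementById('survey-placeholder'); // assumed placeholder div
  if (!container) { return; }
  var frame = document.createElement('iframe');
  frame.src = 'https://surveys.example-vendor.com/s/ABC123'; // made-up survey URL
  frame.width = '100%';
  frame.height = '500';
  container.appendChild(frame);
})();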
Thanks
-
Most Facebook comment implementations are embedded with an iframe.
Technically speaking, that means the content loads from another source (not from your site).
However, as we're constantly seeing Google evolve with regard to "social signals", I suspect embedded Facebook comments may begin to have an impact if they pertain to content that is actually located on your website.
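A quick way to check any widget is to list the iframes on the page after it loads - anything whose src points at another domain is being served from there, not from your own HTML. For example, something like this in the browser console (just a generic DOM check, not tied to any particular widget):

// Lists every iframe on the page and where its content is actually served from.
var frames = document.querySelectorAll('iframe');
for (var i = 0; i < frames.length; i++) {
  console.log(frames[i].src || '(no src - content injected inline)');
}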
-
Thanks!
I'm guessing it will remain a no for me, since these are third-party scripts - a black box, for that matter.
What do you think about Facebook comments, then?
Not readable either?
-
I didn't see any recent tests for 2013, but this has been analyzed quite a bit, and the two links below expand on what I mentioned.
The conclusion of the first one is that Google won't index content loaded dynamically by a JavaScript file on another server/domain.
http://www.seomoz.org/ugc/can-google-really-access-content-in-javascript-really
The second link covers the extra programming needed to make AJAX content crawlable and indexable.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=174992
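Roughly, the scheme that second doc describes has the crawler rewrite a hash-bang URL into an "_escaped_fragment_" URL, and your server is expected to answer that rewritten URL with a pre-rendered HTML snapshot. A simplified sketch of the rewrite (the example URL is made up, and real implementations also URL-encode the fragment value):

// Simplified sketch of the URL rewrite behind Google's AJAX crawling scheme.
// e.g. http://example.com/page#!state=1 -> http://example.com/page?_escaped_fragment_=state=1
function toEscapedFragmentUrl(hashBangUrl) {
  return hashBangUrl.replace('#!', '?_escaped_fragment_=');
}

console.log(toEscapedFragmentUrl('http://example.com/page#!state=1'));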
-
Thank you all.
Here is an example from SurveyMonkey:
There are many other tools that look quite similar.
The content it loads is not visible when viewing the page source.
-
Googlebot has become extremely intelligent since its inception, and I'd guess most members here would agree it has gotten to the point where it can detect virtually any type of content on a page.
For the purpose of analyzing the content it actually indexes and uses for ranking / SEO, however, I'd venture to guess that the best test is viewing the page source after the page has loaded.
If you can see the content you're questioning in the actual HTML, then Google will probably index it and use it considerably for ranking purposes.
On the other hand, if you just see some type of JavaScript snippet / function where the content would otherwise be located in the page source, Google can probably read it, but likely won't weigh it heavily when indexing and ranking.
There are special ways to get Google to crawl content that is loaded through JavaScript or other types of embeds, but in my experience most embeds are not programmed this way by default.
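If it helps, here's a rough sketch of that test, runnable from the browser console: it fetches the page's raw HTML (roughly what a crawler that doesn't execute JavaScript receives) and compares it with the rendered DOM. The text string is just a placeholder for whatever embedded text you're checking:

// Placeholder: replace with a phrase from the embedded survey/widget you're testing.
var textToFind = 'your embedded survey text';

fetch(window.location.href)
  .then(function (response) { return response.text(); })
  .then(function (rawHtml) {
    // Found in the raw HTML: a crawler that doesn't run JavaScript can see it.
    console.log('In raw HTML source: ' + (rawHtml.indexOf(textToFind) !== -1));
    // Found only in the rendered DOM: it was injected by script after the page loaded.
    console.log('In rendered DOM: ' + (document.body.innerText.indexOf(textToFind) !== -1));
  });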
-
It's easier to analyze if you have an example URL. These can be coded in many different ways, and a slight change can make a difference.
-
What language is the embedded survey's code written in?
Related Questions
-
Does Google ignore content styled with 'display:none'?
Do you know if an H1 within a div that has a 'display: none' style applied will still be crawled and evaluated by Google? We have that situation on this page on line 136: view-source:https://www.junk-king.com/services/items-we-take/foreclosure-cleanouts Of course we also have an H1 up at the top of the page and are concerned that the second one will cause interference with our SEO efforts. I've seen conflicting and inconclusive information online - not sure. Thanks for any help.
Intermediate & Advanced SEO
-
Can Google Bot View Links on a Wix Page?
Hi, The way Wix is configured, you can't see any of the on-page links within the source code. Does anyone know if Google's bots still count the links on this page? Here is the page in question: https://www.ncresourcecenter.org/business-directory If you do think Google counts these links, can you please send me a URL fetch that proves the links are crawlable? Thank you SO much for your help.
Intermediate & Advanced SEO
-
Is it possible to submit an XML sitemap to Google without using Google Search Console?
We have a client that will not grant us access to their Google Search Console (don't ask us why). Is there any way to submit an XML sitemap to Google without using GSC? Thanks
Intermediate & Advanced SEO
-
Medical / Health Content Authority - Content Mix Question
Greetings, I have an interesting challenge for you. Well, I suppose "interesting" is an understatement, but here goes. Our company is a women's health site. However, over the years our content mix has grown to nearly 50/50 between unique health / medical content and general lifestyle/DIY/well being content (non-health). Basically, there is a "great divide" between health and non-health content. As you can imagine, this has put a serious damper on gaining ground with our medical / health organic traffic. It's my understanding that Google does not see us as an authority site with regard to medical / health content since we "have two faces" in the eyes of Google. My recommendation is to create a new domain and separate the content entirely so that one domain is focused exclusively on health / medical while the other focuses on general lifestyle/DIY/well being. Because health / medical pages undergo an additional level of scrutiny per Google - YMYL pages - it seems to me the only way to make serious ground in this hyper-competitive vertical is to be laser targeted with our health/medical content. I see no other way. Am I thinking clearly here, or have I totally gone insane? Thanks in advance for any reply. Kind regards, Eric
Intermediate & Advanced SEO
-
Removing duplicate content
Due to URL changes and parameters on our ecommerce sites, we have a massive amount of duplicate pages indexed by google, sometimes up to 5 duplicate pages with different URLs.
1. We've instituted canonical tags site wide.
2. We are using the parameters function in Webmaster Tools.
3. We are using 301 redirects on all of the obsolete URLs.
4. I have had many of the pages fetched so that Google can see and index the 301s and canonicals.
5. I created HTML sitemaps with the duplicate URLs, and had Google fetch and index the sitemap so that the dupes would get crawled and deindexed.
None of these seems to be terribly effective. Google is indexing pages with parameters in spite of the parameter (clicksource) being called out in GWT. Pages with obsolete URLs are indexed in spite of them having 301 redirects. Google also appears to be ignoring many of our canonical tags as well, despite the pages being identical. Any ideas on how to clean up the mess?
Intermediate & Advanced SEO
-
Google crawling different content--ever ok?
Here are a couple of scenarios I'm encountering where Google will crawl different content than my users see on their initial visit to the site--and which I think should be ok. Of course, it is normally NOT ok; I'm here to find out if Google is flexible enough to allow these situations:
1. My mobile-friendly site has users select a city, and then it displays the location options div, which includes an explanation of why they may want to have the program use their GPS location. The user can choose GPS, the entire city, a zip code, or a suburb of the city, which then goes to the link chosen. OTOH, it is programmed so that if it is Googlebot, it doesn't get a meaningless 'choose further' page; rather, the crawler sees the page of results for the entire city (as you would expect from the URL). So basically the program defaults to the entire-city results for Googlebot, but the user is first given the ability to choose GPS.
2. A user comes to mysite.com/gps-loc/city/results. The site, seeing the literal words 'gps-loc' in the URL, goes out and fetches the GPS for his location and returns results dependent on his location. If Googlebot comes to that URL, there is no way the program will return the same results, because the program wouldn't be able to get the same longitude and latitude as that user.
So, what do you think? Are these scenarios a concern for getting penalized by Google? Thanks, Ted
Intermediate & Advanced SEO
-
How to get content to index faster in Google.....pubsubhubbub?
I'm curious to know what tools others are using to get their content to index faster (other than an HTML sitemap, pingomatic, twitter, etc.). Would installing the WordPress pubsubhubbub plugin help even though it uses pingomatic? http://wordpress.org/extend/plugins/pubsubhubbub/
Intermediate & Advanced SEO
-
Can Google Read Text in Carousel
So what is the best practice for getting Google to read text that populates via jQuery in a carousel? If the text is originally display:none, is Google going to be able to crawl it? Are there any limits to what Google can crawl when it comes to JavaScript and text? Or is it always better just to hard-code the text in the page source?
Intermediate & Advanced SEO