JavaScript late-loaded content not read by Googlebot
-
Hi,
We have a page with some good "keyword" content (a user-supplied comment widget), but a design choice was made previously to late-load it via JavaScript. This was done to improve performance, and the widget's overall functionality relies on JavaScript. Unfortunately, since the content is loaded via JS, it isn't read by Googlebot, so we get no SEO value from it.
I've read Google doesn't weigh <noscript> content as much as regular content. Is this true? One option is just to load some of the content via <noscript> tags; I want to make sure Google still reads that content.

Another option is to load some of the content as plain HTML when the page loads. If JavaScript is enabled, we'd hide this "read-only" version via CSS and display the more dynamic, user-friendly version instead. Would changing the display based on whether JavaScript is enabled be deemed cloaking? Non-JS users would see the same thing either way, and since this gives them a way to use some of the widget's functionality, it's an overall net gain for those users too.

In the end, I want Google to read the content and am trying to figure out the best way to do so.

Thanks,

Nic
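For illustration, a minimal sketch of the <noscript> option; the markup, IDs, and class names are hypothetical placeholders, not the actual widget:

    <!-- Dynamic widget container, populated by JavaScript after page load -->
    <div id="comments-widget"></div>

    <!-- Fallback rendered only by non-JS user agents; served in the
         initial HTML so crawlers that skip scripts can still read it -->
    <noscript>
      <div class="comments-static">
        <p>User comment text rendered server-side...</p>
      </div>
    </noscript>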
-
If the content is loaded too late, you're right, Googlebot may not grab it. However, Google is getting better and better at indexing AJAX content that's loaded after the fact. On one of the sites I work on, we really didn't want to go through the whole process of serving up an HTML snapshot to Googlebot (outlined at http://code.google.com/web/ajaxcrawling/). About a month ago, I did a search in Google based on the AJAX content, and it returned the page, meaning Google is finding and indexing that AJAX content. They're indexing comments now as well (see http://www.searchenginejournal.com/google-indexing-facebook-comments/35594/), including Disqus and Facebook comments. What kind of comments widget are you loading that Google can't get at? Maybe they'll be able to index it soon.
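For context, the HTML-snapshot scheme that guide describes has the server return pre-rendered HTML whenever Googlebot requests a URL carrying the _escaped_fragment_ query parameter. A minimal sketch in Node/Express, where the route, file name, and renderStaticSnapshot helper are hypothetical placeholders:

    const express = require('express');
    const path = require('path');
    const app = express();

    // Hypothetical stand-in for whatever renders the comments server-side.
    function renderStaticSnapshot() {
      return '<html><body><div id="comments">server-rendered comments...</div></body></html>';
    }

    app.get('/comments-page', (req, res) => {
      // Googlebot rewrites a #! URL into one with ?_escaped_fragment_=...
      if (req.query._escaped_fragment_ !== undefined) {
        res.send(renderStaticSnapshot()); // pre-rendered HTML snapshot
      } else {
        // Normal visitors get the page that late-loads comments via JS.
        res.sendFile(path.join(__dirname, 'comments-page.html'));
      }
    });

    app.listen(3000);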
I would guess that Google devalues <noscript> text, as almost everyone has JavaScript enabled; otherwise, everyone would be keyword stuffing their <noscript> tags.

The option you outlined sounds like it could work, if you're just taking the content from the JavaScript widget and loading it in the HTML when the user doesn't have JavaScript enabled. Google's AJAX crawling guide actually suggests serving Googlebot a static page instead of the page with the AJAX content, which seems much closer to cloaking than the option you're suggesting.
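A rough sketch of that progressive-enhancement approach, with hypothetical IDs and a placeholder for the widget loader:

    <!-- Crawlable comment content served in the initial HTML for everyone -->
    <div id="comments-static">
      <p>User comment text rendered server-side...</p>
    </div>

    <!-- Dynamic widget container, hidden until JavaScript takes over -->
    <div id="comments-widget" style="display: none;"></div>

    <script>
      // Runs only when JavaScript is enabled: swap the static version
      // for the richer widget. Non-JS users and crawlers keep the HTML above.
      document.getElementById('comments-static').style.display = 'none';
      document.getElementById('comments-widget').style.display = 'block';
      // loadCommentsWidget(); // placeholder for the late-loading widget code
    </script>

Since both versions carry the same underlying content, everyone sees substantially the same thing, which is what keeps this approach on the right side of the cloaking line.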
Related Questions
-
No JavaScript, No Content?
Hello Mozers! 🙂 I have a question for you: I am working on a site, and while doing an audit I disabled JavaScript via the Web Developer plugin for Chrome. The result is that instead of seeing the page content, I see the typical "loading circle" and nothing else. I imagine this is not a good thing, but what does it imply technically from a crawler's perspective? Thanks
Technical SEO | Pipistrella
-
Issue with duplicate content
Hello guys, I have a question about duplicate content. Recently I noticed that Moz's system reports a lot of duplicate content on one of my sites. I'm a little confused about what I should do, because this content is created automatically. All the duplicate content comes from a subdomain of my site where we share cool images with people. This subdomain actually points to our Tumblr blog, where people re-blog our posts and images a lot. I'm really confused about how all this duplicate content is created and what I should do to prevent it. Please tell me whether I need to "noindex" or "nofollow" that subdomain, or suggest something better to resolve the issue. Thank you!
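For reference, a noindex on that subdomain would look like the sketch below, assuming the Tumblr theme's <head> can be edited:

    <!-- Added to the Tumblr theme's <head>: asks engines not to index
         these pages while still following the links on them -->
    <meta name="robots" content="noindex, follow">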
Technical SEO | odmsoft
-
Duplicate Content on WordPress.com
Hi Mozers! I have a client with a blog on WordPress.com: http://newsfromtshirts.wordpress.com/. It just had a ranking drop because of a new Panda update, and I know it's a duplicate content problem. There are 3,900 duplicate pages, basically because there is no use of noindex or canonical tags, so archive and category pages are fully indexed by Google. If I could install my usual SEO plugin, this would be a piece of cake, but since WordPress.com is a closed environment, I can't. How can I put a noindex on all category, archive, and author pages on WordPress.com? I think this could be done by writing a nice robots.txt, but I am not sure about the syntax I should use to achieve that. Thank you very much, DoMiSoL Rossini
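For what it's worth, robots.txt syntax for blocking those sections would look like the sketch below. The URL patterns are assumptions based on WordPress's usual permalink structure, a hosted WordPress.com blog may not let you edit robots.txt at all, and blocking crawling this way is not the same as a noindex:

    User-agent: *
    # Category, tag, and author archives (patterns assume default permalinks)
    Disallow: /category/
    Disallow: /tag/
    Disallow: /author/
    # Date-based archives, one rule per year
    Disallow: /2013/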
Technical SEO | DoMiSoL
-
Duplicate Content on SEO Pages
I'm trying to create a bunch of content pages, and I want to know if the shortcut I took is going to penalize me for duplicate content. Some background: we are an airport ground transportation search engine (www.mozio.com), and we constructed several airport transportation pages with the providers in a particular area listed. The problem is, the same providers often serve multiple airports in a region. For instance, NYAS serves both JFK and LGA, and SuperShuttle serves ~200 airports, which means every airport's page has the SuperShuttle box. All the provider info is stored in a database with tags for the airports each provider serves, and we create the pages dynamically. Good examples follow: http://www.mozio.com/lga_airport_transportation/ http://www.mozio.com/jfk_airport_transportation/ http://www.mozio.com/ewr_airport_transportation/ All three of those pages have a lot in common. I'm not sure exactly when, but the pages started out working decently, and as I added more and more of them, their overall efficacy went down. Does what I've done qualify as "duplicate content", and would I be better off getting rid of some of the pages or somehow consolidating the info into a master page? Thanks!
Technical SEO | moziodavid
-
Category URL Duplicate Content
I've recently been hired as the web developer for a company with an existing website. Their web architecture includes category names in product URLs, and of course we have many products in multiple categories, thus generating duplicate content. According to the SEOmoz Site Crawl, we have roughly 1,600 pages of duplicate content, primarily from this issue, out of roughly 3,600 pages crawled. My questions are: 1. Fixing this for the long term will obviously mean restructuring the site's URLs. Is this worthwhile, and what would the ramifications of such a move be? 2. How can I determine the extent of the effects of this duplicated content? 3. Is it possible that the best course of action is to do nothing? The site has many, many other issues, and I'm not sure how highly to prioritize this problem. In addition, our IT manager is highly doubtful this is causing an SEO issue, and I need to be able to back up any action I request. I feel I will need to strongly justify any risks this level of site change could introduce. Thanks in advance, and please let me know if any more information is needed.
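One common interim fix, short of restructuring the URLs, is a rel="canonical" tag pointing every category-path variant at a single preferred product URL. A minimal sketch with hypothetical URLs:

    <!-- Placed in the <head> of both /widgets/blue-widget and
         /sale/blue-widget so engines consolidate them to one URL -->
    <link rel="canonical" href="http://www.example.com/widgets/blue-widget">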
Technical SEO | MagnetsUSA
-
Duplicate Content
Many of the pages on my site are similar in structure and content, but not exactly the same. How much of a page's content needs to be unique for Google not to consider it duplicate? If a page is something like 50% unique, would it be preferable to choose one page as the canonical instead of keeping both as separate pages?
Technical SEO | theLotter
-
Dismal content rankings
Hi, I realize this is a very broad question, but I am going to ask it anyway in the hope that someone might have some insight. I have created a great deal of unique content for the site http://www.healthchoices.ca. You can select a video category from the top dropdown, then click on a video beside the provider box to view it. The articles I've written are accessible via the View Article tab under each video. I have worked hard to make the articles informative, and they are all unique, with quotes from expert physicians. Yet even for uncommon health conditions that don't have a lot of competition, we don't appear in the results. Our search rankings are quite dismal for the amount of content we have. I'm checking to see if anyone can point me in the right direction, or if anything jumps out. Thanks, Erin
Technical SEO | erinhealthchoices
-
About duplicate content
Hi, I'm new around here, but I'm having a problem with my website. Using the SEOmoz tools I ran a campaign for my site, and the results show many duplicate content errors, for example: http://www.mysite/blue/ and http://www.mysite/blue/index.html. So my question is: what is the best way to resolve this problem, a 301 or the rel canonical tag? And which URL should be treated as the main one? Thanks for your help.
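For the /index.html case specifically, a 301 is usually the cleaner choice, since both URLs are the same resource. A minimal Apache .htaccess sketch, assuming mod_rewrite is available (test on a staging copy first):

    RewriteEngine On
    # Only act on real client requests for .../index.html, not internal
    # DirectoryIndex subrequests, to avoid a redirect loop
    RewriteCond %{THE_REQUEST} ^[A-Z]+\ /(.*)index\.html(\?.*)?\ HTTP
    # Redirect e.g. /blue/index.html to /blue/ with a 301
    RewriteRule ^(.*)index\.html$ /$1 [R=301,L]

With the redirect in place, the directory URL becomes the single version engines see, so no canonical tag is needed for this particular duplicate.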
Technical SEO | NorbertoMM