JavaScript late-loaded content not read by Googlebot
-
Hi,
We have a page with some good "keyword" content (a user-supplied comment widget), but a design choice was made previously to late-load it via JavaScript. This was done to improve performance, and the overall functionality relies on JavaScript. Unfortunately, since the content is loaded via JS, Googlebot doesn't read it, so we get no SEO value from it.
I've read that Google doesn't weigh <noscript> content as much as regular content. Is this true? One option is simply to load some of the content via <noscript> tags; I just want to make sure Google still reads this content.

Another option is to load some of the content as plain HTML when the page loads. If JavaScript is enabled, we'd hide this "read-only" version via CSS and display the more dynamic, user-friendly version instead. Would changing the display based on whether JS is enabled be deemed cloaking? Non-JS users would see the same thing as Googlebot (and this gives them a way to see some of the widget's functionality, so it's an overall net gain for those users too).

In the end, I want Google to read the content, and I'm trying to figure out the best way to do so.

Thanks,

Nic
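To make the second option concrete: the idea is to render the same comment data into static markup at page-generation time, then let the JavaScript widget hide or replace that block when it loads. A minimal sketch of the server-side rendering step (function names like `renderCommentsHtml` are just illustrative, not from any particular library):

```javascript
// Render user comments into a static HTML block that search engines can read.
// When JavaScript is enabled, the dynamic widget would hide or replace this
// markup (e.g. by toggling a CSS class on #comments-static).
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function renderCommentsHtml(comments) {
  const items = comments
    .map(c => '<li><strong>' + escapeHtml(c.author) + ':</strong> ' +
              escapeHtml(c.text) + '</li>')
    .join('\n');
  return '<div id="comments-static">\n<ul>\n' + items + '\n</ul>\n</div>';
}

const html = renderCommentsHtml([
  { author: 'Nic', text: 'Great post about <noscript> & SEO!' },
]);
console.log(html);
```

Since both versions come from the same underlying comment data, every user agent sees equivalent content, which is the key distinction from cloaking.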
-
If the content is loaded too late, you're right, Googlebot may not grab it. However, Google is getting better and better at indexing AJAX content that's loaded after the fact. On one of the sites I work on, we really didn't want to go through the whole process of serving up an HTML snapshot to Googlebot (outlined at http://code.google.com/web/ajaxcrawling/). About a month ago, I did a search in Google based on the AJAX content, and it returned the page, meaning Google is finding that AJAX content and indexing it! They're indexing comments now as well (see http://www.searchenginejournal.com/google-indexing-facebook-comments/35594/), like Disqus and Facebook comments. What kind of comments widget are you loading that Google can't get at? Maybe they'll be able to index them soon?
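For reference, the scheme in that Google doc works by mapping "hash-bang" URLs to a special query parameter: when the crawler sees a URL like example.com/page#!comments=2, it instead requests example.com/page?_escaped_fragment_=comments=2 and expects an HTML snapshot back. A rough sketch of that mapping (this only escapes the common special characters; treat it as an approximation of the spec, not a complete implementation):

```javascript
// Convert a "pretty" AJAX URL (with #!) into the "ugly" URL a crawler
// fetches under Google's AJAX crawling scheme. Special characters in the
// fragment are percent-escaped (this sketch handles the common ones).
function toEscapedFragmentUrl(prettyUrl) {
  const idx = prettyUrl.indexOf('#!');
  if (idx === -1) return prettyUrl; // no hash-bang, nothing to map
  const base = prettyUrl.slice(0, idx);
  const fragment = prettyUrl.slice(idx + 2)
    .replace(/%/g, '%25')
    .replace(/#/g, '%23')
    .replace(/&/g, '%26')
    .replace(/\+/g, '%2B');
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + fragment;
}

console.log(toEscapedFragmentUrl('http://www.example.com/page#!comments=2'));
// http://www.example.com/page?_escaped_fragment_=comments=2
```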
I would guess that Google would devalue <noscript> text, as almost everyone has JavaScript enabled; otherwise, everyone would be keyword stuffing their <noscript> tags.

The option you outlined sounds like it could work, if you're just taking the same content from JavaScript and loading it in the HTML when the user doesn't have JavaScript enabled. Google's AJAX crawling guide actually suggests serving Googlebot a static page instead of the page with AJAX content, which seems much closer to cloaking than the option you're suggesting.
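To be clear on what that guide proposes: the server inspects incoming requests for the _escaped_fragment_ parameter and, when it's present, returns a pre-rendered HTML snapshot instead of the empty JS shell. A hedged sketch of that decision using the standard URL API (the snapshot lookup itself is out of scope here):

```javascript
// Decide whether a request follows Google's AJAX crawling scheme: such
// requests carry the _escaped_fragment_ query parameter and should receive
// a static HTML snapshot instead of the JavaScript-driven page.
function wantsSnapshot(requestUrl) {
  const parsed = new URL(requestUrl, 'http://localhost'); // base handles relative URLs
  return parsed.searchParams.has('_escaped_fragment_');
}

console.log(wantsSnapshot('/page?_escaped_fragment_=comments=2')); // true
console.log(wantsSnapshot('/page'));                               // false
```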