What is the best method for getting pure JavaScript/AJAX pages indexed by Google for SEO?
-
I am in the process of researching this further, and wanted to share some of what I have found below. Anyone who can confirm or deny these assumptions or add some insight would be appreciated.
Option 1:
If you're starting from scratch, a good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses. You can use Hijax to help AJAX and HTML links coexist, and meta nofollow tags (and similar) to prevent crawlers from accessing the JavaScript versions of the pages.
Currently, webmasters create a "parallel universe" of content. Users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript-enabled browsers, as well as crawlers, see content that is static and created offline. In current practice, "progressive enhancement" in the form of Hijax links is often used.
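The Hijax pattern described above can be sketched roughly like this. This is an illustrative example, not code from the post: the `toAjaxUrl` helper and the `data-hijax` attribute are hypothetical names. The idea is simply that every link carries a real, crawlable `href`, which JavaScript rewrites into a hash-based AJAX navigation at click time:

```javascript
// Illustrative sketch of the Hijax pattern (helper names are hypothetical).
// Markup keeps a plain, crawlable href, e.g.:
//   <a href="/listen-to-music?tab=top-free" data-hijax>Top free</a>
// Googlebot follows the static href; JS-enabled browsers navigate via AJAX.

// Convert a static URL into its hash-fragment AJAX equivalent,
// e.g. "/listen-to-music?tab=top-free" -> "/listen-to-music#tab=top-free".
function toAjaxUrl(staticUrl) {
  return staticUrl.replace("?", "#");
}

// Browser-side wiring (requires a DOM, shown as comments for context only):
// document.addEventListener("click", function (e) {
//   var link = e.target.closest("a[data-hijax]");
//   if (!link) return;
//   e.preventDefault();                       // skip the full page load
//   window.location.hash = toAjaxUrl(link.getAttribute("href")).split("#")[1];
//   // ...then fetch and render the dynamic content for that fragment.
// });
```

Crawlers without JavaScript never run the click handler, so they only ever see the static `href` targets.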
Option 2:
In order to make your AJAX application crawlable, your site needs to abide by a new agreement. This agreement rests on the following:
- The site adopts the AJAX crawling scheme.
- For each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example www.example.com/index.html#key=value, where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed.
- The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results.
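On the server side of the agreement above, the published scheme has the crawler signal a snapshot request by moving the hash fragment into an `_escaped_fragment_` query parameter. Detecting it can be sketched as follows (the function name is my own, not from the post):

```javascript
// Sketch: detect a crawler's snapshot request under Google's (now deprecated)
// AJAX crawling scheme. The crawler moves the hash fragment into an
// _escaped_fragment_ query parameter; seeing it means "serve the pre-rendered
// HTML snapshot, not the JavaScript-driven page".
function escapedFragmentOf(requestUrl) {
  var m = requestUrl.match(/[?&]_escaped_fragment_=([^&]*)/);
  return m ? decodeURIComponent(m[1]) : null; // null = serve the normal page
}
```

A request for `/index.html?_escaped_fragment_=key%3Dvalue` yields `"key=value"`, which the server can use to render the matching snapshot.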
In order to make this work, the application must use a specific syntax in the AJAX URLs (let's call them "pretty URLs;" you'll see why in the following sections). The search engine crawler will temporarily modify these "pretty URLs" into "ugly URLs" and request those from your server. This request of an "ugly URL" indicates to the server that it should not return the regular web page it would give to a browser, but instead an HTML snapshot. When the crawler has obtained the content for the modified ugly URL, it indexes its content, then displays the original pretty URL in the search results. In other words, end users will always see the pretty URL containing a hash fragment. The following diagram summarizes the agreement:
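The pretty-to-ugly rewrite the crawler performs can be sketched like this. Two hedges: the published scheme requires the fragment to start with `#!` (not a bare `#`), and the fragment value is URL-encoded when moved into the query string; the function name is mine, for illustration:

```javascript
// Sketch of the crawler's "pretty URL" -> "ugly URL" rewrite from the
// (now deprecated) AJAX crawling scheme. Pretty URLs use a #! fragment;
// the crawler moves it into an _escaped_fragment_ query parameter.
function prettyToUgly(prettyUrl) {
  var i = prettyUrl.indexOf("#!");
  if (i === -1) return prettyUrl; // not a crawling-scheme URL
  var base = prettyUrl.slice(0, i);
  var fragment = prettyUrl.slice(i + 2);
  var sep = base.indexOf("?") === -1 ? "?" : "&";
  return base + sep + "_escaped_fragment_=" + encodeURIComponent(fragment);
}
```

For example, `http://www.example.com/index.html#!key=value` maps to `http://www.example.com/index.html?_escaped_fragment_=key%3Dvalue`; end users only ever see the pretty form.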
See more in the Getting Started Guide. Make sure you avoid this:
http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Here are a few example pages that are mostly JavaScript/AJAX:
http://catchfree.com/listen-to-music#&tab=top-free-apps-tab
https://www.pivotaltracker.com/public_projects
This is what the spiders see: view-source:http://catchfree.com/listen-to-music#&tab=top-free-apps-tab
These are the best resources I have found regarding Google and JavaScript:
http://code.google.com/web/ajaxcrawling/ - Step-by-step instructions.
http://www.google.com/support/webmasters/bin/answer.py?answer=81766
http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
Some additional resources:
http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
http://www.google.com/support/webmasters/bin/answer.py?answer=35769