How to make AJAX content crawlable from a specific section of a webpage?
-
Content is located in a specific section of the webpage and is being loaded via AJAX.
-
Thanks Paddy! We'll definitely try these solutions.
-
Hi there,
There are plenty of really good resources online that cover this area, so I'd like to point you towards them rather than copy and paste their guidelines here!
Google has a good guide here with lots of visuals on how they crawl AJAX -
https://developers.google.com/webmasters/ajax-crawling/docs/getting-started
They also have a short video here covering some of the basics of Google crawling AJAX and JavaScript:
https://www.youtube.com/watch?v=_6mtiwQ3nvw
You should also become familiar with pushState, which is covered in lots of detail, with an example implementation, in this blog post:
http://moz.com/blog/create-crawlable-link-friendly-ajax-websites-using-pushstate
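To make the idea concrete, here is a minimal, hypothetical sketch of pairing an AJAX load with pushState so each content state gets a real URL (the endpoint path and #content element are placeholders, not from the post above):

```typescript
// A minimal sketch: load an HTML fragment via AJAX and record the new state
// with pushState so each piece of content gets a real, crawlable URL.
// The endpoint path and #content element are hypothetical.
async function loadSection(path: string): Promise<void> {
  const response = await fetch(path); // e.g. "/sections/team"
  const html = await response.text();
  document.querySelector("#content")!.innerHTML = html;
  // Update the address bar without a full page reload.
  history.pushState({ path }, "", path);
}

// Re-render on back/forward so the URL and content stay in sync.
window.addEventListener("popstate", (event) => {
  const state = event.state as { path?: string } | null;
  if (state?.path) {
    void loadSection(state.path);
  }
});
```

The point, which the Moz post above explains in full, is that each AJAX state ends up at a clean URL that Googlebot can request directly, rather than at a #! fragment.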
The guys at Builtvisible have also put together a few good blog posts on this topic which are worth a read:
http://builtvisible.com/javascript-framework-seo/
http://builtvisible.com/on-infinite-scroll-pushstate/
Essentially, you need to make sure that Googlebot is able to render your content as you intended and that it looks the same to them as it does to users. You can often test how well they can render your content by checking the cached version of your page or by using the Fetch as Google feature in Google Webmaster Tools.
I hope that helps!
Paddy
-
Hi,
Making content that is loaded by AJAX crawlable by Google involves serving Google a static HTML snapshot of that content. You should make sure that the HTML snapshot is an exact copy of what visitors are served through AJAX.
Here you go for more information:
https://support.google.com/webmasters/answer/174992?hl=en
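To make that concrete, here is a minimal, hypothetical sketch of the snapshot-serving side (a Node-style handler; the snapshot store, URL scheme and port are made up for illustration):

```typescript
import * as http from "http";
import * as url from "url";

// Hypothetical pre-rendered snapshots, keyed by the #! fragment value.
// example.com/#!news is requested by Googlebot as
// example.com/?_escaped_fragment_=news
const snapshots: Record<string, string> = {
  news: "<html><body><h1>News</h1><p>Static copy of the AJAX content.</p></body></html>",
};

http.createServer((req, res) => {
  const query = url.parse(req.url ?? "/", true).query;
  const fragment = query["_escaped_fragment_"];
  res.writeHead(200, { "Content-Type": "text/html" });
  if (typeof fragment === "string" && snapshots[fragment]) {
    // Serve the static snapshot: it must be an exact copy of what
    // visitors see once the AJAX content has loaded.
    res.end(snapshots[fragment]);
  } else {
    // Normal page: content is filled in client-side via AJAX.
    res.end('<html><body><div id="content"><!-- AJAX loads here --></div></body></html>');
  }
}).listen(3000);
```

Keeping the snapshot identical to the AJAX-rendered page is the important part; serving Googlebot something different from what visitors see risks being treated as cloaking.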
Best regards,
Devanur Rafi
Related Questions
-
Possible duplicate content issue
Hi,

Here is a rather detailed overview of our problem; any feedback or suggestions are most welcome.

We currently have 6 sites targeting the various markets (countries) we operate in. All websites are on one WordPress install but are separate sites in a multisite network; content and structure are pretty much the same barring a few regional differences. The UK site has held a pretty strong position in search engines for the past few years.

Here is where we have the problem. Our strongest page (from an organic point of view) has dropped off the search results completely for Google.co.uk. We picked this up through a drop in search visibility in SEMrush, and confirmed it by looking at our organic landing page traffic in Google Analytics and Search Analytics in Search Console.

Here are a few of the assumptions we've made and things we've checked:

- Crawl or technical issues: nothing serious found
- Bad backlinks: no new spammy backlinks
- Geotargeting: this was fine for the UK site; however, the US site, a .com (not a ccTLD), was not set to the US (we suspect this to be the issue, but more below)
- On-site issues: nothing wrong here. The page was edited recently, which coincided with the drop in traffic (more below), but these changes did not impact things such as title, H1, URL or body content; we replaced some call-to-action blocks from a custom one to one that was built into the framework (div)
- Manual or algorithmic penalties: nothing reported by Search Console
- HTTPS change: we did transition over to HTTPS at the start of June. The sites are not too big (around 6K pages) and all redirects were put in place.

Here is what we suspect has happened: the HTTPS change triggered Google to re-crawl and reindex the whole site (we anticipated this). During this process, an edit was made to the key page, and through some technical fault the page title was changed to match the US version of the page. Because geotargeting was not turned on for the US site, Google filtered out the duplicate content page on the UK site, thereby dropping it off the index.

What further contributes to this theory is that a search of Google.co.uk returns the US version of the page. With country targeting on (i.e. only return pages from the UK), the UK version of the page is not returned. Also, a site: query from Google.co.uk DOES return the UK version of that page, but with the old US title.

All these factors lead me to believe that it's a duplicate content filter issue due to incorrect geotargeting. What does surprise me is that the .co.uk site has much more search equity than the US site, so it was odd that Google chose to filter out the UK version of the page.

What we have done to counter this is as follows:

- Turned on geotargeting for the US site
- Ensured that the title of the UK page says UK and not US
- Edited both pages to trigger a last-modified date, so the two pages share fewer similarities
- Recreated a sitemap and resubmitted it to Google
- Re-crawled and requested a re-index of the whole site
- Fixed a few of the smaller issues

If our theory is right and our actions do help, I believe it's now a waiting game for Google to re-crawl and reindex. Unfortunately, Search Console is still only showing data from a few days ago, so it's hard to tell if there have been any changes in the index. I am happy to wait it out, but you can appreciate that some of the senior management are very nervous given the impact of losing this page and are keen to get a second opinion on the matter.

Does the Moz Community have any further ideas or insights on how we can speed up the indexing of the site?

Kind regards,
Jason
Intermediate & Advanced SEO | Clickmetrics
-
Translated Content on Country Domains
Hi,

We have blogs set up in each of our markets, for example http://blog.telefleurs.fr, http://blog.euroflorist.nl and http://blog.euroflorist.be/nl. Each blog is localized correctly, so FR has fr-FR, NL has nl-NL and BE has nl-BE and fr-BE. All our content is created or translated by our Content Managers. The question is: is it safe for us to use a piece of content on Telefleurs.fr and the French-translated Euroflorist.be/fr, or Dutch content on Euroflorist.nl and Euroflorist.be/nl? We want to avoid canonicalising, as neither site should take preference. Is there a solution I've missed until now?

Thanks,
Sam
Intermediate & Advanced SEO | seoeuroflorist
-
Content Publishing Volume/Timing
I am working with a company that has a bi-monthly print magazine with several years' worth of back issues. We're working on building a digital platform, and the majority of articles from the print mag - tips, how-tos, reviews, recipes, interviews, etc. - will be published online. Much of the content is not date-sensitive, except for the occasional news article. Some content is semi-date-sensitive, such as articles focusing on seasonality (e.g. winter activities vs. summer activities).

My concern is whether, once we prepare to go live, we should ensure that ALL historical content is published at once, and if so, whether back-dates should be applied to each content piece (even if dating isn't relevant), or whether we should have a strategy in place for creating a publishing schedule and releasing content over time - albeit content that is older but isn't necessarily time-sensitive (e.g. a drink recipe). Going forward, all newly-created content will be published around the print issue release.

Are there pitfalls I should avoid in terms of pushing out so much back content at once?
Intermediate & Advanced SEO | andrewkissel
-
Content with Read More..?
How does Google see content that's static on the page versus content behind a "see more" or "read more" tag, where the content collapses and expands on a mouse click? Assume the complete content is readable via the source code view and crawlable by spiders.
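For reference, a minimal, hypothetical sketch of the pattern described here, where the full text ships in the initial HTML (so it is visible in the source and to spiders) and the click only toggles visibility:

```typescript
// The full text is present in the initial HTML; we only toggle visibility,
// so spiders that read the source still see the complete content.
// Element IDs here are hypothetical placeholders.
const extra = document.querySelector<HTMLElement>("#extra-content")!;
const toggle = document.querySelector<HTMLElement>("#read-more")!;

extra.style.display = "none"; // collapsed by default for users

toggle.addEventListener("click", () => {
  const collapsed = extra.style.display === "none";
  extra.style.display = collapsed ? "block" : "none";
  toggle.textContent = collapsed ? "Read less" : "Read more";
});
```

Content hidden this way is still in the crawled source; content that is only fetched from the server after the click is a different, riskier case.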
Intermediate & Advanced SEO | welcomecure
-
When does it make sense to make a meta description longer than what's considered best practice?
I've seen all the length recommendations and understand the reasoning is that descriptions will be cut off in the search results, but I've also noticed that Google will "move" the meta description if the search term the user is using is in the cached version of the page. So I have a case where Google is indexing the pages but not caching the content (at least not yet). So we see the meta description just fine in the Google results, but we can't see the content cache when checking the Google cached version.

My question is: in this case, why would it be a bad idea to write a slightly lengthier (but still relevant) meta description, with the intent that one of the terms in that description could match the user's search terms and the description would "move" to highlight that term in the results?
Intermediate & Advanced SEO | navidash
-
4 websites with same content?
I have 4 websites (1 main, 3 duplicates) with the same content. Now I want to change the content on the duplicate websites, while the main website will keep the same content. Is there any problem with my thinking?
Intermediate & Advanced SEO | marknorman
-
Will Creating a Keyword specific Page to replace the Category Section page cause any harm to my website?
I am running a WordPress install for my blog and recently had 3 of my main keywords set as categories. I recently decided to create a static page for each keyword instead of having the category page showing all the posts within the category, and took it off the navigation bar. I read about setting the categories to noindex so the search engines can shine more importance on the new pages I created to replace the category pages. Can this have a negative effect on my rankings? http://junkcarsforcashnjcompany.com - junk car removal nj is showing the category section, so I placed the noindex on it. Will the search engines refresh the data and replace it with the new page I created?
Intermediate & Advanced SEO | junkcars
-
Ajax Content Indexed
I used the following guide to implement endless scroll: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started. Crawlers correctly read all the URLs, and a "site:" query shows me all the indexed URLs with #!key=value. I want only the first URL to be indexed; the other URLs should be crawled but not indexed, as if they had the robots meta tag "noindex, follow". How can I do this?
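One possible approach, sketched here under assumptions (the page=N URL scheme and port are made up, and this is not an official recipe from the guide): since Googlebot fetches the ?_escaped_fragment_= version of each #! URL, the server can attach an X-Robots-Tag header to every snapshot after the first page.

```typescript
import * as http from "http";
import * as url from "url";

// Sketch: serve snapshots for ?_escaped_fragment_=page=N requests and mark
// every page after the first as "noindex, follow" via the X-Robots-Tag
// header, which behaves like the equivalent robots meta tag.
http.createServer((req, res) => {
  const query = url.parse(req.url ?? "/", true).query;
  const fragment = query["_escaped_fragment_"];
  if (typeof fragment === "string") {
    const page = Number(new URLSearchParams(fragment).get("page") ?? "1");
    const headers: http.OutgoingHttpHeaders = { "Content-Type": "text/html" };
    if (page > 1) {
      // Crawlable but excluded from the index.
      headers["X-Robots-Tag"] = "noindex, follow";
    }
    res.writeHead(200, headers);
    res.end(`<html><body>Snapshot for page ${page}</body></html>`);
    return;
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end('<html><body><div id="feed"><!-- endless scroll --></div></body></html>');
}).listen(3000);
```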
Intermediate & Advanced SEO | wwmind