Auto-loading content via AJAX - best practices
-
We have an ecommerce website and I'm looking at replacing the pagination on our category pages with functionality that auto-loads the products as the user scrolls. There are a number of big websites that do this - MyFonts and Kickstarter are two that spring to mind.
Obviously if we are loading the content in via AJAX then search engine spiders aren't going to be able to crawl our categories in the same way they can now. I'm wondering what the best way to get around this is.
Some ideas that spring to mind are:
- detect the user agent and, if the visitor is a spider, show them the old-style pagination instead of the AJAX version
- make sure we submit an updated Google sitemap every day (I'm not sure if this is a reasonable substitute for Google being able to crawl our site properly; the kind of per-page entry I mean is sketched below)
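To illustrate, a minimal sitemap entry per paginated category URL might look like this (the URL and dates are placeholders only):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per category page we want crawled -->
  <url>
    <loc>http://www.example.com/category/widgets?page=2</loc>
    <lastmod>2012-06-01</lastmod>
    <changefreq>daily</changefreq>
  </url>
</urlset>
```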
Are there any best practices surrounding this approach to pagination? Surely the bigger sites that do this must have had to deal with these issues?
Any advice would be much appreciated!
-
Hi Paul,
Pagination is always a bit of a sticky area!
Firstly, I certainly wouldn't do any user-agent detection; you don't want to get busted for cloaking when you aren't even up to anything that naughty.
A nice way I've seen this handled (for WordPress sites, although the idea can work on any site) is with the WordPress Infinite Scroll plugin: http://wordpress.org/extend/plugins/infinite-scroll/
That basically leaves the site as it is for non-JavaScript browsers (so with pages 1, 2, 3, etc.), but if JavaScript is enabled (i.e. the visitor isn't a spider) it keeps loading the next page's content as the user scrolls. The same idea could be adapted to any pagination setup.
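Roughly, the pattern looks like this. This is just a sketch of the idea rather than the plugin's actual code, and the #products container, .pagination markup and URL structure are all made up for illustration:

```html
<!-- Plain HTML that spiders and non-JS browsers get: ordinary crawlable paging links -->
<div id="products">
  <!-- product listings for the current page -->
</div>
<nav class="pagination">
  <a class="next" href="/category/widgets?page=2">Next page</a>
</nav>

<script>
// Progressive enhancement: this only runs when JavaScript is available,
// so crawlers without JS still see the normal paginated links above.
document.addEventListener('scroll', function () {
  var nextLink = document.querySelector('.pagination a.next');
  if (!nextLink) return; // no more pages to load

  // When the user nears the bottom of the page, fetch the next page
  if (window.innerHeight + window.scrollY >= document.body.offsetHeight - 300) {
    var url = nextLink.href;
    nextLink.remove(); // prevent duplicate requests while this one is in flight

    fetch(url)
      .then(function (res) { return res.text(); })
      .then(function (html) {
        var doc = new DOMParser().parseFromString(html, 'text/html');
        // Append the next page's products to the current listing
        document.getElementById('products').insertAdjacentHTML(
          'beforeend', doc.getElementById('products').innerHTML);
        // Carry over the following page's "next" link, if there is one
        var newNext = doc.querySelector('.pagination a.next');
        if (newNext) {
          document.querySelector('.pagination').appendChild(newNext);
        }
      });
  }
});
</script>
```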
Tie this in with some rel="prev" and rel="next" markup (http://googlewebmastercentral.blogspot.co.uk/2011/09/pagination-with-relnext-and-relprev.html) and I think that is certainly one way to fix the problem.
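On page 2 of a paginated series, for example, the <head> would point at its neighbouring pages like this (URLs are placeholders again):

```html
<!-- In the <head> of http://www.example.com/category/widgets?page=2 -->
<link rel="prev" href="http://www.example.com/category/widgets?page=1">
<link rel="next" href="http://www.example.com/category/widgets?page=3">
```

(Page 1 would carry only rel="next", and the final page only rel="prev".)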
Another way could be to use similar markup pointing to a 'View All' page: http://googlewebmastercentral.blogspot.co.uk/2011/09/view-all-in-search-results.html
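With that approach, each paginated page declares the view-all version as its canonical, so Google consolidates signals on the view-all page. Something like this, with example URLs once more:

```html
<!-- In the <head> of each component page, e.g. ?page=2 -->
<link rel="canonical" href="http://www.example.com/category/widgets/view-all">
```

Google's post does note this only really makes sense if the view-all page loads quickly enough to be a decent landing page.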
Hope that helps!
Stuart