Is it OK to have Search Engines Skip Ajax Content Execution?
-
I recently added some ajax pages that automatically fill in small areas of my site when a page loads; that is, the user doesn't have to click anything. Therefore, when Google and Bing crawl the site, the ajax is executed too. However, my understanding is that this does not mean Google and Bing are also crawling the ajax content.
I would actually prefer that the content not be executed OR crawled by them. In the case of Bing, I would prefer that the content not even be executed: indications are that the program exits the ajax page for Bing because Bing isn't retaining the session variables that page uses, which makes me concerned that when that happens Bing may not be able to crawl even the main content. I don't know for sure, but ajax execution seems potentially risky for normal crawling in this case.
I would like to simply have my program skip the ajax execution for Google and Bing by recognizing them in the user agent and using an "if robot, skip ajax" approach. I assume I could put the ajax program in the robots.txt file, but that wouldn't keep Bing from executing it (and hitting the exit problem mentioned above). It would be simpler to just have them skip the ajax execution altogether.
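For illustration, here is a minimal sketch of that "if robot, skip ajax" idea done on the client side, assuming the fill-in call is triggered from JavaScript on page load. The endpoint URL, container id, and bot list below are placeholders I've made up rather than my real setup, and the same check could just as easily be done server-side against the User-Agent request header before the script is output at all.

```typescript
// Minimal sketch, not production code: the endpoint, container id, and bot list are assumptions.
const BOT_PATTERN = /googlebot|bingbot|msnbot/i;

function isKnownBot(userAgent: string): boolean {
  // True when the user agent identifies itself as Google or Bing.
  return BOT_PATTERN.test(userAgent);
}

async function fillSmallAreas(): Promise<void> {
  // Skip the ajax execution entirely for the recognised crawlers.
  if (isKnownBot(navigator.userAgent)) {
    return;
  }

  // Hypothetical endpoint; substitute whatever script currently fills the page areas.
  const response = await fetch("/ajax/fill-areas", { credentials: "same-origin" });
  if (!response.ok) {
    return; // fail quietly so the main page content is unaffected
  }

  const html = await response.text();
  const target = document.getElementById("auto-filled-area"); // assumed container id
  if (target) {
    target.innerHTML = html;
  }
}

document.addEventListener("DOMContentLoaded", () => {
  void fillSmallAreas();
});
```

This only skips crawlers that identify themselves in the user agent, which is the point here; it says nothing about whether doing so is safe, which is what I'm asking.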
Is that OK, or is there a chance the search engines will penalize my site if they find out (somehow) that I have different logic for them than for actual users? In the past this surely was not a concern, but I understand that Google is increasingly trying to behave like a browser, so it may increasingly have a problem with this approach.
Thoughts?
Related Questions
-
Is it a good strategy to link from older content that was timely at one point to newer content that we would prefer to guide traffic and value to?
Hi All, I've been working for a website/publisher that produces good content and has been around for a long time, but it has recently been burdened by a high level of repetitious production and a high overall volume, with pages that don't gather as much traffic as desired. One fear of mine is that each piece, when published, has no links pointing to it outside of the homepage or syndicated referrals. The pages do, however, have a lot of (perhaps too many) outbound internal links away from them. Would it be a good practice, especially for new content with a longer shelf life, to go back to older content and place links pointing to the new piece? I would hope this would boost traffic via internal recirculation and Page Authority, with the added benefit of anchor text.
Intermediate & Advanced SEO | | ajranzato91 -
How to make AJAX content crawlable from a specific section of a webpage?
Content located in a specific section of the webpage is being loaded via AJAX.
Intermediate & Advanced SEO | | zpm20140 -
Site: inurl: Search
I have a site that allows for multiple filter options, and some of these URLs have been indexed. I am in the process of adding the noindex, nofollow meta tag to these pages, but I want to have an idea of how many of these URLs have been indexed so I can monitor when they have been re-crawled and dropped. The structure for these URLs is: http://www.example.co.uk/category/women/shopby/brand1--brand2.html The unique identifier for the multiple-filter URLs is --, however I've tried using site:example.co.uk inurl:-- and this doesn't seem to work. I have also tried using regex but still no success. I was wondering if there is a way around this so I can get a rough idea of how many of these URLs have been indexed? Thanks
Intermediate & Advanced SEO | | GrappleAgency0 -
Scraping / Duplicate Content Question
Hi All, my understanding is that the way to protect content such as a feature-rich article is to establish authorship by linking to your Google+ account. My question: you have created a webpage that is informative but not substantial enough to be an article, so there is no need to create authorship in Google+. If a competitor comes along and steals this content word for word, or something similar, and creates their own Google+ page, can you be penalised? Is there any way to protect yourself without authorship and Google+? Regards, Mark
Intermediate & Advanced SEO | | Mark_Ch0 -
Do Local Search Efforts (Citations, NAP, Reviews) have an impact on traditional organic search listings (the ones without the A, B, C mapping icons)?
Do citations, NAP, reviews, and other local search efforts impact traditional SEO listings? Can someone elaborate?
Intermediate & Advanced SEO | | JQC0 -
Content linking?
If you have links in the left-hand navigation of the website and content at the bottom of the page, and both link to the same page with different anchor text (or the same anchor text), would that help the target page (as it is surrounded by similar text), or is only the first link counted and that is it?
Intermediate & Advanced SEO | | BobAnderson0 -
Duplicate Content Help
The SEOmoz tool reports duplicate content on both of these URLs: http://www.mydomain.com/football-teams/ and http://www.mydomain.com/football-teams/index.php. I want to use http://www.mydomain.com/football-teams/ as this just looks nice and clean. What would be best practice to fix this issue? Kind Regards, Eddie
Intermediate & Advanced SEO | | Paul780 -
Subdomain versus separate domains: which is better for search engine purposes?
We are pitching to a hotel client to build two new websites, a summer website and a winter website, two completely different-looking websites. The client wants to automatically switch their domain name to point to one or the other, depending on the time of year. The customer does not want to use a landing page where you would choose which site to visit; they want the domain name to go directly to the relevant website. Our options: set up two new domain names and optimise each website based on the holiday season and facilities offered at that time of year, then change the existing domain name to point at the website that is in season. Or use the existing domain name and set up two subdomains, switching the home page as necessary. We have been chewing this one over for a couple of days; the concern we have with both options is loss of search visibility. The current website performs well in search engines, with a home page rank of 4 and sub-pages ranking 2s and 3s. When we point the domain at the summer site (the client only has a winter website at present), we will lose all of the search engine benefits already gained. The new summer content will be significantly different from the winter content. Then, after we work hard for six months optimising the summer site and switch back to the winter site, the content will be wrong. Maybe because it's Friday afternoon we cannot see the light for the smoke of the cars leaving the car park for the weekend, or maybe there is no right or wrong approach. Is there another option? Are we not seeing the wood for the trees? Your comments highly welcome. Martin
Intermediate & Advanced SEO | | Bill-Duff0