Hide messenger for crawlers
-
At Magnet.me we use Intercom to communicate with our users. This means we actively add JavaScript which loads the Intercom script on each page and then renders the messenger button.
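Roughly, the snippet we ship looks like this (a simplified sketch, not Intercom's verbatim embed code; APP_ID stands in for our real workspace id):

```javascript
// Simplified sketch of the per-page loader (APP_ID is a placeholder).
window.intercomSettings = { app_id: "APP_ID" };

(function () {
  // Stub that queues any Intercom() calls made before the widget arrives.
  var i = function () { i.q.push(arguments); };
  i.q = [];
  window.Intercom = i;

  // Asynchronously pull in the large widget bundle, which then renders
  // the messenger button on the page.
  var s = document.createElement("script");
  s.async = true;
  s.src = "https://widget.intercom.io/widget/APP_ID";
  document.head.appendChild(s);
})();
```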
However, this button has no value for crawlers, and it slows the page down because the script is large and fairly slow. I therefore considered shipping code that disables the button for crawlers, so that performance would improve. To give a ballpark estimate, the button's JavaScript is around 3x the size of our entire React application...
Unfortunately, this would mean serving users and crawlers slightly different content on the page. I'm unsure about the possible SEO impact:
- Would Google mark the page as faster because there are fewer resources to load?
- Or would it penalize the page for showing slightly different content to users and search engines?
-
In general, I don't think this is a great idea. Although Google does meter out crawl allowance, it also wants a realistic view of the pages it crawls. Your attempt to ease the burden on Google's crawl-bots may be read as an attempt to 'fake' good page-speed metrics, for example (by letting Google load the page much faster than end users do). That could cause issues with your rankings if it's uncovered by a 'dumb' algorithm which won't factor in your good intentions.
Your efforts may also be unnecessary. Although Google 'can' execute JavaScript and crawl the elements it generates, it doesn't always do so, and it doesn't do so for everyone. If you read my (main) response to this question, you'll get a much better idea of what I'm talking about here. As such, the majority of the time you may be taking on 'potential' risk for no reward.
Would it be possible to code things slightly differently? Currently you state that this is your approach:
"This means that we are actively adding javascript code which will load the Intercom javascript on each page, and render the button afterwards"
Could you not add the button through plain HTML/CSS, and bind a much smaller script to it which then loads the Intercom script? I am assuming here that the Intercom script is the large one slowing the page(s) down. Why not load that script only on request (seems logical, though I admit I am no dev - sorry)? It just seems as though more is being initialised and loaded up-front than is really required.
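Something in this direction, perhaps (a rough sketch only - the "#chat-button" element and APP_ID are assumptions, and I'm leaning on Intercom's documented 'boot' and 'show' calls, so treat the details as illustrative):

```javascript
// Rough sketch: render your own lightweight HTML/CSS button, and only
// fetch the heavy Intercom script the first time someone clicks it.
// "#chat-button" and APP_ID are placeholders.
document.querySelector("#chat-button").addEventListener("click", function () {
  var s = document.createElement("script");
  s.async = true;
  s.src = "https://widget.intercom.io/widget/APP_ID";
  s.onload = function () {
    // Once the bundle has arrived, boot the messenger and open it.
    window.Intercom("boot", { app_id: "APP_ID" });
    window.Intercom("show");
  };
  document.head.appendChild(s);
}, { once: true }); // never inject the script twice
```

That way, a visitor (or a bot) who never clicks never downloads the big script at all.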
Google want to know which technologies are deployed on your page if they choose to look, and they also don't want people going around faking higher page-speed scores.
If you really want to stop Google wasting time on that script, your basic options would be:
- Code the site to refuse to serve the script to the "googlebot" user agent
- Block the script in robots.txt so that it is never crawled (directive only)
The first option is a little thermonuclear and may mean you get accused of cloaking (unlikely), or at the least of 'faking' higher page-speed scores (more likely). The second option is only a directive, which Google can disregard, so the risks are lower. The downside is that Google will pick up on the blocked resource and may not elevate your page-loading speed. Even if they do, they may say "since we can't view this script or know what it does, we don't know what the implication for end users is, so we'll dampen the rankings a little as a risk-assessment factor".
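For completeness, the second option is just a couple of lines in robots.txt. One caveat (my assumption - verify it for your setup): a robots.txt file only governs URLs on the domain it sits on, so this only works if the script is served from your own domain. The path below is made up:

```
# robots.txt - hypothetical path to a self-hosted copy of the chat script
User-agent: Googlebot
Disallow: /assets/js/intercom-loader.js
```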
Myself, I would look for an implementation that doesn't slow the site down so much (for users or search bots). I get that it may be tricky; obviously re-coding Intercom's JS would probably break the chat entirely. Maybe, though, you could think about when that script has to be loaded. Is it really needed on page load, all the time, for everyone? Or do people only need that functionality when they choose to interact? How can you slot the loading of the code into that narrow trench, and get the best of both worlds?
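To make that concrete, one hedged sketch of "load on first interaction, or once the browser goes idle" - placeholder names throughout:

```javascript
// Defer the widget until the visitor first interacts, or the browser is
// idle, so neither users nor crawlers pay the cost up front.
var intercomRequested = false;

function loadIntercom() {
  if (intercomRequested) return; // only ever inject the script once
  intercomRequested = true;
  var s = document.createElement("script");
  s.async = true;
  s.src = "https://widget.intercom.io/widget/APP_ID"; // placeholder app id
  document.head.appendChild(s);
}

// First scroll, pointer or keyboard activity triggers the load...
["scroll", "mousemove", "touchstart", "keydown"].forEach(function (evt) {
  window.addEventListener(evt, loadIntercom, { once: true, passive: true });
});

// ...with an idle-time fallback where the browser supports it.
if ("requestIdleCallback" in window) {
  requestIdleCallback(loadIntercom, { timeout: 10000 });
}
```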
Sorry it's not a super simple answer, hope it helps