Implications of extending browser caching for Google?
-
I have been asked to leverage browser caching for a few third-party scripts that our pages load:
- http://www.googletagmanager.com/gtm.js?id=GTM-KBQ7B5 (16 minutes 22 seconds)
- http://www.google.com/jsapi (1 hour)
- https://www.google-analytics.com/plugins/ua/linkid.js (1 hour)
- https://www.google-analytics.com/analytics.js (2 hours)
- https://www.youtube.com/iframe_api (expiration not specified)
- https://ssl.google-analytics.com/ga.js (2 hours)
The time beside each link is the cache expiration set by the script owners. I'm being asked to extend that to 24 hours, and part of this task is confirming whether doing so is actually a good idea. It would not be in our best interest to do anything that disrupts the collection of data.
Some of what I'm reading recommends keeping a local copy of each script, which would mean either missing updates from GA/GTM or setting up a cron job to download any updates on a daily basis.
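To make the idea concrete: since we can't change the cache headers Google sends from its own domains, extending the expiration to 24 hours would only work if we serve local copies ourselves. A rough sketch of the cron-based refresh, with made-up paths and schedule purely for illustration (not something Google provides):

#!/bin/sh
# refresh-google-js.sh - hypothetical daily refresh of locally hosted copies.
# Run from cron, e.g.:  0 4 * * * /usr/local/bin/refresh-google-js.sh
# The destination path and local URL scheme are examples only.
DEST=/var/www/example.com/static/vendor

# Fetch the current versions (-q = quiet, -O = write to a fixed filename).
wget -q https://www.google-analytics.com/analytics.js -O "$DEST/analytics.js"
wget -q "https://www.googletagmanager.com/gtm.js?id=GTM-KBQ7B5" -O "$DEST/gtm.js"

Our pages would then reference /static/vendor/analytics.js instead of the Google URL, and our own server's 24-hour cache headers would apply. The trade-off is that keeping those copies current becomes our responsibility.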
Another concern: would caching these locally cause any delay or disruption in collecting data? That's an unknown to me, though it may not be to you.
There is also the concern that Google recommends against caching these scripts beyond the expirations it sets itself.
Any help on this is much appreciated.
Do you see any issues/risks/benefits/etc. to doing this from your perspective?
-
Thanks, this is super helpful
-
You wouldn't disrupt the collection of data, but you would need to run a cron job to keep the local copies updated. It is not recommended that you store Google Analytics locally, and honestly it would make little difference to your speed; it's more trouble than it's worth. Google's short cache times are there for a reason.
Although if your page speed is healthy, you really have nothing to worry about. If your concern is just getting 100/100 on the PageSpeed tests, I have heard that this does the trick:
https://developers.google.com/speed/pagespeed/module/filter-make-google-analytics-async#description
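If the site runs Apache with the PageSpeed module, that filter gets switched on in the module config. A minimal sketch, assuming mod_pagespeed is already installed (the file path is just an example and varies by setup):

# /etc/apache2/mods-available/pagespeed.conf  (example location)
# Rewrites the synchronous ga.js snippet so Google Analytics loads asynchronously.
ModPagespeedEnableFilters make_google_analytics_async

On nginx the equivalent directive would be "pagespeed EnableFilters make_google_analytics_async;".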
Danny
Related Questions
-
Browser Caching - HTTPS redirects to HTTP
Howdy lovely Moz people. A webmaster redirected https protocol links to http a number of years ago in order to try and capture as many links as possible on a site we now manage. We have recently tried to implement https and realised that, because of this existing redirect rule, we now get infinite loops when testing an http redirect: http redirects to https, which redirects back to http, and so on. The https version works by itself, weirdly enough. We believe this is down to browsers caching the permanent redirects, so unless users clear their cache they will keep hitting the loop. Does anyone have any advice on how we can get round this?
a) index both sites and specify in GSC that https is the canonical version, and hope that Google sees that and swaps the http version for the https version
b) stick with http, as infinite loops will kill the site
c) ???????????
Thanks all.
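For illustration, the usual way out of that loop is to delete the old https-to-http rule entirely and keep a single http-to-https redirect, so a stale cached 301 has nothing to bounce back to. A hypothetical .htaccess sketch (assumes Apache mod_rewrite; nothing here is taken from the site in question):

# .htaccess - the old rule sending HTTPS traffic to HTTP must be removed,
# not just overridden, or browsers holding the cached 301 keep looping.
RewriteEngine On

# Send any plain-HTTP request to HTTPS with a single permanent redirect.
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]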
Intermediate & Advanced SEO | HenryFrance
-
Google Update
My rank has dropped quite a lot this past week, and I can see from the Moz tools that an unconfirmed Google update may be responsible. Is there any information from Moz on this?
Intermediate & Advanced SEO | moon-boots
-
"Null" appearing as top keyword in "Content Keywords" under Google index in Google Search Console
Hi, "Null" is appearing as top keyword in Google search console > Google Index > Content Keywords for our site http://goo.gl/cKaQ4K . We do not use "null" as keyword on site. We are not able to find why Google is treating "null" as a keyword for our site. Is anyone facing such issue. Thanks & Regards
Intermediate & Advanced SEO | vivekrathore
-
How do you check the Google cache for hashbang pages?
So we use http://webcache.googleusercontent.com/search?q=cache:x.com/#!/hashbangpage to check what Googlebot has cached, but when we try this method for hashbang pages we get x.com's cache, not x.com/#!/hashbangpage. That actually makes sense, because the hashbang is part of the homepage in that case, so I get why the cache returns the homepage. My question is: how can you actually look up the cache for a hashbang page?
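For what it's worth, hashbang URLs were crawled under Google's (since deprecated) AJAX crawling scheme, which rewrote everything after the #! into an _escaped_fragment_ query parameter, so the cache can sometimes be looked up under that form instead. A hypothetical check, assuming x.com actually serves the escaped-fragment version:

# The #!/hashbangpage part is requested by Googlebot roughly as
# ?_escaped_fragment_=/hashbangpage, so query the cache for that URL.
curl "http://webcache.googleusercontent.com/search?q=cache:x.com/?_escaped_fragment_=/hashbangpage"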
Intermediate & Advanced SEO | navidash
-
Google not taking Meta...
Hello all, So I understand that Google may sometimes take content from the page as a snippet to display on SERPs rather than the meta description, but my problem goes a little beyond that. I have a section on my site which updates every day, so a lot of the content is dynamic (products for a shop; every morning unique stock is added or removed), and despite having a meta description, a title and an 'A' grade in the Moz on-page grader, these pages never show up in Google. After a little research I did a 'site:www.mysite.com/productpage' search in Google and this did indeed list all my products, but interestingly, for every single one Google had taken the copyright notice at the bottom of the page as the snippet instead of the meta description or any H1, H2 or P text on the page... Does anyone have any idea why Google is doing this? It would explain a lot to me in terms of overall traffic, I'm just out of ideas... Thanks!
Intermediate & Advanced SEO | HB17
-
Google+ Page Question
Just started some work for a new client. I created a Google+ page and a connected YouTube page, then proceeded to claim a listing for them on Google Places for Business, which automatically created another Google+ page for the business listing. What do I do in this situation? Do I delete the YouTube page and Google+ page that I originally made and then recreate them using the Google+ page that was automatically created, or do I just keep both pages going? If the latter is the case, do I use the same information to populate both pages and post the same content to both? That doesn't seem efficient or like the right way to handle this, but I could be wrong.
Intermediate & Advanced SEO | goldbergweismancairo
-
Googlebot vs Google mobile bot
Hi everyone 🙂 I seriously hope you can come up with an idea for a solution to the problem below, because I am kinda stuck 😕 Situation: a client of mine has a webshop on a hosted server. The shop is built in a closed CMS, meaning I have very limited options for changing the code: limited access to the page head, and within the CMS I can only use JavaScript and HTML. The only place I have access to a server-side language is in the root, where a Default.asp file redirects the visitor to the folder where the webshop lives. The webshop has two "languages"/store views: one for normal browsers and Googlebot, and one for mobile browsers and the Google mobile bot. In Default.asp (classic ASP) I test the user agent and redirect the user either to the main domain or to the mobile sub-domain. All good, right? Unfortunately not. Now we arrive at the core of the problem. Since the mobile shop was added at a later date, Google already had most of the pages from the shop in its index, and apparently uses them as entrance pages to crawl the site with the mobile bot. Hence it never sees Default.asp (or outright ignores it), and this causes, as you might have guessed, a huge pile of duplicate content. Normally you would just place some user-agent detection in the page head and either throw Google a 301 or a rel=canonical, but since I only have access to JavaScript and HTML in the page head, this cannot be done. I'm kinda running out of options quickly, so if anyone has an idea as to how on earth I get Google to index the right domains for the right devices, please feel free to comment. 🙂 Any and all ideas are more than welcome.
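One pattern that fits the "JavaScript and HTML in the page head only" constraint is Google's rel=alternate / rel=canonical annotation for separate mobile URLs, since it needs nothing server-side. A hypothetical sketch with placeholder domains (not the client's actual setup):

<!-- On each desktop page (www.example.com/page), point to its mobile twin: -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="http://m.example.com/page">

<!-- On each mobile page (m.example.com/page), point back to the desktop version: -->
<link rel="canonical" href="http://www.example.com/page">

That tells Google the two URLs are the same content for different devices rather than duplicates.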
Intermediate & Advanced SEO | ReneReinholdt
-
Random Google?
In 2008 we performed an experiment which showed some seemingly random behaviour by Google (indexation, caching, PageRank distribution). Today I put the results together, analysed the data we had, and got some strange results which hint at the possibility that Google purposely throws in a deviation from normal behaviour here and there. Do you think Google randomises its algorithm to prevent reverse engineering and enable chance discoveries, or is it all a big load-balancing act which produces quasi-random behaviour?
Intermediate & Advanced SEO | Dan-Petrovic