Leverage browser caching
-
Does anyone know a good tutorial on how to implement leverage browser caching? Do I need something like Cloudflare, or can I add meta tags to do this?
-
Spot on, that worked a dream... went from 82 to 92 on the Google PageSpeed test.
-
That's great, I'll check this out today.
-
You would actually add it to your .htaccess file if you are using a Linux (Apache) server. This should help: http://www.feedthebot.com/pagespeed/leverage-browser-caching.html
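For reference, the rules end up looking roughly like this: a minimal sketch, assuming an Apache server with mod_expires enabled (the cache lifetimes are example values, not recommendations):

    <IfModule mod_expires.c>
        ExpiresActive On
        # Far-future expiry for static assets that rarely change
        ExpiresByType image/jpeg "access plus 1 year"
        ExpiresByType image/png "access plus 1 year"
        ExpiresByType text/css "access plus 1 month"
        ExpiresByType application/javascript "access plus 1 month"
        # Keep HTML short-lived so content changes show up straight away
        ExpiresByType text/html "access plus 0 seconds"
    </IfModule>

The trade-off: the longer the lifetime, the fewer repeat downloads, but the slower your changes reach returning visitors, which is why versioned asset filenames are often paired with far-future expiry.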
Related Questions
-
Cached or live version for ranking
Hello, is Google using the cached version or the live version for ranking a website (if those are different)? Thank you,
Intermediate & Advanced SEO | seoanalytics
-
Browser Caching - HTTPS redirects to HTTP
Howdy, lovely Moz people. A webmaster redirected HTTPS links to HTTP a number of years ago in order to capture as many links as possible on a site we now manage. We have recently tried to implement HTTPS and realised that this existing redirect rule causes infinite loops when we test an HTTP redirect: HTTP redirecting to HTTPS redirecting back to HTTP, and so on. The HTTPS version works by itself, weirdly enough. We believe this is due to permanent browser caching, so unless users clear their cache, they will get this infinite loop. Does anyone have any advice on how we can get round this?
a) Index both sites and specify in GSC that HTTPS is the canonical version of the site, and hope that Google sees that and drops the HTTP version in favour of HTTPS.
b) Stick with HTTP, as infinite loops will kill the site.
c) ???????????
Thanks all.
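(For anyone hitting the same loop: the fix usually takes this shape, sketched here assuming Apache with mod_rewrite. The legacy HTTPS-to-HTTP rule must be deleted outright so only one direction remains; browsers that have already cached the old 301 will keep looping until their cache expires, which can't be fixed server-side.)

    # Delete the old HTTPS->HTTP rule, then keep a single redirect in one direction only:
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]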
Intermediate & Advanced SEO | HenryFrance
-
Google cache is a 3rd-party site for the HTTP version and correct for HTTPS
If I search Google for my cache I get the following:
cache:http://www.saucydates.com -> returns the cache of netball.org (an HTTPS page with a Plesk default page)
cache:https://www.saucydates.com -> displays the correct page
Prior to this my HTTP cache was the Central Bank of Afghanistan. For most searches at present my index page is not returned, and when it is, it's the netball Plesk page. This is, of course, hurting my search traffic considerably.
I have tried many things; here is the current list:
1. If I fetch as Google in Webmaster Tools, the HTTPS fetch and render is correct.
2. If I fetch the HTTP version I get a redirect (which is correct, as I have a 301 HTTP-to-HTTPS redirect).
3. If I turn off HTTPS on my server and remove the redirect, the fetch and render for the HTTP version is correct.
4. The 301 redirect is controlled with the 301 safe redirect option in Plesk 12.x.
5. The SSL cert is valid and with COMODO.
6. I have ensured the IP address (which is shared with a few other domains that form my site's network/functions) has a default site.
7. I have placed a site on my PTR record and ensured the HTTPS version goes back to HTTP, as it doesn't need SSL.
8. I have checked my site in the Wayback Machine for 1 year and there are no hacked redirects.
9. I have checked the netball site in the Wayback Machine for 1 year; mid last year there is an odd firewall alert page. If you check the cache for the HTTPS version of the netball site, you get another site's default Plesk page. This happened at the same time I implemented SSL.
Points 6 and 7 have been done to stop the server showing a default Plesk page, as I think this could be the issue (duplicate content).
Ideas:
- Is this a 302 redirect hi-jack?
- Is this a Google bug?
- Is this an issue with duplicate content, as both servers can have a default Plesk page (like millions of others)?
- Could a network of 3 sites with Plesk, mixed up, be a clue?
Over to the experts at Moz: can you help? Thanks, David
Intermediate & Advanced SEO | dmcubed
-
Cached Alternate URL appearing as base page
Hi there, I'm currently targeting Australia and the US for one of my web pages. One version begins with a subdomain (au.site.com) and the other is just the root domain (site.com). After searching for the website on Australian Google and checking the description and title, it keeps the US ones (i.e. the root domain's). The cached copy, although cached earlier today, displays exactly as the American website when it is supposed to be the Australian one. In the URL for the cache it appears as au.site.com while displaying the American page's content. Any ideas why? Thanks, Oliver
Intermediate & Advanced SEO | oliverkuchies
-
Why are some pages indexed but not cached by Google?
The question is simple, but I don't understand the answer. I found a webpage that was linking to my personal site. The page was indexed in Google. However, there was no cache option, and I received a 404 from Google when I tried using cache:www.thewebpage.com/link/. What exactly does this mean? Also, does it have any negative implication for the SEO value of the link that points to my personal website?
Intermediate & Advanced SEO | mRELEVANCE
-
Is the TTFB for different locations and browsers irrelevant if you are self-hosting?
Please forgive my ignorance on this subject; I have little to no experience with the technical aspects of setting up and running a server. Here is the scenario: we are self-hosted on an Apache server. I have been on the warpath to improve page load speed since the beginning of the year, not so much for SEO but for conversion rate optimization. I recently read the Moz post "How Website Speed Actually Impacts Search Rankings" and was fascinated by the research regarding TTFB. I forwarded the post to my CEO, who promptly sent me back a contradictory post from Cloudflare on the same topic. Ilya Grigorik published a post on Google+ that called Cloudflare's experiment "silly" and said that "TTFB absolutely does matter." I proceeded to gather information on our site's TTFB using data provided by http://webpagetest.org, documenting TTFB for every location and browser in an effort to show that we needed to improve. When I presented this info to my CEO (I am in-house) and IT Director, they both shook their heads, completely dismissed the data, and said it was irrelevant because it measured something we couldn't control. Ignorant as I am, it seems that Ilya Grigorik, Google's own web dev advocate, says it absolutely is something that can be controlled, or at least optimized, if you know what you are doing. Can any of you super smart Mozzers help me put the words together to express that TTFB from different locations and for different browsers is something worth paying attention to? Or perhaps they are right and it's information I should ignore? Thanks in advance for any and all suggestions! Dana
Intermediate & Advanced SEO | danatanseo
-
How can I leverage Places/Yelp Reviews for Attorney Schema?
A little confused by this. We have some on-site reviews; is referencing those all I need to do? What if we don't have "products", only "services"? Should I be leveraging Places/Yelp reviews for this? Has anyone added review schema for services?
Intermediate & Advanced SEO | VistageSEO
-
How did the Google crawler cache orphan pages and a directory?
I have a website, www.test.com. I have made some changes to the live website and uploaded them to a "demo" directory (which was recently created) for client approval. So my demo link is www.test.com/demo/. I am not doing any type of link building or any activity that would pass a referral link to www.test.com/demo/. How, then, did the Google crawler find it and cache some pages, or the entire directory? Thanks
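(A side note for anyone in the same spot: once a staging copy has been crawled, blocking it is the usual remedy. A minimal sketch for an .htaccess placed inside /demo/, assuming Apache with mod_headers enabled; the .htpasswd path is a placeholder:)

    # Tell crawlers not to index or follow anything served from this directory.
    Header set X-Robots-Tag "noindex, nofollow"
    # More robust: require a login so neither visitors nor bots can reach the staging copy.
    # AuthType Basic
    # AuthName "Staging"
    # AuthUserFile /path/to/.htpasswd  (placeholder path)
    # Require valid-user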
Intermediate & Advanced SEO | darshit210