HTTP Vary: User-Agent - Server or Page Level?
-
Looking for any insights regarding the usage of the Vary HTTP Header, mainly around the idea that search engines will not like having a Vary HTTP Header on pages that don't have a mobile version, which means the header will need to be implemented on a page-by-page basis.
Additionally, does anyone have experience with the usage of the Vary HTTP Header and CDNs like Akamai? Google still recommends using the header, even though it can present some challenges with CDNs.
Thanks!
-
hey burnseo - if you're still getting notifications from this thread, would you happen to recall where you ended up finding info. that google recommends placing the vary header at page level? running into the same question myself. if you have links you could post to where you found the answer, that'd be great. thanks!
-
I would go by what Google recommends. I cannot imagine Akamai being bad for a website or overwhelming it in any way. You may try using a CNAME to point your www. hostname straight to the CDN, and if you're using a mobile subdomain like m., have that go directly to your content delivery network as well.
I hope this helps.
sincerely,
Thomas
-
I found some information suggesting that it is recommended to avoid using the Vary HTTP Header by User-Agent site-wide, because search engines (and this is Google specifically) would assume the other version simply hadn't yet been discovered and perhaps keep looking for it. There is also a recommendation to implement the Vary Header at the page level, and only when there is a mobile version. This only applies to sites that are serving mobile HTML content dynamically based on the user-agent. Additionally, there is some controversy around using the header when a CDN like Akamai is in place, because it can overload the site. Despite this controversy, Google still recommends using the header. These seem to be two important points to consider before implementing the Vary HTTP Header.
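To make the page-level approach concrete, here is a minimal sketch, assuming a Python/Flask app (the framework, route names, and the mobile-detection rule are all illustrative, not a statement of how any particular site does it): the Vary: User-Agent header is sent only on routes that actually return different HTML to mobile user-agents, while pages without a mobile variant leave it off.

```python
# Hypothetical sketch of page-level Vary: User-Agent with dynamic serving.
# Route names and the mobile-detection rule are illustrative only.
from flask import Flask, request, make_response

app = Flask(__name__)

MOBILE_HINTS = ("iphone", "android", "mobile")

def is_mobile(user_agent):
    ua = (user_agent or "").lower()
    return any(hint in ua for hint in MOBILE_HINTS)

@app.route("/products")
def products():
    # This page serves different HTML to mobile user-agents, so the
    # response announces that it varies by User-Agent.
    if is_mobile(request.headers.get("User-Agent")):
        body = "<html><body>Mobile product page</body></html>"
    else:
        body = "<html><body>Desktop product page</body></html>"
    response = make_response(body)
    response.headers["Vary"] = "User-Agent"
    return response

@app.route("/about")
def about():
    # No mobile variant here: everyone gets the same HTML, so this page
    # omits the Vary header entirely.
    return "<html><body>About us</body></html>"
```

The point of the sketch is simply that the header is attached per response, so it can be sent only on the pages that genuinely vary rather than applied site-wide.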
-
Very true. I should have completed it; I won't use a cell phone to answer Q&A again.
-
Thomas, it appears that this is taken from http://stackoverflow.com/questions/1975416/trying-to-understand-the-vary-http-header. Q&A is for original answers; if you are referring to another blog post, it's best to just put a link into the blog post and let people go there rather than copy work (that may be copyright) and use that as your answer. Thanks for understanding!
-
The cache-control header is the primary mechanism for an HTTP server to tell a caching proxy the "freshness" of a response (i.e., whether, and for how long, to store the response in the cache).
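As a quick illustration of that freshness signal, here is a hypothetical sketch (again Python/Flask, which is an assumption and not something the original answer specifies):

```python
# Hypothetical sketch: declaring a response "fresh" for one hour.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/landing")
def landing():
    response = make_response("<html><body>Landing page</body></html>")
    # Any cache (browser or intermediary proxy) may reuse this response
    # for up to 3600 seconds without revalidating with the origin server.
    response.headers["Cache-Control"] = "public, max-age=3600"
    return response
```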
In some situations, cache-control directives are insufficient. A discussion from the HTTP working group is archived here, describing a page that changes only with language. This is not the correct use case for the Vary header, but the context is valuable for our discussion. (Although I believe the Vary header would solve the problem in that case, there is a Better Way.) From that page:
"Vary is strictly for those cases where it's hopeless or excessively complicated for a proxy to replicate what the server would do."
This page describes the header usage from the server perspective, and this one from a caching proxy perspective. The header is intended to specify the set of HTTP request headers that determine the uniqueness of a request.
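In other words, a cache keys each stored response on the URL plus whichever request headers the response's Vary value names. A rough sketch of that idea (hypothetical Python, not taken from any real proxy implementation):

```python
# Hypothetical sketch of how a caching proxy might compute its cache key:
# the URL plus the values of the request headers named by the response's
# Vary header. Not any real cache's implementation.
def cache_key(url, request_headers, vary_header):
    # vary_header comes from the cached response, e.g. "Cookie" or
    # "Accept-Encoding, User-Agent"; request_headers maps lowercased
    # header names to the incoming request's values.
    names = [n.strip().lower() for n in vary_header.split(",") if n.strip()]
    varying = tuple((n, request_headers.get(n, "")) for n in sorted(names))
    return (url, varying)

# Two requests for the same URL with different Cookie values produce
# different keys, so the proxy stores and serves two separate variants.
key_a = cache_key("/landing", {"cookie": "visits=1"}, "Cookie")
key_b = cache_key("/landing", {"cookie": "visits=5"}, "Cookie")
assert key_a != key_b
```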
A contrived example:
Your HTTP server has a large landing page. You have two slightly different pages with the same URL, depending on whether the user has been there before. You distinguish between requests and a user's "visit count" based on cookies. But -- since your server's landing page is so large, you want intermediary proxies to cache the response if possible.
The URL, Last-Modified and Cache-Control headers are insufficient to give a caching proxy this insight, but if you add Vary: Cookie, the cache engine will add the Cookie header to its caching decisions.
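A minimal server-side version of that contrived example might look like the following (again a hypothetical Python/Flask sketch; the cookie name and page bodies are made up):

```python
# Hypothetical sketch of the contrived landing-page example: the body
# depends on a cookie, so Vary: Cookie tells caches to key the stored
# response on the Cookie request header as well as the URL.
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def landing():
    returning = request.cookies.get("visited") == "1"
    body = ("<html><body>Welcome back!</body></html>" if returning
            else "<html><body>Welcome, first-time visitor!</body></html>")
    response = make_response(body)
    response.set_cookie("visited", "1")
    # The correct cached variant depends on the Cookie request header.
    response.headers["Vary"] = "Cookie"
    return response
```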
Finally, for small-traffic, dynamic web sites I have always found the simple Cache-Control: no-cache, no-store and Pragma: no-cache sufficient.
Edit -- to more precisely answer your question: the HTTP request header Accept defines the Content-Types a client can process. If you have two copies of the same content at the same URL, differing only in Content-Type, then using Vary: Accept could be appropriate.
Update 11 Sep 12:
I'm including a couple of links that have appeared in the comments since this comment was originally posted. They're both excellent resources for real-world examples (and problems) with Vary: Accept; if you're reading this answer, you need to read those links as well.
The first, from the outstanding EricLaw, on Internet Explorer's behavior with the Vary header and some of the challenges it presents to developers: Vary Header Prevents Caching in IE. In short, IE (pre IE9) does not cache any content that uses the Vary header because the request cache does not include HTTP Request headers. EricLaw (Eric Lawrence in the real world) is a Program Manager on the IE team.
The second is from Eran Medan, and is an on-going discussion of Vary-related unexpected behavior in Chrome: Backing doesn't handle Vary header correctly. It's related to IE's behavior, except the Chrome devs took a different approach -- though it doesn't appear to have been a deliberate choice.
-
Hey Thomas, thank you for your interest in answering my question. However, the question isn't really about using a CDN. It is more around how using the Vary HTTP Header can affect the CDN performance. In addition, I wanted to find guidance on where to implement the Vary HTTP Header as it was brought to my attention that search engines don't like it when this is implemented site wide even on pages that don't have a mobile version.
-
Hi Keri,
Thank you for the heads up on that. I definitely was having some technical issues. I have cleaned it up; let me know if you think it needs any more work.
Thank you for letting me know.
Sincerely,
Thomas
-
Thomas, I think the voice recognition software botched some of your reply. Could you go through and edit it a little? There are some words that seem to be missing. Thanks!
-
Hi,
For insights regarding the usage of the Vary HTTP Header, I would check out this blog post right here.
As far as using a content delivery network goes, I love them and have used quite a few. Depending on your budget, there is a wide range of options.
Use Anycast DNS with your CDN; here is what I think of the DNS providers:
#1 DNS DynECT (my fav)
#2 DNS Made Easy (great deal $25 for 10 domains for the YEAR)
#3 UltraDNS
#4 VerisignDNS
Many CDNs have Anycast DNS built in already.
Check out this website; it will give you a good view of what's going on with each CDN:
http://www.cdnplanet.com/cdns/
I don't know how much data you need, but if you want a great CDN with support and a killer price, MaxCDN is only $39 for the first terabyte and outperforms Amazon CloudFront, Cloudflare, and Rackspace Cloud Files.
Here is my list of CDNs I would use. The cost is anywhere from $39 a year to $4,000 a month; if you are going to use video it will cost more, as data adds up fast.
#1 Level 3, my personal favorite content delivery network
http://www.level3.com/en/products-and-services/data-and-internet/cdn-content-delivery-network/
http://www.edgecast.com/free-trial/
http://mediatemple.net/webhosting/procdn/ You get 200 GB a month for $20; it is 100% EdgeCast (just a reseller)
https://presscdn.com/ PressCDN is 50 GB for $10 a month and gives you FOUR CDNs: MaxCDN, EdgeCast, Akamai, and CloudFront. The price for 150 GB a month is $19.
http://www.rackspace.com/cloud/files/
http://aws.amazon.com/cloudfront/
Look at http://cloudharmony.com/speedtest for speed testing.
However, please remember that coding makes a huge difference to a website's speed, so such tests are not really a fair depiction of CDN performance.
You could use CloudFlare; it is free, but I don't like it for anything other than site protection. It's not very fast, in my opinion, and is simply a reverse proxy server.
You can get CloudFlare with Railgun already enabled:
https://www.cloudflare.com/railgun The cost is now $200 a month (use Level 3 if you are paying that much).
EdgeCast is a great content delivery network. However, you will have to buy it through a third party if you want the full enterprise version. You can buy it through Media Temple, though you must use their DNS, and it is only $20 a month.
However, if you're going to spend over $20 a month, I would strongly consider talking to Level 3. They're notoriously high-priced, but they just lowered their prices and you can negotiate some very sweet deals.
If you don't have a content delivery network already, I would simply sign up for DNS Made Easy and MaxCDN; they're convenient and fast.
MaxCDN is faster than AWS CloudFront and Rackspace Cloud Files, and faster than anything else I have compared it to in its price range or even at almost double the price.
For an inexpensive setup you will get Anycast DNS for $25 and the CDN for $39, and that's for the year, not the month.
I hope this has been of help to you and answers your question. Please let me know if I can be of any more help.
Sincerely,
Thomas