Does Bing support rel="canonical" HTTP Headers?
-
Anyone know?
-
Yeah, I'm honestly not 100% sure on the HTTP header version, but I'd bet they don't support it. It won't hurt to try it, though, and you'd at least cover Google - I think it's probably a good best practice for PDFs that have HTML equivalents.
-
Hey Peter,
I am attempting to add the HTTP header for PDF files. I really feel this can be a bonus for sites that have duplicated PDF content, especially on large e-commerce sites.
I figured that they (Bing) didn't support it, and it sounds like it's probably not considered in the form of an HTTP header.
I may have to consider conditional logic and/or create a dynamic robots.txt file to disallow these PDF files for all other search engines, while serving up canonical HTTP headers for Google, assuming that Bing doesn't support it.
It would be good to try and test - I may just end up doing that.
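For anyone curious, the header I'd be testing looks roughly like this in an Apache .htaccess file (just a sketch - the filenames are placeholders, and it assumes mod_headers is enabled):

  # Hypothetical PDF that duplicates an HTML page
  <Files "whitepaper.pdf">
    Header add Link "<https://www.example.com/whitepaper.html>; rel=\"canonical\""
  </Files>

That should make the PDF's response carry a Link header pointing at its HTML equivalent, which Google says it will read; whether Bing reads it is the open question.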
-
I don't believe that Bing supports the HTTP header version of rel="canonical". They do technically support the link attribute (their comment about it being a "hint" was from 2009) - Duane confirmed that last year (I asked him point blank). Although, honestly, experiences vary and many SEOs claim that their support is inconsistent even for the link attribute.
Honestly, when it comes to canonicalization, when in doubt, try it. The worst that can happen in most scenarios (implemented properly) is that it just doesn't work.
Out of curiosity, why are you trying to use the HTTP header version? Is it for a non-HTML file (like a PDF)?
-
Hi Brandon
"No "Bing does not support rel="canonical" HTTP Headers, Bing isn’t supporting the canonical link element. Bing says canonical tags are hints and not directives, So 301 redirects are your best friend for redirecting, use rel=”nofollow” on useless pages, and use robots.txt to keep content you don’t want crawled out. When you have duplicate problems due to extra URLs parameters, use the URL Normalization feature.
-
I think you guys are confused. There is a difference between the rel="canonical" HTTP header and the rel="canonical" tag.
I understand their stance with regard to the tag, but I wonder whether they even consider the canonical when it is sent as an HTTP header.
http://googlewebmastercentral.blogspot.com/2011/06/supporting-relcanonical-http-headers.html
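To make the distinction concrete (example.com here is just a placeholder): the tag version is an element in the page's <head>:

  <link rel="canonical" href="https://www.example.com/page.html" />

while the header version is sent in the HTTP response itself, which is what makes it usable for non-HTML files like PDFs:

  Link: <https://www.example.com/page.html>; rel="canonical"

Google announced support for the header version in the post above; what I'm asking is whether Bing honors it too.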
-
Does Bing support rel="canonical" HTTP Headers?
** No.
Bing posted: "This tag will be interpreted as a hint by Live Search, not as a command. We'll evaluate this in the context of all the other information we know about the website and try and make the best determination of the canonical URL. This will help us handle any potential implementation errors or abuse of this tag."
-
Well Brandon, Bing has officially said that they treat it only as a hint and make their own determination of what's right, but SEO folks do use the tag and I don't think anyone has had a problem with it yet. You can also take a look at the latest SEOmoz discussion on this.
Cheers,
Related Questions
-
What is the "Homepage" for an International Website With Multiple Languages?
BACKGROUND: We are developing a new multi-language website that is going to have:
1. Multiple directories for various languages: /en-us, /de, etc....
2. Hreflang tags
3. Universal footer links so the user can select their preferred language.
and 4. Automatic JS detection of location on the homepage only, so that when the user lands on /, it redirects them to the correct location. Currently, the auto JS detection only happens on /, and on no other pages of the website. The user can also always choose to override the auto-detection on the homepage at any time, by using the language-selector links at the bottom. QUESTION: Should we try to place a 301 on / to point to /en-us? Someone recommended this to us, but my thinking is "NO" - we do NOT want to 301 /. Instead, I feel like we should allow Google access to /, because that is also the most authoritative page on the website and where all incoming links are pointing. In most cases, users / journalists / publications IMHO are just going to link to /, not dilly dally around with the language directory. My hunch is just to keep / as is, but also work to help Google understand the relationship between all of the different language-specific directories. I know that Google officially doesn't advocate meta refresh redirects, but this only happens on the homepage, and we likewise allow the user to override it at any time (and again, universal footer links will point both search engines and users to all other locations). Thoughts? Thanks for any tips/feedback!
Intermediate & Advanced SEO | mirabile
-
Should I add rel=nofollow?
Say I have an article that includes a list of many websites with resources for the article's topic. From an SEO perspective, should I add nofollow to them? Some of them? All of them?
Intermediate & Advanced SEO | Superberto
-
Putting "noindex" on a page that's in an iframe... what will that mean for the parent page?
If I've got a page that is being called in an iframe on my homepage, and I don't want that called page to be indexed... so I put a noindex tag on the called page (but not on the homepage), what might that mean for the homepage? Nothing? Will Google, Bing, Yahoo, or anyone else potentially see that as a noindex tag on my homepage?
Intermediate & Advanced SEO | Philip-DiPatrizio
-
How use Rel="canonical" for our Website
How is the best way to use Rel="canonical" for our website www.ofertasdeemail.com.br, for we can say goodbye for duplicated pages? I appreciate for every help. I also hope to contribute to the SEOmoz community. Sincerely,
Intermediate & Advanced SEO | | ZZNINTERNETMEDIAGROUP
Amador Goncalves0 -
"Starting Over" With A New Domain & 301 Redirect
Hello, SEO Gurus. A client of mine appears to have been hit with a non-manual/algorithmic penalty. The penalty appears to be Penguin-like, and the client never received any message (not that that means it wasn't manual). Prior to my working with her, she engaged in all kinds of SEO fornication: spammy links on link farms, shoddy article marketing, blog comment spam -- you name it. There are simply too many of these links - tens of thousands - to have them all removed. I've done some disavowal, but again, so much of the link work is spam. She is about to launch a new site, and I am tempted to simply encourage her to buy a new domain and start over. She competes in a niche B2B sector, so it is not terribly competitive, and with solid content and link earning, I think she'd be OK. Here's my question: if we were to 301 the old website to the new one, would the flow of PageRank outweigh any penalty associated with the site? (The old domain only has a PR of 2.) Anyone like my idea of starting over, rather than trying to "recover"? I thank you all in advance for your time and attention. I don't take it for granted.
Intermediate & Advanced SEO | RCNOnlineMarketing
-
Bad use of the Rel="canonical" tag
Google is currently ranking my category page instead of our homepage for our key term and we would rather have our homepage rank for the term. Would it be a bad idea to rel="canonical" our category page to our homepage? Our homepage is optimized to rank for the keyword and has more PR than our category page. However, I don't really know if this will have negative repercussions. Thanks, Jason
Intermediate & Advanced SEO | Jason_342
-
If google ignores links from "spammy" link directories ...
Then why does SEOmoz have this list: http://www.seomoz.org/dp/seo-directory ? Included in that list are some pretty spammy-looking sites such as:
http://www.site-sift.com/
http://www.2yi.net/
http://www.sevenseek.com/
http://greenstalk.com/
http://anthonyparsons.com/
http://www.rakcha.com/
http://www.goguides.org/
http://gosearchbusiness.com/
http://funender.com/free_link_directory/
http://www.joeant.com/
http://www.browse8.com/
http://linkopedia.com/
http://kwika.org/
http://tygo.com/
http://netzoning.com/
http://goongee.com/
http://bigall.com/
http://www.incrawler.com/
http://rubberstamped.org/
http://lookforth.com/
http://worldsiteindex.com/
http://linksgiving.com/
http://azoos.com/
http://www.uncoverthenet.com/
http://ewilla.com/
Intermediate & Advanced SEO | adriandg
-
"Duplicate" Page Titles and Content
Hi All, This is a rather lengthy one, so please bear with me! SEOmoz has recently crawled 10,000 webpages from my site, FrenchEntree, and has returned 8,000 errors of duplicate page content. The main reason I have so many is because of the directories I have on site.
The site is broken down into 2 levels of hierarchy: "Weblets" and "Articles". A weblet is a landing page, and articles are created within these weblets. Weblets can hold any number of articles - 0 to 1,000,000 (in theory) - and an article must be assigned to a weblet in order for it to work. Here's how it roughly looks in URL form: http://www.mysite.com/[weblet]/[articleID]/
Now, our directory results pages are weblets with standard content in the left and right hand columns, but the information in the middle column is pulled in from our directory database following a user query. This happens by adding the query string to the end of the URL. We have 3 main directory databases, but perhaps around 100 weblets promoting various 'canned' queries that users may want to navigate straight into. However, any one of the 100 directory-promoting weblets could return any query from the parent directory database with the correct query string. The problem with this method (as pointed out by the 8,000 errors) is that each possible permutation of search is considered to be its own URL, and therefore its own page.
The example I will use is the first alphabetically, "Activity Holidays in France": http://www.frenchentree.com/activity-holidays-france/ - This link shows you a results weblet without the query at the end, and therefore only displays the left and right hand columns as populated. http://www.frenchentree.com/activity-holidays-france/home.asp?CategoryFilter= - This link shows you the same weblet with an 'open' query on the end, i.e. display all results from this database. Listings are displayed in the middle. There are around 500 different URL permutations for this weblet alone when you take into account the various categories and cities a user may want to search in.
What I'd like to do is to prevent SEOmoz (and therefore search engines) from counting each individual query permutation as a unique page, without harming the visibility that the directory results receive in SERPs. We often appear in the top 5 for quite competitive keywords and we'd like it to stay that way. I also wouldn't want the search engine results to only display (and therefore direct the user through to) an empty weblet because of some sort of robot exclusion or canonical classification. Does anyone have any advice on how best to remove the "duplication" problem, whilst keeping the search visibility? All advice welcome. Thanks, Matt
Intermediate & Advanced SEO | Horizon