Googlebot + Meta-Refresh
-
Quick question: can Googlebot (or other search engines) follow meta refresh tags? Does it work anything like a 301 in terms of passing value to the new page?
-
Sorry to say we're digging in the crates here... but one of the companies we acquired and took over full ownership of in May of this year had a site built in plain HTML with no .htaccess, so I went with meta refresh for my redirects.
I'm thinking that if I verify the site in WMT and then acknowledge that it has been moved to our current domain, this should probably be the most legit way to inform GOOG that we made the move.
If anyone has any feedback that is more up to date, I'd love to hear it. Thanks!
-
Right. Meta-refresh was a common black hat technique for redirecting back in the late 90s and early 2000s, so it has a bit of a stigma associated with it.
-
The best information I can find on the subject is 3 years old and from Yahoo.
My understanding is: do a 301 if you can; if not, do a meta refresh, preferably with a 0-second delay.
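For reference, a 0-second meta refresh is just one tag in the old page's <head>. A minimal sketch (the target URL below is only a placeholder, not anyone's actual site):

    <!-- Redirect immediately (0 seconds) to the new location -->
    <meta http-equiv="refresh" content="0;url=https://www.example.com/new-page/">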
Also in 2007, Matt Cutts said this:
Matt Cutts: In general, Google does a relatively good job of following the 301s, and 302s, and even Meta Refreshes and JavaScript. Typically what we don't do would be to follow a chain of redirects that goes through a robots.txt that is itself forbidden.
http://www.stonetemple.com/articles/interview-matt-cutts.shtml
Based on that discussion, it can be inferred that value is passed along.
-
Meta Refresh can pass link juice, according to Matt Cutts:
http://www.stonetemple.com/articles/interview-matt-cutts.shtml
But as Ryan Purkey suggests, a 301 is the accepted best practice here. With meta refresh you need to be careful to avoid looking like a black hat to a picky Google algorithm. Some more discussion here.
-
I fully understand that the 301 is the best option; I was interested in whether it has been published anywhere that meta refreshes can pass any value or not.
I did some searching around and couldn't find any trustworthy articles. The only thing I found was that it isn't recommended by SEOmoz and that the W3C doesn't support it...
-
Search engines can read meta refresh, but the standard practice for passing value is a 301 redirect: meta refresh can have other uses, while a 301 is specifically for permanent redirection. Use the 301 if you want to pass value.
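If the old site sits on an Apache server, a minimal .htaccess sketch for the kind of permanent redirect discussed above might look like this (the domain names are placeholders, and this assumes mod_rewrite is available):

    # Send every request on the old domain to the same path on the new domain with a 301
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
    RewriteRule ^(.*)$ https://www.new-domain.com/$1 [R=301,L]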
Related Questions
-
Meta-description issue in SERPs for different countries
I'm working with a US client on the SEO for their large ecommerce website; I'm working on it from the UK. We've now optimised several of the pages, including updating the meta descriptions etc. The problem is that when I search on the keyword in the UK I see the new, updated version of the meta description in the SERPs, BUT when my client searches on the same keyword in the US they see the old version of the meta description. Does anyone have any idea why this is happening and how we can resolve it? Thanks Tanya
Intermediate & Advanced SEO | TanyaKorteling
-
Googlebot on steroids... Why?
We launched a new website (www.gelderlandgroep.com). The site contains 500 pages, but some pages (like https://www.gelderlandgroep.com/collectie/) contain filters, so there are a lot of possible URL parameters. Last week we noticed a tremendous amount of traffic (25 GB!!) and CPU usage on the server.
2017-12-04 16:11:57 W3SVC66 IIS14 83.219.93.171 GET /collectie model=6511,6901,7780,7830,2105-illusion&ontwerper=henk-vos,foklab 443 - 66.249.76.153 HTTP/1.1 Mozilla/5.0+(Linux;+Android+6.0.1;+Nexus+5X+Build/MMB29P)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/41.0.2272.96+Mobile+Safari/537.36+(compatible;+Googlebot/2.1;++http://www.google.com/bot.html) - - www.gelderlandgroep.com 200 0 0 9445 501 312
We found out that "Googlebot" was firing many, many requests. First we did an nslookup on the IP address, and it really does appear to be Googlebot. Second, we visited Google Search Console and I was really surprised... Googlebot on steroids? Googlebot requested 922,565 different URLs and made combinations for every filter/parameter combination on the site. Why? The sitemap.xml contains 500 URLs... The authority of the site isn't very high, and there is no other signal that this is a special website... Why so many "Google resources"? Of course we will exclude the parameters in Search Console, but I have never seen Googlebot activity like this for a small website before! Does anybody have any clue? Regards Olaf
Intermediate & Advanced SEO | Olaf
-
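For the crawl pattern described in the question above, one common stopgap (besides the URL parameter settings in Search Console) is to disallow the filtered URLs in robots.txt. A rough sketch, assuming the filters only ever appear as the model and ontwerper query parameters shown in the log line:

    # robots.txt - ask crawlers to skip filtered variations of the collection pages
    User-agent: *
    Disallow: /*?*model=
    Disallow: /*?*ontwerper=

Googlebot honours the * wildcard here, though URLs blocked this way can still end up indexed without content if they are linked heavily, so rel=canonical on the filter pages is worth keeping as well.
-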
Should I be using meta robots tags on thank you pages with little content?
I'm working on a website with hundreds of thank you pages; does it make sense to noindex/nofollow these pages, since there's little content on them? I'm thinking this should save me some crawl budget overall, but is there any risk in cutting out the internal links found on the thank you pages? (These are only standard site-wide footer and navigation links.) Thanks!
Intermediate & Advanced SEO | GSO
-
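For context on the question above, the noindex directive is a single tag in each thank-you page's <head>; whether to add nofollow as well is exactly the trade-off raised there, since nofollow also stops the page's footer and navigation links from passing anything. A minimal sketch:

    <!-- Keep the thank-you page out of the index but still let its links be followed -->
    <meta name="robots" content="noindex, follow">

-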
Google Not Pulling The Right Title Tag & Meta Description
Hi guys. We've found Google is pulling the wrong information for our title tag and meta description. Instead of pulling the actual title tag, Google is pulling the menu name you click on to get to the page: "Bike Barcelona" instead of "Barcelona Bike Tours | ...." Also, we've found that, instead of pulling the meta description we wrote, Google is using text from the page's copy. Any tips?
Intermediate & Advanced SEO | BarcelonaExperience
-
Should I remove all meta descriptions to avoid duplicates as a short term fix?
I’m currently trying to implement Matt Cutts’ advice from a recent YouTube video, in which he said that it was better to have no meta descriptions at all than duplicates. I know that there are better alternatives, but, if forced to make a choice, would it be better to remove all duplicate meta descriptions from a site than to have duplicates (leaving a lone meta description on the home page, perhaps)? This would be a short-term fix prior to making changes to our CMS to allow us to add unique meta descriptions to the most important pages. I’ve seen various blogs across the internet which recommend removing all the tags in these circumstances, but I’m interested in what people on Moz think of this. The site currently has a meta description which is duplicated across every page on the site.
Intermediate & Advanced SEO | RG_SEO
-
Can I, in Google's good graces, check for Googlebot to turn on/off tracking parameters in URLs?
Basically, we use a number of parameters in our URLs for event tracking. Google could be crawling an infinite number of these URLs. I'm already using the canonical tag to point at the non-tracking versions of those URLs... but that doesn't stop the crawling. I want to know if I can do conditional 301s or just detect the user agent as a way to know when NOT to append those parameters. I'm just trying to follow their guidelines about allowing bots to crawl without things like session IDs... but they don't tell you HOW to do this. Thanks!
Intermediate & Advanced SEO | KenShafer
-
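As context for the question above, the canonical tag already in place would look something like this on a tracked URL (the URL and parameter names below are purely illustrative):

    <!-- On /landing-page?src=email&eventid=123, point engines at the clean URL -->
    <link rel="canonical" href="https://www.example.com/landing-page">

Worth noting: serving different URLs or stripping parameters only when the user agent is Googlebot carries cloaking risk, which is why the canonical tag plus URL parameter handling in Webmaster Tools is usually the safer combination.
-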
We recently fixed a Meta Refresh that was affecting our home page - But something still seems wrong. Any suggestions?
We recently fixed a meta refresh issue on our home page. Our store URL, http://www.ccisolutions.com, had a meta refresh on it that was going to www.ccisolutions.com/StoreFront/IAFDispatcher?iafAction=showMain. The meta refresh is now gone, however there still seem to be some problems: Our IT Director has not been successful in trying to make www.ccisolutions.com/StoreFront/IAFDispatcher?iafAction=showMain 301 redirect to http://www.ccisolutions.com, so I believe we now have a duplicate content issue. If you look at both of these URLs in OSE, you will see that www.ccisolutions.com/StoreFront/IAFDispatcher?iafAction=showMain is getting credit for almost all of the Internal Followed Links, while http://www.ccisolutions.com is getting all the credit for External Followed links. Why doesn't http://www.ccisolutions.com show the same number of Internal Followed Links? I realize this is more of a developer/webmaster question and would be very appreciative of any suggestions or advice. Thanks!
Intermediate & Advanced SEO | danatanseo
-
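A rough sketch of the kind of rule the IT Director could try for the redirect described above, assuming the site is fronted by Apache with mod_rewrite (the dispatcher URL suggests a Java storefront, so the equivalent redirect may need to live in that platform's own configuration instead):

    # 301 the dispatcher URL (including its query string) back to the home page
    RewriteEngine On
    RewriteCond %{QUERY_STRING} ^iafAction=showMain$
    RewriteRule ^StoreFront/IAFDispatcher$ http://www.ccisolutions.com/? [R=301,L]

The trailing ? in the target strips the query string so the redirect lands on the clean home page URL.
-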
How to find what Googlebot actually sees on a page?
1. When I disable JavaScript in Firefox and load our home page, it is missing the entire middle section.
2. Also, the global nav dropdown menu does not display at all (with JavaScript disabled). I believe this is not good.
3. But when I type the website name into Google search, click on the cached version of the home page, and then click on the text-only version, it displays the global nav links fine.
4. When I switch the user agent to Googlebot (using the Firefox plugin "User Agent Switcher"), the home page and global nav display fine.
Should I be worried about #1 and #2 then? How can I find what Googlebot actually sees on a page? (I have tried "Fetch as Googlebot" from GWT. It displays source code.) Thanks for the help! Supriya.
Intermediate & Advanced SEO | Amjath