Fetch as Googlebot
-
"With Fetch as Googlebot you can see exactly how a page appears to Google"
I have verified the site and clicked on the Fetch button, but how can I "see exactly how a page appears to Google"?
Thanks
-
Hi Atul,
Here's Google's comprehensive explanation about Fetch as Googlebot, and what it can do for you.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=158587
-
" The title description of the site you mentioned are getting displayed properly in Google Search."
Will it be displayed in Google webmaster tools ? This is what i would like to know ?
-
What is the exact issue you are encountering? The title and description of the site you mentioned are being displayed properly in Google Search.
"Fetch as Googlebot" will give you the source code of the page, and whatever is displayed there gets crawled by Google, though you need to ensure that the syntax is correct.
-
Instant Previews are page snapshots that are displayed in search results.
It's not displaying the title or description; only the site image is being displayed.
The site in question is http://bit.ly/xu2mGi
-
"Fetch as GooleBot" gives you the details that Google will fetch from your source code (no ajax code will appear for instance) along with server response code.
If you want to see how your page will show in SERP's preview you need to use the tool "Instant Previews" under Labs section of the Google webmaster tool.
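If you just want a rough look at the raw HTML that Googlebot receives, outside of Webmaster Tools, one option is to request the page yourself with a Googlebot user-agent string. This is only an approximation (it does not replicate Google's rendering or the official tool, and the URL below is a placeholder). A minimal Python sketch:

```python
import urllib.request

# Googlebot's published desktop user-agent string
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def fetch_as_googlebot(url):
    """Request a page with a Googlebot user-agent; return status and body."""
    req = urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.read().decode("utf-8", errors="replace")

# Build the request only (no network call) to show the header that is sent:
req = urllib.request.Request("http://www.example.com/",
                             headers={"User-Agent": GOOGLEBOT_UA})
print(req.get_header("User-agent"))
```

Some sites vary their response by user-agent, so what you get back this way may still differ from what Google actually records.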
Related Questions
-
What IP Address does Googlebot use to read your site when coming from an external backlink?
Hi All, I'm trying to find more information on what IP address Googlebot would use when arriving to crawl your site from an external backlink. I'm under the impression that Googlebot uses international signals to determine the best IP address to use when crawling (US / non-US) and then carries on with that IP when it arrives at your website. E.g. Googlebot finds www.example.co.uk. Due to the ccTLD, it decides to crawl the site with a UK IP address rather than a US one. As it crawls this UK site, it finds a subdirectory backlink to your website and continues to crawl your website with the aforementioned UK IP address. Is this a correct assumption, or does Googlebot look at altering the IP address as it enters a backlink / new domain? Also, are ccTLDs the main signal for determining whether Google switches to an international IP address to crawl, rather than the standard US one? Am I right in saying that hreflang tags don't apply here at all, as their purpose is to help Google determine which page to serve to users in the SERPs based on their location, etc.? If anyone has any insight, this would be great.
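Google doesn't document its per-country IP selection, but whichever address shows up in your logs, Google's published advice is to confirm it really is Googlebot with a reverse-DNS lookup followed by a forward-DNS confirmation. A Python sketch of that check (the example hostname is illustrative):

```python
import socket

def looks_like_googlebot_host(hostname):
    """Check whether a reverse-DNS hostname falls under Google's crawler domains."""
    return hostname.rstrip(".").endswith((".googlebot.com", ".google.com"))

def verify_googlebot(ip):
    """Reverse DNS, domain match, then forward-confirm the IP (Google's documented method)."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
    except socket.herror:
        return False
    if not looks_like_googlebot_host(hostname):
        return False
    # Forward-confirm: the hostname must resolve back to the same IP.
    return ip in socket.gethostbyname_ex(hostname)[2]

print(looks_like_googlebot_host("crawl-66-249-66-1.googlebot.com"))  # True
```

This tells you which requests genuinely came from Googlebot, regardless of whether Google chose a US or international address for the crawl.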
Intermediate & Advanced SEO | MattBassos
-
Fetch and render partial result could this affect SERP rankings [NSFW URL]
Moderator's Note: URL NSFW. We have been desperately trying to understand over the last 10 days why our homepage disappears for a few days from the SERPs for our most important keywords, before reappearing for a few more days and then vanishing again. We have tried everything. Checked Google Webmaster Tools: no manual actions, no crawl errors, no messages. The site is being indexed even when it disappears, but when it's gone it will not even appear in the search results for our business name; other internal pages come up instead. We have searched for bad backlinks and duplicate content. We put a 301 redirect on the non-www version of the site. We added an H1 tag that was missing. Still, after fetching as Google and requesting reindexing, we were going through this cycle of disappearing from the rankings (an internal page would actually come in at 6th position, as opposed to our home page, which had previously spent years in the number 2 spot) and then coming back for a few days. Today I tried fetch and render as Google and was only getting a partial result: it was saying the video that we have embedded on our home page was temporarily unavailable. Could this have been causing the issue? We have removed the video for now, fetched and rendered again, and got a complete status. I've now requested reindexing and am crossing everything that this fixes the problem. Do you think this could have been at the root of the problem? If anyone has any other suggestions, the address is NSFW https://goo.gl/dwA8YB
Intermediate & Advanced SEO | GemmaApril
-
Fetch as Google - Redirected
Hi, I have swapped from HTTP to HTTPS and put a redirect in place so HTTP redirects to HTTPS. I also put a redirect from www.xyz.co.uk/index.html to www.xyz.co.uk. When I fetch as Google it shows up as "redirect!" Does this mean that I have 301s looping? Do I need the redirect from index.html to the root domain if I have a rel canonical in place for index.html? htaccess (Linux):
RewriteCond %{HTTP_HOST} ^xyz.co.uk
RewriteRule (.*) https://www.xyz.co.uk/$1 [R=301,L]
RewriteRule ^$ index.html [R=301,L]
Intermediate & Advanced SEO | Cocoonfxmedia
-
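For what it's worth on the htaccess question above: `RewriteRule ^$ index.html [R=301,L]` sends the root to index.html, the opposite of the stated goal, and a potential loop if index.html is also redirected back to the root. A hedged .htaccess sketch of the usual pattern (domain kept from the question; adjust to your setup before using):

```apache
RewriteEngine On

# Force HTTPS and the www host in a single 301 hop
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.xyz.co.uk/$1 [R=301,L]

# Send /index.html to the root, not the other way around
RewriteRule ^index\.html$ https://www.xyz.co.uk/ [R=301,L]
```

Collapsing the host and scheme redirects into one rule keeps Googlebot from seeing chained 301s; the rel canonical on index.html is then a belt-and-braces addition rather than a substitute for the redirect.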
Block Googlebot from submit button
Hi, I have a website where many searches are made by Googlebot on our internal search engine. We can put noindex on the result page, but we want to stop the bot from calling the AJAX search button (a GET form), because each call passes a request to an external API with associated fees. So we want to stop the form button from being crawled, without noindexing the search page itself. The "nofollow" attribute doesn't seem to apply to a button's submit. Any suggestions?
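One hedged option, since nofollow doesn't control form submissions: disallow the URL pattern that the search button requests in robots.txt, so Googlebot never fetches the fee-incurring endpoint while the page hosting the form stays crawlable. The paths below are assumptions — substitute the actual GET endpoint your form targets:

```
User-agent: *
# Assumed endpoint the AJAX search button calls via GET
Disallow: /search?
Disallow: /api/search
```

Keep in mind that robots.txt only prevents crawling (a disallowed URL can still appear in the index if linked elsewhere), but Googlebot will stop requesting the endpoint, which is what drives the API fees.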
Intermediate & Advanced SEO | Olivier_Lambert
-
Is it dangerous to use "Fetch as Google" too much in Webmaster Tools?
I saw some people freaking out about this on some forums and thought I would ask. Are you aware of any downside to using "Fetch as Google" often? Is it a bad thing to do when you create a new page or blog post, for example?
Intermediate & Advanced SEO | BlueLinkERP
-
After Receiving a "Googlebot can't access your site" would this stop your site from being crawled?
Hi Everyone,
A few weeks ago I received a "Googlebot can't access your site..... connection failure rate is 7.8%" message from Webmaster Tools. I have since fixed the majority of these issues, but I've noticed that all pages except the main home page now have a PageRank of N/A, while the home page still has a PageRank of 5. Have these connectivity issues reduced the PageRanks to N/A, or is it something else I'm missing? Thanks in advance.
Intermediate & Advanced SEO | AMA-DataSet
-
Googlebot on paywall made with cookies and local storage
My question is about paywalls made with cookies and local storage. We are changing a website with free content to an open paywall with a five-article weekly view limit. The paywall is made to work with cookies and local storage: the article views are stored in local storage, but you have to have cookies enabled to read the free articles. If you don't have cookies enabled, we would serve an error page (otherwise the paywall would be easy to bypass). Can you say how this affects SEO? We would still like Google to index all the article pages that it does now. Would it be cloaking if we treated Googlebot differently, so that when it does not have cookies enabled it would still be able to index the page?
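As a sketch of the metering decision only (names are illustrative, and the crawler carve-out assumes you verify Googlebot via reverse DNS rather than trusting the user-agent string — serving full text to a verified crawler while metering users is the model Google calls flexible sampling):

```python
WEEKLY_FREE_LIMIT = 5  # article views per week, per the question

def can_view(views_this_week, cookies_enabled, is_crawler=False):
    """Decide whether to serve the full article body.

    Verified crawlers always get the full text so indexing continues;
    users without cookies can't be metered, so they get the wall page.
    """
    if is_crawler:           # e.g. reverse-DNS-verified Googlebot
        return True
    if not cookies_enabled:  # can't count views -> error/wall page
        return False
    return views_this_week < WEEKLY_FREE_LIMIT
```

To keep the differential treatment from reading as cloaking, Google's paywall guidance also asks that gated pages mark the paywalled section with `isAccessibleForFree` structured data.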
Intermediate & Advanced SEO | OPU
-
Googlebot Can't Access My Sites After I Repair My Robots File
Hello Mozzers, A colleague and I have been collectively managing about 12 brands for the past several months, and we have recently received a number of messages in the sites' Webmaster Tools instructing us that 'Googlebot was not able to access our site due to some errors with our robots.txt file'. My colleague and I, in turn, created new robots.txt files with the intention of preventing the spider from crawling our 'cgi-bin' directory, as follows: User-agent: * Disallow: /cgi-bin/ After creating the robots.txt and manually re-submitting it in Webmaster Tools (and receiving the green checkbox), I received the same message about Googlebot not being able to access the site, the only difference being that this time it was for a different site that I manage. I repeated the process and everything aesthetically looked correct; however, I continued receiving these messages for each of the other sites I manage on a daily basis for roughly a 10-day period. Do any of you know why I may be receiving this error? Is it not possible for me to block Googlebot from crawling the 'cgi-bin'? Any and all advice/insight is very much welcome. I hope I'm being descriptive enough!
Intermediate & Advanced SEO | NiallSmith