When to Fetch?
-
If I'm about to submit a new sitemap for Google to crawl, is there any need to use the Fetch tool?
-
Hi muzzmoz! Do these responses help to answer your question or are you looking for more information? If you're good to go, please mark this as answered. Thanks!
-
Hey there -
Assuming you are talking about an XML sitemap, not an HTML sitemap, you shouldn't need to Fetch your site when you do this. You're literally telling Google about your pages through your XML sitemap, so they should crawl that sitemap pretty fast.
-
I'd recommend running a crawl with something like Screaming Frog first to make sure your site is healthy site-wide before pushing a new sitemap in Search Console.
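As a lightweight sanity check before submitting, you can also parse the sitemap yourself and spot-check the URLs it lists. A minimal sketch with Python's standard library (the sample sitemap content is hypothetical):

```python
# Minimal sketch: pull the <loc> URLs out of a sitemap.org-style XML sitemap
# so they can be spot-checked (e.g. with HEAD requests) before submission.
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text: str) -> list[str]:
    """Return the <loc> values from an XML sitemap string."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")]

if __name__ == "__main__":
    sample = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
        '<url><loc>https://example.com/</loc></url>'
        '<url><loc>https://example.com/about</loc></url>'
        '</urlset>'
    )
    for url in sitemap_urls(sample):
        print(url)
```

This only validates that the file parses and lists the URLs; whether each URL actually resolves is a separate check.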
Related Questions
-
If Fetch As Google can render the website, should it appear in the SERP?
Hello everyone, and thank you in advance for helping me. I have a React.js application built with Create React App (zero configuration), which connects via Axios to a CodeIgniter (PHP) API. Before moving to React.js, this website ranked at the top of Google's SERPs for specific keywords. After switching to React.js and changing some URLs, with no redirects in .htaccess or anywhere else, I lost my search engine visibility! I suspect it was caused by a Google penalty. I tried "react-snap", "react-snapshot" and so forth for prerendering, but there are so many problems with them. I also tried Prerender.io, but unfortunately my hosting provider wouldn't help me configure it on the shared host. Finally, I found a great article, and my website now displays in the Rendering box of Fetch As Google. The dynamic content still doesn't display in the Fetching box, but I can see my entire website in both "This is how Googlebot saw the page" and "This is how a visitor to your website would have seen the page" for all pages without any problem. If Fetch As Google can render the entire website, is it possible my pages will be indexed after a while and appear in Google's SERPs?
Intermediate & Advanced SEO | hamoz10
-
Fetch and render partial result: could this affect SERP rankings? [NSFW URL]
Moderator's Note: URL NSFW. We have been desperately trying to understand over the last 10 days why our homepage disappears from the SERPs for our most important keywords for a few days, reappears for a few more days, and then is gone again! We have tried everything. We checked Google Webmaster Tools: no manual actions, no crawl errors, no messages. The site is being indexed even when it disappears, but when it's gone it will not even appear in the search results for our business name; other internal pages come up instead. We have searched for bad backlinks and duplicate content. We put a 301 redirect on the non-www version of the site. We added an H1 tag that was missing. Still, after fetching as Google and requesting reindexing, we kept going through this cycle of disappearing from the rankings (an internal page would actually come in at position 6, as opposed to our homepage, which had previously spent years in the number 2 spot) and then coming back for a few days. Today I tried Fetch and Render as Google and was only getting a partial result: it said the video we have embedded on our homepage was temporarily unavailable. Could this have been causing the issue? We have removed the video for now, fetched and rendered again, and got a complete status. I've now requested reindexing and am crossing everything that this fixes the problem. Do you think this could have been at the root of the problem? If anyone has any other suggestions, the address is (NSFW) https://goo.gl/dwA8YB
Intermediate & Advanced SEO | GemmaApril2
-
No content using Fetch
Wooah, this one makes me feel a bit nervous. The cached version of the site homepage shows all the text, but I understand that is the HTML code constructed by the browser, so I get that. If I Google some of the content, it is there in the index, and the cached version is from yesterday. But if I Fetch and Render in GWT, none of the content is available in the preview, in either the Googlebot or the visitor view. The whole preview is just the menu, a holding image for a video, and a tag line for it. There are no reports of blocked resources apart from a Wistia URL. How can I work out what is blocking Google if it does not report any problems? The CSS is visible for reference to, for example, <section class="text-within-lines big-text narrow"> class="data"> some content... Ranking is a real issue, in part due to a poorly functioning main menu, but I'm really concerned about what is happening with the render.
Intermediate & Advanced SEO | MickEdwards0
-
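On the "what is blocking Google" question above, one thing that can be checked programmatically is whether robots.txt disallows a given resource for Googlebot. A small sketch using Python's standard-library robot parser (the rules and URLs here are hypothetical):

```python
# Hedged sketch: check whether a resource URL is disallowed for a given
# user agent by a robots.txt body. Rules and URLs below are hypothetical.
from urllib.robotparser import RobotFileParser

def blocked_for(agent: str, robots_txt: str, url: str) -> bool:
    """True if robots_txt disallows `url` for `agent`."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return not rp.can_fetch(agent, url)

if __name__ == "__main__":
    robots = "User-agent: Googlebot\nDisallow: /assets/js/\n"
    # A JS bundle under a disallowed path would explain an empty render preview.
    print(blocked_for("Googlebot", robots, "https://example.com/assets/js/app.js"))
```

This only covers robots.txt; a blank render can also come from resources blocked at a third-party host or by scripts that fail for the crawler.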
Fetch as Google
I have an odd scenario; I don't know if anyone can help. I've done some serious speed optimisation on a website, amongst other things a CDN and caching. However, when I do a Search Console Fetch As Google, it still shows a 1.7-second download time, even though the cached content seems to be delivered in less than 200 ms. The site is using SSL, which obviously creams off a bit of speed, but I still don't understand the huge discrepancy. Could it be that Google somehow forces the server to deliver fresh content despite the settings to deliver from cache? Thanks in advance
Intermediate & Advanced SEO | seoman100
-
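One speculative way to probe the cache question above is to inspect response headers such as Age or X-Cache, which many CDNs set on cache hits; if requests show a zero Age or a MISS, the origin is serving fresh content. A rough heuristic sketch (header names vary by CDN, so treat these as assumptions):

```python
# Speculative heuristic: guess whether a response came from a CDN/proxy cache
# based on common headers. Which headers a given CDN sets is an assumption.
def looks_cached(headers: dict[str, str]) -> bool:
    """True if headers suggest a cache hit (nonzero Age or X-Cache HIT)."""
    h = {k.lower(): v for k, v in headers.items()}
    if "hit" in h.get("x-cache", "").lower():
        return True
    return int(h.get("age", "0") or "0") > 0

if __name__ == "__main__":
    print(looks_cached({"X-Cache": "HIT from cloudfront"}))
    print(looks_cached({"Age": "0"}))
```

Comparing these headers between a normal browser request and one sent with a Googlebot user agent would show whether the cache treats the crawler differently.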
Using JavaScript to fetch the page title for every webpage: is it good?
We have a Zend Framework application that is complex to program, if you ask me, and since we have 20k+ pages that need proper titles and meta descriptions, I need to ask about using JavaScript to handle page titles (the previous programming team had NOT set page titles at all). I need to get a proper page title from an h1 tag within each page. The course of action we can easily implement is to fetch the page title from that h1 tag, which is used throughout all pages, with JavaScript. But doesn't this make it difficult for engines to actually read the page title, since it is being set by JavaScript code that we have added? Though I have doubts, has anyone been in a similar situation before? If yes, I need some help! Update: I tried the JavaScript way and here is what it looks like: http://islamicencyclopedia.org/public/index/hadith/id/1/book_id/106 I know for a fact that Google won't read JavaScript the way we have used it on the website, but I need help on how we can work around this issue, knowing we don't have other options.
Intermediate & Advanced SEO | SmartStartMediacom0
-
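Rather than setting titles client-side, the h1 text can be extracted server-side so the <title> is already present in the delivered HTML. A minimal Python sketch of just the extraction step (the surrounding Zend/PHP integration is out of scope; this only illustrates the idea):

```python
# Illustrative sketch: pull the first <h1> out of an HTML document so it can
# be emitted server-side as the <title>, instead of filling it in with JS.
from html.parser import HTMLParser

class FirstH1(HTMLParser):
    """Collects the text of the first <h1> element encountered."""
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.h1 = ""

    def handle_starttag(self, tag, attrs):
        if tag == "h1" and not self.h1:  # only the first <h1>
            self.in_h1 = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_h1:
            self.h1 += data

def title_from_h1(html: str) -> str:
    parser = FirstH1()
    parser.feed(html)
    return parser.h1.strip()

if __name__ == "__main__":
    print(title_from_h1("<body><h1>My Page Heading</h1><p>text</p></body>"))
```

The same logic could be ported to the PHP templating layer; the key point is that the crawler then sees the title in the raw HTML, with no JavaScript required.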
Google Fetch Issue
I'm having some problems with what Google is fetching and what it isn't, and I'd like to know why. For example, Google IS fetching a non-existent page and listing it as an error: http://www.gaport.com/carports, when the actual URL is http://www.gaport.com/carports.htm. Google is NOT able to fetch http://www.gaport.com/aluminum/storage-buildings-10x12.htm. It says the page doesn't exist (even though it does), and when I click on the "not found" link in Google Fetch, it adds %E2%80%8E to the URL, causing the problem. One theory we have is that this may be some sort of server/hosting problem, but that's only because we can't figure out what we could have done to cause it. Any insights would be greatly appreciated. Thanks and Happy Holidays! Ruben
Intermediate & Advanced SEO | KempRugeLawGroup0
-
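For reference, the stray sequence in the question is consistent with %E2%80%8E, the percent-encoding of U+200E (LEFT-TO-RIGHT MARK), an invisible character that can sneak into copied links. A small sketch to flag such characters in a URL:

```python
# Sketch: decode a URL and report any invisible "format" characters
# (Unicode category Cf), such as U+200E LEFT-TO-RIGHT MARK, which
# %E2%80%8E decodes to.
import unicodedata
from urllib.parse import unquote

def invisible_chars(url: str) -> list[str]:
    """Return code points of invisible format characters in a decoded URL."""
    decoded = unquote(url)
    return [f"U+{ord(c):04X}" for c in decoded
            if unicodedata.category(c) == "Cf"]

if __name__ == "__main__":
    print(invisible_chars("http://www.example.com/page%E2%80%8E.htm"))
```

Running this over the site's internal links would show whether the character is already present in the HTML source rather than being added by Google.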
Is it dangerous to use "Fetch as Google" too much in Webmaster Tools?
I saw some people freaking out about this on some forums and thought I would ask. Are you aware of any downside to using "Fetch as Google" often? Is it a bad thing to do when you create a new page or blog post, for example?
Intermediate & Advanced SEO | BlueLinkERP0
-
Fetch as GoogleBot "Unreachable Page"
Hi, We are suddenly getting an "Unreachable Page" error when any page of our site is accessed as Googlebot from Webmaster Tools. There are no DNS errors shown in "Crawl Errors". We have two web servers, web1 and web2, behind a software load balancer (HAProxy). The same network configuration has been working for over a year and never had any Googlebot errors before the 21st of this month. To check whether there could be an error in the sitemap, .htaccess, or robots.txt, we excluded the load balancer and pointed DNS at web1 and web2 directly; Googlebot was able to access the pages properly with no error. But when the load balancer was made active again by pointing DNS back at it, the "Unreachable Page" error started appearing again. The website is properly accessible from a browser, and there are no DNS errors either, as shown by "Crawl Errors". Can you guide me on how to diagnose the issue? I've tried all sorts of combinations, and even removed the firewall, but with no success. Is there any way to get more details about the error instead of just "Unreachable Page"? Regards, shaz
Intermediate & Advanced SEO | shaz_lhr0
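One way to narrow down a load-balancer-only failure like this is to request the same path from each backend and through the load balancer with a Googlebot user agent, then compare status codes. A hedged sketch as a first diagnostic step (the hostnames are hypothetical):

```python
# Hedged diagnostic sketch: fetch the same path from each backend and the
# load balancer with a Googlebot user agent and flag any host whose status
# code differs from the majority. Hostnames are hypothetical.
from urllib.request import Request, urlopen

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def status_for(url: str, user_agent: str = GOOGLEBOT_UA) -> int:
    """HTTP status code for `url`, sent with the given User-Agent."""
    req = Request(url, headers={"User-Agent": user_agent})
    with urlopen(req, timeout=10) as resp:
        return resp.status

def diverging(statuses: dict[str, int]) -> dict[str, int]:
    """Return hosts whose status differs from the most common one."""
    values = list(statuses.values())
    baseline = max(set(values), key=values.count)
    return {host: s for host, s in statuses.items() if s != baseline}

if __name__ == "__main__":
    # A real run would build this dict with status_for() against web1, web2,
    # and the load balancer; the numbers below are an illustrative example.
    example = {"http://web1.example.com/": 200,
               "http://web2.example.com/": 200,
               "http://lb.example.com/": 503}
    print(diverging(example))
```

If only the load balancer diverges under the Googlebot user agent, the next place to look would be HAProxy's ACLs, timeouts, or any user-agent-based rules in its configuration.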