Is it dangerous to use "Fetch as Google" too much in Webmaster Tools?
-
I saw some people freaking out about this on some forums and thought I would ask.
Are you aware of there being any downside to using "Fetch as Google" often? Is it a bad thing to do when you create a new page or blog post, for example?
-
Hi Keri
I did, yes. I stumbled upon it and thought I'd give my two pennies' worth as an SEO!
Certainly wasn't looking for a backlink, as it would be pretty irrelevant for our industry, and I would never expect a dofollow link from a comments section anyway.
Thanks to you also for your feedback
Cheers!
-
Welcome, LoveSavings. Just wanted to make sure you knew this post is a year old, and that all of the links in Q&A are automatically nofollowed. Thanks for the thoughtful answer!
-
Having done lots of tests on this, I would say that fetching as Google is the best way forward.
Although the steps listed above are all excellent ways of boosting the speed at which Google will index your page, none of them seems to be as effective as fetching in Webmaster Tools. You get a few hundred of these a month, so you shouldn't run out unless you are publishing immense amounts of content, in which case Google is likely to be indexing your content very quickly anyway.
www.loveenergysavings.com is still relatively small, although we publish excellent, thought-leadership-style content. So, to ensure that our posts are indexed as quickly as possible (as we are competing with some massive sites), we always fetch our posts in Google Webmaster Tools. This is always quicker than tweeting, Google+, etc. We also have an XML sitemap which automatically adds our posts, though this doesn't guarantee rapid indexing.
Having messed around with all of these methods, fetching as G-bot is always the quickest and most effective option. As danatanseo says, it's there to be utilised by SEOs, so why not take full advantage? I can't see why Google would ever look unfavourably on a site for wanting its content to be available to the public as quickly as possible.
-
I would say it is not a preferred way to alert Google when you have a new page, and it is pretty limited. What is better, and frankly more effective, is to do things like:
- add the page to your XML sitemap (make sure sitemap is submitted to Google)
- add the page to your RSS feeds (make sure your RSS is submitted to Google)
- add a link to the page on your home page or other "important" page on your site
- tweet about your new page
- status update in FB about your new page
- Google Plus your new page
- Feature your new page in your email newsletter
Obviously, depending on the page you may not be able to do all of these, but normally, Google will pick up new pages in your sitemap. I find that G hits my sitemaps almost daily (your mileage may vary).
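For anyone setting up the sitemap route above, a minimal XML sitemap entry looks something like this (the URL and dates here are made-up examples, not from any real site):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page; <lastmod> helps Google spot fresh content -->
  <url>
    <loc>http://www.example.com/blog/new-post/</loc>
    <lastmod>2013-11-05</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
```

Most CMS sitemap plugins generate this automatically and append new posts as they are published, which is what makes the sitemap approach so low-effort compared to manual fetching.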
I only use fetch if I am trying to diagnose a problem on a specific page, and even then, I may just fetch but not submit. I have only submitted when there was some major issue with a page that I could not wait for Google to fix as part of its regular crawl of my site. As an example, we had a release go out with a new section, and that section was blocked by our robots.txt. I went ahead and submitted the robots.txt to encourage Google to update it sooner, so that our new section would be "live" to Google sooner, as G does not hit our robots.txt as often. Otherwise, for 99.5% of the other pages on my sites, the options above work well.
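To illustrate the kind of accidental block described above (the paths here are hypothetical, not the actual rules from that release):

```text
# robots.txt — the release shipped with a rule like this,
# which blocked the entire new section from crawling:
User-agent: *
Disallow: /new-section/

# The fix was to remove (or narrow) the Disallow line, then use
# Fetch as Google on robots.txt to prompt a quicker re-read.
```

Google caches robots.txt and re-fetches it on its own schedule, which is why a submit was worth spending in this case.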
The other thing is that you get very few fetches a month, so you are still very limited in what you can do. Your sitemaps can include thousands of pages each, while Google fetch is limited, so that's another reason I reserve it for my time-sensitive emergencies.
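Sitemap submission can also be scripted. A rough sketch (the helper names are mine, and this assumes the `google.com/ping` sitemap endpoint Google offered at the time):

```python
import urllib.parse
import urllib.request


def build_sitemap_ping_url(sitemap_url):
    """Build the ping URL that tells Google a sitemap has changed."""
    return ("http://www.google.com/ping?sitemap="
            + urllib.parse.quote(sitemap_url, safe=""))


def ping_google(sitemap_url):
    """Send the ping; returns the HTTP status code (200 on success)."""
    with urllib.request.urlopen(build_sitemap_ping_url(sitemap_url)) as resp:
        return resp.getcode()


print(build_sitemap_ping_url("http://www.example.com/sitemap.xml"))
```

Hooking a call like `ping_google(...)` into your publish workflow means every new post nudges Google without spending any of your fetch quota.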
-
https://support.google.com/webmasters/answer/158587?hl=en#158587
I just double-checked, David, and it looks like the allocation may not be different for different sites. According to Google, you get 500 fetches and 10 URL + linked pages submissions every week.
-
You are welcome David, and no this isn't a lifetime limit at all. I believe it resets at least once every 30 days, maybe more often than that. I manage four different sites, some large, some small and I've never run out of fetches yet.
-
Thanks Dana. Is it possible to get more fetches? Presumably it's not a lifetime limit, right?
-
No, I wouldn't worry about this at all. This is why Google has already allocated a finite number of "Fetches" and URL + Links submissions to your account. These numbers are based on the size of your site. Larger sites are allocated more and smaller sites less. [Please see my revised statement below regarding Google's "Fetch" limits - it isn't based on site size] I don't think enough Webmasters take advantage of the Fetch as often as they should.
Hope that helps!
Dana