How long will Google take to read my robots.txt after updating?
-
I updated www.egrecia.es/robots.txt two weeks ago, and I still haven't resolved the Duplicate Title and Content issues on the website.
The Google SERPs no longer show those URLs, but neither the SEOmoz crawl report nor Google Webmaster Tools recognizes the change.
How long will it take?
-
What I mean is, here are the website's access logs:
66.249.73.219 - - [21/May/2012:21:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [21/May/2012:21:53:00 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [21/May/2012:22:05:33 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:22:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [21/May/2012:23:01:31 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [21/May/2012:23:44:15 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:23:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:00:16:58 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:00:46:02 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:00:50:59 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:01:24:08 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:01:51:00 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:01:51:17 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:02:32:28 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:02:50:59 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:02:56:28 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:03:40:58 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:03:51:00 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:04:01:29 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.88.227 - - [22/May/2012:04:38:59 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:04:43:06 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:04:51:02 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
-
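As an aside, a quick way to summarize entries like the log excerpt above is a short pipeline over the access log. This is just a sketch: it inlines three sample lines from the log above so it runs standalone, but in practice you would point LOG at your real access log path.

```shell
# Summarize robots.txt fetches per client IP and HTTP status from a
# combined-format access log. Sample lines are inlined so the snippet
# is self-contained; point LOG at your real log instead.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
66.249.73.219 - - [21/May/2012:21:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [21/May/2012:21:53:00 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [21/May/2012:22:05:33 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
EOF
# In the combined format, field 1 is the client IP, field 7 the request
# path, and field 9 the status code.
summary=$(awk '$7 == "/robots.txt" {print $1, $9}' "$LOG" | sort | uniq -c)
echo "$summary"
rm -f "$LOG"
```

On the excerpt above this makes the pattern obvious at a glance: one bot is getting a 200 with the file body, one is being 301-redirected (worth checking that redirect), and one is getting 304 Not Modified responses.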
Thanks Alan. So to see the log, you check the cached version of the URL?
-
Hello Christian.
It depends on many things.
In my logs today I see four different Googlebot IPs, and each one has fetched robots.txt at roughly hourly intervals.
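You can confirm that cadence yourself by bucketing one bot's robots.txt fetches by hour. A minimal sketch (it inlines three of the log lines from the question so it runs standalone; use your own access log path in practice):

```shell
# Count robots.txt fetches per hour for a single Googlebot IP to see
# how often it re-reads the file. Sample lines are inlined here so the
# snippet runs standalone; use your real access log in practice.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
66.249.73.219 - - [21/May/2012:21:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:22:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:23:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
EOF
# substr($4, 2, 14) trims "[21/May/2012:21:50:58" down to the
# day-plus-hour bucket "21/May/2012:21".
hourly=$(awk -v ip=66.249.73.219 \
  '$1 == ip && $7 == "/robots.txt" {print substr($4, 2, 14)}' "$LOG" \
  | sort | uniq -c)
echo "$hourly"
rm -f "$LOG"
```

If each hour bucket shows a count of 1, that IP is re-reading robots.txt about once an hour, which matches what I see in my own logs. Note that seeing the file fetched is not the same as seeing the index updated; dropping already-indexed URLs and the reporting in third-party tools can lag well behind the fetch.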