How long will Google take to read my robots.txt after updating?
-
I updated www.egrecia.es/robots.txt two weeks ago, and I still haven't resolved the duplicate title and content issues on the website.
The Google SERPs no longer show those URLs, but neither the SEOmoz crawl report nor Google Webmaster Tools has recognized the change.
How long will it take?
-
What I mean is: here are the website's logs:
66.249.73.219 - - [21/May/2012:21:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [21/May/2012:21:53:00 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [21/May/2012:22:05:33 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:22:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [21/May/2012:23:01:31 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [21/May/2012:23:44:15 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:23:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:00:16:58 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:00:46:02 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:00:50:59 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:01:24:08 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:01:51:00 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:01:51:17 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:02:32:28 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:02:50:59 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:02:56:28 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:03:40:58 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:03:51:00 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:04:01:29 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.88.227 - - [22/May/2012:04:38:59 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:04:43:06 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:04:51:02 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
-
Thanks Alan. So to see when Googlebot last read it, you check the cached version of the URL?
-
Hello Christian.
It depends on many things.
In my logs I see four Googlebot IP addresses today, and each one has re-read robots.txt at roughly hourly intervals.
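If you want to confirm the same thing in your own logs, the fetch intervals can be pulled out programmatically. Below is a minimal sketch in Python, assuming the Apache/nginx combined log format shown in the question; the regex and helper names are illustrative, not any standard tool:

```python
import re
from datetime import datetime

# Matches combined-log-format lines like the excerpt in the question.
LINE_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})'
)

def robots_fetches(log_lines):
    """Return (timestamp, ip, status) for every Googlebot fetch of /robots.txt."""
    fetches = []
    for line in log_lines:
        m = LINE_RE.match(line)
        if not m or m.group("path") != "/robots.txt":
            continue
        if "Googlebot" not in line:
            continue
        ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
        fetches.append((ts, m.group("ip"), int(m.group("status"))))
    return sorted(fetches)

def fetch_intervals(fetches):
    """Minutes between consecutive robots.txt fetches."""
    return [
        (b[0] - a[0]).total_seconds() / 60
        for a, b in zip(fetches, fetches[1:])
    ]
```

Run over the excerpt above, the intervals come out well under an hour, so a stale robots.txt fetch is unlikely to be the bottleneck; the lag is in reprocessing the affected URLs, not in re-reading the file.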
Related Questions
-
Does anyone know of a Google update in the past few days?
Have seen a fairly substantial drop in Google Search Console. I'm still looking into it and comparing things, but does anyone know if there has been a Google update within the past few days? Or has anyone else noticed anything? Thanks
Intermediate & Advanced SEO | | seoman10 -
Long Title Tags
Hi guys, we have e-commerce product title tags which are over 60 characters (around 80-plus). The reason we added them is to incorporate more information for Google. The format of these title tags is: Name + Colour + Rug Type + Origin.
Name = for people searching for the name of the rug
Colour = for people searching for a specific colour
Type = the type of rug (e.g. normal or designer)
Origin = where the rug is from
So this title covers people searching for designer rugs, for a specific colour, and for the origin, which pushes the title tag way over 60 characters (around 80-90). Would it be wise to try and shrink it down to under 60 characters, and what would be a good approach to do this? Cheers.
Intermediate & Advanced SEO | | seowork214 -
Robots.txt Disallowed Pages and Still Indexed
Alright, I am pretty sure the answer is "Nothing more I can do here," but I just wanted to double-check. It relates to the robots.txt file and that pesky "A description for this result is not available because of this site's robots.txt" message. Typically people want the URL indexed and the normal meta description displayed, but I don't want the link there at all; I am purposefully trying to keep that stuff out of the index with robots.txt.
My question is: has anybody tried to get a page taken out of the index and had this happen, where the URL is still listed but with that pesky robots.txt message in place of the meta description? Were you able to get the URL to stop showing up, or did you just live with it? Thanks folks, you are always great!
Intermediate & Advanced SEO | | DRSearchEngOpt -
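A note worth adding for anyone landing on this question: robots.txt controls crawling, while noindex controls indexing, and a crawler blocked by robots.txt never gets to see a page's noindex directive. A toy model of that interaction (purely illustrative, not any real crawler's logic):

```python
def appears_in_index(blocked_by_robots: bool, has_noindex: bool,
                     has_inbound_links: bool) -> bool:
    """Rough model of why a robots.txt-disallowed URL can linger in the index."""
    if not blocked_by_robots:
        # The page can be crawled, so any noindex directive is seen and honoured.
        return not has_noindex
    # The page cannot be crawled: the noindex is invisible, and the bare URL
    # can still be indexed from inbound links alone (with no description snippet).
    return has_inbound_links

# The asker's situation: disallowed in robots.txt but linked from elsewhere,
# so the URL stays listed with the "description not available" message.
print(appears_in_index(blocked_by_robots=True, has_noindex=True,
                       has_inbound_links=True))  # True
```

The practical fix this implies: remove the Disallow rule, serve noindex (via meta tag or X-Robots-Tag header) so Googlebot can actually see it, and only re-block, if desired, after the URL has dropped out.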
How long will this Site be punished? (place your bets!)
Got hit with a manual penalty in Feb 2014. Got it removed in 6 months (before the Penguin refresh). The whole site got deindexed at one point, including brand searches (brand searches have since come back). Disavowed over 20k domains (yeah, the spam was bad), but we still have a good amount of authority links such as Huffington Post, .edu sites, wikis, Apple apps, Reddit, etc. About 97% of our rankings are on page 2 or beyond; we can't get past that spot-11 'wall'. The suppression machine is not kind. We even had a very popular article get tons of shares and media pickup, and the original article would not rank on the first page for its own title. Our 'brand + keyword' gets about 2k searches a month; just 'keyword' gets nothing, which I find amusing. So what's the prognosis, doc? Another year, 3, or never? Anyone else in the same boat? Wait for the next Penguin refresh and hope for the best? Cheers
Intermediate & Advanced SEO | | IsHot -
"noindex, follow" or "robots.txt" for thin content pages
Does anyone have any testing evidence on which is better to use for pages with thin content that are nevertheless important to keep on a website? I am referring to content shared across multiple websites (such as e-commerce, real estate, etc.). Imagine a website with 300 high-quality indexed pages and 5,000 thin product-type pages, pages that would not generate relevant search traffic. The question is: does the interlinking value achieved by "noindex, follow" outweigh the negative of Google having to crawl all those "noindex" pages? With robots.txt, Google's crawling focuses on just the important pages that are indexed, which may give rankings a boost. Any experiments with insight into this would be great. I do get the story about "make the pages unique", "get customer reviews and comments", etc., but the above is the important question here.
Intermediate & Advanced SEO | | khi5 -
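For anyone weighing the robots.txt side of this question, Python's standard-library urllib.robotparser can evaluate a draft rule set against sample URLs before you deploy it. The Disallow pattern and URLs below are illustrative, not taken from any real site:

```python
from urllib.robotparser import RobotFileParser

# Draft rules: block the hypothetical thin product pages while leaving
# the high-quality content pages crawlable.
draft_rules = """\
User-agent: *
Disallow: /product/
"""

parser = RobotFileParser()
parser.parse(draft_rules.splitlines())

# Thin product page: blocked from crawling. Note this does NOT remove
# already-indexed URLs, and internal links into them pass no value.
print(parser.can_fetch("Googlebot", "https://example.com/product/thin-page-123"))

# Important content page: still crawlable.
print(parser.can_fetch("Googlebot", "https://example.com/guides/buying-guide"))
```

The trade-off the question identifies still stands: robots.txt spares crawl budget but stops those pages passing internal link value, while "noindex, follow" keeps link value flowing at the cost of Google crawling all 5,000 thin pages.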
Why are these results being shown as blocked by robots.txt?
If you perform this search, you'll see all m. results are blocked by robots.txt: http://goo.gl/PRrlI, but when I reviewed the robots.txt file: http://goo.gl/Hly28, I didn't see anything specifying to block crawlers from these pages. Any ideas why these are showing as blocked?
Intermediate & Advanced SEO | | nicole.healthline -
Dropped Out of Google and Bing
I am helping with a site that I once had on page 1 of Google/Bing. The site started to slip in rankings; then someone else did a makeover of the store and botched things by renaming pages, introducing errors in pages (multiple head/body tags), mismatching page names against the sitemap, etc. The site slipped to page 4/5. I righted things, fixed the duplication using canonicalization, and made some other changes. Now the site is gone completely from Google/Bing for the desired keyword. No penalties. The site still shows if you do a search on the domain name. The site is www.plussizeplum.com (plus size lingerie, sorry); the keyword target is "plus size lingerie". Anyone have any clues, tips, etc. on why we fell off the face of the earth? Page Authority/Domain Authority are both comparable to most of the page 1/2 sites for the same keyword. Thanks for any advice.
Intermediate & Advanced SEO | | dlcohen -
Panda Updates - robots.txt or noindex?
Hi, I have a site that I believe has been impacted by the recent Panda updates. Assuming that Google has crawled and indexed several thousand pages that are essentially the same, and that the site has now passed the threshold to be picked out by the Panda update, what is the best way to proceed? Is it enough to block the pages from being crawled in the future using robots.txt, or would I need to remove the pages from the index using the meta noindex tag? Of course, if I block the URLs with robots.txt then Googlebot won't be able to access the pages in order to see the noindex tag. Anyone have any previous experience of doing something similar? Thanks very much.
Intermediate & Advanced SEO | | ianmcintosh