Robots.txt Download vs Cache
-
We updated our robots.txt file this morning, after Google's initial download of the file. I then submitted the page through Fetch as Googlebot to get the changes picked up as soon as possible.
The cache timestamp on the page now shows Sep 27, 2013 15:35:28 GMT, which would put it at about six hours ago. However, the Blocked URLs tab in Google Webmaster Tools shows the robots.txt as last downloaded 14 hours ago, and it is therefore still showing the old file.
This leads me to believe that, for robots.txt, the cache date and the download time are independent. Is there any way to get Google to recognize the new file other than waiting this out?
-
Not to my knowledge; you will have to wait. That said, Google may well have already downloaded the new robots.txt even though the reports still show the older file; those reports always take a while to refresh completely. Google generally caches a robots.txt file for up to a day, so the new rules should take effect within roughly 24 hours of the change.
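In the meantime, you can at least confirm that your server is serving the new file, so the only remaining variable is Google's cache. Below is a minimal sketch using Python's standard library; the domain and test path are placeholders, and note that urllib.robotparser does plain prefix matching rather than Google's full wildcard syntax:

```python
import urllib.request
import urllib.robotparser

SITE = "https://www.example.com"  # placeholder: your own domain

# Fetch the live robots.txt to see exactly what the server is sending,
# along with any caching headers.
req = urllib.request.Request(SITE + "/robots.txt",
                             headers={"User-Agent": "robots-check"})
with urllib.request.urlopen(req) as resp:
    for header in ("Date", "Last-Modified", "Cache-Control", "Expires"):
        print(f"{header}: {resp.headers.get(header)}")
    print(resp.read().decode("utf-8", errors="replace"))

# Check a sample URL against the live rules (prefix matching only).
rp = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
rp.read()
print("Googlebot may fetch /some/page:",
      rp.can_fetch("Googlebot", SITE + "/some/page"))
```

If the live file looks right, the stale data in the Blocked URLs tab is just Google's cached copy plus report lag, and it should clear on its own.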
Related Questions
-
Backlink quality vs quantity: Should I keep spammy backlinks?
Regarding backlinks, I'm wondering which is more advantageous for domain authority and Google reputation:
Option 1: More backlinks, including a lot of spammy links
Option 2: Fewer backlinks, but only reliable, non-spam links
I've researched this topic around the web a bit and understand that the answer is somewhere in the middle, but given my site's specific backlink volume, the answer might lean one way or the other. For context, my site has a spam score of 2%, and when I did a quick backlink audit, roughly 20% are ones I want to disavow. However, I don't want to eliminate so many backlinks that my DA goes down. As always, we are working to build quality backlinks, but I'm interested in whether eliminating 20% of backlinks will hurt my DA. Thank you!
Technical SEO | | LianaLewis1 -
Why Does My Google Web Cache Redirect to My Homepage?
Why does my Google web cache appear for a short period of time and then automatically redirect to my homepage? Is there something wrong with my robots.txt? The only paths I have blocked are below:
User-agent: *
Disallow: /bin/
Disallow: /common/
Disallow: /css/
Disallow: /download/
Disallow: /images/
Disallow: /medias/
Disallow: /ClientInfo.aspx
Disallow: /*affiliateId*
Disallow: /*referral*
Technical SEO | | Francis.Magos0 -
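As an aside on the last two rules in the question above: /*affiliateId* and /*referral* use Google's wildcard syntax, where * matches any sequence of characters, so each of those single lines can block many URLs. A rough, illustrative way to see which paths a wildcard rule catches is to translate it into a regular expression. The sketch below is a simplified matcher, not a full robots.txt implementation (it ignores Allow rules and longest-match precedence), and the test paths are hypothetical:

```python
import re

def rule_to_regex(rule: str):
    # Google-style pattern: '*' matches any character sequence,
    # a trailing '$' anchors the end; otherwise it's a prefix match.
    anchored = rule.endswith("$")
    body = rule[:-1] if anchored else rule
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile(regex + ("$" if anchored else ""))

disallow_rules = ["/bin/", "/css/", "/*affiliateId*", "/*referral*"]

# Hypothetical paths tested against the rules above.
for path in ("/css/main.css", "/product?affiliateId=42",
             "/blog/referral-program", "/about"):
    blocked = any(rule_to_regex(r).match(path) for r in disallow_rules)
    print(f"{path}: {'blocked' if blocked else 'allowed'}")
```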
Robots and Canonicals on Moz
We noticed that Moz does not use robots "index" or "follow" tags anywhere on the site; is this best practice? Also, for pagination, we noticed that rel="next"/"prev" is not on the actual "button" but rather in the header. Is this best practice? Does it make a difference whether it's added to the header rather than to the actual next/previous buttons within the body?
Technical SEO | | PMPLawMarketing0 -
Cached version of website
Hi, Upon checking the text-only cached view of our home page, I noticed the mobile menu links also appear there as plain text, which looks odd. Please see: http://webcache.googleusercontent.com/search?q=cache:indialetsplay.com&biw=1366&bih=638&noj=1&strip=1 Our developer told us that he created two separate menus, one for the desktop version and one for mobile, in order to support the design requirements. Is duplicating the menu a problem for on-page SEO? What is the best way to handle it?
Technical SEO | | Obbserv0 -
Blocking Affiliate Links via robots.txt
Hi, I work with a client who has a large affiliate network pointing to their domain, which is a large part of their inbound marketing strategy. All of these links point to a subdomain, affiliates.example.com, which then passes them through a 301 redirect to the relevant target page. These links have been showing up in Webmaster Tools as top linking domains and also in the latest downloaded links reports. To follow guidelines and ensure that these links aren't counted by Google for either positive or negative impact on the site, we have added a block to the robots.txt of the affiliates.example.com subdomain, blocking search engines from crawling the full subdomain. The robots.txt file contains the following:
User-agent: *
Disallow: /
We have authenticated the subdomain with Google Webmaster Tools and made certain that Google can reach and read the robots.txt file, so we know search engines are being blocked from the affiliates subdomain. However, we added this block a few weeks ago, and links are still showing up in the latest links report as first discovered after we added it. It's been a few weeks already, and we want to make sure that the block was implemented properly and that these links aren't being used to negatively impact the site. Any suggestions or clarification would be helpful: if the subdomain is blocked for search engines, why are they following the links and reporting them in the www.example.com GWMT account as latest links? And if the block is implemented properly, will the total number of links pointing to our site, as reported in the "links to your site" section, be reduced, or does this not have an impact on that figure? From a development standpoint, it's a much easier fix for us to adjust the robots.txt file than to change the affiliate linking connection from a 301 to a 302, which is why we decided to go with this option. Any help you can offer will be greatly appreciated. Thanks, Mark
Technical SEO | | Mark_Ginsberg0 -
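A quick sanity check on a setup like the one described above is to confirm, via the standard-library parser, that the subdomain's robots.txt really disallows everything for every user agent. A minimal sketch; affiliates.example.com is the placeholder domain from the question:

```python
import urllib.robotparser

# Placeholder subdomain from the question above.
SUBDOMAIN = "https://affiliates.example.com"

rp = urllib.robotparser.RobotFileParser(SUBDOMAIN + "/robots.txt")
rp.read()

# With "User-agent: *" / "Disallow: /", every path should be disallowed
# for every crawler, including Googlebot.
for agent in ("Googlebot", "bingbot"):
    for path in ("/", "/redirect?id=123"):
        allowed = rp.can_fetch(agent, SUBDOMAIN + path)
        print(f"{agent:10s} {path:20s} allowed={allowed}")
```

Keep in mind that a robots.txt block only stops crawling; Google can still discover the links pointing at the subdomain and list them in the links reports, which is consistent with what the question describes.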
Schema.org implementation for physician's office vs physician herself?
Hi, Regarding schema.org microdata, which page(s) should have the microdata?
1) http://schema.org/Physician appears to be about the office. Since we have all of the contact/address info in the footer of each page, should we do the same with the microdata? I can't seem to find a suggested implementation on schema.org.
2) Assuming an office has multiple MDs, how should the doctors be listed, since the Physician schema appears to be for the office, not for the individual doctors?
Thanks for any insight!
Technical SEO | | Titan5520 -
Robots.txt anomaly
Hi, I'm monitoring a site that's had a new design relaunch and a new robots.txt added. Over the week since launch, Webmaster Tools has shown a steadily increasing number of blocked URLs (now at 14), yet the robots.txt file has only 12 lines with the Disallow directive. Could this be occurring because a single line can refer to more than one page/URL? They all look like single URLs, for example:
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
etc. And is it normal for Webmaster Tools' reporting of robots.txt-blocked URLs to steadily increase in number over time, as opposed to their all being identified straight away? Thanks in advance for any help/advice/clarity on why this may be happening. Cheers, Dan
Technical SEO | | Dan-Lawrence0 -
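For what it's worth, the hypothesis in the question above matches how robots.txt works: each Disallow value is a path prefix, so a single line blocks every URL beneath it, and Webmaster Tools typically adds blocked URLs to the report gradually as they are recrawled. A small illustration using Python's standard-library parser, with hypothetical example.com URLs:

```python
import urllib.robotparser

# Rules copied from the question; one Disallow line per path prefix.
rp = urllib.robotparser.RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
""".splitlines())

# A single prefix rule blocks any number of URLs beneath it.
for url in (
    "http://example.com/wp-content/plugins/seo-plugin/readme.txt",
    "http://example.com/wp-content/plugins/contact-form/style.css",
    "http://example.com/wp-content/themes/twentytwelve/style.css",
    "http://example.com/blog/robots-txt-anomaly",
):
    status = "blocked" if not rp.can_fetch("*", url) else "allowed"
    print(f"{status:7s} {url}")
```

Here the three Disallow lines account for three of the four test URLs, which is how 12 directive lines can easily produce 14 or more blocked URLs in the report.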
Root directory vs. subdirectories
Hello. How much more important does Google consider pages in the root directory relative to pages in a subdirectory? Is it best to keep the most important pages of a site in the root directory? Thanks!
Technical SEO | | nyc-seo0