Do I have a robots.txt problem?
-
I have the little yellow exclamation point under my robots.txt fetch, as you can see here: http://imgur.com/wuWdtvO
This version shows no errors or warnings: http://imgur.com/uqbmbug
Under the tester I can currently see the latest version. This site hasn't changed URLs recently, and we haven't made any changes to the robots.txt file in two years. The problem just started in the last month. Should I worry?
-
Today it has a green check mark, and absolutely no changes have been made to the website since I asked this question.
-
It could be that your server was having a hard time responding when Google tried to fetch your robots.txt file, which is why the fetch failed. As long as the issue doesn't keep blocking Google in the future, it's not much to worry about.
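As a rough illustration of why a transient server hiccup shows up as a robots.txt warning and then clears on its own, here is a minimal sketch of how crawlers commonly interpret the fetch result (simplified from Google's documented handling; the function name is made up for illustration):

```python
# Sketch: how a robots.txt fetch result is commonly interpreted by crawlers.
# Simplified from Google's documented handling; the function name is made up.
def interpret_robots_fetch(status):
    if status is None:
        return "unreachable"      # DNS failure or timeout: crawling may be postponed
    if 200 <= status < 300:
        return "ok"               # the rules in the file apply
    if 400 <= status < 500:
        return "no restrictions"  # treated as if no robots.txt exists
    if status >= 500:
        return "server error"     # treated as temporarily unreachable; crawling may pause
    return "other"

# A transient 5xx (or a timeout) during Google's fetch would explain a warning
# that later clears once the server responds normally again.
print(interpret_robots_fetch(503))
print(interpret_robots_fetch(200))
```

So a single failed fetch produces the yellow warning, and the next successful fetch brings back the green check mark, with no change needed on the site.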
-
That would make me feel more confident that a false error was reported. Time to closely monitor the crawl logs, look at server stats, and keep an eye on GWT for a change in the reporting/indexing. I would also post in the GWT forums to see if anyone else has reported a similar error in the past couple of days.
-
I can't post the domain, but I know it is accessible.
When I go to the tester it shows the live robots.txt with no problems. I can also see in the server logs that the site is being crawled, though less by Google than by Bing. Bing Webmaster Tools is also showing no problems.
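For comparing crawler activity in raw access logs, one quick approach is to tally hits by user-agent string. A minimal sketch, assuming a combined-format log (the log lines below are made-up samples, not real data):

```python
# Sketch: count crawler hits in a combined-format access log by user agent.
# The log lines are illustrative samples, not real data.
from collections import Counter

SAMPLE_LOG = '''\
66.249.66.1 - - [01/May/2014:10:00:00 +0000] "GET /robots.txt HTTP/1.1" 200 120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
157.55.39.5 - - [01/May/2014:10:01:00 +0000] "GET / HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
157.55.39.6 - - [01/May/2014:10:02:00 +0000] "GET /page HTTP/1.1" 200 4096 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
'''

def crawler_counts(log_text):
    counts = Counter()
    for line in log_text.splitlines():
        ua = line.rsplit('"', 2)[-2]  # last quoted field is the user agent
        for bot in ("Googlebot", "bingbot"):
            if bot in ua:
                counts[bot] += 1
    return counts

print(crawler_counts(SAMPLE_LOG))
```

Keep in mind that user-agent strings can be spoofed; a hit claiming to be Googlebot can be verified with a reverse DNS lookup on the requesting IP.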
-
Can you post your domain? Manually checking the robots.txt file would help.
I've checked many of my GWT accounts and I'm not seeing a sudden robots.txt error, so it could be a false report, but I would take anything involving the robots.txt file seriously. You'll want to make sure the file is in fact accessible to every crawler you want on the site.
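One way to double-check what a robots.txt file actually permits for each crawler is Python's standard-library parser. A minimal sketch with a made-up rule set (not this site's real file):

```python
# Sketch: check what a robots.txt actually allows per crawler, using
# Python's standard-library parser. The rules below are a made-up example.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow:
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The Googlebot group (empty Disallow = allow all) overrides the catch-all group.
print(rp.can_fetch("Googlebot", "http://example.com/private/page"))  # True
print(rp.can_fetch("SomeBot", "http://example.com/private/page"))    # False
```

Running a check like this per user agent confirms whether the rules say what you think they say, independent of what GWT reports on a given day.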
Related Questions
-
Indexation and visibility problem
Hi, I have been working on a website (usarrestsearch org) for 6 months. I wrote about 100 pages full of good content, but for some reason I see only 75% of the pages indexed in GWT, and I'm having problems with SERP positions not rising. I suspect it might be connected to the structure of the site. Will appreciate any help. Thanks
Technical SEO | | holdportals0 -
Is there a limit to how many URLs you can put in a robots.txt file?
We have a site that has way too many URLs caused by our crawlable faceted navigation. We are trying to purge 90% of our URLs from the indexes. We put noindex tags on the URL combinations that we do not want indexed anymore, but it is taking Google way too long to find the noindex tags. Meanwhile we are getting hit with excessive-URL warnings and have been hit by Panda. Would it help speed the process of purging URLs if we added the URLs to the robots.txt file? Could this cause any issues for us? Could it have the opposite effect and block the crawler from finding the URLs but not purge them from the index? The list could be in excess of 100MM URLs.
Technical SEO | | kcb81780 -
Google indexing despite robots.txt block
Hi, this subdomain has about 4'000 URLs indexed in Google, although it's blocked via robots.txt: https://www.google.com/search?safe=off&q=site%3Awww1.swisscom.ch&oq=site%3Awww1.swisscom.ch This has been the case for almost a year now, and it does not look like Google respects the block in http://www1.swisscom.ch/robots.txt Any clues why this is or what I could do to resolve it? Thanks!
Technical SEO | | zeepartner0 -
Google Structured Data Problem
Hello everyone, about 1-2 weeks ago I implemented rich snippets (microdata) for the product pages of my e-commerce site. However, in Webmaster Tools, Google is saying that the crawlers did not detect any structured data on my site. I have also checked my pages using the Structured Data Testing Tool. You can see an example test result at the following address: http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.tarzimon.com%2Fproduct%2Fnaif-tasarim-torr-aydinlatma-1031 What may be causing this problem? Thank you for your help
Technical SEO | | hknkynr0 -
Problem With Video Sitemap Because All Videos Are at the Same URL
Hi, I created a video sitemap and now I'm getting an error in Webmaster Tools because the location for some of the videos is the same. It says: Duplicate URL - This URL is a duplicate of another URL in the sitemap. Please remove it and resubmit. What can I do if all my videos are located at the same URL? Thanks
Technical SEO | | Tug-Agency0 -
WordPress E-Commerce Plugin Duplicate Content Problem
I am working on a WordPress website that uses the WP E-Commerce plugin. I am using the Yoast SEO plugin but am not totally familiar with it. I have noticed that WP E-Commerce creates duplicate content issues. Here's an example: http://www.domain.com/parent-category/product-url-1/ is the same content as http://www.domain.com/parent-category/child-category/product-url-1/. I was wondering which of the following options is the best solution: 1. 301 redirect the multiple instances to one page
2. noindex all but one instance
3. Use the canonical tag (I've used this tag before to tell SEs to use the www version of a page, but I'm not sure if it's appropriate for this)
4. A combination of these options? Thanks in advance!
Technical SEO | | theanglemedia0 -
How to publish duplicate content legitimately without Panda problems
Let's imagine that you own a successful website that publishes a lot of syndicated news articles and syndicated columnists. Your visitors love these articles and columns, but the search engines see them as duplicate content. You worry about being viewed as a "content farm" because of this duplicate content and getting the Panda penalty. So, you decide to continue publishing the content and use... <meta name="robots" content="noindex, follow"> This allows you to display the content for your visitors, but it should stop the search engines from indexing any pages with this code. It should also allow robots to spider the pages and pass link value through them. I have two questions: If you use "noindex", will that be enough to prevent your site from being considered a content farm? Is there a better way to continue publishing syndicated content while protecting the site from duplicate content problems?
Technical SEO | | EGOL0 -
Slash at end of URL causing Google crawler problems
Hello, We are having some problems with a few of our pages being crawled by Google, and it looks like the slash at the end of the URL is causing the problem. Would appreciate any pointers on this. We have a redirect in place that redirects the "no slash" URL to the "slash" URL for all pages. The obvious solution would be to try turning this off; however, we're unable to figure out where this redirect is coming from. There doesn't appear to be an instruction in our .htaccess file doing this, and we've also tried using "DirectorySlash Off" in the .htaccess file, but that doesn't work either. (If it makes a difference, it is a 302 redirect doing this, not a 301.) If we can't get the above to work, then the other solution would be to somehow reconfigure the page so that Google recognizes it with the slash at the end. However, we're not sure how this would be done. I think the quickest solution would be to turn off the "add slash" redirect. Any ideas on where this command might be hiding, and how to turn it off, would be greatly appreciated. Or any tips from people who have had similar crawl problems with Google and any workarounds would be great! Thanks!
Technical SEO | | onetwentysix0