What can I do if Google Webmaster Tools doesn't recognize the robots.txt file?
-
I'm working on a recently hacked site for a client, and in trying to identify exactly how the hack is running I need to use the Fetch as Googlebot feature in GWT.
I'd love to use it, but GWT thinks the robots.txt is blocking its access, even though the only thing in the robots.txt file is a link to the sitemap.
Under the Blocked URLs section of GWT it shows that the robots.txt was last downloaded yesterday, but the information is incorrect. Is there a way to force Google to look again?
-
No, but they might write to it, modify it, or do all sorts of other nasty stuff I've seen hackers do when they get hold of any writable file on a system.
-
Lol, it's a robots.txt file. What are they going to do, steal it? I should have clarified: set it to 777 just to make sure permissions aren't your problem, then yes, change them back to something tighter.
-
Eesh, I don't recommend 777. Use 644 or, if you're going to change it right back, 755 at most.
-
File permissions, maybe? Change it to 777 and try again.
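If it helps, here's roughly what that looks like from a shell. The path is just an example; adjust it to wherever your document root actually lives:

# See who owns robots.txt and what its permissions currently are
ls -l /var/www/html/robots.txt

# If you do open it up to 777 as a test, set it back to something sane afterwards
chmod 644 /var/www/html/robots.txt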
-
If you have shell access on Linux, you can use wget, GET, or lynx.
If Google is getting the wrong robots.txt, then your web server must be sending out something other than the file you think it is.
What happens if you request the robots.txt URL directly in your browser?
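Or check it from the command line; something along these lines will show exactly what the server is sending, headers included (example.com is a placeholder for the client's domain):

# Fetch robots.txt and print the HTTP response headers along with the body
curl -i http://www.example.com/robots.txt

# wget equivalent: -S prints the server's response headers, -O - writes the body to stdout
wget -S -O - http://www.example.com/robots.txt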
-
Looking at my log files, I can see Google hits robots.txt just about every time it crawls our site.
What are you trying to accomplish with Fetch as Googlebot? Any chance cURL could do the job for you, or another tool that ignores robots.txt?
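For example, if the point is to see whether the hack cloaks different content to Google's crawler, you can roughly approximate that with cURL by spoofing the user agent. The URL is a placeholder, and bear in mind this is only an approximation, since some cloaking keys off Googlebot's IP ranges rather than the user-agent string:

# Grab the page as an ordinary browser
curl -A "Mozilla/5.0" -o normal.html http://www.example.com/

# Grab it again pretending to be Googlebot
curl -A "Googlebot/2.1 (+http://www.google.com/bot.html)" -o googlebot.html http://www.example.com/

# Any difference between the two files is a strong hint of cloaking
diff normal.html googlebot.html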
Related Questions
-
Selling same products under separate brands and can't consolidate sites...duplicate content issues?
I have a client selling home goods online and in-store under two different brand names in separate regions of the country. Currently, the websites are completely identical aside from branding. It is unlikely that they would have the capacity to write unique titles and page content for each website (~25,000 pages each), and the business would never consolidate the sites. Would it make sense to use canonical tags pointing to the higher-performing website on category and product pages? This way we could continue to capture branded search to the lesser brand while consolidating authority on the better performing website. What would you do?
Technical SEO | | jluke.fusion0 -
Content in Accordion doesn't rank as well as Content in Text box?
Does content rank better in a full-view text layout than in a clickable accordion? I read somewhere that because users need to click into an accordion, it may not rank as well, since it may be considered hidden content on the page. Is this true? Accordion example (see the features section): https://www.workday.com/en-us/applications/student.html
Technical SEO | | DigitalCRO1 -
Why doesn't SEOmoz see internal/external links on my site?
My SEOmoz analysis says that my site contains neither external nor internal links. I have used other tools and they have all seen the internal and external links on the pages. There aren't many, but they are there. Why isn't SEOmoz seeing them?
Technical SEO | | iain0 -
Weird 404 Errors in Webmaster Tools
Hi, in a regular check with Webmaster Tools, I have noticed a sudden increase in the number of "not found" 404 errors. I have been looking at them and noticed something weird is going on. There are well over 100 pages with 404 errors. The funny thing is, none of the URLs are correct. For example, if the actual URL is something like www.domain.com/latest-reviews, the 404 error points to a non-existent URL like www.domain.com/latest-re. And when I checked where they were linked from, they are all from spammy sites. Does anyone know what could be causing these links? Why would anyone link on purpose to a non-existent page? Cheers,
Technical SEO | | Gamer070 -
Webmaster Tools 404s
We try to keep our 404s in Google Webmaster Tools to a minimum, but in recent months the volume has simply exploded to over 500k errors. 99.95% of this is complete spam linking to pages that never existed. We have tried marking them as resolved, but they just end up back in the list eventually, and we don't like the idea of 301ing so many links when the pages never existed in the first place. We could just ignore them all, but that makes it hard to identify legitimate 404s that need redirecting, as there is only so much data we can export out of WT. Has anyone had experience with returning 410s? Does Google eventually drop these from WT?
Technical SEO | | jandunlop0 -
Can JavaScript affect Google's index/ranking?
We changed our website template about a month ago and since then have experienced a huge drop in rankings, especially for our home page. We kept the same URL structure across the entire website, pretty much the same content, and the same on-page SEO. We expected some rank drop, but not one this big. We used to rank with the homepage at the top of the second page, and now we have lost about 20-25 positions.
What we changed is the homepage structure: it is more user-friendly, with much better organized information, and we also have a slider presenting our main services. About 80% of our homepage content sits inside the slideshow and 3 tabs, but all these elements are JavaScript. The content is unique and SEO optimized, but when I disable JavaScript it becomes completely unavailable. Could this be the reason for the huge rank drop? I used Webmaster Tools' Fetch as Googlebot tool and it looks like Google reads perfectly what's inside the JavaScript slideshow, so I did not worry until I found this on SEOmoz: "Try to avoid ... using javascript ... since the search engines will ... not indexed them ..."
One more weird thing (not sure if this is important or not): although we have no duplicate content and the entire website has been cached, for a few pages (including the homepage) the picture snippet is from the old website. All main URLs are the same; we removed some old ones we don't need anymore, so we kept all the inbound links. The 301 redirects are properly set. But still, we have a huge rank drop. Also, the robots.txt file is disallowing some folders like images, modules, templates... (Joomla components). We still have some HTML errors and warnings, but way fewer than we had with the old website. Any advice would be much appreciated, thank you!
Technical SEO | | echo10 -
What is the sense of robots.txt?
Using robots.txt to prevent search engines from indexing a page is not a good idea, so what is the point of robots.txt? Is it just for attracting robots to crawl the sitemap?
Technical SEO | | jallenyang0 -
Having some weird crawl issues in Google Webmaster Tools
I am seeing a large number of errors in the Not Found section that are linked to old URLs that haven't been used for 4 years. Some of the URLs being linked to are not even in the structure we used to use for URLs. Nevertheless, Google is saying they are now 404ing, and there are hundreds of them. I know the best way to attack this is to 301 them, but I was wondering why all of these errors would be popping up. I can't find anything in the Google index searching for the link in quotes, and in Webmaster Tools it shows "unavailable" as where these are being linked from. Any help would be awesome!
Technical SEO | | Gordian1