Robots.txt - "File does not appear to be valid"
-
Good afternoon Mozzers!
I've got a weird problem with one of the sites I'm dealing with. For some reason, one of the developers changed the robots.txt file to disallow every page on the site - not a wise move!
To rectify this, we uploaded a new robots.txt file to the domain's root as per Webmaster Tools' instructions. The live file (http://www.savistobathrooms.co.uk/robots.txt) now contains only: User-agent: *
I've submitted the new file in Webmaster Tools and it's pulling it through correctly in the editor. However, Webmaster Tools is not happy with it, for some reason. I've attached an image of the error.
Does anyone have any ideas? I'm managing another site with the exact same robots.txt file and there are no issues.
Cheers,
Lewis
-
Thanks for the quick response, Patrick. Why, if this robots.txt file is incorrect, does it yield no errors on the other sites where we use it?
Cheers,
Lewis
-
Hi there
I want to say that it needs an...
Allow: /
...or at least a second rule group (a "Group 2" specification, in Google's terms) - as it stands, the User-agent: * line has no directives attached to it.
I would take a look at the robots.txt specification on Google Developers and see where you have opportunities to remedy this issue.
Hope this helps! Good luck!
Related Questions
-
How to stop robots.txt restricting access to sitemap?
I'm working on a site right now and having an issue with the robots.txt file restricting access to the sitemap - with no web dev to help, how can I fix the issue myself? The robots.txt file shows: User-agent: * Disallow: / followed by a Sitemap: line with the correct sitemap link.
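A hedged sketch of what the corrected file could look like, assuming the whole site should be crawlable and the sitemap URL below is just a placeholder for the real one:

User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml

Removing the "/" after Disallow is what lifts the site-wide block; the Sitemap: line can stay as it is.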
Technical SEO | Ad-Rank
-
"Ghost" errors on blog structured data?
Hi, I'm working on a blog whose Search Console account is reporting a big bunch of structured data errors (screenshots attached: the structured data graph, the hentry list, and the error detail). But when I run the pages through https://developers.google.com/structured-data/testing-tool/ it tells me everything is OK (test screenshot attached). Any clue? Thanks in advance
Technical SEO | Webicultors
-
We have 302 redirect links on our forum that point to individual posts. Should we add a rel="nofollow" to these links?
Moz is showing us that we have a HUGE amount of 302 redirects. These are coming from our community forum.
Forum URL: https://www.foodbloggerpro.com/community/
Example thread URL: https://www.foodbloggerpro.com/community/viewthread/322/
Example URL that points to a specific reply: https://www.foodbloggerpro.com/community/viewreply/1582/
The above link 302 redirects to this URL: https://www.foodbloggerpro.com/community/viewthread/322/#1582
My two questions: Do you think we should add rel=nofollow to the specific reply URLs? And, if possible, should we make those redirects 301 instead of 302? Screencast attached (nofollow_302.mp4).
Technical SEO | Bjork
-
"Search Box Optimization"
A client of ours recently received an email from a random SEO "company" claiming they could increase website traffic using a technique known as "search box optimization". Essentially, they claim they can insert a company name into Google's autocomplete results. Clearly, this isn't a legitimate service - however, is it a well-known technique? Despite our recommendation not to move forward with it, the client is still very intrigued. Here is a video of a similar service:
https://www.youtube.com/watch?v=zW2Fz6dy1_A
Technical SEO | McFaddenGavender
-
Robots.txt
Hello, my client has a robots.txt file which says this: User-agent: * Crawl-delay: 2. I put it through a robots checker which said that it must have a Disallow directive. So should it say this instead: User-agent: * Disallow: Crawl-delay: 2? What effect (if any) would leaving out the Disallow line have? Thanks
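For reference, a sketch of the version the checker seems to be asking for - the empty Disallow satisfies the syntax without blocking anything:

User-agent: *
Disallow:
Crawl-delay: 2

Note that Google ignores Crawl-delay, although Bing and some other crawlers do honour it.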
Technical SEO | AL123al
-
Objects behind "hidden" elements
If you take a look at this page: http://www.americanmuscle.com/2010-mustang-body-kits.html you will notice we have a little "Read More" script set up. I have used Google's data validator to test the structured data located behind this "Read More" and it checks out OK, but I was wondering if anyone has insight into whether the spiders are even seeing the links, etc. behind the "Read More" script.
Technical SEO | andrewv
-
Client accidentally blocked entire site with robots.txt for a week
Our client was having a design firm do some website development work for them. The work was done on a staging server that was blocked with a robots.txt to prevent duplicate content issues. Unfortunately, when the design firm made the changes live, they also moved over the robots.txt file, which blocked the good, live site from search for a full week. We saw the error (!) as soon as the latest crawl report came in. The error has been corrected, but... Does anyone have any experience with a snafu like this? Any idea how long it will take for the damage to be reversed and the site to get back in the good graces of the search engines? Are there any steps we should take in the meantime that would help to rectify the situation more quickly? Thanks for all of your help.
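For anyone comparing files after a similar snafu, the difference is often a single character. A rough sketch, assuming the staging server used a blanket block:

Staging robots.txt (blocks everything):
User-agent: *
Disallow: /

Live robots.txt (blocks nothing):
User-agent: *
Disallow:

Once the corrected file is live, resubmitting it in Webmaster Tools and fetching key pages as Google tends to speed up recovery.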
Technical SEO | pixelpointpress