Can't find mistake in robots.txt
-
Hi all,
We recently filled in our robots.txt file to prevent some directories from being crawled.
It looks like this:

```
User-agent: *
Disallow: /Views/
Disallow: /login/
Disallow: /routing/
Disallow: /Profiler/
Disallow: /LILLYPROFILER/
Disallow: /EventRweKompaktProfiler/
Disallow: /AccessIntProfiler/
Disallow: /KellyIntProfiler/
Disallow: /lilly/
```
Now, as Google Webmaster Tools hasn't updated our robots.txt yet,
I checked our robots.txt in some checkers.
They tell me that the `User-agent: *` line contains an error.
**Example:**

```
Line 1: Syntax error! Expected <field>: <value>
1: User-agent: *
```

I checked other robots.txt files written the same way, and they work according to the checkers...

**Where on earth is the mistake?**
-
Hi,
Just wondering: did you save the txt file in ANSI format? Sometimes people mistakenly save it in a different format, and that is where the problem creeps in.
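The encoding point above is easy to verify: a UTF-8 byte-order mark at the start of the file makes the first line look malformed to strict validators, which matches the "syntax error on line 1" message. A minimal sketch (the file name and contents here are illustrative, not from the original site):

```python
# Detect a UTF-8 byte-order mark at the start of a robots.txt file.
# A BOM makes the first line read as "\ufeffUser-agent: *", which
# strict robots.txt validators report as a syntax error on line 1.
import codecs

def has_utf8_bom(path):
    with open(path, "rb") as f:
        return f.read(3) == codecs.BOM_UTF8

# Simulate a file saved "with BOM" by an editor:
with open("robots_bom.txt", "wb") as f:
    f.write(codecs.BOM_UTF8 + b"User-agent: *\nDisallow: /Views/\n")

print(has_utf8_bom("robots_bom.txt"))  # True
```

If the check comes back true, re-saving the file as plain ASCII or UTF-8 without BOM usually clears the error.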
-
Hi!
The robots.txt is fine. Some checkers flag wildcards as an error, as not all crawlers support "*". I wouldn't worry about it.
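For what it's worth, Python's standard-library robots.txt parser accepts a `User-agent: *` group without complaint; a quick sketch (using two of the disallowed paths from the file above, with a hypothetical domain and crawler name):

```python
# Parse the rules with the standard-library robots.txt parser and
# query them: a "User-agent: *" group is perfectly valid and applies
# to any crawler name.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /Views/
Disallow: /login/
""".splitlines())

print(rp.can_fetch("SomeBot", "https://example.com/login/secret"))  # False
print(rp.can_fetch("SomeBot", "https://example.com/contact"))       # True
```

A file that parses cleanly here will be honoured by compliant crawlers, whatever a third-party checker says about the wildcard.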
Related Questions
-
Using one robots.txt for two websites
I have two websites that are hosted in the same CMS. Rather than having two separate robots.txt files (one for each domain), my web agency has created one which lists the sitemaps for both websites, like this:

```
User-agent: *
Disallow:
Sitemap: https://www.siteA.org/sitemap
Sitemap: https://www.siteB.com/sitemap
```

Is this OK? I thought you needed one robots.txt per website which provides the URL for the sitemap. Will having both sitemap URLs listed in one robots.txt confuse the search engines?
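At minimum, multiple `Sitemap:` lines in one file are valid and parseable; Python's standard-library parser (3.8+) collects them all, as this sketch with the snippet from the question shows:

```python
# Parse the combined robots.txt and list the sitemaps it declares;
# Sitemap lines are collected independently of any User-agent group.
# Requires Python 3.8+ for site_maps().
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""\
User-agent: *
Disallow:
Sitemap: https://www.siteA.org/sitemap
Sitemap: https://www.siteB.com/sitemap
""".splitlines())

print(rp.site_maps())
# ['https://www.siteA.org/sitemap', 'https://www.siteB.com/sitemap']
```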
Technical SEO | ciehmoz
-
Webmaster Tools Hentry showing pages that don't exist
In Webmaster Tools I have a ton of pages listed under Structured Data >> Hentry. These pages are not on my website and I don't know where they are coming from. I redid the site for someone, so perhaps they are from the old site. How do I find and delete them? Thank you, Rena
Technical SEO | renalynd27
-
Will a robots.txt disallow apply to a 301ed URL?
Hi there, I have a robots.txt query which I haven't tried before, and as we're nearing a big time for sales, I'm hesitant to just roll it out to live! Say, for example, in my robots.txt I disallow the URL 'example1.html'. In reality, 'example1.html' 301s/302s to 'example2.html'. Would the robots.txt directive also apply to 'example2.html' (disallow), or, as it's a separate URL, would the directive be ignored as it's not valid? I have a feeling that, as it's a separate URL, the robots disallow directive won't apply. However, I just thought I'd sense-check with the community.
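That feeling can be sense-checked with the standard-library parser: robots.txt rules are matched by prefix against the URL a crawler requests, before any redirect is followed, so a rule for one URL says nothing about its redirect target. A sketch using the hypothetical example1/example2 names from the question:

```python
# A Disallow rule for /example1.html blocks that URL only; its
# redirect target /example2.html is a separate URL and stays allowed
# unless it has a rule of its own.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /example1.html
""".splitlines())

print(rp.can_fetch("*", "https://example.com/example1.html"))  # False
print(rp.can_fetch("*", "https://example.com/example2.html"))  # True
```

Note, though, that a blocked source URL means crawlers never request it and so never discover the redirect at all.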
Technical SEO | ecommercebc
-
Should I add 'nofollow' to site wide internal links?
I am trying to improve the internal linking structure on my site and ensure that the most important pages have the most internal links pointing to them (which I believe is the best strategy from Google's perspective!). I have a number of internal links in the page footer going to pages such as 'Terms and Conditions', 'Testimonials', 'About Us' etc. These pages, therefore, have a very large number of links going to them compared with the most important pages on my site. Should I add 'nofollow' to these links?
Technical SEO | Pete4
-
What to do with 404 errors when you don't have a similar new page to 301 to?
Hi, If you have 404 errors for pages that you don't have similar content pages to 301 them to, should you just leave them (the 404s are optimised/good quality with related links and branding etc.) so they will eventually be de-indexed since they no longer exist, or should you 'remove URL' in GWT? Cheers, Dan
Technical SEO | Dan-Lawrence
-
'External nofollow' in a robots meta tag? (advertorial links)
I believe this has never worked? It'd be an easy way of preventing any penalties from Google's recent crackdown on paid links via advertorials. When it's not possible to nofollow each external link individually, what are people doing? Nofollowing and/or noindexing the whole page?
Technical SEO | Alex-Harford
-
I have 404 errors but can't find where these links are?
The 4xx report had 0 errors, and then on the recent crawl it found over 200. They are all variations on real URLs, e.g.: Real URL: http://www.bullseyeuk.com/10-up-deluxe-literature-holder.html 404 Error URL: http://www.bullseyeuk.com/10-up-deluxe-literature-holder.html with stray characters appended. None of them are linked to from the root domain, and I can't find where they are coming from. Any ideas? Thanks, Jack
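A common cause of this pattern is an invisible character appended to the link URL in the page source. Assuming the appended character is a trailing non-breaking space (the characters in the report are garbled, so the exact bytes are unrecoverable), the "same" URL percent-encodes to a different one, which then 404s:

```python
# A URL that differs from the real one only by a trailing U+00A0
# (non-breaking space) is still a distinct URL, so requests for it
# return 404 even though the visible path looks identical.
from urllib.parse import quote

clean = "/10-up-deluxe-literature-holder.html"
dirty = clean + "\u00a0"

print(quote(clean) == clean)  # True: nothing needed encoding
print(quote(dirty))           # ends in %C2%A0 -> a different URL
```

Searching the page templates for `&nbsp;` or raw non-breaking spaces inside `href` attributes is usually how these get tracked down.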
Technical SEO | JackMurphy
-
Removing robots.txt on WordPress site problem
Hi, I am a little confused. I ticked the box in WordPress to allow search engines to crawl my site (previously I had asked for them not to), but Google Webmaster Tools is telling me I still have robots.txt blocking them, so I am unable to submit the sitemap. I checked the source code and the robots instruction has gone, so I'm a little lost. Any ideas, please?
Technical SEO | Wallander