Can't find mistake in robots.txt
-
Hi all,
we recently filled in our robots.txt file to prevent some directories from being crawled.
It looks like this:

```
User-agent: *
Disallow: /Views/
Disallow: /login/
Disallow: /routing/
Disallow: /Profiler/
Disallow: /LILLYPROFILER/
Disallow: /EventRweKompaktProfiler/
Disallow: /AccessIntProfiler/
Disallow: /KellyIntProfiler/
Disallow: /lilly/
```
Now, as Google Webmaster Tools hasn't updated our robots.txt yet,
I checked our robots.txt in some checkers.
They tell me that the `User-agent: *` line contains an error.
Example:

```
Line 1: Syntax error! Expected <field>:<value>
1: User-agent: *
```

I checked other robots.txt files written the same way --> they work,
according to the checkers...

Where the .... is the mistake???
-
Hi,

Just wondering: did you save the txt file in ANSI format? Sometimes people mistakenly save it in a different format, and this is where the problem creeps in.
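If you want to rule that out quickly: a common variant of the save-format problem is an invisible UTF-8 byte order mark (BOM) at the very start of the file, which makes strict parsers reject line 1 even though the text looks fine in an editor. Here's a minimal Python sketch, assuming you have a local copy of the file saved as robots.txt, that checks for one:

```python
import codecs

def has_utf8_bom(path: str) -> bool:
    """Return True if the file starts with the UTF-8 byte order mark (EF BB BF)."""
    with open(path, "rb") as f:
        return f.read(3) == codecs.BOM_UTF8

# "robots.txt" is a hypothetical local copy of the live file.
if has_utf8_bom("robots.txt"):
    print("File starts with a UTF-8 BOM -- re-save it without one.")
else:
    print("No BOM found; the save format is probably not the culprit.")
```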
-
Hi!
The robots.txt is fine. Some checkers report the wildcard as an error, since not all crawlers support `*`. I wouldn't worry about it.
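If you'd like a second opinion that doesn't depend on any particular online checker, Python's standard library ships a robots.txt parser. Here's a minimal sketch that feeds it some of the rules from the question; example.com is just a placeholder domain, since only the path part matters for matching:

```python
from urllib import robotparser

# A few of the rules from the question, passed straight to the stdlib parser.
rules = """\
User-agent: *
Disallow: /Views/
Disallow: /login/
Disallow: /lilly/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A path the rules block, and one they allow.
print(rp.can_fetch("*", "http://example.com/Views/index.html"))  # False
print(rp.can_fetch("*", "http://example.com/contact.html"))      # True
```

If the parser applies the Disallow rules as expected, the file itself is well-formed and the checker's complaint can safely be ignored.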
Related Questions
-
Disallow wildcard match in Robots.txt
This is in my robots.txt file. Does anyone know what this is supposed to accomplish? It doesn't appear to be blocking URLs with question marks:

```
Disallow: /?crawler=1
Disallow: /?mobile=1
```

Thank you
Technical SEO | AmandaBridge
-
How Many Words To Make Content 'unique?'
Hi All, I'm currently working on creating a variety of new pages for my website. These pages are based upon different keyword searches for cars, for example "used BMW in London", "used BMW in Edinburgh", and many more similar variations. I'm writing some content for each page so that they're completely unique to each other (the cars displayed on each page will also be different, so this would not be duplicated either). My question is really: how much content do you think I'll need on each page? Or what is optimal? What would be the minimum you might need? Thanks for your help!
Technical SEO | Sandicliffe
-
New Page Showing Up On My Reports w/o Page Title, Words, etc - However, I didn't create it
I have a WordPress site and I was doing a crawl for errors, and it is now showing up as of today that this page: https://thinkbiglearnsmart.com/event-registration/?event_id=551&name_of_event=HTML5 CSS3 is new and has no page title, words, etc. I am not even sure where this page or URL came from. I was messing with the robots.txt file to allow some /category/ posts that were being hidden, but I didn't re-allow anything with the above appendages. I just want to make sure that I didn't screw something up that is now going to impact my rankings. This was just a really odd message to come up, as I didn't create this page recently, and it shouldn't even be a page accessible to the public. When I edit the page, it is using an Event Espresso (WordPress plugin) shortcode, and I don't want to noindex this page as it is all of my events. Sorry this post is confusing; any help or insight would be appreciated! I am also interested in hiring someone for some hourly consulting work on SEO-type issues if anyone has any references. Thank you!
Technical SEO | webbmason
-
Can the hosting location of image files have a negative effect if 'off-site', such as on the dev's own media server?
Hi. Can the hosting location of image files have a negative effect if 'off-site', such as if they are on the developer's own media server, as opposed to on the actual website's server or file structure? In the case I'm looking at, the image files are hosted on a totally separate server (a media subdomain of the developer's site server) from the subject site's dedicated server. Will engines still attribute the properties of files hosted in this manner to the main website (such as file name, alt attributes, etc.)? Or should they really be in the subject site's own media folder? Cheers, Dan
Technical SEO | Dan-Lawrence
-
Have I constructed my robots.txt file correctly for sitemap autodiscovery?
Hi, here is my sitemap:

```
User-agent: *
Sitemap: http://www.bedsite.co.uk/sitemaps/sitemap.xml

# Directories
Disallow: /sendfriend/
Disallow: /catalog/product_compare/
Disallow: /media/catalog/product/cache/
Disallow: /checkout/
Disallow: /categories/
Disallow: /blog/index.php/
Disallow: /catalogsearch/result/index/
Disallow: /links.html
```

I'm using Magento and want to make sure I have constructed my robots.txt file correctly with the sitemap autodiscovery? Thanks
Technical SEO | Bedsite
-
No Google cached snapshot image... 'Text-only version' working.
We are having an issue with Google's cached image snapshots... Here is an example: http://webcache.googleusercontent.com/search?q=cache:IyvADsGi10gJ:shop.deliaonline.com/store/home-and-garden/kitchen/morphy-richards-48781-cooking/ean/5011832030948+&cd=308&hl=en&ct=clnk&gl=uk I wondered if anyone knows or can see the cause of this problem? Thanks
Technical SEO | pekler
-
Keyword rankings improve but traffic doesn't
I am working on a couple of SEO projects and have noticed over the past couple of months that the keyword rankings have improved immensely, with most of them amongst the top 10 on Google, but the traffic on the website still doesn't improve much. Can somebody explain to me the possible reasons behind this, and what can I do to attract more traffic?
Technical SEO | KS__
-
Robots.txt blocking site or not?
Here is the robots.txt from a client site. Am I reading this right --
that the robots.txt is saying to ignore the entire site, but the
#'s are saying to ignore the robots.txt command?

```
# See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
# To ban all spiders from the entire site uncomment the next two lines:
# User-Agent: *
# Disallow: /
```

Technical SEO | 540SEO