Robots.txt - Allow and Disallow. Can they be the same?
-
Hi All,
I need some help on the following:
Are the following commands the same?
User-agent: *
Disallow:
or
User-agent: *
Allow: /
I'm a bit confused. I take it that the first one allows all the bots but the second one blocks all the bots.
Is that correct?
Many thanks,
Aidan
-
Hi Aidan
I'm getting a similar problem on a site I'm working on. The on-page rank checker "can't reach the page". I've checked everything obvious (at least I think I have!).
May I ask how you eventually resolved it?
Thanks Aidan
-
Hi
you can use this tool to be sure that the crawler can see your files:
http://pro.seomoz.org/tools/crawl-test
but you must wait to receive the report by email.
When you say:
"get the following msg when I try to run On Page Analysis:"
is the tool this one?
http://pro.seomoz.org/tools/on-page-keyword-optimization/new
To check the website you can use this:
http://www.opensiteexplorer.org
Ciao
Maurizio
-
Hi,
Thanks for the clarification. So the robots.txt isn't blocking anything.
Do you know why, then, I cannot use SEOmoz On Page Analysis, and why Xenu and Screaming Frog only return 3 URLs?
I get the following message when I try to run On Page Analysis:
"Oops! We were unable to reach the page you requested for your report. Please try again later."
Would there be something else blocking me? GWMT parameters maybe?
-
It's a pleasure, but I don't understand the problem.
If the site has this robots.txt:
User-agent: *
Allow: /
then every crawler, SEOmoz included, can see and index all the files of the website.
Maybe the problem is something different?
Ciao
-
Thanks Maurizio,
I need to do some analysis on this site. Is there a way to make my SEO tools (Screaming Frog, SEOmoz) ignore the robots.txt so that I can do a good site audit?
Thanks again for the answers. Much appreciated
Aidan
-
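A note for anyone reading later: robots.txt is purely advisory, so an ordinary HTTP client never consults it at all. Below is a minimal Python sketch, using the requests library, for fetching a handful of pages directly for a quick audit; the URLs are placeholders. Screaming Frog also has a setting to ignore robots.txt (under Configuration > robots.txt in the versions I've used).
import requests

# Hypothetical list of URLs to audit. A plain HTTP client like
# requests never reads robots.txt, so nothing here is "blocked".
urls = [
    "http://example.com/",
    "http://example.com/about/",
]

for url in urls:
    response = requests.get(url, timeout=10)
    # Status code plus page size as a minimal audit signal.
    print(url, response.status_code, len(response.text))
-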
Hi Aidan
User-agent: *
Disallow:
and
User-agent: *
Allow: /
are the same.
Ciao
Maurizio
-
Hi Maurizio,
The reason I asked is because I am working on a site and its robots.txt is:
User-agent: *
Allow: /
Why would they have this?
I can't use On-Page Analysis, and Screaming Frog only returns 3 URLs.
Thanks again,
Aidan
-
Hi
1st example:
User-agent: *
Disallow:
Every user-agent can index your files.
2nd example:
User-agent: *
Disallow: /
No user-agent can index your files.
More examples here:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
Ciao
Maurizio
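For completeness, here is a quick way to double-check that the two robots.txt variants behave identically, using Python's standard-library parser (the example.com URL is only a placeholder):
from urllib.robotparser import RobotFileParser

# Two robots.txt variants that should behave identically.
variant_a = ["User-agent: *", "Disallow:"]
variant_b = ["User-agent: *", "Allow: /"]

for lines in (variant_a, variant_b):
    parser = RobotFileParser()
    parser.parse(lines)
    # can_fetch() returns True when the user agent may crawl the URL.
    print(parser.can_fetch("*", "http://example.com/any/page.html"))
# Both iterations print True: neither file blocks anything.
-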
Related Questions
-
Can I safely assume that links between subsites on a subdirectory-based multisite will be treated as internal links within a single site by Google?
I am building a multisite network based in subdirectories (of the mainsite.com/site1 kind) where the main site is like a company site, and subsites are focused on brands or projects of that company. There will be links back and forth from the main site and the subsites, as if subsites were just categories or pages within the main site (they are hosted in subfolders of the main domain, after all). Now, Google's John Mueller has said: "As far as their URL structure is concerned, subdirectories are no different from pages and subpages on your main site. Google will do its best to identify where sites are separate, but if the URL structure is the same as for a single site, you should assume that for SEO purposes the network will be treated as one site." This sounds fine to me, except for the part "Google will do its best to identify where sites are separate", because then, if Google establishes that my multisite structure is actually a collection of different sites, links between subsites and the main site would be considered backlinks between my own sites, which could therefore be considered a link wheel, that is, a kind of linking structure Google doesn't like. How can I make sure that Google understands my multisite as a single site? P.S. - The reason I chose this multisite structure, instead of hosting brands in categories of the main site, is that the subdirectory-based multisite feature will let me map a TLD domain to any of my brands (subsites) whenever I choose to give that brand a more distinct profile, as if it really were a different website.
Web Design | PabloCulebras
-
Block parent folder in robot.txt, but not children
Example: I want to block this URL (which shows up in Webmaster Tools as an error): http://www.siteurl.com/news/events-calendar/usa but not this one: http://www.siteurl.com/news/events-calendar/usa/event-name (one approach is sketched below).
Web Design | Zuken
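One common approach (worth verifying against Google's robots.txt documentation, since wildcard and Allow support varies by crawler) is to anchor the rule with $ so only the exact parent URL is blocked, or to pair the Disallow with a more specific Allow:
User-agent: *
Disallow: /news/events-calendar/usa$

# Alternative: Google applies the most specific (longest) matching rule,
# so an explicit Allow for the child folder overrides the parent Disallow.
# Disallow: /news/events-calendar/usa
# Allow: /news/events-calendar/usa/
-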
Https pages indexed but all web pages are http - please can you offer some help?
Dear Moz Community, please could you see what you think and offer some definite steps or advice.
I contacted the host provider and his initial thought was that WordPress was causing the https problem: e.g. when an https version of a page is called, things like videos and media don't always show up. An SSL certificate attached to a website can allow pages to load over https. The host said that there is no active configured SSL; it's just waiting as part of the hosting package just in case, but I found that the SSL certificate is still showing up during a crawl.
It's important to eliminate the https problem before external backlinks point to any of the unwanted https pages that are currently indexed. Luckily I haven't started any intense backlinking work yet, and any links I have posted in search land have all been the http version.
I checked a few more URLs to see if it's necessary to create a permanent redirect from https to http. For example, I tried requesting domain.co.uk using the https:// prefix and the https:// page loaded instead of redirecting automatically to the http:// version. I know that if I am automatically redirected to the http:// version of the page, then that is the way it should be. Search engines and visitors will stay on the http version of the site and not get lost anywhere in https. This also helps to eliminate duplicate content and to preserve link juice. What are your thoughts regarding that?
As I understand it, most server configurations should redirect by default when https isn't configured, and from my experience I've seen cases where pages requested via https return the default server page, a 404 error, or duplicate content. So I'm confused as to where to take this.
One suggestion would be to disable all https, since there is no need to have any traces of SSL when the site is crawled. I don't want to enable https in the htaccess only to then create an https to http rewrite rule; https shouldn't even be a crawlable function of the site at all. For example:
RewriteEngine On
RewriteCond %{HTTPS} off
Or disable the SSL completely for now until it becomes a necessity for the website (a fuller rewrite sketch follows below).
I would really welcome your thoughts as I'm really stuck as to what to do for the best, short term and long term. Kind Regards
Web Design | SEOguy1
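For reference, a typical Apache rule for forcing visitors back onto http looks roughly like the sketch below (assuming mod_rewrite is enabled; test on a staging copy before deploying):
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]
-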
Can anyone recommend a tool that will identify unused and duplicate CSS across an entire site?
Hi all, So far I have found this one: http://unused-css.com/ It looks like it identifies unused CSS, but perhaps not duplicates? It also has a 5,000-page limit and our site is 8,000+ pages, so we really need something that can handle a site larger than their limit. I do have Screaming Frog. Is there a way to use Screaming Frog to locate unused and duplicate CSS? Any recommendations and/or tips would be great (a rough script-based approach is sketched below). I am also aware of the Firefox extensions, but to my knowledge they will only do one page at a time? Thanks!
Web Design | danatanseo
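No single tool for both jobs comes to mind, but the "unused" half can be approximated with a short script. A rough Python sketch, assuming requests and BeautifulSoup are installed; the stylesheet and page URLs are placeholders, and the selector extraction is deliberately naive (it ignores @media blocks and comments):
import re
import requests
from bs4 import BeautifulSoup

CSS_URL = "http://example.com/style.css"
PAGE_URLS = ["http://example.com/", "http://example.com/products/"]

css = requests.get(CSS_URL, timeout=10).text

# Naive extraction: grab everything before each "{" as a selector group.
selectors = set()
for group in re.findall(r"([^{}]+)\{", css):
    for sel in group.split(","):
        sel = sel.strip()
        if sel and not sel.startswith("@"):
            selectors.add(sel)

soups = [BeautifulSoup(requests.get(u, timeout=10).text, "html.parser")
         for u in PAGE_URLS]

unused = []
for sel in sorted(selectors):
    try:
        if not any(soup.select(sel) for soup in soups):
            unused.append(sel)
    except Exception:
        # Pseudo-classes like :hover can't be matched server-side; skip.
        pass

print(len(unused), "selectors never matched on the sampled pages")
Duplicate selectors could be caught the same way by counting repeats during extraction instead of collecting into a set.
-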
Can external links in a menu attract a penalty?
We have some instances of external links (i.e. pointing to another domain) in site menus. Although there are legitimate reasons (e.g. linking to a news archive kept on a separate domain), I understand this can be considered bad from a usability perspective. This raises the question: is it also bad for SEO? With the recent Panda changes we've seen certain issues which were previously "only" about usability attract SEO penalties, but I can't find any references to this example. Anyone have thoughts or experience?
Web Design | SOS_Children
-
Need to hire a tech to find out why Google's spider can't access my site
Google's spider can't access my site and all my pages have dropped out of the SERPs. I've called Network Solutions tech support and they say all is fine with the site, which is wrong. Does anyone know of a web tech I can hire to fix this issue? (A quick diagnostic is sketched below.) Thanks, Ron
Web Design | Ron10
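Before hiring anyone, one quick diagnostic is to compare how the server answers a browser-like request versus a Googlebot-like one. A small Python sketch (the URL is a placeholder, and this only emulates Googlebot's user-agent string, not its IP range):
import requests

URL = "http://example.com/"  # placeholder for the affected site

USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

for name, ua in USER_AGENTS.items():
    r = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    # A 403/5xx here while the browser UA gets 200 suggests UA-based blocking.
    print(name, r.status_code)
-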
Is anyone using Humans.txt in your websites? What do you think?
http://humanstxt.org Anyone using this on their websites, and if so, have you seen any positive benefits of doing so? It would be good to see some examples of sites using it and how you're using the files (a sample file is sketched below). I'm considering adding this to my checklist for launching sites.
Web Design | eseyo
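For anyone curious what one looks like, here is a small sample roughly following the template on humanstxt.org (the names and dates are made up):
/* TEAM */
Developer: Jane Doe
Contact: jane [at] example.com
Location: Dublin, Ireland

/* SITE */
Last update: 2012/06/01
Standards: HTML5, CSS3
Software: WordPress
It is usually referenced from the page head with a link tag such as <link type="text/plain" rel="author" href="/humans.txt">.
-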
Can't figure out what's going on with these strange 403's
The last crawl found a good number of 404's and I can't figure out what's going on. I don't even recognize these links. What's really strange is that a few of them have a fairly decent page authority. For instance this one has a PA of 55: http://noahsdad.com/?path=http%3A%2F%2Feurosystems.it%2Fconf_commerciale%2Fimages%2Fdd.gif%3F%3F There are several more like this one also; it seems most of these new ones have "Feurosystems" in the link... I have no idea what that is. Just curious what you guys think is going on, why these are 404'ing, and how to fix it. Thanks. Edit: I took out the "%" from the links and I get this: http://noahsdad.com/?path=http3A2F2Feurosystems.it2Fconf_commerciale2Fimages2Fdd.gif3F3F which takes me to a page on my site. I have no idea what's going on, or what that link is. Hoping someone can chime in because this is strange. Another edit: I just checked out Google Webmaster Tools and it looks like these errors are 403's and all started around March 21st. I have no idea what happened on March 21st to start causing all of these errors, but I'd sure like to get it fixed. 🙂
Web Design | NoahsDad