Robots.txt question
-
I want to block spiders from a specific part of my website (say, the abc folder).
In robots.txt, do I have to write:
User-agent: *
Disallow: /abc/
That is, do I have to include the trailing slash, or will this do?
User-agent: *
Disallow: /abc
-
I will do so, and hope to get that account back.
-
If you contact the help desk, they can probably help you get your old account back.
-
I am the same person with the username seoug, but I lost that account, so I had to start afresh! I was a PRO member, but I accidentally deleted that account (it was not intentional). Now, when I try logging in, I get a message that the seoug username is already taken.
-
Thanks for clearing up my doubts.
-
At least our answers agree, so now Atul is doubly sure of how to do it...
-
EGOL does it to me all the time!
-
Hi Atul,
Add the trailing slash.
/abc could be a page URL, whereas /abc/ is definitely a folder.
http://www.robotstxt.org/robotstxt.html <-- Everything you ever wanted to know about robots.txt
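For example, a minimal robots.txt for your case would look like this (assuming the folder really sits at the site root as /abc/; lines starting with # are just comments):
User-agent: *
# Blocks /abc/ and everything under it, e.g. /abc/page.html,
# but not a page at /abc itself or a folder like /abcdef/
Disallow: /abc/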
Regards
Aran
[EDIT: Damn it, Ryan submitted whilst I was answering! Must type faster ]
-
Use the trailing slash.
More about robots.txt can be learned at this site: http://www.robotstxt.org/
The trailing slash indicates you are blocking a folder; without the slash, the path would be read as a file (i.e. a page). I am not sure every bot would handle a folder rule without the trailing slash the same way. Even if it worked, it would not be the correct code and could lead to various bots treating it differently.
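If you want to check the behaviour yourself, here is a quick sketch using Python's built-in urllib.robotparser. It is only one parser implementation and real crawlers may differ, but it follows the original spec's plain prefix matching, so it shows the difference between the two rules:

from urllib import robotparser

def check(rule, paths):
    rp = robotparser.RobotFileParser()
    rp.parse(["User-agent: *", "Disallow: " + rule])
    for path in paths:
        # can_fetch() returns True if the URL is allowed for that user agent
        print(rule, path, rp.can_fetch("*", path))

paths = ["/abc/page.html", "/abc", "/abcdef.html"]
check("/abc/", paths)  # disallows only URLs inside the /abc/ folder
check("/abc", paths)   # prefix match: also disallows /abc itself and /abcdef.html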