Is our robots.txt file correct?
-
Could you please review our robots.txt file and let me know whether it is correct?
Thank you!
-
What's the end goal here?
Are you actively trying to block all bots? If so, I would still suggest "Disallow: /".
The other syntax may also work, but if Google suggests using a forward slash, you should probably use it.
-
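As a quick sanity check on the two directives discussed above, Python's built-in urllib.robotparser (which implements the original robots.txt rules, not Google's wildcard extensions) shows the difference; the example.com URL is only a placeholder:

```python
import urllib.robotparser

# "Disallow: /" shuts out every compliant crawler...
block_all = urllib.robotparser.RobotFileParser()
block_all.parse(["User-agent: *", "Disallow: /"])

# ...while a bare "Disallow:" (empty path) blocks nothing at all
allow_all = urllib.robotparser.RobotFileParser()
allow_all.parse(["User-agent: *", "Disallow:"])

print(block_all.can_fetch("Googlebot", "http://www.example.com/any/page"))  # False
print(allow_all.can_fetch("Googlebot", "http://www.example.com/any/page"))  # True
```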
Hi, it seems correct to me; however, try the robots.txt checker tool in Google Webmaster Tools. You might include a couple of your URLs and see whether Google can crawl them.
The only rule I find redundant is the following:
User-agent: Mediapartners-Google
If you have already set up a Disallow: rule covering all bots (except rogerbot, which can't access the community folder), why create a new rule stating the same thing for Mediapartners?
Likewise, why tell all bots they can access the entire site, when that is the default behavior? Drop those lines, keep just the rogerbot and Sitemap rules, and you're done.
-
Thank you for the reply. We want to allow all crawling, except for rogerbot in the community folder.
I have updated the robots.txt to the following. Does this look right?

User-agent: *
Disallow:

User-agent: rogerbot
Disallow: /community/

User-agent: Mediapartners-Google
Disallow:

Sitemap: http://www.faithology.com/sitemap.xml

You can view the robots.txt here: http://www.faithology.com/robots.txt
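For what it's worth, those exact rules can be checked locally with Python's built-in urllib.robotparser before uploading; the /community/ thread path below is hypothetical:

```python
import urllib.robotparser

rules = """\
User-agent: *
Disallow:

User-agent: rogerbot
Disallow: /community/

User-agent: Mediapartners-Google
Disallow:

Sitemap: http://www.faithology.com/sitemap.xml
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# rogerbot is shut out of /community/ but nothing else
print(rp.can_fetch("rogerbot", "http://www.faithology.com/community/some-thread"))   # False
print(rp.can_fetch("rogerbot", "http://www.faithology.com/about"))                   # True
# every other crawler, Googlebot included, can fetch everything
print(rp.can_fetch("Googlebot", "http://www.faithology.com/community/some-thread"))  # True
```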
-
There are some errors, but since I'm not sure what you are trying to accomplish, I recommend checking it with a tool first. Here is a great tool to check your robots.txt file and give you information on errors - http://tool.motoricerca.info/robots-checker.phtml
If you still need assistance after running it through the tool, please reply and we can help you further.
Related Questions
-
Robots.txt blocked internal resources Wordpress
Hi all, we've recently migrated a WordPress website from staging to live, but the robots.txt was deleted. I've created the following new one:

User-agent: *
Allow: /
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/cache/
Disallow: /wp-content/themes/
Allow: /wp-admin/admin-ajax.php

However, in the site audit on SEMrush, I now get the mention that a lot of pages have issues with blocked internal resources in the robots.txt file. These blocked internal resources are all cached and minified CSS elements: links, images and scripts. Does this mean that Google won't crawl some parts of these pages with blocked resources correctly, and thus won't be able to follow these links and index the images? In other words, is this any cause for concern regarding SEO? Of course I can change the robots.txt again, but will URLs like https://example.com/wp-content/cache/minify/df983.js end up in the index? Thanks for your thoughts!
Intermediate & Advanced SEO | Mat_C
-
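One way to answer the "will Google be blocked?" part of the question above locally is Python's built-in urllib.robotparser. One caveat: it applies the first matching rule rather than Google's longest-match precedence, so this sketch puts the specific Allow line first (and drops the redundant blanket Allow: /) to mimic Google's outcome; the asset URL is the one from the question:

```python
import urllib.robotparser

rules = """\
User-agent: *
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/cache/
Disallow: /wp-content/themes/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# the minified cache asset from the question is indeed blocked for all crawlers
print(rp.can_fetch("Googlebot", "https://example.com/wp-content/cache/minify/df983.js"))  # False
# the admin-ajax carve-out still works, and regular pages are unaffected
print(rp.can_fetch("Googlebot", "https://example.com/wp-admin/admin-ajax.php"))           # True
print(rp.can_fetch("Googlebot", "https://example.com/some-post/"))                        # True
```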
How long for a Disavow file to take effect for a non-manual penalty?
Hi guys, hope you're all good. A quick question in regards to a disavow file. A page of ours recently crashed from page 2 all the way to page 7ish. It's weird that it happened, considering it had been ranking on the 2nd page for around a year; then all of a sudden it came crashing down. I identified an affiliate link placed in a sidebar; Webmaster Tools picked up 24,000+ links coming from the site, so I decided to disavow it. I disavowed the site around 3 days ago, and in the meantime we have managed to grab ourselves some very good do-follow links from very authoritative sites. At the moment the page has gone up 1 page, sitting at the 4th-5th page, but the rankings have been very inconsistent. Any ideas as to when we may see an increase in ranking for this page? I am being very impatient; at the moment my workload has been dedicated to getting this one page ranking again. All comments greatly appreciated.
Intermediate & Advanced SEO | Brett-S
-
Block subdomain directory in robots.txt
Instead of blocking an entire subdomain (fr.sitegeek.com) with robots.txt, we would like to block one directory (fr.sitegeek.com/blog).
'fr.sitegeek.com/blog' and 'www.sitegeek.com/blog' contain the same articles in one language; only the labels are changed for the 'fr' version, and we assume that duplicate content causes problems for SEO. We would like the 'www.sitegeek.com/blog' articles to be crawled and indexed, not 'fr.sitegeek.com/blog'. So, how can we block a single subdomain directory (fr.sitegeek.com/blog) with robots.txt? This applies only to the blog directory of the 'fr' version; all other directories and pages of the 'fr' version should still be crawled and indexed. Thanks,
Rajiv
Intermediate & Advanced SEO | gamesecure
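Since robots.txt is evaluated per host, the usual answer to the question above is to serve a different file on the fr subdomain, i.e. at fr.sitegeek.com/robots.txt, that disallows only /blog/, while www.sitegeek.com keeps an unrestricted file. A quick check of that idea with Python's built-in parser (the article and directory paths are hypothetical):

```python
import urllib.robotparser

# rules that would be served at http://fr.sitegeek.com/robots.txt only;
# www.sitegeek.com keeps its own, unrestricted robots.txt
fr_rules = """\
User-agent: *
Disallow: /blog/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(fr_rules.splitlines())

# the fr blog is blocked, everything else on the fr subdomain is crawlable
print(rp.can_fetch("Googlebot", "http://fr.sitegeek.com/blog/some-article"))  # False
print(rp.can_fetch("Googlebot", "http://fr.sitegeek.com/hosting/"))           # True
```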
Using Meta Header vs Robots.txt
Hey Mozzers, I am working on a site that has search-friendly parameters for their faceted navigation; however, this makes it difficult to target the parameters in a robots.txt file. I know that using the robots.txt file is highly recommended and powerful, but I am not sure how to do this when facets use common words such as sizes. For example, a filtered URL may look like www.website.com/category/brand/small.html. Brand and size are both facets. Brand is a great filter, and size is very relevant for shoppers, but many products include "small" in the URL, so it is tough to isolate that filter in the robots.txt. (I hope that makes sense.) I am able to identify problematic pages and edit the page head, so I can add a robots meta tag (e.g. noindex) on any page that is causing these duplicate issues. My question is: is this a good idea? I want bots to crawl the facets, but indexing all of the facets causes duplicate issues. Thoughts?
Intermediate & Advanced SEO | evan89
-
Is my schema implemented correctly and not spammy?
I already used the validation tool from Google. It tells me it's implemented, but I would like to know specifically whether everything is implemented correctly and in a non-spammy way. Here is a product link to test: http://www.suddora.com/pink-sweatbands-wholesale-pink-wristbands.html
Intermediate & Advanced SEO | Hyrule
-
Robots.txt: Can you put a /* wildcard in the middle of a URL?
We have noticed that Google is indexing the language/country directory versions of directories we have disallowed in our robots.txt. For example, Disallow: /images/ is blocked just fine. However, once you add our /en/uk/ directory in front of it, there are dozens of pages indexed. The question is: can I put a wildcard in the middle of the string, e.g. /en/*/images/, or do I need to list out every single country for every language in the robots file? Anyone know of any workarounds?
Intermediate & Advanced SEO | IHSwebsite
-
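Per Google's robots.txt documentation, * is allowed anywhere in the path (and $ anchors the end), so a rule like the /en/*/images/ asked about above should work. Note that Python's built-in urllib.robotparser does not implement these wildcards, so here is a small sketch of Google-style path matching to experiment with (the sample paths are hypothetical):

```python
import re

def google_robots_match(pattern: str, path: str) -> bool:
    """Match a robots.txt path pattern Google-style:
    '*' matches any run of characters, '$' anchors the end of the URL."""
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    return re.match(regex, path) is not None

# a wildcard in the middle of the rule does what the question hopes:
print(google_robots_match("/en/*/images/", "/en/uk/images/logo.png"))  # True
print(google_robots_match("/en/*/images/", "/en/fr/images/logo.png"))  # True
# ...and it does not over-block paths without a country segment
print(google_robots_match("/en/*/images/", "/en/images/logo.png"))     # False
```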
Sitemaps. When compressed do you use the .gz file format or the (untidy looking, IMHO) .xml.gz format?
When submitting compressed sitemaps to Google I normally use a file named sitemap.gz. A customer is banging on that his web guy says sitemap.xml.gz is a better format. Google spiders sitemap.gz just fine, and in Webmaster Tools everything looks OK... Interested to know other SEOmoz Pros' preferences here, and also to check I haven't made an error that is going to bite me in the ass soon! Over to you.
Intermediate & Advanced SEO | NoisyLittleMonkey
-
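On the question above: both names decompress identically, and the only real argument for sitemap.xml.gz is that the double extension records what is inside the archive. A minimal sketch of producing one from Python (the URL and filename are placeholders):

```python
import gzip

xml = (
    b'<?xml version="1.0" encoding="UTF-8"?>\n'
    b'<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    b'  <url><loc>http://www.example.com/</loc></url>\n'
    b'</urlset>\n'
)

# gzip the sitemap; whether the file is called sitemap.gz or sitemap.xml.gz
# changes nothing about the bytes Google receives
with gzip.open("sitemap.xml.gz", "wb") as f:
    f.write(xml)

# reading it back ignores the name entirely -- only the gzip framing matters
with gzip.open("sitemap.xml.gz", "rb") as f:
    assert f.read() == xml
```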
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place on our search results to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc.). The idea was to 'preserve crawl budget' in order to speed the rate at which Google could get our millions of pages back in the index by focusing attention/resources on the right pages. The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But, in doing a little SEOmoz reading this morning, I came to wonder whether that approach may now be harming us:
http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Specifically, I'm concerned that a) we're blocking the flow of link juice, and that b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low-quality pages, etc., but we have yet to find 'the fix'... Thoughts?
Kurus
Intermediate & Advanced SEO | kurus
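To make the trade-off in the question above concrete, here is a sketch of the kind of restriction described, checked with Python's built-in parser (the /results/ URL scheme is made up, and urllib.robotparser does not support Google's wildcards, so plain path prefixes stand in for the real parameter rules):

```python
import urllib.robotparser

# hypothetical restrictions of the kind described: pagination and
# sort-order variants of the search results are blocked; page 1 is not
rules = """\
User-agent: *
Disallow: /results/page/
Disallow: /results/sort/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "http://example.com/results/widgets"))     # True
print(rp.can_fetch("Googlebot", "http://example.com/results/page/2"))      # False
print(rp.can_fetch("Googlebot", "http://example.com/results/sort/price"))  # False
# anything linked only from page 2+ is never seen by a compliant crawler,
# so internal links on those blocked pages pass nothing along
```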