Robots.txt blocking resources
-
I think there is an issue with this website I'm working on. Here is the URL: http://brownieairservice.com/
In Google Webmaster Tools I am seeing this in the robots.txt Tester:
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
Also, when I look at "Blocked Resources" in Webmaster Tools, this resource shows as blocked:
http://brownieairservice.com/wp-content/plugins/contact-form-7/includes/js/jquery.form.min.js?ver=3.51.0-2014.06.20
It looks like the contact form plugin is causing the issue, but I don't understand this.
There are no site errors or URL errors, so I don't understand what this crawl delay means or how to fix it. Any input would be greatly appreciated. Thank you
-
Hi Matt,
Thank you for checking back. I did change the robots.txt in the dashboard as people suggested, but when I go here: http://brownieairservice.com/robots.txt
It is still showing the disallow. I need to load this:
User-agent: *
Disallow:
to the root folder, and I'm not sure how to do that, whether I need to FTP it or how I would do that, so that's where I'm at now.
Anybody have any thoughts? I have googled how to do this and I keep getting put into a loop of information that does not address the question directly.
Thank you
-
Hi Wendy! Did this get worked out?
-
Thanks, Dirk, for your input. I will look at this too and respond back.
-
Thank you for your answer. I went in and installed this plugin: WP Robots Txt. Now I can see the robots.txt content. This is what I see:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
This doesn't match what I see in Webmaster Tools:
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
My question now is this: is Disallow: /wp-includes/ the same as Disallow: /wp-content/plugins/, so if I add Allow: /wp-includes/, will that solve my issue?
I'm still going through your other suggestions so will type back later on that. Thank you for your help.
Wendy
-
To add to the previous comment - the Crawl-delay directive is ignored by Googlebot. Check http://tools.seobook.com/robots-txt/
It can be used to limit the crawl rate for bots; however, it is not part of the original robots.txt specification. Since this value is not part of the standard, its interpretation depends on the crawler reading it.
Yandex: https://yandex.com/support/webmaster/controlling-robot/robots-txt.xml#crawl-delay
I didn't find more info for Bing (they mention it here but do not provide additional details: https://www.bing.com/webmaster/help/how-to-create-a-robots-txt-file-cb7c31ec).
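For the crawlers that do honor it, the value is simply the number of seconds to wait between requests. A minimal sketch (the bot name and the one-second value here are just for illustration):
User-agent: Yandex
Crawl-delay: 1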
If you want to limit the crawl speed for Googlebot, you have to do it in Webmaster Tools.
Dirk
-
Wendy,
Google likes to have access to all your CSS and JS. Plugins can contain these files, as seen in your blocked resources message.
The way to fix this would be to remove the Disallow: /wp-content/plugins/ line from your robots.txt file, thus allowing Google full access.
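For example, after removing that line your file would look something like this (keeping your other rules as they are):
User-agent: *
Crawl-delay: 1
Disallow: /wp-admin/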
Another solution is provided in a useful article on Moz: https://moz.com/blog/why-all-seos-should-unblock-js-css
"How to unblock your JavaScript and CSS
For most users, it's just a case of checking the robots.txt and ensuring you're allowing all JavaScript and CSS files to be crawled. For Yoast SEO users, you can edit your robots.txt file directly in the admin area of WordPress.
Gary Illyes from Google also shared some detailed robots.txt changes on Stack Overflow. You can add these directives to your robots.txt file in order to allow Googlebot to crawl all JavaScript and CSS.
To be doubly sure you're unblocking all JavaScript and CSS, you can add the following to your robots.txt file, provided you don't have any directories being blocked in it already:
User-Agent: Googlebot
Allow: .js
Allow: .css
If you have a more specialized robots.txt file, where you're blocking entire directories, it can be a bit more complicated.
In these cases, you also need to allow the .js and .css for each of the directories you have blocked.
For example:
User-Agent: Googlebot
Disallow: /deep/
Allow: /deep/*.js
Allow: /deep/*.css
Repeat this for each directory you are blocking in robots.txt."
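Applied to your file, that pattern would look something like this - a sketch, assuming you want to keep the plugins directory blocked in general while still letting Googlebot fetch the scripts and styles inside it:
User-Agent: Googlebot
Disallow: /wp-content/plugins/
Allow: /wp-content/plugins/*.js
Allow: /wp-content/plugins/*.css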
Hope this helps.
-
What problem is it causing, Wendy?
Related Questions
-
Has anyone ever tested to see if having an ads.txt file provided any SEO lift?
I know that the ads.txt system is designed to prevent ad fraud and technically has nothing to do with search. That said, the presence of such a file would seem to be an indicator of overall site quality because it would show that a site owner wants to participate in a fraud-free system. Has anyone ever tested that? If so, they don't seem to have published their results. Maybe it's a secret weapon that some pros are using and not sharing?
Web Design | scodtt
-
Images are blocked resources in Webmaster Tools. Anything wrong?
Hi all, The images in our subdirectory are hosted from a subdomain, and this subdomain is blocked to robots, so I can see all these images shown as "Blocked Resources" in Webmaster Tools. Is anything wrong with this? We also usually block robots from the image file locations on our own website. What's the difference? Thanks
Web Design | vtmoz
-
Bing Indexation and handling of X-ROBOTS tag or AngularJS
Hi MozCommunity, I have been tearing my hair out trying to figure out why Bing won't index a test site we're running. We're in the midst of upgrading one of our sites from archaic technology and infrastructure to a fully responsive version. This new site is fully AngularJS-driven. There are currently over 2 million pages, and as we develop the new site in the backend, we would like to test the tech with Google and Bing. We're looking at a pre-render option to create static HTML snapshots of the pages we care about most, which will be available in the sitemap.xml.gz.
We established 3 completely static HTML control pages: one with no robots meta tag on the page, one with the robots NOINDEX meta tag in the head section, and a third with a dynamic header (X-Robots-Tag) carrying the NOINDEX directive as well. We expected the one without the meta tag to at least get indexed along with the homepage of the test site. In addition to those 3 control pages, we had 3 more pages: an internal search results page with the dynamic NOINDEX header, a listing page with no such header, and the homepage with no such header.
With Google, the correct indexation occurred, with only 3 pages being indexed: the homepage, the listing page, and the control page without the meta tag. However, with Bing, there's nothing. No page indexed at all, not even the flat static HTML page without any robots directive. I have a valid sitemap.xml file and a robots.txt directive open to all engines across all pages, yet nothing. I used the Fetch as Bingbot tool, the SEO Analyzer tool, and the Preview Page tool within Bing Webmaster Tools, and they all show a preview of the requested pages, including the ones with the dynamic header asking Bing not to index them. I'm stumped. I don't know what to do next to understand whether Bing can accurately process dynamic headers or AngularJS content. Upon checking Bing Webmaster Tools, there's definitely been crawl activity, since it marked the XML sitemap as successful and showed 4 crawled pages. Still no result when running a site: command, though. Google responded perfectly and understood exactly which pages to index and crawl. Has anyone else used dynamic headers or AngularJS who might be able to chime in, perhaps after running similar tests? Thanks in advance for your assistance.
Web Design | AU-SEO
-
Google tag manager on blocked beta site - will it phone home to Google and cause site to get indexed?
We want to develop a beta site in a directory with robots.txt blocking bots. We want to include the Google Tag Manager tags and event layer tracking code on this beta site. My question: since the Google Tag Manager code phones home to Google, will including it cause Google to index this beta site when we don't want it indexed?
Web Design | CFSSEO
-
Is anyone using Humans.txt in your websites? What do you think?
http://humanstxt.org Is anyone using this on their websites, and if so, have you seen any positive benefits from doing so? It would be good to see some examples of sites using it and how you're using the files. I'm considering adding this to my checklist for launching sites.
Web Design | eseyo
-
Search directory - How to apply robots
Hi. On the site I'm working on, we use a search directory to display our search results. It displays as follows - Mydomain.com/search-results/# - with the dynamic search results appearing after the hash. Because of the structure of the website, much of the left-hand nav defers back to this directory. I know that most websites "noindex, nofollow" their search results pages, but because customers generate them so easily, I'm afraid that if I do this, we'll miss out on the inevitable links customers will provide... and, even though it's just the main search directory, these links will still help my domain. The search is all JavaScript-generated, so there's nothing for spiders to follow within this directory, save the standard category nav. How should I handle this? Thanks.
Web Design | Blenny
-
Should /dev folder be blocked?
I have been experiencing a ranking drop every two months, so I came upon a new theory this morning... Does Google do a deep crawl of your site, say, every 60-90 days, and would it penalize you for duplicate content if it crawled into your /dev area, which contains pretty much the exact same URLs and content as your production environment? The only issue I see with this theory is that I have been penalized only for specific keywords on specific pages, not necessarily across the board. Thoughts? What would be the best way to block out your /dev area?
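(For what it's worth, a minimal robots.txt sketch for blocking a dev area, assuming it lives under /dev/, would be:
User-agent: *
Disallow: /dev/
Keep in mind that robots.txt only discourages crawling; password-protecting the directory is a more reliable way to keep a staging copy out of the index.)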
Web Design | BoulderJoe
-
Correct use for Robots.txt
I'm in the process of building a website and am experimenting with some new pages. I don't want search engines to begin crawling the site yet. I would like to add robots.txt rules for the pages I don't want them to crawl. If I do this, can I remove the rules later and get them to crawl those pages?
Web Design | EricVallee34