Google (GWT) says my homepage and posts are blocked by robots.txt
-
Hi guys, I have a very annoying issue.
My WordPress blog over at www.Trovatten.com has some indexation problems.
Google Webmaster Tools data:
GWT says the following: "Sitemap contains urls which are blocked by robots.txt." and lists my homepage and my blog posts. This is my robots.txt: http://www.trovatten.com/robots.txt

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/

Do you have any idea why it says the URLs are being blocked by robots.txt when the file looks how it should?
I've read in a couple of places that it can be caused by a WordPress plugin creating a virtual robots.txt, but I can't verify it.
1. I have set WP privacy settings to allow crawling of my site.
2. I have deactivated all WP plugins and I still get the same GWT warnings.
Looking forward to hearing if you have an idea that might work!
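A quick way to sanity-check the quoted rules locally is Python's standard-library robotparser. This is only a sketch: it tests the three lines quoted above, not whatever virtual robots.txt WordPress may actually be serving, which is exactly the difference worth checking.

```python
# Reproduce what a crawler should see from the rules quoted above.
# urllib.robotparser applies the same prefix matching as the original
# robots.txt standard.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /wp-includes/",
])

print(rp.can_fetch("Googlebot", "http://www.trovatten.com/"))          # homepage: allowed
print(rp.can_fetch("Googlebot", "http://www.trovatten.com/wp-admin/")) # admin: blocked
```

If the homepage comes back allowed here but GWT still reports it blocked, the file Google fetched is not the file you think you are serving.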
-
Do you know which plugin (or combination of plugins) was the trouble?
I use WordPress a lot, and this is very interesting.
-
You are absolutely right.
The problem was that a plugin I installed messed with my robots.txt.
-
I am going to disagree with the above.
The tag <meta name="robots" content="noodp, noydir" /> has nothing to do with denying any access to the robots.
It is used to prevent the engines from displaying meta descriptions from DMOZ and the Yahoo directory. Without this line, the search engines might choose to use those descriptions, rather than the descriptions you have as meta descriptions.
-
Hey Frederick,
Here's your current meta data for robots on your home page (in the <head> section):
<meta name="robots" content="noodp, noydir" />
Should be something like this:
<meta name="robots" content="INDEX,FOLLOW" />
I don't think it's the robots.txt that's the issue, but rather the meta-robots in the head of the site.
Hope this helps!
Thanks,
Anthony
[moderator's note: this answer was actually not the correct answer for this question, please see responses below]
-
I have tweaked an XML sitemap generator and I think it works. I'll give an update in a couple of hours!
Thanks!
-
Thanks for your comment Stubby, and you are probably right.
But the problem is the disallowing, not the sitemaps, and based on my robots.txt everything should be crawlable.
What I'm worried about is that the virtual robots.txt that WP generates is causing the trouble.
-
Is Yoast generating another sitemap for you?
You have a sitemap from a different plugin, but Yoast can also generate sitemaps, so perhaps you have two, and one of them lists the items that you are disallowing.
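To track down which sitemap is the culprit, you can run every URL a sitemap lists through the robots.txt rules. A minimal sketch follows; the domain, rules, and URL list are hypothetical placeholders, and a real check would fetch and parse the actual sitemap XML (e.g. with xml.etree.ElementTree).

```python
from urllib import robotparser

# Hypothetical robots.txt rules and sitemap entries, for illustration only.
robots_lines = [
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /wp-includes/",
]
sitemap_urls = [
    "http://www.example.com/",
    "http://www.example.com/blog/post-1/",
    "http://www.example.com/wp-admin/admin.php",  # should never appear in a sitemap
]

rp = robotparser.RobotFileParser()
rp.parse(robots_lines)

# Any URL that survives into this list would trigger the GWT warning.
blocked = [url for url in sitemap_urls if not rp.can_fetch("*", url)]
print(blocked)
```

If one plugin's sitemap produces an empty `blocked` list and the other doesn't, you've found the duplicate that GWT is complaining about.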
-
Related Questions
-
Using one robots.txt for two websites
I have two websites that are hosted in the same CMS. Rather than having two separate robots.txt files (one for each domain), my web agency has created one which lists the sitemaps for both websites, like this:

User-agent: *
Disallow:
Sitemap: https://www.siteA.org/sitemap
Sitemap: https://www.siteB.com/sitemap

Is this ok? I thought you needed one robots.txt per website which provides the URL for the sitemap. Will having both sitemap URLs listed in one robots.txt confuse the search engines?
Technical SEO | ciehmoz
-
Google Webmaster Tools is saying "Sitemap contains urls which are blocked by robots.txt" after HTTPS move...
Hi Everyone, I really don't see anything wrong with our robots.txt file after our https move that just happened, but Google says all URLs are blocked. The only change I know we need to make is changing the sitemap URL to https. Anything you all see wrong with this robots.txt file?

# This file is to prevent the crawling and indexing of certain parts
# of your site by web crawlers and spiders run by sites like Yahoo!
# and Google. By telling these "robots" where not to go on your site,
# you save bandwidth and server resources.
# This file will be ignored unless it is at the root of your host:
# Used:    http://example.com/robots.txt
# Ignored: http://example.com/site/robots.txt
# For more information about the robots.txt standard, see:
# http://www.robotstxt.org/wc/robots.html
# For syntax checking, see:
# http://www.sxw.org.uk/computing/robots/check.html

# Website Sitemap
Sitemap: http://www.bestpricenutrition.com/sitemap.xml

# Crawlers Setup
User-agent: *

# Allowable Index
Allow: /*?p=
Allow: /index.php/blog/
Allow: /catalog/seo_sitemap/category/

# Directories
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /includes/
Disallow: /lib/
Disallow: /magento/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /stats/
Disallow: /var/

# Paths (clean URLs)
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalogsearch/
Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/
Disallow: /aitmanufacturers/index/view/
Disallow: /blog/tag/
Disallow: /advancedreviews/abuse/reportajax/
Disallow: /advancedreviews/ajaxproduct/
Disallow: /advancedreviews/proscons/checkbyproscons/
Disallow: /catalog/product/gallery/
Disallow: /productquestions/index/ajaxform/

# Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt

# Paths (no clean URLs)
Disallow: /.php$
Disallow: /?SID=
disallow: /?cat=
disallow: /?price=
disallow: /?flavor=
disallow: /?dir=
disallow: /?mode=
disallow: /?list=
disallow: /?limit=5
disallow: /?limit=10
disallow: /?limit=15
disallow: /?limit=20
disallow: /*?limit=25

Technical SEO | vetofunk
-
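On the HTTPS-move question above: one concrete check worth automating is whether any Sitemap directive still points at http://. A minimal sketch, where the inline `robots_txt` string is a hypothetical stand-in for the fetched file:

```python
# Stand-in for the fetched robots.txt; a real check would download it,
# e.g. with urllib.request.urlopen("https://example.com/robots.txt").
robots_txt = """\
Sitemap: http://www.example.com/sitemap.xml
User-agent: *
Disallow: /checkout/
"""

# Collect Sitemap directives that were not updated to https after the move.
stale = [
    line.split(":", 1)[1].strip()
    for line in robots_txt.splitlines()
    if line.lower().startswith("sitemap:")
    and line.split(":", 1)[1].strip().startswith("http://")
]
print(stale)
```

A non-empty `stale` list means crawlers are still being pointed at the old http sitemap, which is the one change the poster already identified.
-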
Google Update Frequency
Hi, I recently found a large number of duplicate pages on our site that we didn't know existed (our third-party review provider was creating a separate page for each product whether it was reviewed or not; the ones not reviewed are almost identical, so they have been noindexed). Question: how long do you typically have to wait for Google to pick this up on our site? Is it a normal crawl, or do we need to wait for the next Panda review (if there is such a thing)? Thanks much.
Technical SEO | trophycentraltrophiesandawards
-
Google Dancing?
Hello, I was wondering why my website, for some keywords, goes from the 2nd or 3rd page in Google to the 7th or even further sometimes? This has been happening for a while. Any suggestions? Thanks. Eugenio
Technical SEO | socialengaged
-
Have I constructed my robots.txt file correctly for sitemap autodiscovery?
Hi, here is my robots.txt:

User-agent: *
Sitemap: http://www.bedsite.co.uk/sitemaps/sitemap.xml

# Directories
Disallow: /sendfriend/
Disallow: /catalog/product_compare/
Disallow: /media/catalog/product/cache/
Disallow: /checkout/
Disallow: /categories/
Disallow: /blog/index.php/
Disallow: /catalogsearch/result/index/
Disallow: /links.html

I'm using Magento and want to make sure I have constructed my robots.txt file correctly with the sitemap autodiscovery. Thanks.
Technical SEO | Bedsite
-
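On the autodiscovery question above: discovery works because crawlers scan robots.txt for `Sitemap:` lines, which can appear anywhere in the file. A rough sketch of that scan (my own illustration, using the URL from the question, not any crawler's actual code):

```python
def sitemap_urls(robots_txt: str) -> list:
    """Collect the Sitemap directive URLs a crawler would discover."""
    urls = []
    for line in robots_txt.splitlines():
        if line.strip().lower().startswith("sitemap:"):
            urls.append(line.split(":", 1)[1].strip())
    return urls

example = (
    "User-agent: *\n"
    "Sitemap: http://www.bedsite.co.uk/sitemaps/sitemap.xml\n"
    "Disallow: /sendfriend/"
)
print(sitemap_urls(example))
```

As long as the directive parses out cleanly like this, its position relative to the Disallow rules doesn't matter.
-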
Robots.txt to disallow /index.php/ path
Hi SEOmoz, I have a problem with my Joomla site (yeah, me too!). I get a large number of /index.php/ URLs despite using a program to handle these issues. The URLs cause indexation errors with Google (404). Now, I fixed this issue once before, but the problem persists. So I thought, instead of wasting more time, couldn't I just disallow all paths containing /index.php/? I don't use that extension, but would it cause me any problems from an SEO perspective? How do I disallow all index.php URLs? Is it a simple: Disallow: /index.php/
Technical SEO | Mikkehl
-
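On the Joomla question above: a single prefix rule does it, and you can confirm locally that it only touches /index.php/ paths before deploying. A sketch; the sample URLs are made up for illustration:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /index.php/"])

print(rp.can_fetch("*", "http://example.com/index.php/some-article"))  # blocked
print(rp.can_fetch("*", "http://example.com/some-article"))            # clean URL still crawlable
```

Note this blocks crawling, not indexing: URLs Google already knows about can linger in the index until they drop out or are redirected.
-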
Robots.txt query
Quick question: if this appears in a client's robots.txt file, what does it mean? Disallow: /*/_/ Does it mean no pages can be indexed? I have checked and there are no pages in the index, but it's a new site too, so I'm not sure if this is the problem. Thanks, Karen
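Disallow: /*/_/ uses the wildcard extension supported by the major engines: * matches any run of characters, so the rule blocks any URL whose path contains a /_/ segment after the first slash (e.g. /en/_/page), not the whole site. A hedged sketch of how that matching works; this is my own translation of the pattern to a regex, not any engine's actual code:

```python
import re

def robots_pattern_to_regex(pattern: str):
    # Wildcard-style matching: '*' matches any character run,
    # a trailing '$' anchors the pattern to the end of the URL path.
    anchored = pattern.endswith("$")
    core = pattern[:-1] if anchored else pattern
    body = "".join(".*" if ch == "*" else re.escape(ch) for ch in core)
    return re.compile("^" + body + ("$" if anchored else ""))

rule = robots_pattern_to_regex("/*/_/")
print(bool(rule.match("/en/_/page")))  # blocked by the rule
print(bool(rule.match("/en/page")))    # not blocked
```

So if the whole site is missing from the index, this rule alone is unlikely to be the cause; for a new site, it may simply not have been crawled yet.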
Technical SEO | Karen_Dauncey
-
Google penalty
Anyone have any success stories on what they did to get out of a Google penalty?
Technical SEO | phatride