Google (GWT) says my homepage and posts are blocked by Robots.txt
-
Hi guys, I have a very annoying issue.
My WordPress blog over at www.Trovatten.com has some indexation problems.
Google Webmaster Tools data:
GWT says the following: "Sitemap contains urls which are blocked by robots.txt." and lists my homepage and my blog posts. This is my robots.txt: http://www.trovatten.com/robots.txt
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Do you have any idea why it says the URLs are being blocked by robots.txt, when the file looks how it should?
I've read in a couple of places that it can be caused by a WordPress plugin creating a virtual robots.txt, but I can't validate it.
1. I have set WP Privacy to allow crawling of my site.
2. I have deactivated all WP plugins and I still get the same GWT warnings.
Looking forward to hearing if you have an idea that might work!
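For what it's worth, the quoted rules can be tested locally with Python's standard `urllib.robotparser`; this is a minimal sketch that assumes the rules above are what Google actually receives (the `/some-post/` URL is just a stand-in for a real post):

```python
from urllib import robotparser

# The rules quoted above, as served from http://www.trovatten.com/robots.txt
rules = """User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The homepage and a typical post should both be crawlable under these rules
print(rp.can_fetch("Googlebot", "http://www.trovatten.com/"))          # True
print(rp.can_fetch("Googlebot", "http://www.trovatten.com/some-post/")) # True
print(rp.can_fetch("Googlebot", "http://www.trovatten.com/wp-admin/"))  # False
```

If this prints True for the homepage but GWT still complains, the file on disk and the file being served are probably not the same thing, which points at a plugin generating a virtual robots.txt.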
-
Do you know which plugin (or combination) was the trouble?
I use WordPress a lot, and this is very interesting.
-
You are absolutely right.
The problem was that a plugin I installed messed with my robots.txt.
-
I am going to disagree with the above.
The tag <meta name="robots" content="noodp, noydir" /> has nothing to do with denying robots any access.
It is used to prevent the engines from displaying meta descriptions from DMOZ and the Yahoo directory. Without this line, the search engines might choose to use those descriptions, rather than the descriptions you have as meta descriptions.
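To make the distinction concrete, here is a sketch using Python's stdlib `html.parser` that pulls the robots meta directives out of a page; the point is that `noodp`/`noydir` only suppress directory-sourced descriptions, while blocking indexation would require a `noindex` directive:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.directives += [d.strip().lower()
                                for d in (a.get("content") or "").split(",")]

# The tag from the thread, as it appears in the page's head section
html = '<head><meta name="robots" content="noodp, noydir" /></head>'
p = RobotsMetaParser()
p.feed(html)

print(p.directives)               # ['noodp', 'noydir']
print("noindex" in p.directives)  # False: the page is NOT blocked from the index
```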
-
Hey Frederick,
Here's your current robots meta tag on your home page (in the <head> section):
<meta name="robots" content="noodp, noydir" />
Should be something like this:
<meta name="robots" content="INDEX,FOLLOW" />
I don't think it's the robots.txt that's the issue, but rather the meta-robots in the head of the site.
Hope this helps!
Thanks,
Anthony
[moderator's note: this answer was actually not the correct answer for this question, please see responses below]
-
I have tweaked around with an XML sitemap generator and I think it works. I'll give an update in a couple of hours!
Thanks!
-
Thanks for your comment Stubby, and you are probably right.
But the problem is the disallowing, not the sitemaps, and based on my robots.txt everything should be crawlable.
What I'm worried about is that the virtual robots.txt that WP generates is the trouble.
-
Is Yoast generating another sitemap for you?
You have a sitemap from a different plugin, but Yoast can also generate sitemaps, so perhaps you have two, and one of them lists the items that you are disallowing.
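If you want to find out which sitemap is the culprit, one way is to cross-check every URL a sitemap lists against the robots.txt rules. A minimal sketch with Python's stdlib (the inline sitemap XML and example.com URLs are stand-ins for the real fetched files):

```python
import xml.etree.ElementTree as ET
from urllib import robotparser

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

# Stand-in sitemap; in practice you would fetch each plugin's sitemap.xml
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/a-post/</loc></url>
  <url><loc>http://www.example.com/wp-admin/options.php</loc></url>
</urlset>"""

rules = """User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Any URL in this list is exactly what triggers the GWT warning
root = ET.fromstring(sitemap_xml)
blocked = [loc.text for loc in root.iter(NS + "loc")
           if not rp.can_fetch("*", loc.text)]
print(blocked)  # ['http://www.example.com/wp-admin/options.php']
```

Run this against each plugin's sitemap and the one that produces a non-empty list is the one to remove or fix.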
Related Questions
-
Will it be possible to point different sitemaps to the same robots.txt file?
Technical SEO | | nlogix
Please advise.
-
Robots file set up
The robots file looks like it has been set up in a very messy way.
Technical SEO | | mcwork
I understand the # will comment out a line; does this mean the sitemap would not be picked up?
Disallow: /js/ (should this be an Allow rule instead, like /*.js$?)
Disallow: /media/wysiwyg/ seems to be causing alerts in Webmaster Tools, as it cannot access the images within.
Can anyone help me clean this up please?

#Sitemap: https://examplesite.com/sitemap.xml

# Crawlers Setup
User-agent: *
Crawl-delay: 10

# Allowable Index
# Mind that Allow is not an official standard
Allow: /index.php/blog/
Allow: /catalog/seo_sitemap/category/
Allow: /catalogsearch/result/
Allow: /media/catalog/

# Directories
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /errors/
Disallow: /includes/
Disallow: /js/
Disallow: /lib/
Disallow: /magento/
Disallow: /media/
Disallow: /media/captcha/
Disallow: /media/catalog/
#Disallow: /media/css/
#Disallow: /media/css_secure/
Disallow: /media/customer/
Disallow: /media/dhl/
Disallow: /media/downloadable/
Disallow: /media/import/
#Disallow: /media/js/
Disallow: /media/pdf/
Disallow: /media/sales/
Disallow: /media/tmp/
Disallow: /media/wysiwyg/
Disallow: /media/xmlconnect/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /scripts/
Disallow: /shell/
#Disallow: /skin/
Disallow: /stats/
Disallow: /var/

# Paths (clean URLs)
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalog/product/gallery/
Disallow: */catalog/product/upload/
Disallow: /catalogsearch/
Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/

# Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt
Disallow: /get.php # Magento 1.5+

# Paths (no clean URLs)
#Disallow: /*.js$
#Disallow: /*.css$
Disallow: /*.php$
Disallow: /*?SID=
Disallow: /rss*
Disallow: /*PHPSESSID
Disallow: /:
Disallow: /:*

User-agent: Fatbot
Disallow: /

User-agent: TwengaBot-2.0
Disallow: /
-
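On the # question above: yes, a leading # comments out the whole line, so a commented-out Sitemap directive is ignored and the sitemap would not be picked up from robots.txt. A small sketch with Python's stdlib robotparser, using a cut-down version of the rules above, illustrates both behaviours:

```python
from urllib import robotparser

# Cut-down version of the robots.txt above
rules = """#Sitemap: https://examplesite.com/sitemap.xml
User-agent: *
Crawl-delay: 10
Disallow: /js/
Disallow: /media/wysiwyg/
#Disallow: /skin/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The commented Sitemap line is ignored entirely
print(rp.site_maps())                                         # None
# Active Disallow lines block their paths...
print(rp.can_fetch("Googlebot", "/media/wysiwyg/foo.jpg"))    # False
# ...while commented-out ones do not
print(rp.can_fetch("Googlebot", "/skin/frontend/style.css"))  # True
```

This also shows why /media/wysiwyg/ triggers the "cannot access images" alerts: any crawler honouring that group is barred from the whole directory.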
Rebranding: 404 to homepage?
Hello all!
Technical SEO | | JohnPalmer
I did a rebranding, [Domain A] -> [Domain B]. What should I do with all the 404 pages?
1. [Domain A (404)] -> [Domain B (homepage)]?
2. [Domain A (404)] -> [Domain B (404 page + same URL)], for example: xixix.com/page/bla
What do you think?
-
Google Indexing - what did I miss?
Hello, all SEOers~ I renewed my web site about 3 weeks ago, and in order to preserve SEO value as much as possible, I set up 301 redirects, an XML sitemap and so on to minimize the possible data losses. But the problem is that about a week after the site renewal, my team somehow made a mistake and removed all the 301 redirects. So now my old site URLs are all gone from Google's index and my new site is not getting indexed by Google. My traffic and rankings are also gone... OMG. I checked Google Webmaster Tools, but it didn't show any special message other than that Googlebot found an increase in 404 errors, which is obvious. Also, I used "fetch as Googlebot" from Webmaster Tools to increase the chance of indexing, but it doesn't seem to be working much. I am re-doing the 301 redirects today, but I am not sure it means anything anymore. Any advice or opinions?? Thanks in advance~!
Technical SEO | | Yunhee.Choi
-
Meta-robots Nofollow
I don't understand meta-robots nofollow. WordPress has my homepage set to this, according to the SEOmoz tool. Is this really bad?
Technical SEO | | hopkinspat
-
Wordpress Robots.txt Sitemap submission?
Alright, my question comes directly from this article by SEOmoz: http://www.seomoz.org/learn-seo/r... Yes, I have submitted the sitemap to Google's and Bing's webmaster tools, and I want to add the location of our site's sitemaps. Does that mean I erase everything in the robots.txt right now and replace it with this?
User-agent: *
Disallow:
Sitemap: http://www.example.com/none-standard-location/sitemap.xml
...because WordPress comes with some default disallows like wp-admin, trackback, and plugins. I have also read this, but was wondering if this is the correct way to add a sitemap to the WordPress robots.txt: http://www.seomoz.org/q/removing-robots-txt-on-wordpress-site-problem
I am using Multisite with the Yoast plugin, so I have more than one sitemap.xml to submit. Do I erase everything in robots.txt and replace it with what SEOmoz recommended? Hmm, that sounds not right. Something like this?
User-agent: *
Disallow:
Sitemap: http://www.example.com/sitemap_index.xml
Sitemap: http://www.example.com/sub/sitemap_index.xml
Technical SEO | | joony2008
-
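On the multiple-sitemap point in the question above: robots.txt allows any number of Sitemap lines, and they sit outside the User-agent groups, so nothing needs to be erased to add a second one. A sketch with Python's stdlib, using the placeholder URLs from the question, shows both lines being picked up:

```python
from urllib import robotparser

# Multiple Sitemap lines are valid; they are independent of User-agent groups
rules = """User-agent: *
Disallow:

Sitemap: http://www.example.com/sitemap_index.xml
Sitemap: http://www.example.com/sub/sitemap_index.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Both Sitemap lines are recognised, and an empty "Disallow:" blocks nothing
print(rp.site_maps())
print(rp.can_fetch("*", "/wp-admin/"))  # True: nothing is disallowed here
```

Note that an empty Disallow also removes the default wp-admin/trackback protection, which is the trade-off the question is asking about.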
Google shows the wrong domain for client's homepage
Whenever my client's homepage appears in Google results, the search engine is not showing our URL as our domain, but instead a partner domain that links to us. (The correct title and meta description of our homepage are showing.) I believe this is caused by the partner website (with a much higher PageRank) linking to our homepage from their footer, using a URL on its own domain that 302 redirects to our homepage.
Example:
Link: http://www.partnerwebsite.com/?ad2203
302 redirects to: http://www.clientwebsite.com/?moreadtracking
The simple fix would be for the client to ask for removal of the 302 hijacking link, but they are uncomfortable with this request since they have requested it before, and the relationship is not the best. Is there any other way to fix this?
Technical SEO | | Conor_OShea_ETUS