Blocked by Meta Robots.
-
Hi,
I am getting this warning in my reporting:
- Blocked by Meta Robots - This page is being kept out of the search engine indexes by meta-robots.
What does that mean, and how do I solve it? I'm using WordPress as my website engine.
Also, regarding rel=canonical: on which page should I put this tag, the original page or the duplicate page?
Thanks for all of your answers; it would mean a lot.
-
There are WordPress plugins you can use to modify your robots.txt; WordPress makes it difficult to edit directly.
http://yoast.com/example-robots-txt-wordpress/
Also, make sure it is an important page for your blog. Google is just being proactive on your behalf; it might be an irrelevant page to your overall plan.
-
Actually, it would not be the meta robots noindex tag. That meta tag does not prevent Google from crawling the page it is on; if it did, Google would never be able to crawl the page and read the tag in the first place :-). What meta robots noindex does is tell Google to remove the page from the index, and it is very effective for that purpose.
That said, the GWT warning is probably related to your robots.txt file, located at
http://www.yourdomain.ext/robots.txt
Put that URL in your browser and see whether any of your files or pages are Disallowed in that file. If they are, Google will not be able to spider those pages at all, let alone read their meta tags. Do some searching on how robots.txt works; Moz has a good guide:
http://moz.com/learn/seo/robotstxt
Here is a video on how to use WordPress with robots.txt. It may or may not match your configuration, but it shows a plugin you can use to make adjustments:
http://www.youtube.com/watch?v=JY9A5OqHTvw
Once you understand how the file works, you will know what to update. Get with your IT person or whoever administers your site.
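If you want to verify which URLs a robots.txt file actually blocks, Python's standard library ships a robots.txt parser you can test against. A minimal sketch — the rules, domain, and paths below are placeholders, not the asker's real file:

```python
# Check whether specific URLs are blocked by a robots.txt rule set,
# using Python's standard-library parser.
from urllib.robotparser import RobotFileParser

# Placeholder robots.txt content - substitute your own site's rules.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot falls under the "*" group here, since no more specific group exists.
print(parser.can_fetch("Googlebot", "http://www.example.com/private/page.html"))  # False: blocked
print(parser.can_fetch("Googlebot", "http://www.example.com/public/page.html"))   # True: crawlable
```

In real use you would call `parser.set_url("http://www.yourdomain.ext/robots.txt")` followed by `parser.read()` to fetch the live file instead of parsing a string.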
-
It means there is a meta robots tag on the page that is blocking it. Look in the head section of the page for the tag and remove it, and you should be good to go. Check your WordPress settings, too; sometimes these tags are assigned to pages by default. You could also install an SEO plugin to help manage the meta robots tags.
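For reference, the blocking tag looks like this in the page's head section (a generic example; the exact attributes on your site may differ):

```html
<!-- Keeps the page out of search engine indexes -->
<meta name="robots" content="noindex, follow">

<!-- To allow indexing again, remove the tag entirely, or use: -->
<meta name="robots" content="index, follow">
```

In WordPress, the site-wide version of this tag is controlled by the "Discourage search engines from indexing this site" checkbox under Settings > Reading.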
Related Questions
-
Robots.txt file issues on Shopify server
We have repeated issues with one of our ecommerce sites not being crawled. We receive the following message: Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster. Read our troubleshooting guide. Are you aware of an issue with robots.txt on the Shopify servers? It is happening at least twice a month so it is quite an issue.
Can't work out robots.txt issue.
Hi, I'm getting crawl errors saying Moz isn't able to access my robots.txt file, but it looks completely fine to me. Any chance anyone can help me understand what the issue might be? www.equip4gyms.co
Why is the Moz crawl returning URLs with variable results showing Missing Meta Desc? Example: http://nw-naturals.net/?page_number_0=47
Can you help me dive down into my website guts to find out why the MOZ crawl is returning URLs with variable results? And saying this is missing a description when it's not really a page? Example: http://nw-naturals.net/?page_number_0=47. I've asked MOZ but it's a web development issue so they can't help me with it. Has anyone had an issue with this on their website? Thank you!
Same linking c-blocks trend as competitor
I noticed in our competitive link report that our number of linking c-blocks has risen and fallen in the exact same pattern as one of our competitors. Is there a reason why this would be happening?
Robots.txt
I have a page used as a reference that lists 150 links to blog articles; I use it in a training area of my website. I now get warnings from Moz that it has too many links, so I decided to disallow the page in robots.txt. Below is what appears in the file: Robots.txt file for http://www.boxtheorygold.com User-agent: * Disallow: /blog-links/ My understanding is that this simply has Google bypass the page and not crawl it. However, in Webmaster Tools, I used the Fetch tool to check a couple of my blog articles. One returned the expected result. The other returned "access denied" due to robots.txt. Both blog articles are linked from the /blog-links/ reference page. Question: Why does Google refuse to crawl the one article (using the Fetch tool) when it is not referenced at all in the robots.txt file? Why is access denied? Should I have used a noindex on this page instead of robots.txt? I am fearful that robots.txt may be blocking many of my blog articles. Please advise. Thanks,
Ron
Does SEOmoz recognize duplicated URLs blocked by robots.txt?
Hi there: Just a newbie question... I found some duplicated URLs in the "SEOmoz Crawl Diagnostics reports" that should not be there. They are intended to be blocked by the site's robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking content in the robots.txt file: User-agent: * Disallow: /components/ Question is: Will this kind of duplicated URL error be removed from the error list automatically in the future? Or should I just keep track of which errors should not really be in the list? What is the best way to handle this kind of error? Thanks and best regards, Franky
What are the names of the SEOmoz and Open Site Explorer robots?
I would like to exclude the SEOmoz and Open Site Explorer bots in robots.txt so they don't index my sites. What are their names?
Missing Meta Description tags?
I just ran our first SEOmoz Pro report and it's showing that every article page on our site is missing descriptions. However, the descriptions are visible in the source and Google seems to be picking them up.
Can you please tell me why SEOmoz is marking them as missing? Are we doing something wrong here? http://notebooks.com