What does 'Blocked by Meta Robots' mean, and how do I fix it?
-
When I run my crawl diagnostics, I get a "Blocked by Meta Robots" warning, which means my page is not being indexed in the search engines. Obviously this is a major issue for organic traffic!
What does it actually mean, and how can I fix it?
-
Thank you for your help. It is a WordPress site; I don't know if that makes a difference. The site is www.airlinegamesatc.com. Thanks again!
-
It sounds like you have a noindex tag in your page source. Open your website, view the source, and search for "noindex". That tag tells search engine bots not to index the page, so you'll want to remove it. Since the site runs on a CMS, the tag is probably in a header include file or template. If you post your site's URL, I can check for you.
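If you'd rather not eyeball the whole source by hand, you can parse the HTML and look for a robots meta directive programmatically. A minimal sketch using only Python's standard library (the function name and sample markup are illustrative, not part of any Moz tooling):

```python
from html.parser import HTMLParser


class NoindexFinder(HTMLParser):
    """Scan meta tags for a robots/googlebot noindex directive."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        name = (d.get("name") or "").lower()
        content = (d.get("content") or "").lower()
        # <meta name="robots" content="noindex,..."> blocks indexing
        if name in ("robots", "googlebot") and "noindex" in content:
            self.noindex = True


def has_noindex(html: str) -> bool:
    """Return True if the HTML contains a noindex robots meta tag."""
    finder = NoindexFinder()
    finder.feed(html)
    return finder.noindex
```

Feed it the page source (e.g. fetched with your HTTP client of choice) and it will flag pages carrying the directive regardless of attribute casing.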
Related Questions
-
Robots.txt error
Moz's crawler is not able to access our robots.txt file due to a server error. Please advise on how to tackle the server error.
Technical SEO | Shanidel0 -
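For reference, a robots.txt is just a plain-text file that should come back with a 200 status; most crawlers (Moz's included, as far as I know) back off or report errors when /robots.txt returns a 5xx, so the first step is to check what status code the server actually sends for that path. A minimal permissive file looks like this (the sitemap URL is a placeholder):

```text
# Allow all crawlers to fetch everything
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```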
"URL blocked by robots.txt" on my video sitemap
I'm getting a warning about "URL blocked by robots.txt" on my video sitemap, but only for YouTube videos. Has anyone else encountered this issue, and if so, how did you fix it? Thanks, J
Technical SEO | Critical_Mass0 -
4xx fix
Hi,
I have quite a lot of 4xx errors on our site. They occurred because I cleaned up poor URLs that had commas etc. in them, so it is the old URLs that now return 4xx. There are no links to the URLs that 4xx. What is the best way of rectifying this issue of my own making?! Thanks,
Gavin
Technical SEO | gavinr0 -
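For what it's worth, when retired URLs still collect occasional hits, the usual fix is a one-to-one 301 from each old URL to its cleaned replacement; if truly nothing links to them, letting them return 404 (or 410) is also fine and they drop out of crawl reports over time. A hedged Apache sketch, with made-up paths standing in for the real old/new pairs:

```apache
# Hypothetical example: permanently redirect a retired comma URL
# to its cleaned replacement (Apache mod_alias).
Redirect 301 "/listings/3-bed,-2-bath" "/listings/3-bed-2-bath"
```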
Blocked by robots
My client's GWT shows a number of notices for "blocked by meta-robots"; these are all blog posts, categories, or tags. His former SEO told him: "We've activated the following settings: Use noindex for Categories; Use noindex for Archives; Use noindex for Tag Archives, to reduce keyword stuffing and duplicate post tags. Disabling all 3 noindex settings above may remove the Google blocks, but will also send too many similar tag, post archive, and category pages." Is this guy correct? What would be the problem with indexing these? Am I correct in thinking they should be indexed? Thanks
Technical SEO | Ezpro90 -
On-Page Report Says 'F', and I'm Confoozled As to Why
I'm primarily interested in how we failed our "Broad Keyword Usage in Title" category. The keyword pair we're gunnin' for is "Mac Windows". Our current page title is: "CrossOver: Windows on Mac and Linux with the easiest and most affordable emulator - CodeWeavers". This is, I grant, ugly. However, bear with me. The SEOMoz Report Card says "Easy Fix!" and suggests: "Employ the keyword in the page title, preferably as the first words in the element." I humbly submit that "Mac" and "Windows" ARE in the page title. So what am I missing? Is it the placement of the words relative to each other, or relative to the start of the title? Or is the phrase "CrossOver:" somehow blocking the rest of the sentence from being read? Are colons evil? I'm genuinely mystified as to why (from a structural standpoint) our existing title tag is failing this test, and I'd be delighted for answers and/or feedback. Thanks in advance.
Technical SEO | CodeWeavers0 -
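Going by the tool's own hint ("employ the keyword... preferably as the first words"), the grade likely reflects placement: "Mac" and "Windows" do appear, but late in the title and split apart. A purely hypothetical rewrite that front-loads the pair while keeping the brand might look like:

```html
<title>Windows on Mac: CrossOver, an easy and affordable emulator | CodeWeavers</title>
```

This is only a sketch of the placement idea the report describes, not a recommendation on what the title should actually say.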
How do I fix a multiple domain mess?
I just picked up an account that is a franchisee and they have 6 exact match domains plus their main domain (all exact duplicates, not 301s). GA shows the main domain as getting the lion's share of hits, but for some important local keywords, the exact match domains rank higher. Some pages may have the exact match domain, primary domain and the franchise domain all ranking on the same SERP. Yuk! My strategy is to work on the main domain and as the work progresses, the main site will surpass the exact match domains and main franchise domain for the important searches. For the exact match domains I plan on just leaving them alone. Is this a sound strategy? I could pull the exact match domains down, but since they rank well for their keywords, it seems most sensible to leave them up. What do you think?
Technical SEO | KristinnD0 -
404s and duplicate content
I have real estate websites that add new pages when listings come on the market and delete them when a property is sold. My concern is twofold: this creates a significant number of 404s, and the listing pages that are added will match those of everyone else in my market who uses the same IDX provider. I could switch to an IDX provider that uses an iframe, which doesn't create new pages, but when I used an iframe before, my time on site was 3 min with 2.5 pages per visit; now it's 6+ min with 7.5 pages per visit. The new pages add fresh content daily. So which is better: fresh content and stronger on-site metrics (with the 404s), or fewer 404s and no duplicate content but weaker on-site metrics? Any thoughts on this issue? Any advice would be appreciated.
Technical SEO | AnthonyLasVegas0