Meta robots
-
Hi,
I am checking a website for SEO and I've noticed that a lot of pages from the blog have the following meta robots:
<meta name="robots" content="follow">
Normally these pages should be indexed, since search engines will index and follow by default. In this case however, a lot of pages from this blog are not indexed.
Is this because a meta robots tag is specified but only contains follow? In other words, do search engines only apply the index, follow default when no meta robots tag is specified at all?
And secondly, if I were to change the meta robots tag, should I just add index to it, or remove the tag from the code completely?
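To illustrate, these are the variants I'm weighing (the current tag versus the two possible fixes):

```html
<!-- Current tag on the blog pages: the index directive is not mentioned at all -->
<meta name="robots" content="follow">

<!-- Option 1: state the index directive explicitly -->
<meta name="robots" content="index, follow">

<!-- Option 2: remove the tag entirely and rely on the default behavior,
     which is equivalent to index, follow -->
```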
Thanks for checking!
-
Thanks, this is a really helpful answer.
-
Hi Mat_C,
There is no issue with that meta robots tag, and it is not the reason those pages aren't indexed.
I'd dig a little deeper to understand why Google didn't want to index those pages.
Do you have access to that website's Search Console? What does the Index Coverage report say?
Have you tried looking up one of those URLs in the URL Inspection tool? There you might find out why Google chose not to index it.
That said, assuming the site runs WordPress as its CMS, the widely known Yoast plugin can be configured to make many "known to cause issues" page types non-indexable, such as tag or archive pages. Have you checked that this is not the case?
Also, there is another common reason why pages aren't indexed: canonicals chosen by Google. This happens when pages are almost identical and/or serve the same user intent, so Google's algorithms treat them as the same page and set one as the canonical for the others, even when no canonical tag is present.
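For reference, this is roughly what to look for in the page source of the non-indexed pages (a sketch; the exact markup depends on the Yoast version and settings):

```html
<!-- What Yoast typically outputs on tag/archive pages it is set to keep out of the index -->
<meta name="robots" content="noindex, follow">

<!-- A canonical tag pointing at a different URL also keeps a page out of the index.
     Note that Google may pick its own canonical even when this tag is absent. -->
<link rel="canonical" href="https://example.com/blog/original-post/">
```

If the URL Inspection tool reports "Duplicate, Google chose different canonical than user", the second case is the culprit.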
Hope it helps.
Best luck.
GR
-
I am pretty sure that's not how meta robots tags work. If you fail to specify a directive, Google assumes it is allowed to index by default. That said, search engines do not index pages they don't think users will like or find useful; just because a search engine 'can' index a URL, that doesn't mean it will!
Follow directives and index directives actually operate on two entirely different subsets of data. Follow/nofollow directives are link-level, meaning they apply only to the hyperlinks on a page, not to the page itself. Index/noindex directives are page-level and apply to the entire page on which they sit.
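A minimal sketch of the two levels described above, using a placeholder URL:

```html
<!-- Page-level: this meta directive applies to the entire page it appears on -->
<meta name="robots" content="noindex">

<!-- Link-level: the rel attribute applies only to this one hyperlink, not to the page -->
<a href="https://example.com/some-page" rel="nofollow">example link</a>
```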
Due to this, I don't believe they could or would interfere with each other in the way you described.
It's an interesting experiment, though. To test, I'd recommend adding index rather than removing follow. If that doesn't make any kind of difference, the tag isn't the issue.
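Concretely, the suggested experiment would change the tag like so, keeping the existing directive in place:

```html
<!-- Before -->
<meta name="robots" content="follow">

<!-- After: add index alongside follow instead of removing the tag -->
<meta name="robots" content="index, follow">
```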