What is the best way to stop a page being indexed?
-
What is the best way to stop a page from being indexed? Is it better to block it at the site level, with a robots.txt file in the root directory, or at the page level, with the robots meta tag?
-
To prevent all robots from indexing a page on your site, place the following meta tag into the <head> section of your page: <meta name="robots" content="noindex">
To allow other robots to index the page while blocking only a specific search engine's bot, name that bot instead - for example, to block only Google's robots from indexing the page: <meta name="googlebot" content="noindex">
When Google sees the noindex meta tag on a page, it will completely drop the page from its search results, even if other pages link to it. Other search engines, however, may interpret this directive differently, so a link to the page can still appear in their search results.
Note that because Google has to crawl your page in order to see the noindex meta tag, there's a small chance that Googlebot won't see and respect it. If your page is still appearing in results, it's probably because Google hasn't crawled your site since you added the tag. (Also, if you've used your robots.txt file to block this page, Google won't be able to see the tag at all.)
If the content is currently in Google's index, it will be removed the next time Google crawls the page. To expedite removal, use the Remove URLs tool in Google Webmaster Tools.
-
Thanks, that's good to know.
-
"noindex" takes precedents over "index" so basicly if it says "noindex" anywhere google will follow that.
-
Thanks for the answers guys... Can I ask: in the event that the robots.txt file blocks a page at the domain level, but the markup on the page is <meta name="robots" content="index, follow">, which one wins?
-
Why not both? In some cases one method is preferred over another, or in fact necessary. With non-HTML documents such as PDFs you can't add a meta tag, so you may have to use robots.txt or an X-Robots-Tag HTTP header to keep them from being indexed. I'll also give you another option, and that is to password-protect the directory.
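For example, here's a minimal sketch of the HTTP-header approach for PDFs, assuming an Apache server with mod_headers enabled (the file pattern and directives are just an illustration, not a prescription):

# .htaccess - requires Apache's mod_headers module
<FilesMatch "\.pdf$">
  # Send the noindex directive in the response header for every PDF
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>

This does the same job as the on-page meta tag, but for files that have no <head> to put a tag in.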
-
Hi,
While the page-level robots meta tag is the best way to stop a page from being indexed, a domain-level robots.txt can also save the search engines some crawl bandwidth. With robots.txt blocking in place, Google will not crawl the page from within your website, but it can still pick up the URL if it is mentioned somewhere else, such as on a third-party website. In cases like these, the page-level robots meta tag comes to the rescue. So it is best to block the pages using the robots.txt file as well as the page-level meta robots tag. Hope that helps.
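As a rough illustration of using both methods together (the /members/ path is just a hypothetical example):

# robots.txt, placed in the site root
User-agent: *
Disallow: /members/

And in the <head> of the page itself:
<meta name="robots" content="noindex, follow">

The robots.txt file lives at the root of the domain; the meta tag goes on each individual page you want kept out of the index.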
Good luck friend.
Best regards,
Devanur Rafi