What is the best way to stop a page being indexed?
-
What is the best way to stop a page being indexed? Is it to implement it at a site level with a robots.txt file in the root directory, or at a page level with the robots meta tag?
-
To prevent all robots from indexing a page on your site, place the following meta tag into the <head> section of your page: <meta name="robots" content="noindex">
To prevent only a specific search engine bot, for example Google's, while allowing other robots to index the page, use: <meta name="googlebot" content="noindex">
When Google sees the noindex meta tag on a page, it will completely drop the page from its search results, even if other pages link to it. Other search engines, however, may interpret this directive differently, so a link to the page can still appear in their search results.
Note that because Google has to crawl your page in order to see the noindex meta tag, there's a small chance that Googlebot won't see and respect it. If your page is still appearing in results, it's probably because Google hasn't crawled your site since you added the tag. (Also, if you've used your robots.txt file to block this page, Google won't be able to see the tag at all.)
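One quick way to check whether a crawler would actually see the tag is to parse the page's HTML and look for a robots noindex directive. A minimal sketch in Python using only the standard library (the sample HTML below is hypothetical):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collect the content of any robots/googlebot meta tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() in ("robots", "googlebot"):
                self.directives.append((attrs.get("content") or "").lower())

def has_noindex(html):
    """Return True if the page carries a noindex robots directive."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in d for d in finder.directives)

# Hypothetical page markup:
page = '<html><head><meta name="robots" content="noindex"></head><body></body></html>'
print(has_noindex(page))  # True
```

In practice you would fetch the live page HTML first (for example with urllib) and feed it to the same checker; remember that a robots.txt block would stop a real crawler from ever fetching the page in the first place.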
If the content is currently in Google's index, it will be removed after the next time Google crawls it. To expedite removal, use the Remove URLs tool in Google Webmaster Tools.
-
Thanks that's good to know.
-
"noindex" takes precedents over "index" so basicly if it says "noindex" anywhere google will follow that.
-
Thanks for the answers, guys. Can I ask: in the event that the robots.txt file is implemented at the domain level but the markup on the page is <meta name="robots" content="index, follow">, which one wins?
-
Why not both? In some cases one method is preferred over the other, or in fact necessary. With non-HTML documents such as PDFs, you may have to use robots.txt or header tags to keep them from being indexed. I'll also give you another option: password-protect the directory.
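For non-HTML files like PDFs, the "header tags" mentioned above refers to the X-Robots-Tag HTTP response header. A sketch of what that might look like in an Apache .htaccess file (assuming Apache with mod_headers enabled; adjust the file pattern to your needs):

```apache
# Send a noindex directive for all PDF files (requires mod_headers)
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```

Because the directive travels in the HTTP header rather than the document body, it works for any file type a crawler fetches, not just HTML pages.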
-
Hi,
While the page-level robots meta tag is the best way to stop a page from being indexed, a domain-level robots.txt can save the search engines some bandwidth. With robots.txt blocking in place, Google will not crawl the page from within the website, but it can still pick up the URL if it is mentioned somewhere else on a third-party website. In cases like these, the page-level robots meta tag comes to the rescue. So it would be best if the pages are blocked using the robots.txt file as well as the page-level meta robots tag. Hope that helps.
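For reference, the robots.txt blocking described above might look like this (the path is hypothetical; the file lives at the root of the domain):

```
User-agent: *
Disallow: /private-page.html
```

Keep in mind the caveat from the earlier answer: a robots.txt block prevents crawlers from fetching the page at all, so on blocked pages they will never see the meta tag itself.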
Good luck friend.
Best regards,
Devanur Rafi