Timely use of robots.txt and meta noindex
-
Hi,
I have been checking every possible resource on content removal, but I am still unsure how to remove already-indexed content.
When I use robots.txt alone, the URLs remain in the index, though no crawl budget is wasted on them. Still, having 100,000+ completely identical login pages sitting in the omitted results cannot mean anything good.
When I use meta noindex alone, I keep my index clean, but I also keep Googlebot busy crawling these no-value pages.
When I use robots.txt and meta noindex together on existing content, I ask Google to please ignore my content, but at the same time I block it from ever crawling the noindex tag.
Robots.txt plus the URL removal tool is still not a good solution, as I have failed to remove whole directories that way; it seems only exact URLs can be removed like this.
I need a clear solution that solves both issues (indexing and crawling).
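For reference, these are the two mechanisms I keep weighing, with /login/ standing in for one of the real directories:

# robots.txt - blocks crawling, but URLs that are already indexed stay in the index
User-agent: *
Disallow: /login/

<!-- meta noindex in the page head - removes the page from the index,
     but only if the crawler is allowed to fetch the page and see the tag -->
<meta name="robots" content="noindex">

Used together they work against each other: the Disallow keeps Googlebot from ever seeing the noindex tag.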
What I am trying now is the following:
I remove these directories (one at a time, to test the theory) from the robots.txt file, and at the same time I add the meta noindex tag to every page within the directory. The number of indexed pages should start decreasing (while useless-page crawling increases), and once it is low or zero, I would put the directory back into robots.txt and keep the noindex on all pages within it, as sketched below.
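In robots.txt terms, the staged plan would look something like this (again with /login/ as a placeholder):

# Phase 1: open the directory so Googlebot can reach the noindex tags
User-agent: *
# Disallow: /login/   <- temporarily removed; every page under /login/ now carries <meta name="robots" content="noindex">

# Phase 2: once the pages have dropped out of the index, restore the block
User-agent: *
Disallow: /login/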
Can this work the way I imagine, or do you have a better way of doing so?
Thank you in advance for all your help.
-
Hi Deb,
Thank you for your reply.
I never thought Google would crawl robots.txt this rarely. I actually read somewhere, and it makes complete sense, that before they start crawling they validate the URLs against robots.txt. It is only one file, but basically one of the most important ones.
This has been a real eye-opener, thank you for drawing my attention to it. Anyway, I have now submitted the file through 'Fetch as Google'.
Regarding your URL suggestion: I do not want them all returning 404. The login pages, for example, are still in use, and the reason they have individual URLs is that we want visitors to return to the page they left before we asked them to log in. So status 200 is fine, because these pages exist for our customers, but the very same pages are totally useless for Google to crawl or index.
I hope this clarifies.
-
It seems the latest robots.txt file has not been cached by Google so far .. this is what it has –
So you need to use 'Fetch as Google' and submit this robots.txt file to the index to fix this issue as soon as possible.
What concerns me is that defunct URLs like http://www.kozelben.hu/login?r=%2Fceg%2Fdrink-island-bufe-whisky-bar-alkotas-utca-17-1123-budapest-126126%23addComment or http://www.kozelben.hu/supplier/nearby/supplierid/127493/type/geo are returning a 200 OK server-side response code when they should be returning a 404. That would have stopped the problem once and for all.
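You can verify the response codes from the command line, for example:

curl -s -o /dev/null -w "%{http_code}\n" "http://www.kozelben.hu/supplier/nearby/supplierid/127493/type/geo"
# currently prints 200; a page you truly want gone should return 404 (or 410)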
However, assuming the CMS of your website does not offer you any such option (in that case, it is a bad CMS), you need to apply the meta noindex tag to these pages and wait patiently for search engines to pick it up.
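If editing the page <head> is awkward in your CMS, the same directive can also be sent as an HTTP response header instead. A minimal sketch, assuming an nginx server (adjust the path to your real login URLs):

location /login {
    add_header X-Robots-Tag "noindex";
}

Googlebot treats the X-Robots-Tag header the same way as the meta tag, and it also works for non-HTML files.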
Can't you fix the 404 thing? Let us know.
-
Really good article, indeed!
I have been thinking about the whole concept over the weekend, and now I have a further idea that is definitely worth considering.
Thank you again, Ryan.
-
Lindsay wrote a great article on the topic which I am sure you will enjoy: http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
-
Thank you for the further info, Ryan.
Although I see your point and accept there is a lot of truth in it, when I check all my competitors and even the largest sites around the web, they still use robots.txt (even Google does).
I do, however, accept that noindex is a superior solution to robots.txt, and I will use it for all the content I do not want indexed.
I will then see whether, and how, I still need robots.txt. I hope it does not hurt to include an already-noindexed page in robots.txt later, once it is out of the index.
-
I understand your concern, Andras. The two questions I would focus on with respect to crawl budget are:
1. Is all your content being indexed properly?
2. Is your content being indexed in a timely manner?
If the answer to the above two questions is yes, I would not spend any more time thinking about crawl budget. Either way, using the "noindex" meta tag is going to be the best way to handle the issue you originally presented.
On a related note, does the content on your "useful" pages change frequently? If so, ensure you are optimizing your links (both internal and external) to these pages. When you demonstrate these are important pages to your site, Google will crawl the pages more frequently.
-
Hi Ryan,
Thank you for your reply.
The only worry I have regarding crawl budget is that I currently have three times as many indexed pages as useful pages, due to the issues I mentioned earlier.
It is true that I do not have daily content updates on all of my useful pages; however, my understanding is that Google allocates each site an individual crawl budget based on the value it assigns to that site.
I just want this budget spent wisely, not causing my useful pages to be crawled less frequently because no-value (but noindexed) content is being crawled instead.
-
Hi Andras,
The first thing to know is a general rule: the best robots.txt file is a blank one. There is almost always a better way to manage a situation than robots.txt, for numerous reasons, one of which is that search engines do not always see the robots.txt file.
Regarding the noindex meta tag, that is the proper solution. I understand your concern over crawl budget, but I suggest that in this instance it is not warranted. What wastes crawl budget is making search engines spend extra time on slow servers, bad code, thin content, etc. If you have pages which should not be indexed, adding the noindex tag is likely the best solution.
Without being familiar with your site, it is not possible to offer a definitive answer, but generally speaking this response should be accurate. Keep in mind many sites have millions of pages, and Google has the ability to crawl the entire site each month.
-
Can you show us examples of the URLs that are causing you trouble? That would make it easier for us to suggest a solution.
Related Questions
-
Blocking in Robots.txt and the re-indexing - DA effects?
I have two sites with good, high-level DA that target the US (.com) and UK (.co.uk). The .com ranks well but is commercially dormant; the .co.uk is the commercial focus and gets great traffic. The issue is that the .com ranks for the brand in the UK, and I want the .co.uk to rank for the brand in the UK. I can't 301 the .com as it will be used again in the near future. I want to block the .com in robots.txt with a view to un-blocking it when I need it. I don't think the DA would be affected, as the links stay and the site stays live (just not indexed), so when I unblock it should be fine. HOWEVER, my worry is that things like the organic CTR data Google records, and other factors, won't contribute to its value. Has anyone ever blocked and un-blocked a site, and what were the effects? All answers greatly received - cheers, GB
Technical SEO | Bush_JSM
-
Using images from one domain on another?
I run a travel photography business where I sell fine art prints. I've been toying with the idea of creating a few new websites about some of the places I've traveled to, for a few reasons. First, because I love talking about my travels; second, because I feel it might be a good way to bring in more print sales from those places. My question: if I were to use the images from my main photography sales domain on a different domain, how does this affect SEO? The filenames for these photographs are already well optimized for search. Thanks!
Technical SEO | shannmg1
-
Are robots.txt wildcards still valid? If so, what is the proper syntax for setting this up?
I've got several URLs that I need to disallow in my robots.txt file. For example, I've got several documents that I don't want indexed, and filters that are getting flagged as duplicate content. Rather than typing in thousands of URLs, I was hoping that wildcards are still valid.
Technical SEO | mkhGT
-
What is "evttag=" used for?
I see evttag= used on realtor.com, in what looks to be a click-tracking role. Does anyone know if this is an official standard or something they made up?
Technical SEO | JDatSB
-
What Google uses in search result descriptions
Recently, Google has started including certain information from our web pages in their search result descriptions that is a bit puzzling. For example, if you google 'Wedding Band Raleigh', the description they use for our site's (GigMasters) page begins with the text 'Results 1 - 10 of 1005'. Not sure why they are pulling that information. That text is on the page, but it's not high up on the page or marked with any special h1, h2, or h3 tag. We do have that information inside a div which we have named 'Results'. Maybe that's why? Did we inadvertently use some sort of Google rich snippet or schema.org naming convention? Any insight would be hugely appreciated.
Technical SEO | gigmasters
-
Robots.txt and robots meta
I have an odd situation. I have a CMS that has a global robots.txt which has the generic:
User-Agent: *
Allow: /
I also have one CMS site that needs to not be indexed, ever. I've read in various places (like http://www.jesterwebster.com/robots-txt-vs-meta-tag-which-has-precedence/22 ) that robots.txt always wins over meta, but I have also read that robots.txt indicates spiderability whereas meta can control indexation. I just want the site to not be indexed. Can I leave the robots.txt as is and still put NOINDEX in the robots meta?
Technical SEO | Highland
-
Which pages to "noindex"
I have read through the many articles regarding the use of meta noindex, but what I haven't been able to find is a clear explanation of when, why, or what to use it on. I'm thinking that it would be appropriate on legal pages such as the privacy policy and terms of use, search results pages, and blog archive and category pages. Thanks for any insight on this.
Technical SEO | mmaes
-
What should be noindexed on a Wordpress blog?
I know this can be an "it depends" answer, so I'll try to explain; qualifications on your answers would be great. I use the WordPress architecture for my own and clients' sites and blogs. Almost every business site we create has a blog, and I'm always working to improve results on them. My strategy has been the following: Categories: general, main content types, general keywords; indexed, followed. Tags: very specific, post-specific, may only be used once for one post. My categories have descriptions that are displayed on the category pages with excerpts. Tags rarely have a description but are displayed with excerpts on the page. My idea has been to have the categories indexed so their content is crawled, since they have unique content thanks to the category descriptions. Tags shouldn't be archived because they may be all over the place and may have only one post with no tag description. I'm trying to reduce duplicate content, but I don't want to limit results for my clients and myself. Should I set tags to noindex, follow, or should I have them indexed? The only thing making me consider indexing the tags is that I may be able to get additional traffic through the more specific ones (i.e. tag = meta tags, category = SEO).
Technical SEO | JaredDetroit