Timely use of robots.txt and meta noindex
-
Hi,
I have been checking every possible resource on content removal, but I am still unsure how to remove already indexed content.
When I use robots.txt alone, the URLs remain in the index; no crawl budget is wasted on them, but having 100,000+ completely identical login pages among the omitted results cannot be a good sign.
When I use meta noindex alone, I keep my index clean, but I also keep Googlebot busy crawling these no-value pages.
When I use robots.txt and meta noindex together on existing content, I suggest to Google that it please ignore my content, but at the same time I prevent it from ever crawling the noindex tag.
Robots.txt combined with URL removal is still not a good solution, as I have failed to remove whole directories this way; it seems only exact URLs can be removed like that.
I need a clear solution which solves both issues (index and crawling).
What I am trying now is the following:
I remove these directories from robots.txt (one at a time, to test the theory) and at the same time add the meta noindex tag to every page within the directory. The number of indexed pages should start decreasing (while crawling of the useless pages increases), and once few or none of them remain indexed, I would put the directory back into robots.txt and keep the noindex tag on all of the pages within it.
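To make it concrete, this is roughly the robots.txt state I have in mind for one directory (/login/ is only a placeholder for my real paths):

```
User-agent: *
# The line below is removed during the transition period, so Googlebot
# can reach the pages and see the noindex tag on each of them; it is
# restored once the pages have dropped out of the index.
# ("/login/" is just a placeholder for the real directory.)
Disallow: /login/
```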
Can this work the way I imagine, or do you have a better way of doing so?
Thank you in advance for all your help.
-
Hi Deb,
Thank you for your reply.
I never thought Google would crawl robots.txt this rarely. I actually read somewhere, and it makes complete sense, that before they start crawling, they validate the process against robots.txt. It is only one page, but basically one of the most important ones.
This was a shocking discovery for me; thank you for drawing my attention to it. Anyway, I have now submitted the page through 'Fetch as Google'.
Regarding your URL suggestion, I do not want them to return 404, at least not all of them. The login pages, for example, I still want to use, and the reason we have individual URLs is that we would like our visitors to return to the page they left before we asked them to log in. So status 200 is fine, because these pages exist for our customers, but the very same pages are totally useless for Google to crawl or to index.
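In case a concrete sketch helps: since these pages are generated by the server anyway, I imagine something like the following rule (assuming nginx; /login stands in for our real path), which keeps the 200 status for visitors while signalling noindex to search engines:

```nginx
# Sketch only: every URL under /login keeps returning 200 to visitors,
# but carries an X-Robots-Tag response header telling search engines
# not to index it.
location ^~ /login {
    add_header X-Robots-Tag "noindex";
    # ...normal handling of the login page continues here...
}
```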
I hope this clarifies.
-
It seems the latest robots.txt file has not been cached by Google so far. This is what it has –
So you need to use 'Fetch as Google' and submit the robots.txt file to the index to fix this issue as soon as possible.
What concerns me is that defunct URLs like http://www.kozelben.hu/login?r=%2Fceg%2Fdrink-island-bufe-whisky-bar-alkotas-utca-17-1123-budapest-126126%23addComment or http://www.kozelben.hu/supplier/nearby/supplierid/127493/type/geo are returning a 200 OK server-side response code when they should be returning a 404. That would have stopped the problem once and for all.
However, assuming your website's CMS does not offer any such option (in that case, it is a bad CMS), you need to apply the meta noindex tag to these pages and wait patiently for search engines to pick it up.
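For reference, the tag belongs in the <head> of every affected page; a minimal example:

```html
<!-- Placed in the <head> of each page that should drop out of the
     index. "follow" still lets link equity flow through the page. -->
<meta name="robots" content="noindex, follow">
```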
Can't you fix the 404 issue? Let us know.
-
Really good article, indeed!
I have been thinking about the whole concept over the weekend, and now I have a further idea that is definitely worth considering.
Thank you again, Ryan.
-
Lindsay wrote a great article on the topic, which I am sure you will enjoy: http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
-
Thank you for the further info, Ryan.
Although I see your point and accept there is a lot of truth in it, when I check all my competitors and even the largest sites around the web, they still keep using robots.txt (even Google does).
I do, however, accept that noindex is a superior solution to robots.txt and will use it for all the content I do not want indexed.
I will then see whether, and how, I still need robots.txt. I hope it does not hurt to have a noindexed page included in robots.txt (at a later time, once it is already out of the index).
-
I understand your concern, Andras. The two questions I would focus on with respect to crawl budget are:
1. Is all your content being indexed properly?
2. Is your content being indexed in a timely manner?
If the answer to the above two questions is yes, I would not spend any more time thinking about crawl budget. Either way, using the "noindex" meta tag is going to be the best way to handle the issue you originally presented.
On a related note, does the content on your "useful" pages change frequently? If so, ensure you are optimizing your links (both internal and external) to these pages. When you demonstrate these are important pages to your site, Google will crawl the pages more frequently.
-
Hi Ryan,
Thank you for your reply.
My only worry regarding crawl budget is that I currently have three times as many indexed pages as useful pages, due to the issues I mentioned earlier.
It is true that I do not have daily content updates on all of my useful pages; however, my understanding was that Google allocates an individual crawl budget to every site, based on the value it assigns to that site.
I just want this budget to be spent wisely, not to cause my useful pages to be crawled less frequently because no-value (but noindexed) content is being crawled instead.
-
Hi Andras,
The first thing to know is a general rule: the best robots.txt file is a blank one. There is almost always a better method of managing a situation than using robots.txt. There are numerous reasons, one of which is that search engines do not always see the robots.txt file.
Regarding the noindex meta tag, that is the proper solution. I understand your concern over crawl budget, but I suggest that in this instance your concerns are not warranted. Crawl budget is wasted when search engines spend extra time on slow servers, bad code, thin content, and so on. If you have pages which should not be indexed, adding the noindex tag is likely the best solution.
Without being familiar with your site, it is not possible to offer a definitive answer, but generally speaking this response should be accurate. Keep in mind that many sites have millions of pages, and Google has the ability to crawl an entire site each month.
-
Can you show us examples of the URLs that are causing you trouble? That would make it easier for us to suggest a solution.