Timely use of robots.txt and meta noindex
-
Hi,
I have been checking every possible resource on content removal, but I am still unsure how to remove already indexed content.
When I use robots.txt alone, the URLs remain in the index; no crawl budget is wasted on them, but having 100,000+ completely identical login pages sitting among the omitted results cannot mean anything good.
When I use meta noindex alone, I keep my index clean, but I also keep Googlebot busy crawling these no-value pages.
When I use robots.txt and meta noindex together for existing content, I am asking Google to please drop my content, but at the same time I am blocking it from crawling the pages and ever seeing the noindex tag.
Robots.txt combined with URL removal is not a good solution either, as I have failed to remove whole directories this way; it seems only exact URLs can be removed like that.
I need a clear solution that solves both issues (index and crawling).
What I am trying to do now is the following:
I remove these directories from the robots.txt file (one at a time, to test the theory), and at the same time I add the meta noindex tag to every page within the directory. The number of indexed pages should start decreasing (while crawling of these useless pages increases), and once the number of indexed pages is low or zero, I would put the directory back into robots.txt and keep the noindex on all of the pages within it. A minimal sketch of the interim state is below.
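To make it concrete, this is roughly what the interim state would look like (the /login/ directory name is only a placeholder for one of my real directories). During the transition, the directory is no longer blocked in robots.txt, so Googlebot can re-crawl the pages and see the tag:

    User-agent: *
    # Disallow: /login/   (temporarily removed so the noindex below can be seen)

and every page inside the directory carries the noindex tag in its <head>:

    <meta name="robots" content="noindex">

Once the pages have dropped out of the index, the Disallow line would go back in.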
Can this work the way I imagine, or do you have a better way of doing so?
Thank you in advance for all your help.
-
Hi Deb,
Thank you for your reply.
I never thought Google would re-crawl the robots.txt file this rarely. I actually read somewhere, and it makes complete sense, that before they start crawling, they validate the process against robots.txt. It is only one page, but basically one of the most important ones.
This comes as a real surprise to me, thank you for drawing my attention to it. Anyway, I have now submitted the file through 'Fetch as Google'.
Regarding your URL suggestion, I do not want them to return 404, at least not all of them. The login pages, for example, are still in use, and the reason they have individual URLs is that we want our visitors to return to the page they left before we asked them to log in. So status 200 is fine, because these pages exist for our customers, but the very same pages are totally useless for Google to crawl or index.
I hope this clarifies.
-
It seems the latest robots.txt file has not been cached by Google so far – this is what it currently has –
So you need to use 'Fetch as Google' and submit the robots.txt file so that this issue is fixed as soon as possible.
What concerns me is that defunct URLs like http://www.kozelben.hu/login?r=%2Fceg%2Fdrink-island-bufe-whisky-bar-alkotas-utca-17-1123-budapest-126126%23addComment or http://www.kozelben.hu/supplier/nearby/supplierid/127493/type/geo are returning a 200 OK server-side response code when they should be returning a 404. That would have stopped the problem once and for all.
However, if the CMS of your website does not offer you any such option (in which case it is a bad CMS), you need to apply the meta noindex tag to these pages and wait patiently for the search engines to pick it up.
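If editing every page template to add the tag is not practical, the same noindex signal can also be sent as an HTTP response header via X-Robots-Tag, which Google honours just like the meta tag. A rough sketch for an Apache server (this assumes mod_headers is enabled and that the login URLs all live under a /login path, which is only an illustration):

    <LocationMatch "^/login">
        Header set X-Robots-Tag "noindex"
    </LocationMatch>

Either way, remember the pages must remain crawlable (not blocked in robots.txt) for the directive to be picked up.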
Can’t you fix the 404 thing? Let us know.
-
Really good article, indeed!
I have been thinking about the whole concept over the weekend, and now I have a further idea that is definitely worth considering.
Thank you again, Ryan.
-
Lindsay wrote a great article on the topic which I am sure you will enjoy: http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
-
Thank you for the further info, Ryan.
Although I see your point and accept there is a lot of truth in it, when I check all my competitors and even the largest sites around the web, they still keep using robots.txt (even Google does).
I do accept, however, that noindex is a superior solution to robots.txt, and I will use it for all the content I do not want indexed.
I will then see whether, and how, I still need to use robots.txt. I hope it does not hurt to include an already noindexed page in robots.txt later on, once it is out of the index.
-
I understand your concern, Andras. The two questions I would focus on with respect to crawl budget:
1. Is all your content being indexed properly?
2. Is your content being indexed in a timely manner?
If the answer to the above two questions is yes, I would not spend any more time thinking about crawl budget. Either way, using the "noindex" meta tag is going to be the best way to handle the issue you originally presented.
On a related note, does the content on your "useful" pages change frequently? If so, ensure you are optimizing your links (both internal and external) to these pages. When you demonstrate these are important pages to your site, Google will crawl the pages more frequently.
-
Hi Ryan,
Thank you for your reply.
The only worry I have regarding crawl budget is that I currently have three times more indexed pages than useful pages, due to the issues I mentioned earlier.
It is true that I do not have daily content updates on all of my useful pages; however, I thought Google allocates an individual crawl budget to each site, based on the value it assigns to that site.
I just want this budget to be spent wisely, and not to see my useful pages crawled less frequently because no-value (but noindexed) content is being crawled instead.
-
Hi Andras,
The first thing to know is a general rule: the best robots.txt file is a blank one. There is almost always a better way of managing a situation than using robots.txt. There are numerous reasons, one of which is that search engines do not always see the robots.txt file.
Regarding the noindex meta tag, that is the proper solution. I understand your concern over crawl budget, but I suggest that in this instance your concern is not warranted. It is a waste of crawl budget to have search engines spend extra time due to slow servers, bad code, thin content, etc. If you have pages which should not be indexed, adding the noindex tag is likely the best solution.
Without being familiar with your site, it is not possible to offer a definitive answer, but generally speaking this response should be accurate. Keep in mind many sites have millions of pages, and Google has the ability to crawl the entire site each month.
-
Can you show us examples of the URLs that are causing you trouble? That would make it easier for us to provide a solution.