Large robots.txt file
-
We're looking at potentially creating a robots.txt with 1,450 lines in it. This would remove 100k+ old pages from the crawl (I know the ideal would be to delete/noindex them, but unfortunately that isn't viable).
My concern is that a robots.txt that large will either stop being followed altogether or will slow our crawl rate down.
Does anybody have any experience with a robots.txt of that size?
-
Answered my own question:
https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt?csw=1#file-format
"A maximum file size may be enforced per crawler. Content which is after the maximum file size may be ignored. Google currently enforces a size limit of 500kb."
Related Questions
-
What to do with a large number of old/outdated pages?
We are redoing a large portion of our site (not ecommerce). We have a large number of pages (about 2,000 indexed, out of about 3,000) that had been forgotten about until recently. They are very outdated and don't drive any traffic (according to Google Analytics), but they rank very well for their target keywords (#3 organic for most). What should I do with those pages? Could you give any guidance on whether we should delete them or simply 301 redirect them all to the home page, and what effect either option might have on the rest of the website?
Intermediate & Advanced SEO | | aphoontrakul0 -
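On the 301 option: Google generally treats large numbers of redirects pointing at an unrelated home page as soft 404s, so page-to-page redirects to the closest current equivalent tend to preserve more value than a blanket redirect. A minimal .htaccess sketch, with hypothetical URLs:

# Send each retired page to its closest current equivalent (placeholder paths)
Redirect 301 /old-buyers-guide/ https://www.example.com/buyers-guide/
Redirect 301 /2009-product-specs/ https://www.example.com/product-specs/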
If robots.txt has blocked an image (image URL) but another page that can be indexed uses this image, how is the image treated?
Hi Mozzers, this is probably a dumb question, but I have a case where robots.txt blocks an image URL while that image is used on a page (let's call it Page A) that can be indexed. If the image on Page A has alt text, how is this information digested by crawlers? A) Would Google totally ignore the image and the alt text? Or B) would Google consider the alt text? I'm asking because all the images on the website are blocked by robots.txt at the moment, but I would really like crawlers to pick up the alt text. Chances are I'll ask the webmaster to allow crawling of the images too, but I'd like to understand what's happening currently. Looking forward to all your responses 🙂 Malika
Intermediate & Advanced SEO | | Malika11 -
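A minimal sketch of the situation described, with placeholder paths: the image file is disallowed, but Page A's HTML is not, so crawlers still fetch the markup that references the image, alt text included. What they can't do is fetch the image file itself, which mainly limits image-search visibility.

# robots.txt (sketch)
User-agent: *
Disallow: /images/

<!-- Page A is crawlable, so this markup, including the alt attribute, is still read -->
<img src="/images/red-handbag.jpg" alt="Red leather handbag">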
Huge increase in server errors and robots.txt
Hi Moz community! Wondering if someone can help? One of my clients (an online fashion retailer) has seen a huge increase in server errors (500s and 503s) over the last six weeks, and it has got to the point where people cannot access the site because of them. The client recently changed hosting companies to deal with this, and they have just told us they removed the DNS records once the name servers were changed; they have now fixed this and are waiting for the name servers to propagate again. These errors also correlate with a huge decrease in pages blocked by the robots.txt file, which makes me think someone has perhaps changed it and not told anyone... Anyone have any ideas here? It would be greatly appreciated! 🙂 I've been chasing this up with the dev agency and the hosting company for weeks, to no avail. Massive thanks in advance 🙂
Intermediate & Advanced SEO | | labelPR0 -
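When a site is bouncing between hosts and DNS states like this, it helps to check what the server is actually returning before digging further. A quick sketch with curl, example.com standing in for the real domain:

# Status code for the homepage (watch for 500/503) and the live robots.txt content
curl -I https://www.example.com/
curl https://www.example.com/robots.txt

If the robots.txt that comes back differs from the one you expect, the new host is serving its own file, which would explain the drop in blocked pages.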
Robots.txt error
I currently have this in my robots.txt file:

User-agent: *
Disallow: /authenticated/
Disallow: /css/
Disallow: /images/
Disallow: /js/
Disallow: /PayPal/
Disallow: /Reporting/
Disallow: /RegistrationComplete.aspx

WebMatrix 2.0

In Webmaster Tools > Health Check > Blocked URLs, I copy and paste the code above and click Test, and everything looks OK. But when I log out and log back in, I see the code below under Blocked URLs:

User-agent: *
Disallow: /

WebMatrix 2.0

Currently Google doesn't index my domain and I don't understand why this is happening. Any ideas? Thanks, Seda
Intermediate & Advanced SEO | | Rubix0 -
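A useful way to verify what the live robots.txt actually blocks, independently of what Webmaster Tools displays, is Python's standard-library robotparser. A small sketch, with example.com standing in for the real domain:

import urllib.robotparser

# Fetch and parse the live robots.txt (example.com is a placeholder)
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

# True means crawling is allowed; a site-wide "Disallow: /" prints False for both
print(rp.can_fetch("*", "https://www.example.com/"))
print(rp.can_fetch("*", "https://www.example.com/authenticated/login"))

If this prints False for the homepage, the server really is returning a blocking robots.txt, and the problem is on the hosting side rather than in Webmaster Tools.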
Large Scale Domain Forwarding
I recently purchased a domain from a domainer who owns and parks many, many exact-match domains in my niche. He gets a lot of type-in traffic via these domains and is willing to forward them to my domain to help get my site started with traffic. We were planning on forwarding a few dozen domains at most. I'd like to make sure I'm not raising any red flags with Google by forwarding so many domains to a new site. I found this article, which says Panda made some changes with regard to what I'm trying to do here; not sure if the guy is right, though. http://domainate.wordpress.com/2011/10/20/how-google-panda-affected-domain-forwarding-and-what-to-do-about-it/
Intermediate & Advanced SEO | | terran0 -
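"Forwarding" in this context normally means a server-side 301 redirect from each parked domain to the target site. A minimal .htaccess sketch on one of the parked domains, with placeholder domain names:

# 301-redirect the parked exact-match domain (and its www variant) to the main site
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?exact-match-domain\.com$ [NC]
RewriteRule ^(.*)$ https://www.main-site.com/$1 [R=301,L]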
Does having a file type on the end of a URL affect rankings (for example, www.fourcolormagnets.com/business-cards.php vs. www.fourcolormagnets.com/business-cards)?
Intermediate & Advanced SEO | | JHSpecialty0 -
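The extension itself isn't a known ranking factor, but if you prefer extensionless URLs, a common Apache approach is to rewrite them internally to the .php files. A sketch, assuming a standard .htaccess setup:

# Serve /business-cards from business-cards.php when no extensionless file exists
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.*)$ $1.php [L]

Whichever form you choose, 301 the other version to it so only one URL per page gets indexed.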
What tips do people have for implementing SEO strategies for large websites?
Hi, I would like some tips on how to manage SEO on large sites with limited resources. For example, I have a client with a large ecommerce store who wants to rank highly for every product and every category. Obviously every page has to be keyword-optimised, but what is the best strategy for acquiring links: should we target all deep pages, or just the home page and category pages and then use good internal linking to pass the link juice around? All advice welcome! Thanks
Intermediate & Advanced SEO | | websearchseo0 -
Robots.txt disallow subdomain
Hi all, I have a development subdomain, which gets copied to the live domain. Because I don't want this dev domain to get crawled, I'd like to implement a robots.txt for this domain only. The problem is that I don't want this robots.txt to disallow the live domain. Is there a way to create a robots.txt for this development subdomain only? Thanks in advance!
Intermediate & Advanced SEO | | Partouter0
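Since robots.txt is fetched per host, one common approach is to keep a separate blocking file in the codebase and serve it as robots.txt only when the request arrives on the dev hostname. An Apache sketch with placeholder names:

# .htaccess: serve robots_dev.txt as robots.txt on the dev subdomain only
RewriteEngine On
RewriteCond %{HTTP_HOST} ^dev\.example\.com$ [NC]
RewriteRule ^robots\.txt$ /robots_dev.txt [L]

# robots_dev.txt — blocks all crawling on the dev host
User-agent: *
Disallow: /

Because the condition only matches the dev hostname, the live domain keeps serving its normal robots.txt even when both files are copied across.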