Is there a limit to how many URLs you can put in a robots.txt file?
-
We have a site that has far too many URLs, caused by our crawlable faceted navigation. We are trying to purge 90% of our URLs from the indexes. We put noindex tags on the URL combinations that we no longer want indexed, but it is taking Google far too long to find the noindex tags. Meanwhile we are getting hit with excessive-URL warnings and have been hit by Panda.
Would it help speed up the process of purging URLs if we added them to the robots.txt file? Could this cause any issues for us? Could it have the opposite effect and block the crawler from finding the URLs without purging them from the index? The list could be in excess of 100 million URLs.
-
Hi Kristen,
I did this recently and it worked. The important part is that you need to block the pages in robots.txt or add a noindex tag to the pages to stop them from being indexed again.
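For reference, the noindex tag mentioned here is a standard meta tag placed in the head of each page you want dropped, along these lines:

<meta name="robots" content="noindex">

One caveat: Googlebot can only see this tag on pages it is allowed to crawl, so if you block the same URLs in robots.txt before they have been recrawled, the tag may never be read and the URLs can linger in the index.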
I hope this helps.
-
Hi all, Google Webmaster Tools has a great tool for this: go into WMT and select "Google Index", then "Remove URLs". You can use regex to remove a large batch of URLs, then block them in robots.txt to make sure they stay out of the index.
I hope this helps.
-
Great, thanks for the input. Per Kristen's post, I am worried that it could just block the URLs altogether so that they never get purged from the index.
-
Yes, we have done that and are seeing traction on those URLs, but we can't get rid of these old URLs as fast as we would like.
Thanks for your input.
-
Thanks Kristen, that's what I was afraid of. Other than Fetch, is there a way to send Google these URLs en masse? There are over 100 million URLs, so Fetch is not scalable. Google is picking them up slowly, but at the current pace it will take a few months, and I would like to find a way to make the purge go faster.
-
You could add them to the robots.txt, but you have to remember that Google will only read the first 500KB (source); as far as I understand, with the number of URLs you want to block, you'll pass this limit.
As Googlebot is able to understand basic wildcard patterns, it's probably better to use those: you will probably be able to block all these URLs with a few lines.
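For example, a sketch of what that could look like for faceted navigation (the parameter names here are made up; substitute whatever your facets actually use):

User-agent: *
# Hypothetical facet parameters - replace with your site's real ones
Disallow: /*?color=
Disallow: /*?size=
Disallow: /*&sort=

A handful of wildcard lines like these can match millions of URL combinations while keeping the file far below the 500KB limit.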
More info here & on Moz: https://moz.com/blog/interactive-guide-to-robots-txt
Dirk
Related Questions
-
Broken URL Links
Hi everyone, I have a question regarding broken URL links on my website. Late last year I moved my site from an old platform to Shopify, and now have broken URL links giving out 4xx errors. When I look at Moz Pro > Campaigns > Insights > Links, I can see the top broken URL links, but there is a difference between the URL I get by copying and pasting directly from Moz Pro and the one in the exported CSV file. For example, if I copy and paste a link directly from Moz Pro, it has the "http://" in front, as below:
http://www.thehairhub.com.au/WebRoot/ecshared01/Shops/thehairhub/57F3/1D8F/D244/C675/E27D/AC10/003F/35AD/manic-panic-colours.jpg
But when I export the list of links as a CSV file, the http:// is removed:
www.thehairhub.com.au/WebRoot/ecshared01/Shops/thehairhub/57F3/1D8F/D244/C675/E27D/AC10/003F/35AD/manic-panic-colours.jpg
Another example below. By copy and paste directly from Moz Pro:
http://thehairhub.com.au/Shop-Brands/Vitafive-CPR/CPR-Rescue
By export CSV file:
thehairhub.com.au/Shop-Brands/Vitafive-CPR/CPR-Rescue
Which one do I use in the "Redirect From" field in Shopify URL Redirects? Do I need to have the http:// in front of the URL, or is it not required for redirects to work? Kind regards, John Wall, The Hair Hub
Technical SEO | johnwall
-
Little confused regarding robots.txt
Hi there Mozzers! As a newbie, I have a question: what could happen if I write my robots.txt file like this?
User-agent: *
Allow: /
Disallow: /abc-1/
Disallow: /bcd/
Disallow: /agd1/
User-agent: *
Disallow: /
Hope to hear from you...
Technical SEO | DenorL
-
Why is robots.txt blocking URLs in my sitemap?
Hi folks, any ideas why Google Webmaster Tools is indicating that my robots.txt is blocking URLs linked in my sitemap.xml when in fact it isn't? I have checked the current robots.txt declarations and they are fine, and I've also tested the URLs it suggests are blocked in the robots.txt Tester tool, which indicates they work fine. Is this a temporary issue that will be resolved over a few days, or should I be concerned? I recently removed the declaration from the robots.txt that would have been blocking them and then uploaded a new, updated sitemap.xml, so I'm assuming this issue is due to some sort of crossover. Thanks, Gaz
Technical SEO | PurpleGriffon
-
URL redirecting domains
Hi, is there anything wrong or dangerous about forwarding a clutch of domains to a sub-page (landing page) on a different domain? Say Brand X buys Brand Z and wants to close down the Brand Z site, but have the Brand Z domain forward to a landing page (explaining the company acquisition) on the Brand X site. In addition, Brand Z had a few related but unused domains forwarding to the Brand Z domain, and now also wants those forwarded to the new landing page on the Brand X site. Since the reasons for doing this forwarding are legitimate company reasons relating to an acquisition, I would have thought it should be OK, but can anyone think of a reason why it could be bad? I remember in the old days people used to redirect domains for SEO reasons, so I'm worried that forwarding a load of domains could raise some sort of negative flag with big G. Also, do domain redirects transfer the authority/juice from the old site/domain to the new destination page (the new landing page on the Brand X site), similar to how a 301 redirect works? Many thanks, Dan
Technical SEO | Dan-Lawrence
-
Content and URL duplication?
One of the campaign tools flags one of my client's sites as having lots of duplicates. This is true in the sense that the content is sort of boilerplate, but with the wording changed for each country. The same goes for the URLs, which differ only by a couple of words, so it's not a case of a CMS or server issue as SEOmoz advises; it doesn't need 301s. The thing is, in this niche (freight, transport operators, shipping) I can see many other sites doing the same thing, and those sites have lots of similar pages ranking very well. In fact, one site has over 300 keywords ranked on pages 1-2, but it is a large site with a 12-year-old domain, which clearly helps. Of course having unique content on every page is important; however, I suppose boilerplate is better than copying and pasting from other sites, so it's unique in that sense. I'm hoping to convince the site owner to change the content for every country over time, a long process. My biggest problem in understanding duplication issues is that every tabloid or broadsheet media website would be canned from Google, as they quite often scrape Reuters or republish standard press releases as newsworthy content. So I have great doubt that there is a penalty for it: you only have to look and you can see media-site duplication everywhere, every day, but those sites get ranked. I just think Google doesn't rank the worst cases of spammy duplication, though I notice they still get indexed. So, considering that sites in this business niche replicate much the same content and rank well, is this duplicate flag such a great worry? Many businesses sell the same service to many locations, and it's virtually impossible to rewrite the services in a dozen or so different ways.
Technical SEO | xtopher66
-
Robots.txt and Joomla
Hello, I use Joomla for my website, and all of these directories are blocked automatically. Is that good or bad? Should I remove anything, and if so, why?
User-agent: *
Disallow: /administrator/
Disallow: /cache/
Disallow: /components/
Disallow: /images/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /libraries/
Disallow: /media/
Disallow: /modules/
Disallow: /plugins/
Disallow: /templates/
Disallow: /tmp/
Disallow: /xmlrpc/
I also added my email address to my robots.txt file (is that useful? I am afraid Google passes PR to the email address), a javascript: void(0) because I have tabs on my webpage (is that useful?), and a .pdf (is that also useful?). Any comments? Does anything need to be changed, or is it OK? Thank you.
Technical SEO | seoanalytics
-
Use of robots.txt file on a job site
We are performing SEO on a large niche job board. My question revolves around the idea of nofollowing all of the actual job postings from their clients, since the postings only last 30 to 60 days. Does anybody have any ideas on the best way to handle this?
Technical SEO | WebTalent
-
URLs: To Change or Not to Change
Hello, we recently launched a redesigned site in Drupal in December of last year. We are an eco-travel company. My current URLs look like this: /africa-and-middle-east/kenya-tanzania and /central-south-america/galapagos-islands. My pages have good term-targeting grades, and the rankings for the terms we are targeting, "kenya and tanzania safaris" and "galapagos islands cruises", are decent but not great; most are on page 2 or 3. The one URL where I targeted our most important term, "amazon river cruises" (/central-south-america/amazon-river-cruises), is still on page 2. My questions are: Did I miss an opportunity with the rest of the URLs, and should I consider changing them to more targeted terms with 301s? Since the new site launched in January, perhaps I have not given the new URLs enough time to index and mature. Would it be easier to set up landing pages with unique article content that targets terms such as "galapagos islands cruises" and "kenya and tanzania safaris"? If so, how can I do it in such a way as to not "compete" with the pages I want to drive visitors to? This also raises the question of redirecting the same URL twice, i.e. I would have two redirects in place for the same URL: one from the former site to the new site, and yet another to the most recent URL. Is that a problem? Sorry if I've asked too many questions in one post. 😉 Any advice is appreciated.
Technical SEO | csmithal