Is there a limit to how many URLs you can put in a robots.txt file?
-
We have a site with far too many URLs caused by our crawlable faceted navigation, and we are trying to purge 90% of those URLs from the indexes. We put noindex tags on the URL combinations that we no longer want indexed, but it is taking Google far too long to find the noindex tags. Meanwhile we are getting hit with excessive-URL warnings and have been hit by Panda.
Would it help speed up the purge if we added the URLs to the robots.txt file? Could this cause any issues for us? Could it have the opposite effect and block the crawler from finding the URLs but never purge them from the index? The list could be in excess of 100MM URLs.
-
Hi Kristen,
I did this recently and it worked. The important part is that you need to either block the pages in robots.txt or add a noindex tag to the pages to stop them from being indexed again.
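For example, each page you want dropped from the index can carry the standard robots meta tag in its <head>:
<meta name="robots" content="noindex">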
I hope this helps.
-
Hi all, Google Webmaster Tools has a great tool for this. Go into WMT and select "Google Index", then "Remove URLs". You can use regex to remove a large batch of URLs, then block them in robots.txt to make sure they stay out of the index.
I hope this helps.
-
Great, thanks for the input. Per Kristen's post, I am worried that it could just block the URLs altogether so they never get purged from the index.
-
Yes, we have done that and are seeing traction on those URLs, but we can't get rid of these old URLs as fast as we would like.
Thanks for your input
-
Thanks Kristen, that's what I was afraid would happen. Other than Fetch, is there a way to send Google these URLs en masse? There are over 100 million URLs, so Fetch is not scalable. Google is picking them up slowly, but at the current pace it will take a few months, and I would like to find a way to make the purge go faster.
-
You could add them to the robots.txt, but you have to remember that Google will only read the first 500 KB of the file (source) - as far as I understand, with the number of URLs you want to block you'll pass this limit.
As Googlebot understands basic wildcard patterns in robots.txt, it's probably better to use patterns rather than listing every URL - you will probably be able to block all of these URLs with a few lines.
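For example, if the facet combinations share common parameter names, a handful of wildcard rules along these lines should cover them (the parameter names below are just placeholders - swap in whatever your faceted navigation actually uses):
User-agent: *
Disallow: /*?*color=
Disallow: /*?*size=
Disallow: /*?*&*&
The last rule would catch any URL carrying three or more parameters, which is one way to sweep up the deeper facet combinations.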
More info here & on Moz: https://moz.com/blog/interactive-guide-to-robots-txt
Dirk
Related Questions
-
Why put rel=canonical to the same URL?
Hi all. I've heard that it's good to put the rel=canonical link in your header even when there is no other important or preferred version of that URL. If you take a look at the source code of moz.com, you'll see that they put <link rel="canonical" href="http://moz.com" /> pointing at the same URL! But if you go to http://moz.com/products/pricing, for example, there is no canonical there. Why? Thanks in advance!
-
No description on Google/Yahoo/Bing, updated robots.txt - what is the turnaround time or next step for visible results?
Hello, I'm new to the Moz community and thrilled to be learning alongside all of you! One of our clients' sites is currently showing a 'blocked' meta description due to an old robots.txt file (e.g. "A description for this result is not available because of this site's robots.txt"). We have updated the site's robots.txt to allow all bots, and the meta tag has also been updated in WordPress (via the Yoast SEO plugin). See image here of the Google listing and site URL: http://imgur.com/46wajJw I have also ensured that the most recent robots.txt has been submitted via Google Webmaster Tools. When can we expect these results to update? Is there a step I may have overlooked? Thank you,
Adam
-
URL redirecting domains
Hi, is there anything wrong or dangerous about forwarding a clutch of domains to a sub-page (landing page) on a different domain? Say Brand X buys Brand Z and wants to close down the Brand Z site, but have the Brand Z domain forward to a landing page (explaining the company acquisition) on the Brand X site. In addition, Brand Z had a few related but unused domains forwarding to the Brand Z domain, and now also wants those forwarded to the new landing page on the Brand X site. Since the reasons for doing this forwarding are legitimate company reasons relating to an acquisition, I would have thought it should be OK, but can anyone think of a reason why it could be bad? I remember in the old days people used to redirect domains for SEO reasons, so I'm worried that forwarding a load of domains could cause some sort of negative flag with big G. Also, do domain redirects transfer the authority/juice from the old site/domain to the new destination page (the new landing page on the Brand X site), similar to how a 301 redirect works? Many thanks, Dan
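(For context, by "forward" I mean a server-side 301 on the old domain - something along these lines in the Apache config, with brandz.com and the landing-page URL as placeholder names:)
<VirtualHost *:80>
    ServerName brandz.com
    ServerAlias www.brandz.com
    RedirectMatch 301 ^/.*$ https://www.brandx.com/brandz-acquisition/
</VirtualHost>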
-
Magento robots.txt & overly dynamic URLs
How can I block all URLs on a Magento store that have 2 or more dynamic parameters, given that each parameter uses the attribute name rather than some uniform ID? Would something like Disallow: /?&* work, since the only thing that is constant throughout all the custom parameters is that they are separated with "&"? Thanks 🙂
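In other words, would a wildcard pattern like this (untested) catch every URL that contains a "?" followed somewhere by an "&", i.e. two or more parameters?
User-agent: *
Disallow: /*?*&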
-
How to keep a URL's social equity during a URL structure/name change?
We are in the process of making a significant URL name/structure change to one of our properties, and we want to keep the social equity (likes, shares, +1s, tweets) from the old URL on the new URL. We have been trying many different options without success. We are running our social buttons in an iframe. Thanks
-
Would you shorten this URL, and if so, how?
I designed the structure of my website way before I even thought about SEO. I run a website that requires me to categorize articles in somewhat deeply nested categories, so an example URL would be as follows: http://www.yakangler.com/articles/news/new-products/boats/item/1442-jackson-kayak-launches-the-big-tuna Would you shorten the URL to something like this? http://www.yakangler.com/a/n/np/b/item/1442-jackson-kayak-launches-the-big-tuna If so, how would you manage the redirects? I'm unsure how to add a 301 redirect in my .htaccess file that wouldn't require me to add one for every single article. Could I do it with a rule that recognizes only the middle part of the URL and redirects it accordingly? Thanks for any advice you might have!
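For example, is something along these lines in .htaccess (untested, and only covering the one category path shown above) roughly how a pattern-based rule would look, with one rule per category path rather than one per article?
RewriteEngine On
RewriteRule ^articles/news/new-products/boats/item/(.*)$ /a/n/np/b/item/$1 [R=301,L]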
-
Should search pages be disallowed in robots.txt?
The SEOmoz crawler picks up "search" pages on a site as having duplicate page titles, which of course they do. Does that mean I should put a "Disallow: /search" rule in my robots.txt? When I put the URLs into Google, they aren't coming up in any SERPs, so I would assume everything's OK. I try to abide by the SEOmoz crawl errors as much as possible; that's why I'm asking. Any thoughts would be helpful. Thanks!
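To be specific, the addition I have in mind is just the standard two-line block (assuming all the internal search results live under /search):
User-agent: *
Disallow: /search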