From: Barry Kelly
To: Cygwin Mailing List
Subject: "du -b --files0-from=-" running out of memory
Date: Sun, 23 Nov 2008 13:24:03 +0000

I have a problem with du running out of memory. I'm feeding it a list of
null-separated file names via standard input, to a command line that
looks like:

  du -b --files0-from=-

The problem is that when du is run in this way, it leaks memory like a
sieve. I feed it about 4.7 million paths, but eventually it falls over
as it hits the 32-bit address-space limit.

Now, I can understand why a du -c might want to exclude excess hard
links to files, but that requires at most a hash table of device and
inode pairs - at, say, 50 bytes per entry, 4.7 million entries comes to
roughly 235 MB, nowhere near a 4 GB address space - so it's hard to see
why they would cause an OOM. And in any case, I'm not asking for a
grand total.

Is there any alternative other than running, e.g., xargs -0 du -b,
possibly with a high -n to xargs to limit the memory leakage?

-- Barry

--
http://barrkel.blogspot.com/
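
P.S. For completeness, the sort of pipeline I'm running is along these
lines (the starting directory here is made up; mine is just a large
local tree):

  # find emits NUL-terminated names; du -b reads them from stdin
  find /cygdrive/d -type f -print0 | du -b --files0-from=- > sizes.txt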
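
P.P.S. The xargs variant I have in mind, as a sketch - the -n figure is
a guess, the idea being that each du instance only ever sees that many
paths, so whatever it leaks is bounded and freed when that instance
exits (xargs will split batches further anyway if the command line
would exceed the system limit):

  # one du process per batch of at most 200000 paths
  find /cygdrive/d -type f -print0 | xargs -0 -n 200000 du -b > sizes.txt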