On Saturday 25 November 2006 10:12 pm, Linda Walsh wrote:
> Vladimir Dergachev wrote:
> > This is curious - how do you find out the fragmentation of an ext3 file? I
> > do not know of a utility to tell me that.
>
> ---
> There's a debugfs utility for ext2/ext3 that allows you to dump all of the
> segments associated with an inode. "ls -i" dumps the inode number.
> A quick hack (attached) displays segments for either extX or (using
> xfs_bmap) xfs. I couldn't find a similar tool for jfs or reiser (at least
> not in my distro).
>
Cool, thank you!
fragfilt.ext does not quite work for me. Also, looking at the code, it is not
obvious whether it takes indirect blocks into account - but I am not that
fluent in Perl, so perhaps I missed it.
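As far as I understand the recipe, it boils down to looking up the file's
inode and then asking debugfs for its block map on ext2/ext3, or xfs_bmap on
XFS. A rough Python sketch of that idea (not the attached fragfilt.ext itself;
the helper name, device path and filesystem type are supplied by hand here,
and debugfs normally needs root):

import os
import subprocess
import sys

def dump_segments(path, device, fstype):
    # Print the block segments backing `path`.  `device` is the block
    # device holding the filesystem (e.g. /dev/sda1), `fstype` is
    # "ext3" or "xfs".
    if fstype == "xfs":
        # xfs_bmap prints the extent list straight from the file name.
        subprocess.run(["xfs_bmap", path], check=True)
    else:
        # Same information as "ls -i" for the inode number, then ask
        # debugfs for that inode's block list ("stat <N>" prints BLOCKS).
        inode = os.stat(path).st_ino
        subprocess.run(["debugfs", "-R", "stat <%d>" % inode, device],
                       check=True)

if __name__ == "__main__":
    dump_segments(sys.argv[1], sys.argv[2], sys.argv[3])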
Here is a piece of output from my debugfs:
....
(IND):118948480, (898060-898435):118948488-118948863,
(898436-898660):118949376-118949600, (898661-899083):118949612-118950034,
(IND):118950035, (899084-900107):118950036-118951059, (IND):118951060,
(900108-901131):118951061-118952084, (IND):118952085,
(901132-902155):118952086-118953109, (IND):118953110,
(902156-902741):118953111-118953696, (902742-903179):118953701-118954138,
(IND):118954139, (903180-903760):118954140-118954720,
(903761-904203):118955745-118956187, (IND):118956188,
(904204-904783):118956189-118956768, (904784-905227):118957813-118958256,
(IND):118958257, (905228-906251):118958258-118959281, (IND):118959282,
(906252-906760):118959283-118959791
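Fwiw, output in that format can be summarized with a quick regex - something
like the sketch below, which just counts how many physically discontiguous
extents there are and how many entries are (IND) indirect blocks (assuming the
exact "(logical):physical" layout shown above):

import re
import sys

# Matches entries like "(IND):118948480" or
# "(898060-898435):118948488-118948863".
ENTRY = re.compile(r"\((IND|\d+(?:-\d+)?)\):(\d+)(?:-(\d+))?")

def summarize(text):
    extents = 0      # physically discontiguous runs of blocks
    indirect = 0     # (IND) indirect-block entries
    prev_end = None
    for logical, start, end in ENTRY.findall(text):
        start = int(start)
        end = int(end) if end else start
        if logical == "IND":
            indirect += 1
        # A new extent starts whenever the physical blocks do not
        # continue directly from the previous entry.
        if prev_end is None or start != prev_end + 1:
            extents += 1
        prev_end = end
    return extents, indirect

if __name__ == "__main__":
    ext, ind = summarize(sys.stdin.read())
    print("%d discontiguous extents, %d indirect blocks" % (ext, ind))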
> > From indirect observation, ext3 does not have fragmentation nearly that
> > bad until the filesystem is close to full, or I would not be able to reach
> > sequential read speeds (the all-seeks speed is about 6 MB/sec for me; I
> > was getting 40-50 MB/sec). This was on much larger files, though.
>
> ---
> On an empty partition, I created a deterministic pathological case: lots of
> little files all separated by holes. ext3 (default mount) just
> allocated 4k blocks in a first-come, first-served manner. XFS apparently
> looked for larger allocation units since the file was larger than 4K.
> In that regard, it's similar to NT; I believe both use a form of B-tree
> to manage free space.
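As an aside, one way to provoke an allocation pattern like the one described
above (only my guess at the test - the actual script isn't shown) is to grow
one file a block at a time while creating small files in between, so that a
first-come, first-served allocator interleaves the blocks; a rough sketch:

import os

BLOCK = 4096
os.makedirs("fragtest", exist_ok=True)

# Append 4 KiB to the "big" file, then create a small file, repeatedly;
# each fsync forces the block to be allocated before the next write.
with open("fragtest/big", "wb") as big:
    for i in range(1024):
        big.write(b"\0" * BLOCK)
        big.flush()
        os.fsync(big.fileno())
        with open("fragtest/small%d" % i, "wb") as small:
            small.write(b"\0" * BLOCK)
            small.flush()
            os.fsync(small.fileno())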
I see. Well, I was concerned with the non-pathological case of having lots of
contiguous free space and the apparent inability of NTFS to handle slowly
growing files (i.e. writes in append mode). Common usage cases are logfiles
and downloads.
>
> > Which journal option was the filesystem mounted with?
>
> ---
> I can't see how that would matter, but it was the default. For speed of
> testing, I mounted both with noatime,async; xfs also got
> nodiratime and logbufs=8 (or deletes take way too long).
Thank you, just wanted to cover all possibilities.
>
> > I actually implemented a workaround that calls "fsutil file createnew
> > FILESIZE" to preallocate space and then writes data in append mode
> > (after doing a seek to 0).
>
> ---
> I wonder if it does the same thing as dd or if it uses
> the special call to tell the OS what to expect. FWIW,
> "cp" used some smallish number of blocks (4 or 8, I think), so
> it is almost guaranteed to give you just about the worst possible
> fragmentation! :-) Most likely the other file utils will
> give similar allocation performance (not so good).
I believe it is a special call that tells the filesystem to reserve the needed
space, but does not write anything to disk. I wonder whether it leaks
information from deleted files.
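For reference, a minimal sketch of that preallocate-then-overwrite idea -
fsutil file createnew on Windows (it usually needs an elevated prompt),
posix_fallocate elsewhere; the helper name is mine:

import os
import subprocess
import sys

def preallocate(path, size):
    # Reserve `size` bytes for `path` before the real data is written.
    # The file must not exist yet in the fsutil case.
    if os.name == "nt":
        subprocess.run(["fsutil", "file", "createnew", path, str(size)],
                       check=True)
    else:
        with open(path, "wb") as f:
            os.posix_fallocate(f.fileno(), 0, size)
    # The actual data is then written by reopening the file and
    # overwriting it from offset 0, as described above.

if __name__ == "__main__":
    preallocate(sys.argv[1], int(sys.argv[2]))

As far as I know, NTFS returns zeros for the not-yet-written part of such a
file, so it should not expose data from deleted files.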
Btw, I found out that IE writes files downloaded from the web into the
temporary directory - and they end up all broken into tiny pieces - but after
that it *copies* them to the actual location (instead of doing a move, as
would be reasonable). The copy ends up not being fragmented because, my guess
is, IE by then knows the file's size and asks for it.
best
Vladimir Dergachev