Mail Archives: djgpp/1995/01/26/09:30:41
> While the first tests without optimization ran just fine (about three times
> faster than on an SGI IRIS workstation), I encountered a problem which
> seems to be related to optimization. When compiling with the switches
> "-fforce-mem" or "-ffast-math", the program fails to write an output file.
> MSDOS then reports a fatal error like "sector not found" and the FAT is
> usually corrupted after aborting the operation. With "-ffast-math", even
Why would you try these options? Isn't -O2 enough? -fforce-mem is
clearly an experimental feature, as this fragment from the GCC Info
docs says:

    I am interested in hearing about the difference this makes.
and under -ffast-math it says:
    This option allows GCC to violate some ANSI or IEEE rules and/or
    specifications in the interest of optimizing code for speed. For
    example, it allows the compiler to assume arguments to the sqrt
    function are non-negative numbers and that no floating-point values
    are NaNs.

    This option should never be turned on by any -O option since
    it can result in incorrect output for programs which depend on
    an exact implementation of IEEE or ANSI rules/specifications for
    math functions.
Can you (or anybody else, for that matter) *really* say that any
non-trivial Fortran program does *not* depend on IEEE/ANSI rules
to run?
Judging from your symptoms, you might check the possibility that the
code generated by GCC somehow clobbers registers used by system calls
(the file write), or that calling a DOS service changes the values of
registers which these options assume stay unchanged across the call.
Don't forget that calling DOS involves a switch from protected to real
mode and back. Looking at the machine code GCC generates around the
point where your program writes, or stepping through it with a
debugger while watching those registers, might reveal the problem.
But in general I would advise staying away from these options.