Mail Archives: djgpp/1999/11/23/18:12:08
Eli Zaretskii (eliz AT is DOT elta DOT co DOT il) wrote:
> On 22 Nov 1999, Hans-Bernhard Broeker wrote:
> > The typical method works by transforming the distribution.
> A much simpler way is to generate 6 uniform random numbers (by calling
> e.g. rand() 6 times), then return their average, suitably scaled
[...]
> The
> theory behind it is the well-known theorem which states that the sum
> of a large number of independent random numbers approaches the normal
> distribution.
Nice theory, but its application has a significant drawback: 6 is not
'large', by any statistical definition I've seen. In the actual
mathematical theorem, the true gaussian is only guaranteed to come out
of the process in the limit of infinitely many terms.
To be a bit more precise, the sum of 6 flatly distributed numbers can
only approximate the true gaussian. If memory serves, the
approximation is a fifth-order rational polynomial in the variable
(x-mean)/sigma. Which means that as soon as you use this distribution
outside some region around the mean value, you'll get observable
deviations from gaussian behaviour. Even for the sum of 12 flat
randoms, as used e.g. by the CERN math library, the method becomes
invalid outside a region of +/- 6 sigma around the mean, which is why
it's explicitly cut off there.
With a properly implemented inverse error function, or the 2D trick of
transforming via polar coordinates posted here by someone else, the
accuracy will usually be quite a lot better than that. The only real
justification for the sum-of-6 version, then, would be speed. But that
would only hold if the random number generator itself is significantly
faster than, say, a sin() or log() evaluation. For good RNGs, that
won't usually hold.
--
Hans-Bernhard Broeker (broeker AT physik DOT rwth-aachen DOT de)
Even if all the snow were burnt, ashes would remain.