Mail Archives: djgpp/1999/11/23/18:12:08

From: broeker AT acp3bf DOT knirsch DOT de (Hans-Bernhard Broeker)
Newsgroups: comp.os.msdos.djgpp
Subject: Re: randN
Date: 23 Nov 1999 12:07:51 +0100
Organization: RWTH Aachen, III. physikalisches Institut B
Lines: 38
Message-ID: <81dsi7$490@acp3bf.knirsch.de>
References: <383904BA DOT AE45DCDE AT mpx DOT com DOT au> <81bigt$1i5 AT acp3bf DOT knirsch DOT de> <Pine DOT SUN DOT 3 DOT 91 DOT 991123091617 DOT 13225A-100000 AT is>
NNTP-Posting-Host: acp3bf.physik.rwth-aachen.de
X-Trace: nets3.rz.RWTH-Aachen.DE 943355276 7409 137.226.32.75 (23 Nov 1999 11:07:56 GMT)
X-Complaints-To: abuse AT rwth-aachen DOT de
NNTP-Posting-Date: 23 Nov 1999 11:07:56 GMT
X-Newsreader: TIN [version 1.2 PL2]
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp
Reply-To: djgpp AT delorie DOT com

Eli Zaretskii (eliz AT is DOT elta DOT co DOT il) wrote:
> On 22 Nov 1999, Hans-Bernhard Broeker wrote:

> > The typical method works by transforming the distribution.

> A much simpler way is to generate 6 uniform random numbers (by calling
> e.g. rand() 6 times), then return their average, suitably scaled 
[...]
> The
> theory behind it is the well-known theorem which states that the sum
> of a large number of independent random numbers approaches the normal
> distribution.
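The quoted sum-of-6 recipe can be sketched in C along these lines (a
minimal sketch; the scaling and the function name are my own, not from
the post):

```c
#include <stdlib.h>
#include <math.h>

/* Sketch of the quoted sum-of-6 approach (names are illustrative).
   Each rand()/(RAND_MAX+1.0) term has mean 1/2 and variance 1/12, so
   the sum of 6 terms has mean 3 and variance 6/12 = 1/2; subtracting
   the mean and dividing by sqrt(1/2) gives an approximately standard
   normal deviate. */
double approx_gauss6(void)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < 6; i++)
        sum += rand() / (RAND_MAX + 1.0);
    return (sum - 3.0) / sqrt(0.5);
}
```

Rescaling to a desired mean and width is then just
mu + sigma * approx_gauss6().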

Nice theory, but its application has a significant drawback: 6 is not
'large'. Not by any statistical definition I've seen. In the
mathematical theorem (the central limit theorem), the true gaussian is
only guaranteed to come out of the process in the limit of infinitely
many terms.

To be a bit more precise, the sum of 6 flatly distributed numbers can
only approximate the true gaussian. If memory serves, its density is a
piecewise fifth-order polynomial in the variable (x-mean)/sigma. This
means that as soon as you use this distribution outside some region
around the mean value, you'll see observable deviations from gaussian
behaviour. Even for the sum of 12 flat randoms, as used e.g. by the
CERN math library, the method becomes invalid outside a region of
+/- 6 sigma around the mean, which is why it is explicitly cut off
there.
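For comparison, the 12-term variant needs no rescaling at all, and its
+/- 6 sigma cutoff is built in. A sketch (function name is my own
choosing, not from any particular library):

```c
#include <stdlib.h>

/* Sum-of-12 sketch: twelve uniforms on [0,1) have total mean 6 and
   variance 12 * (1/12) = 1, so subtracting 6 gives mean 0 and unit
   variance directly.  The result can never leave [-6, 6], which is
   exactly the cutoff discussed in the text. */
double approx_gauss12(void)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < 12; i++)
        sum += rand() / (RAND_MAX + 1.0);
    return sum - 6.0;
}
```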

With a properly implemented inverse error function, or the 2D trick of
transforming via polar coordinates posted here by someone else, the
accuracy will usually be quite a lot better than that. The only real
justification for the sum-of-6 version, then, would be speed. But that
would only hold if the random number generator itself is significantly
faster than, say, a sin() or log() evaluation, and for good RNGs that
won't usually be the case.
-- 
Hans-Bernhard Broeker (broeker AT physik DOT rwth-aachen DOT de)
Even if all the snow were burnt, ashes would remain.
