Mail Archives: djgpp/1998/01/23/19:15:26
In <34C8B7D1 DOT A19B36EA AT pentek DOT com> Charles Krug <charles AT pentek DOT com>
writes:
>
>Andrew Deren wrote:
>
>> On 20 Jan 1998, Gili wrote:
>>
>> > Hi,
>> >
>> > I have noticed that lots of algorithms give their efficiency
>> > as O(n), where n is the number of elements they are operating on.
>> > The last math course I have taken is Calculus 2 and I do not
>> > recall ever seeing the O(n) function. What does it mean? How
>> > efficient is something with O(n)? Thanks in advance,
>>
>> O(n) is what computer scientists use to describe the running time of
>> an algorithm.
>
>It's actually from discrete math, and is usually called the
>"complexity" of an algorithm. There is quite a bit of math behind it,
>but what it really means is this.
>
>Suppose that you have an algorithm that takes some function of n to
>execute, where n is the number of things being operated on. Take, for
>example, this function:
>
>        n^2 + n log(n) + 3000
>
>If n is allowed to grow very large (how large depends upon the
>constants), one term will dominate. That term is used to describe the
>"complexity" of an algorithm. The common complexities, listed from
>fastest-growing to slowest, are:
>
>n!, 2^n, n^3, n^2, n log(n), n, log(n), 1.
>
>If you look at a bubble sort, you realize that you have n outer loops,
>each containing (n-1) inner loops, giving an expanded polynomial of
>(n^2 - n). This therefore has the complexity n^2. If you look at
>quicksort, you see log(n) outer loops, each containing about (n-1)
>inner loops, giving log(n)(n-1), or n log(n) - log(n), which has a
>complexity of n log(n).
>
>
>--
>Charles Krug, Jr.
>
>
Also be aware that the O() figure is usually quoted for the worst case,
and that it hides constant factors. For example, an O(N^2) algorithm
may actually be more efficient than an O(N log N) algorithm when N<N',
where N' is the crossover point at which the two algorithms' actual
running times are equal.