Algorithm Analysis

Computer Science isn’t just about programming; there’s some real science involved. Among the many things scientists do is analysis. Algorithms can be analyzed. Let’s see how.

Introduction

It is important to be able to measure, or at least make educated statements about, the space and time complexity of an algorithm. This will allow you to compare the merits of two alternative approaches to a problem you need to solve, and also to determine whether a proposed solution will meet required resource constraints before you invest money and time coding.

Analysis is done before coding. Profiling (a.k.a. benchmarking) is done after the code is written.
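
To make the distinction concrete, here is a minimal profiling sketch, not an analysis: it times an actual run on one particular machine. The sortOfInterest method is a hypothetical stand-in for whatever algorithm you care about; the only real APIs used are java.util.Random and System.nanoTime.

import java.util.Arrays;
import java.util.Random;

public class Profiler {

    // Hypothetical algorithm under test; here it just wraps the library sort.
    static void sortOfInterest(int[] a) {
        Arrays.sort(a);
    }

    public static void main(String[] args) {
        Random random = new Random(42);
        for (int n = 1000; n <= 1000000; n *= 10) {
            int[] data = random.ints(n).toArray();
            long start = System.nanoTime();      // wall-clock time: machine-dependent
            sortOfInterest(data);
            long elapsed = System.nanoTime() - start;
            System.out.printf("n = %8d   %10.3f ms%n", n, elapsed / 1e6);
        }
    }
}

The numbers you get depend on the hardware, the JIT, and whatever else is running at the time, which is exactly why analysis works with abstract operation counts instead.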

Measuring Time

The absolute running time of an algorithm cannot be predicted, since this depends on the programming language used to implement the algorithm, the computer the program runs on, other programs running at the same time, the quality of the operating system, and many other factors. We need a machine-independent notion of an algorithm’s running time.

The current state-of-the-art in analysis is finding a measure of an algorithm’s relative running time, as a function of how many items there are in the input, i.e., the number of symbols required to reasonably encode the input, which we call $n$. The $n$ could be the number of items in an array or list, the number of characters in a string, the number of terms (coefficient/exponent pairs) in a polynomial, or the number of bits in the input.

The first three only make sense when each of the elements (items, characters, coefficients/exponents) has a bounded size. For example: if all items are 64-bit integers, then the number of items can be used for $n$, but if numeric values are unbounded, you have to count bits!

We count the number of abstract operations as a function of $n$.

Example: Printing each element of an array

for (int i = 0; i < a.length; i++) {
    System.out.println(a[i]);
}

Here $n = a.\mathtt{length}$ (provided we know that all of the items in the array have a fixed size, which is often the case). We have one initialization of i and then, in each of the $n$ iterations, one comparison, one array index, one print, and one increment, so we write $T(n) = 4n + 1$.

Or should we?

All operations are not created equal. The printing completely overwhelms the increments, compares and indexing operations. We might as well talk about having $n$ compare-index-print-increment operations. Then $T(n) = n + 1$. While we’re at it, we might as well realize that initialization doesn’t mean much, so we’re safe to write $T(n) = n$.

Example: Multiplying two square matrices

for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        double sum = 0;
        for (int k = 0; k < n; k++) {
            sum += a[i][k] * b[k][j];
        }
        c[i][j] = sum;
    }
}

Working from the inside-out we see each iteration of the $k$-loop has 4 array indexing operations, one multiplication, one addition, and one assignment. Or, heck, one big sum += a[i][k]*b[k][j] operation. Inside the $j$-loop there are $n$ of these, together with an initialization of sum and an assignment to c[i][j]. Does this mean $n+2$? Hardly. Those other two operations don’t count for much. By a similar argument we can ignore all the simple for-loop initializing, testing, and incrementing operations. They are fast. All we need to do is count the number of sum += a[i][k]*b[k][j] operations; that’s really what counts.

So we have $T(n) = n^{3}$.

But is this a fair measure of the complexity? We counted operations relative to the width of the matrices. We can also count operations relative to the number of items in the two matrices, which is $2n^2$. Under this measure we’d say $T(n) = n^{1.5}$.

Exercise: Prove this.

Example: An Interesting Nested Loop

for (int i = 1; i <= n; i *= 2) {
    for (int j = 0; j < n; j++) {
        count++;
    }
}

Here the outer loop is done $\log n$ times and the inner loop is done $n$ times, so $T(n) = n \log n$. (Yes, the default base for logarithms in Computer Science is 2.)
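
To see where the $\log n$ comes from, note that the values i takes in the outer loop are $1, 2, 4, \ldots$, that is $2^0, 2^1, \ldots, 2^{\lfloor \log n \rfloor}$, so the outer loop body runs $\lfloor \log n \rfloor + 1$ times, and each of those runs does $n$ increments:

$$ T(n) = n\,(\lfloor \log n \rfloor + 1) \approx n \log n $$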

Example: Getting the value of the last element from an array

if (a.length > 0) {
    return a[a.length - 1];
} else {
    throw new NoSuchElementException();
}

Here $n = a\mathtt{.length}$, and $T(n) = 1$.

Example: Addition

x + y

If $x$ and $y$ are primitive int values that each fit in a word of memory, any computer can add them in one step: $T(n) = 1$. But if these are BigInteger values, then what matters is how many digits (or bits) there are in each value. Since we just have to add the digits in each place and propagate carries, we get $T(n) = n$.
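
To make the linear-time claim concrete, here is a rough sketch of grade-school addition on numbers stored as arrays of decimal digits (least significant digit first). This is just an illustration of the idea, not how any particular BigInteger class is actually implemented.

// Adds two non-negative numbers stored as base-10 digit arrays,
// least significant digit first. One pass over the digits: Theta(n) work.
static int[] add(int[] x, int[] y) {
    int n = Math.max(x.length, y.length);
    int[] result = new int[n + 1];
    int carry = 0;
    for (int i = 0; i < n; i++) {
        int sum = carry
                + (i < x.length ? x[i] : 0)
                + (i < y.length ? y[i] : 0);
        result[i] = sum % 10;     // digit in this place
        carry = sum / 10;         // carry into the next place
    }
    result[n] = carry;
    return result;
}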

Example: Finding the index of a given value in an array

for (int i = 0; i < a.length; i++) {
    if (a[i] == x) {
        return Optional.of(i);
    }
}
return Optional.empty();

Here we need to introduce the $B$ (best-case) and $W$ (worst-case) functions: $B(n) = 1$ and $W(n) = n$. (What about the average case? It’s often difficult and sometimes impossible to compute an average case. You have to know something about the expected distributions of items.)
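
For example, if we are willing to assume that $x$ is in the array and is equally likely to be at each of the $n$ positions, the expected number of iterations is

$$ A(n) = \sum_{i=1}^{n} \frac{i}{n} = \frac{n+1}{2} $$

which is still linear. A different assumption (say, $x$ is usually absent) gives a different answer, which is the point: the average case depends on the distribution.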

Time Complexity Classes

We can group complexity functions by their “growth rates” (something we’ll formalize in a minute). For example:

Class         Example                        Description
Constant      $\lambda n . 1$                Fixed number of steps, regardless of input size
Logarithmic   $\lambda n . \log n$           Number of steps proportional to the log of the input size
Polynomial    $\lambda n . n^k$, $k \geq 1$  Number of steps a polynomial function of the input size
Exponential   $\lambda n . c^n$, $c > 1$     Number of steps exponential in the input size

Those are four useful families to be sure, but we can define lots more:

Class               Example ($\lambda n .$)                      Description
Constant            $1$                                          Doing a single task at most a fixed number of times. Example: retrieving the first element in a list.
Inverse Ackermann   $\alpha(n)$
Iterated Log        $\log^{*} n$
Loglogarithmic      $\log \log n$
Logarithmic         $\log n$                                     Breaking down a large problem by cutting its size by some fraction. Example: Binary Search (see the sketch after this table).
Polylogarithmic     $(\log n)^c$, $c > 1$
Fractional Power    $n^c$, $0 < c < 1$
Linear              $n$                                          "Touches" each element in the input. Example: printing a list.
N-log-star-n        $n \log^{*} n$
Linearithmic        $n \log n$                                   Breaking up a large problem into smaller problems, solving them independently, and combining the solutions. Example: Mergesort.
Quadratic           $n^2$                                        "Touches" all pairs of input items. Example: Insertion Sort.
Cubic               $n^3$
Polynomial          $n^c$, $c \geq 1$
Quasipolynomial     $n^{\log n}$
Subexponential      $2^{n^\varepsilon}$, $0 < \varepsilon < 1$
Exponential         $c^n$, $c > 1$                               Often arises in brute-force search where you are looking for subsets of a collection of items that satisfy a condition.
Factorial           $n!$                                         Often arises in brute-force search where you are looking for permutations of a collection of items that satisfy a condition.
N-to-the-n          $n^n$
Double Exponential  $2^{2^n}$
Ackermann           $A(n)$
Runs Forever        $\infty$
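
As a concrete instance of the logarithmic class (promised in the table above), here is one way binary search over a sorted array might be written. Each iteration discards half of the remaining range, so the loop runs roughly $\log n$ times.

// Returns an index of x in the sorted array a, or -1 if x is absent.
// The range [lo, hi) is cut in half each iteration: Theta(log n) in the worst case.
static int binarySearch(int[] a, int x) {
    int lo = 0;
    int hi = a.length;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi) / 2
        if (a[mid] == x) {
            return mid;
        } else if (a[mid] < x) {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    return -1;
}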

Comparison

To get a feel for the complexity classes, consider a computer that performs one million abstract operations per second:

$T(n)$                  10                20                  50                100               1000               1000000
$1$                     1 μs              1 μs                1 μs              1 μs              1 μs               1 μs
$\log n$                3.32 μs           4.32 μs             5.64 μs           6.64 μs           9.97 μs            19.9 μs
$n$                     10 μs             20 μs               50 μs             100 μs            1 msec             1 second
$n \log n$              33.2 μs           86.4 μs             282 μs            664 μs            9.97 msec          19.9 seconds
$n^2$                   100 μs            400 μs              2.5 msec          10 msec           1 second           11.57 days
$1000 n^2$              100 msec          400 msec            2.5 seconds       10 seconds        16.7 minutes       31.7 years
$n^3$                   1 msec            8 msec              125 msec          1 second          16.7 minutes       317 centuries
$n^{\log n}$            2.1 msec          420 msec            1.08 hours        224 days          2.5×10^7 Ga        1.23×10^97 Ga
$1.01^n$                1.10 μs           1.22 μs             1.64 μs           2.7 μs            20.9 msec          7.49×10^4298 Ga
$0.000001 \times 2^n$   1.02 ns           1.05 msec           18.8 minutes      40.2 Ga           3.40×10^272 Ga     3.14×10^301001 Ga
$2^n$                   1.02 msec         1.05 seconds        35.7 years        4.02×10^7 Ga      3.40×10^278 Ga     3.14×10^301007 Ga
$n!$                    3.63 seconds      771 centuries       9.64×10^41 Ga     2.96×10^135 Ga    1.28×10^2545 Ga
$n^n$                   2.78 hours        3323 Ga             2.81×10^62 Ga     3.17×10^177 Ga    3.17×10^2977 Ga
$2^{2^n}$               5.70×10^285 Ga    2.14×10^315630 Ga

By the way, Ga is a gigayear, or one billion years.

There is one more thing worth noticing. Compare the $2^n$ row with the $0.000001\cdot 2^n$ row. The latter represents something running one million times faster than the former, but still, even for an input of size 100, it requires a run time of about 40 billion years.

Asymptotic Analysis

Since we are really measuring growth rates, we usually ignore the leading constant factor and all of the lower-order terms, so for example $7n^4 + 2n^2 + n + 20$ is treated as just $n^4$.

The justification for doing this is that the constant factors depend on implementation details such as the machine and the compiler rather than on the algorithm itself, and once $n$ gets large enough the highest-order term dwarfs all the others.

How can we formalize this? That is, how do we formalize things like “the set of all quadratic functions” or the “set of all cubic functions” or “the set of all logarithmic functions”?

Big-O: Asymptotic Upper Bounds

A function $f$ is in $O(g)$ whenever there exist constants $c$ and $N$ such that for every $n > N$, $|f(n)|$ is bounded above by $c\,|g(n)|$. Formally:

$$ O(g) =_{def} \{ f \mid \exists c\,N . \forall n > N . |f(n)| \leq c|g(n)| \} $$

This means that $O(g)$ is the set of all functions $f$ for which $g$ is an asymptotic upper bound.

Examples

$\lambda n\,.\,3n + 2 \in O(\lambda n\,.\,n)$, since $3n + 2 \leq 5n$ for all $n > 1$. Also $\lambda n\,.\,n \in O(\lambda n\,.\,n^2)$, since an asymptotic upper bound does not have to be tight. However, $\lambda n\,.\,n^2 \notin O(\lambda n\,.\,n)$, since no constant multiple of $n$ eventually exceeds $n^2$.

Notation

By convention, people usually drop the lambdas when writing expressions involving Big-O, so you will see things like $O(n^2)$.

Does this really let us throw away constant factors and small terms?

Well, let’s see why we should be able to, arguing just one special case. (You can do the formal proof on your own.) Let’s show that $7n^4 + 2n^2 + n + 20 \in O(n^4)$. We have to choose $c$ and $N$. Let’s choose $c=30$ and $N=1$. So since $n \geq 1$: $$ \begin{eqnarray} |7n^4 + 2n^2 + n + 20| & \leq & 7n^4 + 2n^2 + n + 20 \\ & \leq & 7n^4 + 2n^4 + n^4 + 20n^4 \\ & = & 30n^4 \end{eqnarray} $$
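
A quick, purely numerical sanity check of the chosen constants (not a proof, just a spot check over a finite range):

// Spot-checks 7n^4 + 2n^2 + n + 20 <= 30n^4 for n = 2..1000, i.e. c = 30, N = 1.
for (long n = 2; n <= 1000; n++) {
    long f = 7 * n * n * n * n + 2 * n * n + n + 20;
    long g = 30 * n * n * n * n;
    if (f > g) {
        System.out.println("Inequality fails at n = " + n);
    }
}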

Is Big-O Useful?

It is, but remember that a Big-O bound only says an algorithm takes at most so much time, and the bound does not have to be tight, so on its own it can be misleading. That is why we also care about lower bounds.

Big-Ω: Asymptotic Lower Bounds

There is a Big-$\Omega$ notation for lower bounds.

A function $f$ is in $\Omega(g)$ whenever there exist constants $c$ and $N$ such that for every $n > N$, $|f(n)|$ is bounded below by $c\,|g(n)|$.

$$ \Omega(g) =_{def} \{ f \mid \exists c\,N . \forall n > N . |f(n)| \geq c|g(n)| \} $$

Lower bounds are useful because they say that an algorithm requires at least so much time. For example, you can say that printing an array is $O(2^n)$ because $2^n$ really is an upper bound, it’s just not a very tight one! But saying printing an array is $\Omega(n)$ means it requires at least linear time which is more accurate. Of course just about everything is $\Omega(1)$.

Big-$\Theta$

When the lower and upper bounds are the same, we can use Big-Theta notation.

$$\Theta(g) =_{def} \{ f \mid \exists c_1\,c_2\,N. \forall n > N . c_{1} |g(n)| \leq |f(n)| \leq c_2 |g(n)| \}$$

Example: printing each element of an array is $\Theta(n)$. Now that’s useful!

Exercise: Is it true to say $\Theta(g) = O(g) \cap \Omega(g)$? Why or why not?

Most complexity functions that arise in practice are in Big-Theta of something. An example of a function that isn’t in Big-$\Theta$ of anything is $\lambda n . n^2 \cos n$.

Exercise: Why isn’t that function in Big-$\Theta$ of anything?

Less-Common Asymptotic Notations

There are also little-oh and little-omega. $$o(g) =_{def} \{ f \mid \lim_{n\to\infty} \frac{f(n)}{g(n)} = 0 \}$$ $$\omega(g) =_{def} \{ f \mid \lim_{n\to\infty} \frac{f(n)}{g(n)} = \infty \}$$

Exercise: Find functions $f$ and $g$ such that $f \in O(g)$ but $f \notin o(g)$.

There’s also the Soft-O:

$$\tilde{O}(g) =_{def} O(\lambda n.\,g(n) (\log{g(n)})^k) \text{ for some constant } k$$

Since any fixed power of a logarithm is eventually bounded above by any positive power of its argument, we have, for every $\varepsilon > 0$,

$$\tilde{O}(g) \subseteq O(\lambda n.\,g(n)^{1+\varepsilon})$$

Determining Asymptotic Complexity

Normally you can just look at a code fragment and immediately "see" that it is $\Theta(1)$, $\Theta(\log n)$, $\Theta(n)$, $\Theta(n \log n)$, or whatever. But when you can’t tell by inspection, you can write code to count operations for given input sizes, obtaining $T(n)$. Then you "guess" different candidate functions $f$ for which $T \in \Theta(f)$. To do this, generate values of $\frac{T(n)}{f(n)}$ for lots of different $f$s and $n$s, looking for the $f$ for which the ratio is nearly constant. Example:

PrimeTimer.java
package edu.lmu.cs.algorithms;

import java.util.Arrays;

/**
 * An application class that illustrates how to do an empirical
 * analysis for determining time complexity.
 */
public class PrimeTimer {

    /**
     * A method that computes all the primes up to n, and returns the
     * number of "primitive operations" performed by the algorithm.
     */
    public static long computePrimes(int n) {
        boolean[] sieve = new boolean[n];
        Arrays.fill(sieve, true);
        sieve[0] = sieve[1] = false;
        long ticks = 0;
        for (int i = 2; i * i < sieve.length; i++) {
            ticks++;
            if (sieve[i]) {
                ticks++;
                for (int j = i + i; j < sieve.length; j += i) {
                    ticks++;
                    sieve[j] = false;
                }
            }
        }
        return ticks;
    }

    /**
     * Runs the computePrimes() method several times and displays the
     * ratio of the number of primitive operations to n, n*log2(log2(n)),
     * n*log2(n), and n*n, for each run.  The idea is that if the ratio is
     * nearly constant for any one of the expressions, that expression is
     * probably the asymptotic time complexity.
     */
    public static void main(String[] args) {
        System.out.println("        n         T(n)/n  T(n)/nloglogn"
                + "     T(n)/nlogn          T(n)/n^2");
        for (int n = 100000000; true; n += 10000000) {
            double time = (double)computePrimes(n);
            double log2n = Math.log(n) / Math.log(2.0);
            double log2log2n = Math.log(log2n) / Math.log(2.0);
            System.out.printf("%9d%15.9f%15.9f%15.9f%18.12f\n",
                    n, time / n, time / (n * log2log2n), time / (n * log2n),
                    time / ((double)n * n));
        }
    }
}
$ javac edu/lmu/cs/algorithms/PrimeTimer.java && java edu.lmu.cs.algorithms.PrimeTimer
        n         T(n)/n  T(n)/nloglogn     T(n)/nlogn          T(n)/n^2
100000000    2.483153670    0.524755438    0.093437967    0.000000024832
110000000    2.488421536    0.525042572    0.093154203    0.000000022622
120000000    2.492800475    0.525216963    0.092881654    0.000000020773
130000000    2.496912269    0.525397614    0.092636275    0.000000019207
140000000    2.500607893    0.525543667    0.092406844    0.000000017861
150000000    2.504261540    0.525726296    0.092202718    0.000000016695
160000000    2.508113469    0.525989754    0.092029052    0.000000015676
170000000    2.511378312    0.526164370    0.091854066    0.000000014773
180000000    2.514174300    0.526271114    0.091679818    0.000000013968
190000000    2.517111653    0.526434419    0.091526593    0.000000013248
200000000    2.519476410    0.526502103    0.091366731    0.000000012597
210000000    2.521918771    0.526607744    0.091222446    0.000000012009
220000000    2.524643468    0.526791900    0.091099845    0.000000011476
230000000    2.526841665    0.526883963    0.090968655    0.000000010986
240000000    2.529254646    0.527037032    0.090854692    0.000000010539
250000000    2.531615772    0.527194102    0.090747527    0.000000010126
260000000    2.533554869    0.527276931    0.090633206    0.000000009744
270000000    2.535273511    0.527326521    0.090518378    0.000000009390
280000000    2.537081721    0.527406431    0.090413568    0.000000009061
290000000    2.538856217    0.527490156    0.090313866    0.000000008755
300000000    2.540369853    0.527529768    0.090210757    0.000000008468
.
.
.
580000000    2.573933419    0.529233035    0.088416447    0.000000004438
590000000    2.574801915    0.529278658    0.088371416    0.000000004364
600000000    2.575498323    0.529291236    0.088321815    0.000000004292
610000000    2.576229731    0.529313262    0.088274708    0.000000004223
620000000    2.576874421    0.529319651    0.088225880    0.000000004156
630000000    2.577634338    0.529351816    0.088182205    0.000000004091
640000000    2.578348789    0.529376678    0.088138140    0.000000004029
650000000    2.579215052    0.529434671    0.088100389    0.000000003968
660000000    2.579956808    0.529469006    0.088059472    0.000000003909
670000000    2.580654654    0.529496175    0.088018114    0.000000003852
680000000    2.581385994    0.529532005    0.087978922    0.000000003796
690000000    2.582226029    0.529591860    0.087944424    0.000000003742
700000000    2.582908279    0.529621034    0.087905512    0.000000003690
710000000    2.583473030    0.529627754    0.087863538    0.000000003639
.
.
.

What we’re seeing here is that the ratio $\frac{T(n)}{n}$ is increasing, so the complexity is probably more than linear. The ratios $\frac{T(n)}{n \log n}$ and $\frac{T(n)}{n^2}$ are decreasing, so those functions are probably upper bounds. The ratio $\frac{T(n)}{n \log{\log{n}}}$ is also increasing, but its rate of increase seems to be slowing and, who knows, might even converge. We could bet that the complexity is in $\Theta(n \log{\log{n}})$, but we should really do a formal proof.
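
For what it’s worth, the empirical guess matches the known analysis of the sieve. For each prime $p$ found by the outer loop, the inner loop does about $n/p$ crossings-off, and by Mertens’ theorem the sum of the reciprocals of the primes up to $m$ grows like $\ln \ln m$, so, sketching very loosely:

$$ T(n) \approx \sum_{p \leq \sqrt{n},\ p\ \mathrm{prime}} \frac{n}{p} = n \sum_{p \leq \sqrt{n},\ p\ \mathrm{prime}} \frac{1}{p} \in \Theta(n \log \log n) $$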

The Effects of Increasing Input Size

Suppressing leading constant factors hides implementation-dependent details such as the speed of the computer that runs the algorithm. Still, you can make some observations even without the constant factors.

For an algorithm of complexity   If the input size doubles, then the running time
$1$                              stays the same
$\log n$                         increases by a constant
$n$                              doubles
$n^2$                            quadruples
$n^3$                            increases eight fold
$2^n$                            is left as an exercise for the reader
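
These entries come straight from substituting $2n$ for $n$ in each complexity function (the $2^n$ row is left alone, as promised):

$$ \log(2n) = \log n + 1, \qquad (2n)^2 = 4n^2, \qquad (2n)^3 = 8n^3 $$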

The Effects of a Faster Computer

Getting a faster computer allows you to solve larger problems in a fixed amount of time, but for exponential-time algorithms the improvement is pitifully small.

For an algorithm of complexity   Problem size solvable on your 100 MHz PC   On a 500 MHz PC   On a supercomputer 1000 times faster than your PC
$n$                              100                                        500               100000
$n^2$                            100                                        223               3162
$n^3$                            100                                        170               1000
$2^n$                            100                                        102               109

More generally,

$T(n)$   On Present Computer   On a computer 100 times faster   On a computer 1000 times faster   On a computer one BILLION times faster
$n$      N                     100N                             1000N                             1000000000N
$n^2$    N                     10N                              31.6N                             31623N
$n^3$    N                     4.64N                            10N                               1000N
$n^5$    N                     2.5N                             3.9N                              63N
$2^n$    N                     N + 6.64                         N + 9.97                          N + 30
$3^n$    N                     N + 4.19                         N + 6.3                           N + 19
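
Here is where those entries come from. A machine $k$ times faster performs $k$ times as many operations in the same amount of time, so the new solvable size $N'$ satisfies $T(N') = k\,T(N)$. For example:

$$ (N')^2 = k N^2 \;\Rightarrow\; N' = \sqrt{k}\,N \qquad\qquad 2^{N'} = k\,2^{N} \;\Rightarrow\; N' = N + \log k $$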

What if we had a computer so fast it could do ONE TRILLION operations per second?

$T(n)$   20          40             50               60                     70                       80
$n^5$    3.2 μs      102 μs         313 μs           778 μs                 1.68 msec                3.28 msec
$2^n$    1.05 μs     1.1 seconds    18.8 minutes     13.3 days              37.4 years               383 centuries
$3^n$    3.5 msec    4.7 months     227 centuries    1.3×10^7 centuries     7.93×10^11 centuries     4.68×10^16 centuries

As you can see, the gap between polynomial and exponential time is hugely significant. We can solve fairly large instances of high-order polynomial-time algorithms on decent computers rather quickly, but for exponential-time algorithms, we can network together thousands of the world’s fastest supercomputers and still be unable to handle problems of more than a few dozen items. So the following terms have been used to characterize problems:

Polynomial-time Algorithms   Exponential-time Algorithms
Good                         Bad
Easy                         Hard
Tractable                    Intractable

Further Study