Yet Another Haskell Tutorial/Complexity

= Brief Complexity Theory =

Complexity Theory is the study of how long a program will take to run, depending on the size of its input. There are many good introductory books on complexity theory, and the basics are explained in any good algorithms book. I'll keep the discussion here to a minimum.

The idea is to say how well a program scales with more data. If you have a program that runs quickly on very small amounts of data but chokes on huge amounts of data, it's not very useful (unless you know you'll only be working with small amounts of data, of course). Consider the following Haskell function to return the sum of the elements in a list:

 sum []     = 0
 sum (x:xs) = x + sum xs

How long does it take this function to complete? That's a very difficult question; it would depend on all sorts of things: your processor speed, your amount of memory, the exact way in which the addition is carried out, the length of the list, how many other programs are running on your computer, and so on. This is far too much to deal with, so we need to invent a simpler model. The model we use is sort of an arbitrary "machine step." So the question is "how many machine steps will it take for this program to complete?" In this case, it only depends on the length of the input list.

If the input list is of length $$0$$, the function will take either $$0$$ or $$1$$ or $$2$$ or some very small number of machine steps, depending exactly on how you count them (perhaps $$1$$ step to do the pattern matching and $$1$$ more to return the value $$0$$). What if the list is of length $$1$$? Well, it would take however much time the list of length $$0$$ would take, plus a few more steps for handling the first (and only) element.

If the input list is of length $$n$$, it will take however many steps an empty list would take (call this value $$y$$) and then, for each element, it would take a certain number of steps to do the addition and the recursive call (call this number $$x$$). Then, the total time this function will take is $$nx+y$$ since it needs to do those additions $$n$$ times. These $$x$$ and $$y$$ values are called ''constant values'', since they are independent of $$n$$, and actually dependent only on exactly how we define a machine step, so we really don't want to consider them all that important. Therefore, we say that the complexity of this function is $$\mathcal{O}(n)$$ (read "order $$n$$"). Basically, saying something is $$\mathcal{O}(n)$$ means that for some constant factors $$x$$ and $$y$$, the function takes $$nx+y$$ machine steps to complete.
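We can make this cost model concrete by instrumenting the function to count its own steps. The following sketch (the name <code>sumSteps</code> and the exact charging scheme are illustrative, not part of the tutorial's code) charges one step per pattern match, so a list of length $$n$$ costs $$n \cdot 1 + 1$$ steps:

```haskell
-- A step-counting variant of sum: returns the sum together with the number
-- of "machine steps" taken, where we (arbitrarily) charge one step per
-- pattern match. Here x = 1 step per element and y = 1 step for the base
-- case, so a list of length n costs n*1 + 1 steps.
sumSteps :: [Int] -> (Int, Int)
sumSteps []     = (0, 1)              -- base case: y = 1 step
sumSteps (x:xs) = (x + s, steps + 1)  -- one more step per element
  where (s, steps) = sumSteps xs

main :: IO ()
main = print (sumSteps [1, 2, 3])  -- prints (6,4): sum 6, 3*1 + 1 steps
```

Doubling the list length doubles the step count (plus the same constant), which is exactly what $$\mathcal{O}(n)$$ captures.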

Consider the following sorting algorithm for lists (commonly called "insertion sort"):

 sort []     = []
 sort [x]    = [x]
 sort (x:xs) = insert (sort xs)
     where insert []     = [x]
           insert (y:ys)
               | x <= y    = x : y : ys
               | otherwise = y : insert ys

The way this algorithm works is as follows: if we want to sort an empty list or a list of just one element, we return it as it is, as it is already sorted. Otherwise, we have a list of the form <code>x:xs</code>. In this case, we sort <code>xs</code> and then want to insert <code>x</code> in the appropriate location. That's what the <code>insert</code> function does. It traverses the now-sorted tail and inserts <code>x</code> wherever it naturally fits.

Let's analyze how long this function takes to complete. Suppose it takes $$f(n)$$ steps to sort a list of length $$n$$. Then, in order to sort a list of $$n$$-many elements, we first have to sort the tail of the list, which takes $$f(n-1)$$ time. Then, we have to insert <code>x</code> into this new list. If <code>x</code> has to go at the end, this will take $$\mathcal{O}(n-1)=\mathcal{O}(n)$$ steps. Putting all of this together, we see that we have to do $$\mathcal{O}(n)$$ amount of work $$\mathcal{O}(n)$$ many times, which means that the entire complexity of this sorting algorithm is $$\mathcal{O}(n^2)$$. Here, the exponent is not a constant value, so we cannot throw it out.
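The quadratic growth can be observed directly by counting comparisons. The following sketch (the name <code>sortCount</code> is a hypothetical helper, not part of the tutorial's code) threads a comparison counter through insertion sort; a reverse-sorted input is the worst case, since every insertion must walk to the end of the sorted tail, giving $$n(n-1)/2$$ comparisons in total:

```haskell
-- Insertion sort instrumented to count comparisons. For a reverse-sorted
-- list of length n, each insert walks past the whole sorted tail, so the
-- total is 1 + 2 + ... + (n-1) = n(n-1)/2 comparisons: quadratic growth.
sortCount :: [Int] -> ([Int], Int)
sortCount []     = ([], 0)
sortCount (x:xs) = (inserted, cSort + cIns)
  where (sorted, cSort)  = sortCount xs
        (inserted, cIns) = insert sorted
        insert []     = ([x], 0)                -- no comparison needed
        insert (y:ys)
          | x <= y    = (x : y : ys, 1)         -- one comparison, done
          | otherwise = let (rest, c) = insert ys
                        in (y : rest, c + 1)    -- one comparison, keep going

main :: IO ()
main = mapM_ (print . snd . sortCount) [[n, n-1 .. 1] | n <- [4, 8, 16]]
       -- prints 6, 28, 120: doubling n roughly quadruples the count
```

Notice that doubling the input length roughly quadruples the comparison count, which is the signature of $$\mathcal{O}(n^2)$$ behavior.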

What does this mean? Simply that for really long lists, the <code>sum</code> function won't take very long, but the <code>sort</code> function will take quite some time. Of course, there are algorithms that run much more slowly than simply $$\mathcal{O}(n^2)$$ and there are ones that run more quickly than $$\mathcal{O}(n)$$.

Consider the random access functions for lists and arrays. In the worst case, accessing an arbitrary element in a list of length $$n$$ will take $$\mathcal{O}(n)$$ time (think about accessing the last element). However, with arrays, you can access any element immediately, which is said to be in constant time, or $$\mathcal{O}(1)$$, which is basically as fast as any algorithm can go.
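This contrast can be tried directly with the standard <code>Data.Array</code> module. A minimal sketch: list indexing with <code>(!!)</code> must traverse the list from the front, while array indexing with <code>(!)</code> jumps straight to the element:

```haskell
import Data.Array (listArray, (!))

-- List indexing with (!!) walks from the front, so reaching the last
-- element of a length-n list is O(n). Array indexing with (!) is O(1):
-- the element is fetched directly, no matter where it sits.
main :: IO ()
main = do
  let xs  = [0 .. 999] :: [Int]
      arr = listArray (0, 999) xs
  print (xs !! 999)  -- O(n): traverses all 1000 cells to get here
  print (arr ! 999)  -- O(1): direct access
```

Both print <code>999</code>, but the list version does a thousand times more work, and the gap widens as the data grows.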

There's much more in complexity theory than this, but this should be enough to allow you to understand all the discussions in this tutorial. Just keep in mind that $$\mathcal{O}(1)$$ is faster than $$\mathcal{O}(n)$$ is faster than $$\mathcal{O}(n^2)$$, etc.