Big-O proofs


This tweet from Emily Kager appeared in my timeline a couple of weeks ago:

My sister (freshman in college) texted me for Big-O proof help for her Algorithms class saying she’s scared CS isn’t for her because she doesn’t love this class. RT if you don’t think about Big-O proofs at your CS job.

I have complex feelings about this. On the one hand, I sympathize with Ms Kager’s sister—it’s horrible to be struggling in class—and it is true that most programmers never do a big-O proof again after they graduate, and many never think about asymptotic complexity at all. But on the other hand, someone who can get to grips with big-O proofs will be a better programmer for it. (I’ll try to explain why below.)

This is all going to be unhelpful for Ms Kager’s sister: no-one ever loved a class just because someone told them it was useful. But I do have some advice, which is that it is never too late to learn something. The reason someone is struggling with analysis of algorithms is probably that they are missing some crucial prerequisite: maybe it’s discrete mathematics, or maybe it’s a lack of hands-on experience with algorithms, or maybe something else. But it’s always possible to come back to the subject later in your career: when something about it grabs your interest, the textbooks will still be there.

So, of course I have never had to write down a big-O proof at work, at least not in the kind of formal style that I might have used in my undergraduate algorithms homework. Nonetheless, I think about the asymptotic runtime of the code I’m working on all the time, and this means doing informal analysis of algorithms in my head; the reason I can do this kind of analysis quickly and reliably is that I practised writing all those formal proofs. I can be confident that my informal approach gets the right answer because I know that I could turn my quick rules of thumb into detailed proofs if I ever needed to convince a beginner or a skeptic. Perhaps it would take me a while, or I’d have to refer to a textbook for the difficult cases, but I’d get there in the end.

Expertise in some area always comes with a penumbra, that is, a wider area in which you know a lot (but not as much as a real expert) and a still wider area in which you know a little (and could pick up more if you needed to). An expert in everyday analysis of algorithms almost certainly has some ability at formal proofs too.

And every once in a while, you get lucky and happen upon a problem where the issue is more complex than the usual logarithmic vs linear, or linear vs quadratic, and then you have some trickier analysis to do, and are glad that you have the skills to do it.

We automate tasks using computer programs in order to save people time, money, and effort, or to give people capabilities that would otherwise require too much of those resources. So the need to check that you are using resources efficiently is ever-present in software development. Normally this does not require particularly sophisticated mathematical techniques: the vast majority of tasks fall into one of three groups (each illustrated by a sketch after the list):

  1. If it’s a search task (find an item matching some criteria in a collection), then it ought to run in time that’s logarithmic in the size of the collection, that is, in \(O(\log n)\) time.

  2. If it’s a data-processing task (read some data and compute some results), then it ought to run in time that’s linear in the size of the data, that is, in \(O(n)\) time.

  3. If it’s a sorting task (read some data and collate it somehow), then it ought to run in \(O(n \log n)\) time.
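To make the three groups concrete, here is a minimal Python sketch of a typical representative of each (the function names are mine, for illustration only):

```python
from bisect import bisect_left

def find(sorted_items, target):
    """Search task: binary search on a sorted list runs in O(log n) time."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return None

def total(items):
    """Data-processing task: a single pass over the data runs in O(n) time."""
    return sum(items)

def collate(items):
    """Sorting task: Python's built-in sort runs in O(n log n) time."""
    return sorted(items)
```

Note that the logarithmic bound for the search task depends on the collection being organized for searching, here by keeping it sorted; an index, tree, or hash table serves the same purpose.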

In each case you’re looking for particular kinds of failure. If it’s a search task, are you accidentally looking at each item in the collection, making the runtime \(\Omega(n)\)? If it’s a data processing task, are you accidentally doing something taking \(\Omega(n)\) time for each item, making the overall runtime \(\Omega(n^2)\)? These complexity problems are ubiquitous. In particular, programs whose runtime is \(\Omega(n^2)\) when it should be \(O(n\log n)\) or \(O(n)\) (that is, they are “accidentally quadratic”) are common enough to sustain a blog. Here are a few examples that I’ve encountered at Code Review:

The idea of giving all these links is not to shame the programmers who accidentally wrote quadratic programs, but to try to give some idea of how common this pitfall is, and how many different kinds of algorithm might be affected. Modern programming languages like Python make it possible to write complex operations tersely, but the flipside of this is that you have to pay attention to the performance of the operations you are using. It’s easy to write `if item in collection:` and fail to notice that `collection` is a list, so that the test takes time proportional to the length of the list.
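As a small illustration (an invented example, not one of the reviews above): deduplicating a sequence by testing membership in the output list has exactly this problem, because each `in` test scans the list, making the whole function \(\Omega(n^2)\); testing membership in a set instead brings it back to \(O(n)\) expected time.

```python
def dedupe_slow(items):
    """Accidentally quadratic: each `in` test scans the result list."""
    result = []
    for item in items:
        if item not in result:  # linear scan of result on every iteration
            result.append(item)
    return result

def dedupe_fast(items):
    """O(n) expected: membership tests against a set are O(1) on average."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:    # O(1) expected per test
            seen.add(item)
            result.append(item)
    return result
```

The two functions return the same result; only the data structure used for the membership test differs, and that one choice changes the asymptotic runtime.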