In our previous article we looked at the absolute basics: definitions of data structures and algorithms, how to prove our algorithms are sound, and ways to identify the types of objects we’ll want to use when solving problems.
To recall, we stated that algorithms require both correctness and efficiency to be sound. We spent a good deal of time talking about how to prove correctness. Today we’re going to talk about how to prove efficiency. But first, the Daily Problem…
This article is part of the Data Structures and Algorithms Series following the course The Analysis of Algorithms. If you missed the previous article, check that out first, along with a copy of The Algorithm Design Manual.
Answers to the Daily Problem (1-5)
The question was:
The knapsack problem is as follows: given a set of integers `S = {s1, s2, ..., sn}` and a target number `T`, find a subset of `S` which adds up exactly to `T`. For example, there exists a subset within `S = {1, 2, 5, 9, 10}` that adds up to `T = 22` but not `T = 23`. Find counterexamples to each of the following algorithms for the knapsack problem. That is, give an `S` and `T` where the algorithm does not find a solution which leaves the knapsack completely full, even though a full-knapsack solution exists.
Here’s the actual problem, with the answers (and explanations) inlined:
(a) Put the elements of `S` in the knapsack in left to right order if they fit, i.e. the first-fit algorithm.
`T = 23; S = {22, 21, 2}`. Going from first to last, we add `22`, but at this point there are no other elements that will add up to `23` (even though there does exist a valid solution of `21` and `2`). Our counterexample succeeds!
(b) Put the elements of `S` in the knapsack from smallest to largest, i.e. the best-fit algorithm.
`T = 5; S = {1, 2, 3, 5}`. Going from smallest to largest, we add `1` and then `2`. At this point adding `3` or `5` will put us over `5`, even though obviously choosing just the number `5` would satisfy `T`. Countered again!
(c) Put the elements of `S` in the knapsack from largest to smallest.
`T = 23; S = {22, 21, 2}`. Same answer as (a), and it works here too since that `S` happens to already be ordered from largest to smallest.
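To make that concrete, here’s a minimal sketch of the first-fit heuristic from part (a) in JavaScript (the function name `firstFit` is my own, just for illustration):

```js
// First-fit heuristic from part (a): take each element, in order, if it fits.
// Greedy, so it can miss a full-knapsack solution that exists.
function firstFit(S, T) {
  const chosen = [];
  let total = 0;
  for (const s of S) {
    if (total + s <= T) { // take the element only if it still fits
      chosen.push(s);
      total += s;
    }
  }
  return { chosen, total };
}

console.log(firstFit([22, 21, 2], 23)); // { chosen: [ 22 ], total: 22 }, missing the valid {21, 2}
```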
If any of that doesn’t make sense, be sure to let me know on Twitter.
Proving an algorithm’s efficiency
As we mentioned in the last article, the key to proving efficiency involves Big O Notation. What is that exactly? The O specifically refers to the letter commonly used for the name of the function that describes an algorithm’s worst case (or upper bound) on the relative amount of time the algorithm will take to complete. (There’s also Ω, which refers to the best case / lower bound, and Θ, which refers to the tight bound where the upper and lower bounds meet, but in an interview you’re only ever going to be quizzed on the worst case, if at all.)
I say relative because our O function doesn’t return a number such as the number of seconds to complete our algorithm. Instead, it’s relative to the number of operations within our algorithm.
How do we count the number of operations in an algorithm?
We can count operations in an algorithm with 3 simple rules (based on how the RAM model of computation evaluates the cost of computing power):
- Add 1 for simple operations. Adding two numbers (`+`), branching conditionally (`if`), and variable assignment (`var x = 1`) each count as a single operation.
- Add `n` for looping over a set. If you see `for` or `while` in an algorithm, it loops over some `n` number of items, so in the worst case, you need to consider the possibility that you have to iterate over all `n` items.
- Whatever your calculations were for 1 and 2, multiply them if the algorithm is called again recursively. For example, to calculate the fibonacci sequence on `n` items, each recursive call of fibonacci requires finding the two previous values of the fibonacci sequence. If each call requires twice the number of calls, that’s an exponential (`2^n`, specifically) number of calls to fibonacci! (See the sketch after this list.)
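Here’s a minimal sketch of these rules in action (plain JavaScript, with the function names my own):

```js
// Rules 1 and 2: one simple operation per line, times n for the loop.
function sumArray(arr) {
  let total = 0;           // +1 (assignment)
  for (const x of arr) {   // the body below runs n times
    total = total + x;     // +1 add and +1 assignment per iteration
  }
  return total;            // +1
}
// Roughly 2n + 2 operations in total, which Big O reduces to O(n).

// Rule 3: the naive fibonacci makes two recursive calls per call,
// doubling the work at each level, for roughly 2^n calls in total.
function fib(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}
```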
But here’s the thing: calculating the precise number of operations for an algorithm is unnecessarily cumbersome. Therefore, when we refer to Big O Notation, we’re calculating the time complexity of an algorithm with the multiplicative constants and lower-order terms ignored. So if you found out your algorithm ran in precisely `2n^3 + 4n^2 + 14763n + 1209489` operations, our Big O analysis can reduce this function all the way down to `O(n^3)`.
Patterns for identifying algorithmic efficiency
So now that we know how to evaluate an algorithm’s Big O worst case performance, what do we make of its efficiency? Let’s look at each of the major patterns you’ll see in your Big O analysis:
Constant - O(1)
These are the fastest algorithms because they consist of a fixed number of simple operations. Your classic addition function, `function add(x, y) { return x + y }`, runs in constant time; it’s just the `+` operation! In the real world, these can run on any machine, slow or fast, with essentially any kind of input (provided the code actually works), because the cost never grows with the input.
Logarithmic - O(log n)
Imagine you have a deck of cards. Cut the deck in half. Now cut that half of a deck in half. Now cut that portion in half. Now cut that portion… okay, you get the idea. If you plotted that on a graph, it would look just like `log(n)`.
If you have an algorithm that is dividing the problem space by some number (like by 2 if you’re doing a binary search on a tree), you can bet that algorithm runs in logarithmic time. Obviously this function is now dependent on `n`, so it’s going to be slower than constant-time functions, but these are so fast on modern machines that you can basically feed it any `n` number of items and it will run just fine.
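For instance, here’s a minimal binary search sketch over a sorted array (the same halving idea as the tree example):

```js
// Each comparison discards half of the remaining range, so a sorted
// array of n items takes at most about log2(n) steps to search.
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid; // found it
    if (sorted[mid] < target) lo = mid + 1; // discard the lower half
    else hi = mid - 1;                      // discard the upper half
  }
  return -1; // not present
}

console.log(binarySearch([1, 2, 5, 9, 10], 9)); // 3
```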
Linear - O(n)
Loop over each item in an integer array and print out the number. How many items do you have to print? For an array of 3 numbers, you have to print 3. For 5, it’s 5. For `n`, it’s, well, `n`! This is why we say loops like `for` and `while` evaluate in `O(n)`: we have to touch, at worst, all `n` items.
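As a quick sketch (the function name is my own):

```js
// One simple operation per item, n items total: O(n).
function printAll(numbers) {
  for (const num of numbers) {
    console.log(num); // touched exactly once per item
  }
}

printAll([3, 1, 4]); // 3 items, 3 prints
```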
These algorithms still run pretty fast. Given that CPUs can now run billions of operations per second, it’s safe to say an `n` of even a billion items is still pretty fast in the real world.
Superlinear - O(n log n)
How would you get an algorithm like this? Again, imagine a deck of cards. I want you to find a card in the deck quickly. Even worse: I shuffled the deck. But you can find it pretty quickly if you follow these steps: Cut the deck in half until you just have pairs of cards. Sort the cards for each of the pairs (pretty easy to compare if one card is smaller than the other, right?). Now that all of the pairs are set, combine two pairs and repeat the process. Eventually you’re going to have the whole deck sorted to easily find your card.
Continuously cutting a deck in half sounds very much like our example where the answer was `O(log n)`.
Looking at the whole deck? Sounds like `O(n)`.
And given that we have to look at all `n` cards at each of the `log n` layers we construct, that’s `n` work done `log n` times. Hence, `n log n`! Again, since this is just slightly larger than a linear algorithm, we can run an algorithm like this very quickly on even billions of items.
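That card-sorting procedure is essentially merge sort. Here’s a minimal sketch:

```js
// log n levels of halving, with all n items merged at each level,
// giving O(n log n) overall.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

// Combine two sorted halves by repeatedly taking the smaller front card.
function merge(left, right) {
  const out = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i)).concat(right.slice(j));
}

console.log(mergeSort([9, 2, 10, 1, 5])); // [ 1, 2, 5, 9, 10 ]
```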
Quadratic - O(n^2)
You’ve got 5 numbers. You need to multiply each number by every number (including itself). So the first number is multiplied 5 times, the second number is multiplied 5 times, the third is too, and the fourth, and the fifth. 5 numbers multiplied 5 times each is 25 multiplications, which is the same as `5^2`. So if we generalize this to `n` numbers, each multiplied by every other number, you’ve got your `O(n^2)` algorithm.
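A minimal sketch of that nested loop (the function name is my own):

```js
// Every number multiplied by every number: n * n = n^2 multiplications.
function allProducts(numbers) {
  const products = [];
  for (const a of numbers) {   // n iterations...
    for (const b of numbers) { // ...times n iterations each
      products.push(a * b);
    }
  }
  return products;
}

console.log(allProducts([1, 2, 3]).length); // 9, i.e. 3^2
```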
This is where running time starts to matter. For tens of thousands of items, we’re still moving pretty quickly with generally under a billion operations. Once you get above a million items for this kind of algorithm, we’re looking at trillions of total operations to be run. Even your MacBook Pro can’t count all of Apple’s money in an instant; it’s a pretty big number!
Exponential - O(2^n)
Remember the fibonacci sequence from earlier? A classic example of an exponential algorithm (hint: we can do much better, but the naive implementation is exponential). When you have to enumerate all subsets of `n` items, you’re dealing with something exponentially painful. And we’re talking real painful. With quadratic algorithms we could work with a million numbers. But exponential? Just 40 items gets us back up to over a trillion. Oof.
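To see why subsets blow up, here’s a minimal sketch of enumerating them (the function name is my own):

```js
// Each item is either in or out of a subset, so n items yield 2^n subsets.
function allSubsets(items) {
  let subsets = [[]]; // start with the empty subset
  for (const item of items) {
    // every existing subset spawns a copy that also includes this item,
    // doubling the count on each pass
    subsets = subsets.concat(subsets.map((s) => s.concat(item)));
  }
  return subsets;
}

console.log(allSubsets([1, 2, 3]).length); // 8, i.e. 2^3
```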
Factorial - O(n!)
Arrange a deck of cards from smallest to largest. That’s one configuration. Now arrange it from largest to smallest. That’s another configuration. Now shuffle the deck. Yet another configuration. Now arrange the deck from smallest to largest again, but within each rank order the suits: hearts first, diamonds second, clubs third, and spades last. These configurations are called permutations of the deck. And in fact, enumerating all permutations of `n` items takes `O(n!)` time. So the number of arrangements of a standard 52-card deck is `52!`.
Do you know just how massive `52!` is?
There are more permutations of a deck of cards than there are atoms on this earth.
You could have shuffled a deck of cards every second since the universe began and still not have seen every permutation of a deck of cards.
In other words, don’t write factorial algorithms. Beyond roughly 20 items, it’s not only painful, it’s essentially impossible; modern computers simply can’t handle that number of operations. So if you see your algorithm is permuting over all of your items, and you have more than a handful of items, you should probably look for another solution to your problem.
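For completeness, here’s what that kind of factorial-time enumeration looks like; a minimal sketch that is only usable for very small `n` (the function name is my own):

```js
// n choices for the first slot, n - 1 for the next, and so on:
// n! arrangements in total.
function allPermutations(items) {
  if (items.length <= 1) return [items];
  const perms = [];
  items.forEach((item, i) => {
    const rest = items.slice(0, i).concat(items.slice(i + 1));
    for (const perm of allPermutations(rest)) {
      perms.push([item].concat(perm));
    }
  });
  return perms;
}

console.log(allPermutations([1, 2, 3]).length); // 6, i.e. 3!
```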
Wrapping up with the next Daily Problem
So now that you understand about Big O Notation and the kinds of bounds you should be striving for, let’s put that to the test:
For each of the following pairs of functions `f(n)` and `g(n)`, state whether `f(n) = O(g(n))`, `f(n) = Ω(g(n))`, `f(n) = Θ(g(n))`, or none of the above.

(a) `f(n) = n^2 + 3n + 4`, `g(n) = 6n + 7`
(b) `f(n) = n√n`, `g(n) = n^2 − n`
(c) `f(n) = 2^n − n^2`, `g(n) = n^4 + n^2`
Think you’ve got your answers? Good luck, and we’ll see you next week!