Computing Fibonacci Numbers Using Dynamic Programming, Matrices and Eigenvalues in O(log n)
The Fibonacci numbers are one of the most well-studied recurrence relations in history. The sequence is also one of the coolest ways to get kids interested in math. The Fibonacci sequence is the following:

$0, 1, 1, 2, 3, 5, 8, 13, 21, 34, \dots$
Informally, the next term of the sequence is the sum of the last two terms in the sequence. More formally, suppose $F_n$ is the $n$th Fibonacci number, with $F_0 = 0$ and $F_1 = 1$; then

$F_n = F_{n-1} + F_{n-2} \quad \text{for } n \geq 2.$
There are a few ways to go about computing this number. Let us start easy and discuss the most intuitive approach.
The naive way to compute the $n$th Fibonacci number is to translate the recurrence directly into a recursive function.
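A minimal sketch of this naive recursion in Python (the function name `fib_naive` is my own):

```python
def fib_naive(n):
    """Translate the recurrence directly; runs in exponential time."""
    if n < 2:
        return n  # base cases: F(0) = 0, F(1) = 1
    return fib_naive(n - 1) + fib_naive(n - 2)
```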
The above algorithm does deliver the correct solution. However, the time complexity is $O(2^n)$, which is quite terrible: the number of subproblems grows exponentially.
In the above image, I’ve indicated identical subproblems with the same color. The subproblem shown in green is computed thrice. At the second level of the recursion tree, we need to compute 2 subproblems; at the third level, 4; more generally, at the $k$th level, we need to compute up to $2^{k-1}$ subproblems. The total time complexity is the sum of the work done (time complexity) at all levels, a geometric series that sums to $O(2^n)$.
A more clever implementation, which takes only $O(n)$ time, is the following.
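A sketch of the bottom-up version (again, `fib_dp` is a name of my choosing); only the last two values are kept, so the extra space is constant:

```python
def fib_dp(n):
    """Bottom-up dynamic programming: O(n) time, O(1) extra space."""
    prev, curr = 0, 1  # F(0), F(1)
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev  # after n steps, prev holds F(n)
```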
The second algorithm trumps the first by never recomputing a Fibonacci number. This is done by storing solutions to the smaller subproblems first and using them to compute solutions to bigger ones. This is an example of dynamic programming.
The next approach we will look at computes the $n$th Fibonacci number using linear algebra. Specifically, we will apply the ideas of eigenvalues and eigenvectors to our problem and show how we can compute $F_n$ in $O(\log n)$.
Using Linear Algebra for Fibonacci
Let $F_n$ be the $n$th term of the sequence and let $v_n = (F_{n+1}, F_n)$. Let us consider a linear map $T : \mathbb{R}^2 \to \mathbb{R}^2$ such that

$T(x, y) = (x + y, x).$

Suppose we plug in $x = F_n$ and $y = F_{n-1}$, we get that

$T(F_n, F_{n-1}) = (F_n + F_{n-1}, F_n) = (F_{n+1}, F_n),$

that is, $T(v_{n-1}) = v_n$. We also get that

$T(T(v_{n-2})) = T(v_{n-1}) = v_n.$

We can show by induction that

$T^n(v_0) = v_n, \quad \text{where } v_0 = (F_1, F_0) = (1, 0).$
Every linear map induces a matrix representation and vice versa. Let $A$ be this matrix. From some calculation, one can find that

$A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}.$

Just like how $T^n(v_0) = v_n$, we get

$A^n \begin{pmatrix} F_1 \\ F_0 \end{pmatrix} = \begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix}.$
One way to go about it is to compute $A^n$ in a clever way. Suppose $n$ is a power of two; then we would first find $A^2$, multiply it with itself to obtain $A^4$, and repeat the process. We get the sequence of matrices

$A, A^2, A^4, A^8, \dots, A^n.$
Since our matrix has dimension $2 \times 2$, multiplying two matrices of that dimension costs $O(1)$ operations. We would have to carry out $O(\log n)$ multiplications, each of $O(1)$ cost, to compute $A^n$. Suppose $n$ is not a power of two; our algorithm can still be tweaked (multiply in $A^{2^k}$ for each set bit of $n$), and it would still run in $O(\log n)$.
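The repeated-squaring idea above can be sketched as follows (function names are mine; the binary-expansion trick handles $n$ that is not a power of two):

```python
def mat_mult(a, b):
    """Multiply two 2x2 matrices: constant work."""
    return [
        [a[0][0] * b[0][0] + a[0][1] * b[1][0],
         a[0][0] * b[0][1] + a[0][1] * b[1][1]],
        [a[1][0] * b[0][0] + a[1][1] * b[1][0],
         a[1][0] * b[0][1] + a[1][1] * b[1][1]],
    ]

def fib_matrix(n):
    """Compute F(n) by raising A = [[1, 1], [1, 0]] to the nth power
    with repeated squaring: O(log n) matrix multiplications."""
    result = [[1, 0], [0, 1]]  # identity
    base = [[1, 1], [1, 0]]    # the matrix A
    while n > 0:
        if n & 1:              # set bit of n: fold this power of A in
            result = mat_mult(result, base)
        base = mat_mult(base, base)  # square: A, A^2, A^4, ...
        n >>= 1
    return result[0][1]  # A^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
```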
There is another way to arrive at a closed form formula for the $n$th Fibonacci number using the ideas of eigenvalues and eigenvectors. I’m going to give an extremely short introduction to the topic. I’ll make a more detailed post later on about eigenvalues and eigenvectors.
Given a matrix $A$, a non-zero vector $x$ is an eigenvector and $\lambda$ is an eigenvalue corresponding to $x$ if

$Ax = \lambda x.$
Since our matrix $A$ has full rank (its rank is two), there are two non-zero eigenvalues $\lambda_1$ and $\lambda_2$. Let $(x_1, \lambda_1)$ and $(x_2, \lambda_2)$ be the corresponding eigenvector and eigenvalue pairs of $A$ such that $Ax_1 = \lambda_1 x_1$ and $Ax_2 = \lambda_2 x_2$. From some calculation, we can find that

$\lambda_1 = \frac{1 + \sqrt{5}}{2}, \quad \lambda_2 = \frac{1 - \sqrt{5}}{2}, \quad x_1 = \begin{pmatrix} \lambda_1 \\ 1 \end{pmatrix}, \quad x_2 = \begin{pmatrix} \lambda_2 \\ 1 \end{pmatrix}.$
Suppose we collect the eigenvectors as the columns of a matrix $X = \begin{pmatrix} x_1 & x_2 \end{pmatrix}$ and the eigenvalues on the diagonal of $\Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$. Then

$AX = X\Lambda.$
The matrix $X$ is full rank, hence its inverse must certainly exist. We will call this inverse $X^{-1}$. The following is $X^{-1}$:

$X^{-1} = \frac{1}{\sqrt{5}} \begin{pmatrix} 1 & -\lambda_2 \\ -1 & \lambda_1 \end{pmatrix}.$

You can verify that $X X^{-1}$ is the identity matrix.
We can finally show

$A = X \Lambda X^{-1}, \quad \text{and hence} \quad A^n = (X \Lambda X^{-1})^n = X \Lambda^n X^{-1},$

since every inner $X^{-1} X$ pair cancels.
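As a sanity check, the diagonalization can be verified numerically; this is a sketch using plain floats, and all of the variable names here are mine:

```python
import math

def mult(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s5 = math.sqrt(5)
l1, l2 = (1 + s5) / 2, (1 - s5) / 2              # the two eigenvalues
X = [[l1, l2], [1.0, 1.0]]                       # eigenvectors as columns
Xinv = [[1 / s5, -l2 / s5], [-1 / s5, l1 / s5]]  # X^(-1); det(X) = sqrt(5)
L = [[l1, 0.0], [0.0, l2]]                       # the diagonal matrix Lambda

# Multiplying back should reconstruct A = [[1, 1], [1, 0]] up to float error.
A = mult(mult(X, L), Xinv)
```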
We will also use another fact about diagonal matrices: raising a diagonal matrix to a power raises each diagonal entry to that power,

$\Lambda^n = \begin{pmatrix} \lambda_1^n & 0 \\ 0 & \lambda_2^n \end{pmatrix}.$

Although we have used this fact for a diagonal matrix of size 2, using induction it is fairly straightforward to show it is true for a diagonal matrix of any dimension.
Using the above fact, we have

$\begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix} = A^n \begin{pmatrix} 1 \\ 0 \end{pmatrix} = X \Lambda^n X^{-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$

Carrying out the multiplication, we have shown that

$F_n = \frac{1}{\sqrt{5}} \left( \left( \frac{1 + \sqrt{5}}{2} \right)^n - \left( \frac{1 - \sqrt{5}}{2} \right)^n \right).$
The value $\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618$ is called the golden ratio and tends to show up in many places. Since $\left| \frac{1 - \sqrt{5}}{2} \right| < 1$, the second term shrinks to zero as $n$ grows, so we can also write $F_n$ as

$F_n = \left\lfloor \frac{\varphi^n}{\sqrt{5}} + \frac{1}{2} \right\rfloor,$

i.e. $\varphi^n / \sqrt{5}$ rounded to the nearest integer.
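A sketch of the closed form in code (`fib_binet` is my name for it); note that 64-bit floating point limits this to roughly $n \leq 70$ before rounding error creeps in:

```python
import math

def fib_binet(n):
    """F(n) via the closed form: round(phi^n / sqrt(5)).
    Accurate only while phi^n fits comfortably in a double."""
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n / math.sqrt(5))
```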
The main bottleneck in this approach is computing the values of $\left( \frac{1 + \sqrt{5}}{2} \right)^n$ and $\left( \frac{1 - \sqrt{5}}{2} \right)^n$, which can be done in $O(\log n)$ multiplications using the same repeated-squaring technique we used to compute $A^n$.
Warning: We have made a few very strong assumptions here to say we can compute $F_n$ in $O(\log n)$. One of the core assumptions is that we can compute the product of two numbers in $O(1)$. As you can see from the closed form of the Fibonacci numbers we derived, the numbers grow exponentially in $n$. In reality, multiplying such large numbers cannot be done in $O(1)$. For instance, we assumed that we could square $\varphi^{n/2}$ to get $\varphi^n$ in $O(1)$ operations. Representing $F_n$ on a real computer takes $\Theta(n)$ bits, and multiplying it must take at least that long, showing the cost of a single multiplication is $\Omega(n)$.
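To see the bit growth concretely, here is a small sketch (the helper `fib` is mine); $F_n$ has about $n \log_2 \varphi \approx 0.694\,n$ bits:

```python
def fib(n):
    """Plain O(n) iteration, used only to materialize F(1000) exactly."""
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev

# The binary representation of F(n) grows linearly in n, so a single
# multiplication on numbers of this size cannot run in constant time.
bits = fib(1000).bit_length()  # roughly 0.694 * 1000
```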