Dynamic Programming
Dynamic programming (DP) is an essential skill in competitive programming. Nearly every ICPC contest I've participated in has had at least one problem requiring DP, and often more than one. A lot of you have probably heard of it but aren't completely sure how to use it to solve problems. I'll explain what exactly DP is and how you can implement it.
What is Dynamic Programming?
The purpose of DP is to take a recursive function whose runtime may be exponential or even factorial and make it run in polynomial time. The key is that the function is recursive: the "solution" is written in terms of smaller "solutions", those smaller "solutions" are built from even smaller ones, and this continues until reaching the base case. We generally call this breaking the problem into subproblems.
Let's take a look at the Fibonacci numbers. Let $fib(n)$ be the $n^{th}$ Fibonacci number. Its recursive definition is:
$$\begin{align} fib(1) &= 1 \\ fib(2) &= 1 \\ fib(n) &= fib(n-1) + fib(n-2) \end{align}$$
Here's the recursion tree for the call $fib(7)$:
You can see that calculating $fib(n)$ with plain recursion gets unwieldy pretty quickly; in fact, the runtime is exponential, roughly $O(2^n)$. However, if we look closely, a lot of the work is being done multiple times! For example, $fib(3)$ appears 5 times in the tree. This is where DP comes into play.
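The naive implementation mirrors the recursive definition directly. Here's a sketch in Java (the class and method names are my own):

```java
class FibNaive {
    // Direct translation of the recursive definition: exponential time,
    // because the same subproblems are recomputed over and over.
    static long fib(int n) {
        if (n <= 2) return 1;              // base cases: fib(1) = fib(2) = 1
        return fib(n - 1) + fib(n - 2);
    }
}
```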
Since $fib(n)$ always returns the same value for a given $n$, we can save the value after computing it once and reuse it as many times as we need. This is called memoization, because we are keeping a memo of results. Here's the updated code:
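A sketch in Java (the memo array and the helper overload are my own choices):

```java
class FibMemo {
    static long fib(int n) {
        return fib(n, new long[n + 1]);    // memo[k] == 0 means "not computed yet"
    }

    static long fib(int n, long[] memo) {
        if (n <= 2) return 1;              // base cases
        if (memo[n] != 0) return memo[n];  // reuse the saved result
        return memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    }
}
```

Each value is now computed at most once, so even large $n$ (up to where `long` overflows, around $fib(92)$) return instantly.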
It turns out that this change actually reduces the time to $O(n)$, which is quite a large time save!
Another thing to notice is that for calculating $fib(n)$, we are only looking at calculations for numbers smaller than $n$. We can exploit this structure to calculate $fib(n)$ in a bottom-up approach. In the bottom-up approach, we start with our base cases, and work our way up to $n$ using the recursive definition. Here's the code showing this:
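A bottom-up sketch in Java (again, the names are my own):

```java
class FibBottomUp {
    static long fib(int n) {
        long[] dp = new long[Math.max(n + 1, 3)]; // room for base cases even when n < 2
        dp[1] = 1;                                // base cases
        dp[2] = 1;
        for (int i = 3; i <= n; i++)              // work up from the base cases to n
            dp[i] = dp[i - 1] + dp[i - 2];
        return dp[n];
    }
}
```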
Now, instead of going from $n$ down to 1, we go from 1 up to $n$. Again, the runtime is $O(n)$, but we actually remove the overhead from the recursive calls. This bottom-up approach is usually what we refer to when we say DP.
How to Implement Dynamic Programming
Fibonacci numbers are too simple an example to really show the full power of DP, but they demonstrate the basic formula for any DP problem. There are 5 things you need to determine before you can implement the program:
- Recurrence relation
- Table size
- Update order
- Answer location
- Runtime
Recurrence Relation
This is the most difficult step of the whole process. The key here is to try to think of the recurrence as an English sentence. The more complex the recurrence gets, the more powerful this advice becomes.
For the arguments of the recurrence, you will always need to have a "current element", which I generally label as $i$. Other possible arguments could be some $j$, the index of another element, or flags for what state you are in. These are only a couple examples of the many different arguments there can be.
The recurrence should cover all possible cases for element $i$. Usually this means either include it/exclude it or match it with some other element.
These tips will make more sense once we get to some more examples.
Table Size
The DP table is the array that holds all the values from each recursive call. Each argument of your function is one dimension of your DP table, and the size of each dimension is the largest value of that particular argument. For Fibonacci, it's a 1D table of size $n+1$ to hold values $0...n$. Index 0 is not used at all, but keeping it is convenient so we do not have to worry about offsetting the index by one.
Update Order
Based on the recurrence relation, you can see how you need to update the table. For Fibonacci, the $n^{th}$ term depends on the $(n-1)^{th}$ and $(n-2)^{th}$ terms, so lower values of $n$ need to be computed before higher values. Hence, the update order is from 1 to $n$.
Sometimes the recurrence doesn't have a strict pattern. If this is the case, then just write the recursion with memoization.
Answer Location
Since this is based on a recursive algorithm, wherever you start the recursion is going to be the cell that contains the answer that you are trying to calculate.
Runtime
The runtime for the DP program is simply the total number of cells in the table multiplied by the amount of work done for each cell. Since Fibonacci has $O(n)$ cells and does $O(1)$ work per cell, the final runtime is $O(n)$.
Example Problems
1. Rod Cutting
Given a rod of length $n$ and a list of profits for integer lengths $c_1, c_2, ..., c_n$, cut the rod into integer lengths to maximize the total profit. For example, cutting a rod of length 5 into lengths of 2 and 3 will give you $c_2 + c_3$ profit. Constraints: $n \leq 2000$, $c_i \leq 10^9$.
The first thing to think about is to try to break this into subproblems. Now, if you had a rod of length $n$, then what happens when you cut it? It will split into 2 separate rods, each of a length shorter than $n$. We now have two smaller subproblems that are very similar to the original problem! We can write the English description of our recurrence relation as $P(n)$, the maximum profit for a rod of length $n$.
The next thing we need to do is actually define the recurrence. Since we do not know where to actually cut the rod, what should we do? The answer: just try everything! With recursion, this is extremely slow, but since we are using DP, we don't need to worry about that. So, here's the recurrence:
$$\begin{align} P(1) &= c_1 \\ P(i) &= \max\{\max_{1 \leq j \leq i-1}\{P(i-j) + P(j)\}, c_i\}\end{align}$$
One possible case is that we don't cut the rod. Then, $c_i$ would be our profit. Alternatively, we can decide to cut the rod such that $j$ is the length of one of the cut rods and $i-j$ is the length of the other side. Using our English description, the maximum profit is either the profit of length $i$ or the sum of the maximum profit of a rod of length $j$ and a rod of length $i-j$. This covers all possible cases for the recursive part. The base case is very simple. A rod of length 1 cannot be cut anymore, so the maximum profit is just $c_1$.
Something to note is that we're actually doing extra work for calculating the maximum profit for both sides of the rod. Instead, we can assume we only cut one of the sides and just take the profit of the other side. This will still cover all cases, since we still try to cut the rod in all possible locations.
$$\begin{align} P(1) &= c_1 \\ P(i) &= \max\{\max_{1 \leq j \leq i-1}\{P(i-j) + c_j\}, c_i\}\end{align}$$
Now, we need to determine the table size. Since we only have one argument, $i$, we will need a 1D table of size $n+1$ to hold values $0...n$.
The next step is to determine the update order. Notice how $P(n)$ relies on all solutions $P(1)...P(n-1)$. This means we need to have the lower values of $n$ solved before any higher values of $n$. Hence, the update order is from 1 to $n$.
Finally, we can calculate the runtime for the algorithm. There are $O(n)$ cells in total, and cell $i$ takes $O(i) = O(n)$ time to calculate. Hence, the overall runtime is $O(n^2)$.
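Putting the pieces together, a bottom-up sketch in Java (the 1-indexed profit array and the method name are my own choices):

```java
class RodCutting {
    // c[j] = profit for selling a piece of length j (c[0] is unused).
    static long maxProfit(long[] c, int n) {
        long[] dp = new long[n + 1];        // dp[i] = P(i), the max profit for length i
        for (int i = 1; i <= n; i++) {
            dp[i] = c[i];                   // case: sell the rod of length i whole
            for (int j = 1; j < i; j++)     // case: cut off a piece of length j
                dp[i] = Math.max(dp[i], dp[i - j] + c[j]);
        }
        return dp[n];                       // answer location: P(n)
    }
}
```

For example, with the illustrative profits $c = [1, 5, 8, 9, 10, 17, 17, 20]$ for lengths 1 through 8, the best total profit for a rod of length 8 is 22 (pieces of lengths 2 and 6, giving $5 + 17$).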
2. Longest Increasing Subsequence (LIS)
Given $n$ integers $a_1, a_2, ..., a_n$, find the length of the longest subsequence $a_{l_1}, a_{l_2}, ..., a_{l_k}$ such that $l_i < l_{i+1}$ and $a_{l_i} < a_{l_{i+1}}$ for all $1 \leq i < k$. Constraints: $n \leq 2000$, $a_i \leq 10^9$.
Sidenote: both subsequences and subarrays are sequences of elements of the array. However, the chosen elements of a subsequence do not need to be next to each other (i.e. there can be gaps), while a subarray consists of all elements between some indices $i$ and $j$.
Again, let's think about how to break this into subproblems. The important fact to notice is that for any element $a_i$, it will either be included in the LIS or it will not. If $a_i$ is included in the subsequence, then everything selected after it must be greater and everything selected before it must be less. Since looking both forwards and backwards can be complicated, let's always move forwards, meaning we consider the elements in increasing $i$ order. Now, when considering $a_i$, we first need to check if it actually can be part of the LIS, so we must verify that $a_i$ is greater than the previous selected element. We can do this by keeping track of the index of the previous selected element. Let this index be $j$. Now we can write the English description of our recurrence $LIS(i, j)$, which is the length of the longest increasing subsequence of $a_i, a_{i+1}, ..., a_n$ such that $a_j$ was the last selected element.
Next, we need to define the recurrence. By including $a_i$ in the LIS, the previous $j$ gets updated to $i$, and the length increases by 1. Otherwise, not including it means it doesn't really affect the LIS, and we just move to the next element.
$$\begin{align} LIS(i, j) = \begin{cases} &0 & \text{if }i = n+1 \\ &LIS(i+1, j) & \text{if }a_i \leq a_j \\ &\max\{LIS(i+1, j), 1 + LIS(i+1, i)\} &\text{if }a_i > a_j \end{cases} \end{align}$$
Our base case $LIS(n+1, j)$ is 0 because the subarray we are looking at is empty, so the LIS is empty. For the first part of the recursive case if $a_i \leq a_j$, the only choice we can make is to skip $a_i$ since it would not fit in the LIS. For the second part, we either choose $a_i$, hence updating $j$ to be $i$, or we skip over $a_i$. To write the second part in English, it is the max between the length of the LIS of $a_{i+1}, ..., a_n$ such that $a_j$ was the last selected element and the length of the LIS of $a_{i+1}, ..., a_n$ such that $a_i$ was the last selected element.
One thing to notice is that for the very first element we select, we do not have a previous element to compare it to. To get around this, we can create a dummy element $a_0$ and set it to $-\infty$. The starting recursive call will be $LIS(1, 0)$, which is the length of the LIS of $a_1, ..., a_n$ such that $a_0$ ($-\infty$) was the last selected element.
The table size here is 2D. The $i$ dimension will be size $n+2$ since we are looking at the range $0...n+1$. The $j$ dimension will be size $n+1$ since we are looking at the range $0...n$.
The update order is a bit trickier since there are 2 arguments. Looking closer at the recurrence, we can see that $LIS(i, j)$ relies on $LIS(i+1, j)$. So the value for $i+1$ must be solved before the value for $i$ is calculated. $j$ can actually be anything here, so that means we should solve $LIS(i+1,j)$ for all $j$ before solving $LIS(i, j)$ for all $j$. Hence we will have two loops, the inner loop going over $j$ from 0 to $n$, and the outer loop going over $i$ from $n+1$ down to 1.
Finally, we need to calculate the runtime. The table is $O(n)$ by $O(n)$, so there are $O(n^2)$ cells, each taking $O(1)$ time to compute. Hence, the overall runtime is $O(n^2)$.
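A bottom-up sketch in Java (the sentinel handling and the names are my own):

```java
class LIS {
    // Returns the length of the longest strictly increasing subsequence of arr.
    static int lis(long[] arr) {
        int n = arr.length;
        long[] a = new long[n + 1];
        a[0] = Long.MIN_VALUE;                 // dummy a[0] = -infinity
        System.arraycopy(arr, 0, a, 1, n);     // shift input to 1-indexed
        int[][] dp = new int[n + 2][n + 1];    // dp[i][j] = LIS(i, j); row n+1 is the 0 base case
        for (int i = n; i >= 1; i--)           // update order: i from high to low
            for (int j = 0; j <= n; j++) {
                dp[i][j] = dp[i + 1][j];       // skip a[i]
                if (a[i] > a[j])               // a[i] can follow the last selected element a[j]
                    dp[i][j] = Math.max(dp[i][j], 1 + dp[i + 1][i]);
            }
        return dp[1][0];                       // answer location: LIS(1, 0)
    }
}
```

For instance, on the input 3, 1, 4, 1, 5, 9, 2, 6 this returns 4 (one longest increasing subsequence is 1, 4, 5, 9).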
There is a speedup allowing LIS to be calculated in $O(n \log n)$ time, but that deserves its own post.
3. Pebble Solitaire
Given a 1D board configuration of '-', representing an open space, and 'o', representing a pebble, determine the minimum number of pebbles left after making an optimal sequence of valid moves. A valid move is a pebble "hopping" over a neighboring pebble into an open space, which removes the pebble that was jumped over. For example, a board looking like -oo- can turn into either o--- or ---o. Constraints: the board size is always 23.
The recursive nature of this is much easier to see compared to the previous problems. For any board configuration, we just need to check for all possible moves and try all of them. Just doing the recursion is not nearly fast enough, since some board states could be repeated, so we need to keep track of states that we've visited before. The question is, how do we actually represent the board state so that this is possible?
Since the board itself is a sequence of characters, we can just directly save that as the board state. We also want to know the minimum number of pebbles left after making some optimal sequence of moves, so we save that number along with the state. In Java, we can use a HashMap<String, Integer> to keep track of these board states.
The update order for this problem is all over the place since it depends on what the board looks like. Hence, we need to write the recursion for this and use memoization to store which states we've already visited.
One advantage of using the HashMap is that you don't need to reserve space for every possible state; instead, you only keep entries for the states actually reachable from the initial board. However, the maximum table size is still important to calculate so we have an idea of the runtime. The board always has 23 spaces, and each space is either a pebble or open, giving 2 possibilities per space. Hence there are at most $2^{23} = 8388608$ board states, which fits in memory quite nicely.
The overall runtime is proportional to the total number of states visited. Since there are at most about 8 million states (and far fewer in practice), the program will run in time.
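A memoized sketch in Java along these lines (the string-based move generation is my own):

```java
import java.util.HashMap;
import java.util.Map;

class PebbleSolitaire {
    static Map<String, Integer> memo = new HashMap<>();

    // Minimum number of pebbles remaining, starting from this board state.
    static int solve(String board) {
        Integer cached = memo.get(board);
        if (cached != null) return cached;     // state seen before: reuse the result

        int best = 0;
        for (char c : board.toCharArray())
            if (c == 'o') best++;              // no moves made: count pebbles as-is

        char[] b = board.toCharArray();
        for (int i = 0; i + 2 < b.length; i++) {
            // hop right: "oo-" -> "--o"
            if (b[i] == 'o' && b[i + 1] == 'o' && b[i + 2] == '-') {
                b[i] = '-'; b[i + 1] = '-'; b[i + 2] = 'o';
                best = Math.min(best, solve(new String(b)));
                b[i] = 'o'; b[i + 1] = 'o'; b[i + 2] = '-';   // undo the move
            }
            // hop left: "-oo" -> "o--"
            if (b[i] == '-' && b[i + 1] == 'o' && b[i + 2] == 'o') {
                b[i] = 'o'; b[i + 1] = '-'; b[i + 2] = '-';
                best = Math.min(best, solve(new String(b)));
                b[i] = '-'; b[i + 1] = 'o'; b[i + 2] = 'o';   // undo the move
            }
        }
        memo.put(board, best);
        return best;
    }
}
```

Every move removes exactly one pebble, so taking the minimum over all reachable states is safe: stopping early can never beat continuing.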
Some further improvements to the program would be to represent the board in binary, reducing the size and time by removing all the string operations. This would be needed for slightly larger board sizes such as 26-27.
Practice Problems
Easy/Medium:
Hard: