Dynamic Programming is a methodology in which we break an optimization problem into sub-problems, store the solution of each sub-problem, and combine those stored solutions to solve the original problem.

It is a very useful technique for optimization problems where you need to find a minimum or maximum. Like plain recursion, it considers all the possible sub-problems; however, the solution of each subproblem is stored in a lookup table, so no result is ever recomputed. This makes the solution efficient without changing what it computes.

Let’s start with the formal requirements. You can only use DP if a problem has overlapping subproblems and an optimal substructure. Let’s talk about these two properties first.

**Optimal Substructure**

Optimal substructure is a core property of recursive solutions, and most problems that can be solved recursively have it. It simply means that you can build the optimal solution to a problem out of the optimal solutions of its subproblems.

For example, consider the shortest-path problem in a graph. If a node x lies on the shortest path from u to v, then that path is the shortest path from u to x followed by the shortest path from x to v. This property is at the core of the Bellman-Ford and Floyd-Warshall algorithms.

If you can solve a problem recursively, then it will generally have an optimal substructure.
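The shortest-path decomposition above is exactly what the Floyd-Warshall triple loop exploits. A minimal Python sketch (the 3-node adjacency matrix below is a made-up example, not from the original article):

```python
# Floyd-Warshall: the best u -> v path using only the first k nodes as
# intermediates either avoids node k, or is the best u -> k path plus
# the best k -> v path -- optimal substructure in action.
INF = float("inf")

def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):
        for u in range(n):
            for v in range(n):
                dist[u][v] = min(dist[u][v], dist[u][k] + dist[k][v])
    return dist

# Hypothetical 3-node graph as an adjacency matrix (INF = no edge).
d = floyd_warshall([
    [0,   4,   INF],
    [INF, 0,   1],
    [INF, INF, 0],
])
print(d[0][2])  # 0 -> 1 -> 2 costs 4 + 1 = 5
```

Note how dist[u][v] for smaller k values plays the role of an already-solved subproblem.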

**Overlapping subproblems**

This is the second property of any dynamic programming problem. Overlapping subproblems simply means that we are computing the same problem multiple times. In DP, we will store the solutions of subproblems in a table.

Dynamic programming is only useful when subproblems overlap: there is no point in storing solutions you will never need again. For example, let’s try to solve the Fibonacci problem using plain recursion.
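The plain recursive solution follows directly from the recurrence fib(n) = fib(n-1) + fib(n-2); a minimal Python sketch:

```python
# Plain recursive Fibonacci: recomputes the same subproblems repeatedly.
def fib(n):
    if n <= 1:          # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(4))  # 3
```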

Recursion Tree of fib(4):

    fib(4)
    ├── fib(3)
    │   ├── fib(2)
    │   │   ├── fib(1)
    │   │   └── fib(0)
    │   └── fib(1)
    └── fib(2)
        ├── fib(1)
        └── fib(0)

We can see that a lot of values get recalculated in this recursive solution. The subtree for fib(2) is computed twice. If we store the value of fib(2) in a separate table, the second call can reuse the stored value instead of computing it again.

The recursive solution to this problem is simple and neat. However, its time complexity is actually exponential. The recursion tree has height n, and the easiest way to estimate the big-O complexity is to count its nodes: there are at most 2^{n} of them. Thus, the time complexity of this solution is O(2^{n}).

We can optimize this code by using dynamic programming. The problem has a natural recursive solution, which indicates optimal substructure, and we have just seen that its subproblems overlap. Thus, the problem fulfills both criteria and we can use DP to optimize the solution.

It is very easy to identify the subproblem here: fib(n) is simply the nth Fibonacci number. The result depends on only one variable, n, so a simple array is enough to store the results.

You can check the code below. We have only added a simple array in our code for storing results.
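The original snippet is not reproduced here; a minimal Python sketch of the same idea, using an array memo in which -1 marks an entry that has not been computed yet, looks like this:

```python
# Memoized Fibonacci: each fib(i) is computed once and then reused.
def fib(n, memo=None):
    if memo is None:
        memo = [-1] * (n + 1)   # -1 means "not computed yet"
    if n <= 1:                  # base cases: fib(0) = 0, fib(1) = 1
        return n
    if memo[n] != -1:           # reuse the stored result
        return memo[n]
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))  # 12586269025 -- instant, unlike the plain recursion
```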

We have reduced the time complexity to O(n) by storing the results. This is easy to see: since each result is stored in the DP array, every value gets computed only once. This is much better than the exponential solution.
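The same O(n) bound can also be reached by filling the array bottom-up from the base cases instead of recursing; an equivalent iterative sketch:

```python
# Bottom-up Fibonacci: fill the DP table from the base cases upward.
def fib(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)      # dp[i] holds the i-th Fibonacci number
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```

This avoids recursion depth limits and makes the single pass over the table explicit.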

Basic dynamic programming can help you improve the performance of your algorithms. You should first ensure that your problem fulfills the two criteria: if your solution has optimal substructure and overlapping subproblems, then you can use DP to optimize it.

Practice Problems

- SPOJ – COINS
- SPOJ – ACODE
- CSES – Dice combinations
- Atcoder – Frog 1

Happy Coding 😊

By Programmers Army