In programming interviews, problem-solving skills are vital. Many product-based companies like to evaluate their applicants' basic problem-solving abilities, and optimization problems are a staple of these interviews, since they feature large and complex solution spaces.
Dynamic programming is an algorithmic paradigm that is widely used to formulate solutions to these optimization problems. Hence, it is critical to master this problem-solving approach to become a good competitive programmer. So, in this article on ‘What is Dynamic Programming’, we will explore the dynamic programming paradigm in detail.
What Is Dynamic Programming?
Dynamic programming is an algorithmic paradigm that divides a broader problem into smaller subproblems and stores their results for later use, eliminating the need for re-computation. This problem-solving approach is quite similar to the divide and conquer approach.
We solve problems in both paradigms by combining the answers to smaller subproblems. However, unlike divide and conquer, the subproblems in dynamic programming repeat themselves multiple times, which gives dynamic programming a different set of characteristic properties. A problem can be solved with the dynamic programming paradigm only if it exhibits both of the properties given below:
- Optimal Substructure: A problem has an optimal substructure if its overall optimal solution can be built from the optimal solutions of its subproblems, i.e., if we can formulate a recurrence relation for it.
- Overlapping Subproblems: A problem has overlapping subproblems if the same subproblems are solved repeatedly during the computation, so storing their results saves redundant work.
Learn more about dynamic programming and other core software development topics in our Caltech Coding Bootcamp.
Let’s understand this approach through an example.
Dynamic Programming Interpretation of Fibonacci Series
Consider the Fibonacci series: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, and so on.
The numbers in the series above are not generated randomly; there is mathematical logic behind them. Each number in this series is equal to the sum of the two numbers before it. Mathematically, it can be represented as:
Base Case: Fib(0) = 0 and Fib(1) = 1
Recursive Step: Fib(n) = Fib(n-1) + Fib(n-2)
The recursive step is also called a recurrence relation, which means the Fibonacci series satisfies the first property of dynamic programming.
Now consider the problem where you are supposed to calculate Fib(5).
To calculate Fib(5), we must first compute Fib(4) and Fib(3). Then, we'll have to run yet another series of computations for the calculation of our subproblems. The image given below depicts all of the computations required to compute Fib(5):
If you look closely at this recursion tree, you will see that some subproblems are repeated several times. The animation given below highlights the recurring subproblems using circular shapes.
With this, we can say that the Fibonacci series can be implemented using the dynamic programming paradigm since it follows both the properties of dynamic programming.
In this ‘What is Dynamic Programming’ article, we will discover how dynamic programming works in the next section.
How Does Dynamic Programming Work?
The steps given below formulate a dynamic programming solution for a given problem:
- Step 1: It breaks down the broader or complex problem into several smaller subproblems.
- Step 2: It computes a solution to each subproblem.
- Step 3: After calculating the result, it stores (memoizes) the solution to each subproblem.
- Step 4: It reuses the stored result whenever a subproblem recurs.
- Step 5: It combines the solutions of the subproblems to form the solution to the broader problem.
In this solution-building approach, we utilize memory to store the results. Hence, the space complexity increases, but in exchange the time complexity decreases significantly.
For example, consider the programs given below:
- Implementation of Fibonacci Series Using Recursion:
int Fib(int num)
{
    if (num == 0) return 0;
    if (num == 1) return 1;
    return Fib(num - 1) + Fib(num - 2);
}
In the code above, the number of computations and function calls grows rapidly with the value of num. Thus, the time complexity is exponential, that is, O(2^n).
By using the dynamic programming approach, we can reduce this complexity. Instead of expanding the recursion tree again and again, we can reuse the previously calculated results. If we follow this approach, the time complexity drops to O(n).
- Implementation of Fibonacci Series Using Dynamic Programming
int memo[50]; /* filled with -1 before the first call */
int fib(int n)
{
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];        /* reuse the stored result */
    return memo[n] = fib(n - 1) + fib(n - 2); /* compute and memoize */
}
In the above code, we employed the memoization approach by storing each result as it is computed. This strategy is also known as the top-down approach: we start from the top (the original problem) and break it into subproblems. Let’s look at this approach in more detail.
Different Approaches of Dynamic Programming
There are two approaches to formulate a dynamic programming solution:
1. Top-Down Approach
The top-down approach follows the memoization technique. It consists of two distinct events: recursion and caching. ‘Recursion’ represents the process of computation by calling functions repeatedly, whereas ‘caching’ refers to storing intermediate results.
Advantages:
- Easy to understand and implement.
- Solves a subproblem only if its solution is not already memoized.
- Debugging is easier.
Disadvantages:
- Uses recursion, which takes up more memory in the call stack, degrading overall performance.
- Possibility of a stack overflow error.
2. Bottom-Up Approach
This approach uses the tabulation technique to implement the dynamic programming solution. It addresses the same problems as before, but without recursion: the recursion is replaced with iteration. Hence, there is no stack overflow error or overhead from recursive procedures. In this method, we maintain a table (typically a one- or two-dimensional array) to solve the problem.
Key Differences: Top-Down vs Bottom-Up Approach
| Top-Down Approach | Bottom-Up Approach |
| --- | --- |
| Uses the memoization technique | Uses the tabulation technique |
| Mostly used with structured programming languages such as COBOL, Fortran, and C | Mostly used with object-oriented programming languages such as C++, C#, and Python |
| Uses decomposition to formulate a solution | Uses composition to develop a solution |
| A lookup table is maintained and checked before any subproblem is computed | The solution is built up from the base case using iteration |
In this ‘What is Dynamic Programming’ article, you learned about dynamic programming and its different implementation approaches. You also discovered how dynamic programming works with the help of an illustrative example, the Fibonacci series. Through the same example, we saw how dynamic programming reduces the time complexity of the basic naive recursive solution. Finally, through a tabular comparison of the top-down and bottom-up approaches, we examined the differences between the two dynamic programming approaches.
Every day, new apps, products, and tools are being introduced into the market, and numerous programming languages and development frameworks are being utilized in software development. Hence, it's crucial for you to go beyond basic data structure concepts and cover the foundations of interactive application development. Simplilearn's Post Graduate Program In Full Stack Web Development can prove to be the right solution for you to master the art of software development. This bootcamp program, delivered in collaboration with the world-renowned Caltech CTME, can help you build the necessary skills and increase your odds of becoming a software developer.
If you have any questions or need clarification on any section of this article on ‘What is Dynamic Programming’, please leave them in the comments section at the bottom of this page; we will respond to them soon.