Dynamic Programming


Like divide and conquer, the dynamic programming approach breaks a problem down into smaller and smaller sub-problems. Unlike divide and conquer, however, these sub-problems are not solved independently. Instead, the results of the smaller sub-problems are remembered (memoized) and reused for similar or overlapping sub-problems.

We employ dynamic programming when a problem can be divided into similar sub-problems whose results can be reused. These techniques are primarily applied to optimization problems. Before solving the sub-problem at hand, a dynamic algorithm first checks the results of previously solved sub-problems, and the optimal answer is obtained by combining the solutions of these sub-problems.

As a result, we can conclude that

  • The problem should be broken into smaller, overlapping sub-problems.
  • An optimal solution can be built from the optimal solutions of the smaller sub-problems.
  • Dynamic algorithms use memoization, as sketched below.
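
As an illustration of memoization, here is a minimal sketch in Python of computing Fibonacci numbers, one of the examples listed later in this section. The function name fib and the dictionary cache are illustrative choices, not part of the original text; the point is simply that each sub-problem is solved once and its stored result is reused.

    def fib(n, memo=None):
        """Return the n-th Fibonacci number, memoizing sub-problem results."""
        if memo is None:
            memo = {}
        if n in memo:                 # sub-problem already solved: reuse the stored result
            return memo[n]
        if n < 2:                     # base cases F(0) = 0 and F(1) = 1
            return n
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)   # combine smaller sub-problems
        return memo[n]

    print(fib(10))   # 55

Without the cache, the recursion would recompute the same Fibonacci values exponentially many times; with it, each value is computed exactly once.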

Comparison

Unlike greedy algorithms, which are driven by local optimization, dynamic algorithms aim for the overall optimization of the problem.

In contrast to divide and conquer algorithms, which combine sub-solutions to obtain an overall solution, dynamic algorithms use the output of a smaller sub-problem to optimize a larger sub-problem, relying on memoization to recall the results of previously solved sub-problems.
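
To make the contrast concrete, the following sketch uses the coin-change problem, a hypothetical example chosen here for illustration: with coins 1, 3, and 4, a greedy strategy pays 6 as 4 + 1 + 1 (three coins), while the dynamic programming table finds the globally optimal 3 + 3 (two coins).

    def min_coins(amount, coins=(1, 3, 4)):
        """Bottom-up dynamic programming: best[a] = fewest coins that sum to a."""
        best = [0] + [float("inf")] * amount
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a:
                    best[a] = min(best[a], best[a - c] + 1)   # reuse the smaller sub-problem best[a - c]
        return best[amount]

    print(min_coins(6))   # 2, i.e. 3 + 3; a greedy choice of the largest coin first gives 4 + 1 + 1 (3 coins)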

Examples

The dynamic programming approach can be used to solve the following computational problems:

  • Fibonacci number series
  • Knapsack problem
  • Tower of Hanoi
  • All-pairs shortest paths by Floyd-Warshall
  • Shortest path by Dijkstra
  • Project scheduling
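
As one concrete case from the list above, the following is a minimal sketch of the 0/1 knapsack problem solved bottom-up in Python; the item weights, values, and capacity are made-up illustrative data.

    def knapsack(weights, values, capacity):
        """0/1 knapsack: table[w] holds the best value achievable with capacity w."""
        table = [0] * (capacity + 1)
        for wt, val in zip(weights, values):
            # iterate capacities downwards so each item is used at most once
            for w in range(capacity, wt - 1, -1):
                table[w] = max(table[w], table[w - wt] + val)   # skip the item, or take it
        return table[capacity]

    # Illustrative data: three items, knapsack capacity 5
    print(knapsack([2, 3, 4], [3, 4, 5], 5))   # 7 (take the items of weight 2 and 3)

Each table entry is the optimal answer to a smaller sub-problem (a smaller capacity), and the final answer is assembled from these stored results rather than by re-solving them.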