# Dynamic Programming

### DP Intro

• Use a table to store solutions to subproblems
• Use solutions to subproblems to solve larger problems

• Example: three solutions to Fibonacci: F(n) = F(n-1) + F(n-2), with F(0) = F(1) = 1
1. Recursive
2. Memoized
3. DP
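The three approaches can be sketched as follows. All use the convention from the recurrence above, F(0) = F(1) = 1; the memoized version stores results in a dict, and the DP version fills a table bottom-up.

```python
def fib_recursive(n):
    # Plain recursion: recomputes the same subproblems many times,
    # giving exponential running time.
    if n <= 1:
        return 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_memoized(n, memo=None):
    # Top-down: store each result the first time it is computed,
    # so every subproblem is solved at most once.
    if memo is None:
        memo = {}
    if n <= 1:
        return 1
    if n not in memo:
        memo[n] = fib_memoized(n - 1, memo) + fib_memoized(n - 2, memo)
    return memo[n]

def fib_dp(n):
    # Bottom-up DP: fill a table from the smallest subproblems upward.
    table = [1] * (n + 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

All three compute the same values; only their running times differ.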

• Now, extend the idea to optimization problems

### Overview

• Dynamic programming (DP) can be used to solve certain optimization problems

• We will look at DP solutions to several problems: some in detail, some briefly, and some we will skip for now

• We will look at memoized algorithms, a technique used in DP

• We also looked at a linear-time solution to the maximal subarray problem, but it is not really a DP solution

• We will characterize problems for which DP is appropriate

### Overview

• Dynamic Programming
• General algorithmic design technique: Approach can be used to solve many kinds of problems
• Frequently used for optimization problems: finding best way to do something

• Like Divide and Conquer
• General design technique
• Uses solutions to subproblems to solve larger problems
• Difference: DP subproblems typically overlap

• Typically used when brute-force solution is to enumerate all possibilities
• May not know which subproblems to solve, so we solve many or all!
• Reduce number of possibilities by:
• Finding optimal solutions to subproblems
• Avoiding non-optimal subproblems (when possible)
• Frequently yields a polynomial-time algorithm where the brute-force algorithm is exponential

### Student Outcomes

• Characterize problems that can be solved using dynamic programming
• Recognize problems that can be solved using dynamic programming
• Develop DP solutions to specified problems
• Distinguish between finding the value of a solution and constructing a solution to a problem
• Simulate execution of DP solutions to specified problems

### Memoized Algorithms

• Memoized algorithms are a technique used in DP

• Used to avoid computing recursive results more than once

• Solutions to recursive subproblems are stored in an array or table (i.e., in a memo)
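The pattern can be captured once as a decorator; this is a minimal sketch using a dict as the memo (the standard library's `functools.lru_cache` provides the same behavior ready-made).

```python
import functools

def memoize(f):
    # Cache each (arguments -> result) pair so that each recursive
    # subproblem is computed only once.
    memo = {}
    @functools.wraps(f)
    def wrapper(*args):
        if args not in memo:
            memo[args] = f(*args)
        return memo[args]
    return wrapper

@memoize
def fib(n):
    # Same recurrence as before, F(0) = F(1) = 1; the decorator
    # turns the exponential recursion into a linear-time computation.
    return 1 if n <= 1 else fib(n - 1) + fib(n - 2)
```

Without the memo, `fib(40)` would take minutes; with it, each of the 41 subproblems is solved exactly once.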

### Principle of Optimality

• To have a Dynamic Programming solution, a problem must satisfy the Principle of Optimality

• This means that an optimal solution to a problem can be broken into one or more subproblems that are solved optimally
• Example - Rod Cutting Problem: an optimal solution to $n=7$ (i.e., cuts of 2, 2, 3) can be broken into a first cut of length 2 and a subproblem of $n=5$, whose optimal solution is (2, 3).

• Key to finding a DP solution is to express the solution to a larger problem in terms of smaller problems.

• General Approach: first solve all of the subproblems, then choose the best subproblems to make up a solution to the original, larger problem
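A bottom-up sketch of the rod cutting problem illustrates this approach. The price table below is a hypothetical example (lengths 1..8); for it, a rod of length 7 is worth at most 18, achieved by the cuts (2, 2, 3) from the example above.

```python
def cut_rod(prices, n):
    """Bottom-up DP for rod cutting.

    prices[i] is the price of a piece of length i + 1.
    Returns the maximum revenue obtainable from a rod of length n.
    """
    # r[j] = best revenue for a rod of length j; r[0] = 0 (empty rod).
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        # Try every length i for the first piece, then combine with the
        # already-optimal solution r[j - i] for the remaining rod.
        r[j] = max(prices[i - 1] + r[j - i] for i in range(1, j + 1))
    return r[n]
```

Note how the Principle of Optimality appears directly in the recurrence: the best cutting of length `j` is a first piece plus an *optimal* cutting of what remains.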

### Dynamic Programming - Steps

• Express a solution mathematically

• Express a solution recursively

• Either develop a bottom-up algorithm
• Find a bottom-up algorithm to find the optimal value
• Find a bottom-up algorithm to construct the solution

• Or develop a memoized recursive algorithm
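The distinction between finding the optimal *value* and constructing the *solution* can be seen in rod cutting: recording each choice as the table is filled lets us rebuild the actual cuts afterwards. A sketch, again using a hypothetical price table where `prices[i]` is the price of a piece of length `i + 1`:

```python
def cut_rod_with_cuts(prices, n):
    # Bottom-up DP that records, for each length j, the length of the
    # first piece in some optimal cutting, so the solution itself can
    # be reconstructed (not just its value).
    r = [0] * (n + 1)       # r[j]: best revenue for a rod of length j
    first = [0] * (n + 1)   # first[j]: first-piece length in an optimal cut
    for j in range(1, n + 1):
        for i in range(1, j + 1):
            if prices[i - 1] + r[j - i] > r[j]:
                r[j] = prices[i - 1] + r[j - i]
                first[j] = i
    # Construct the solution by following the recorded choices.
    cuts = []
    remaining = n
    while remaining > 0:
        cuts.append(first[remaining])
        remaining -= first[remaining]
    return r[n], cuts
```

There may be several optimal cuttings with the same value; this sketch returns whichever one the tie-breaking in the loop happens to record.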

### Dynamic Programming - Summary

• Optimal substructure: optimal solution to a problem uses optimal solutions to related subproblems, which may be solved independently
• First find optimal solution to smallest subproblem, then use that in solution to next largest subproblem

• Use stored solutions of smaller problems in solutions to larger problems

• Cut-and-paste proof: an optimal solution to the problem must use optimal solutions to its subproblems: otherwise we could cut out the suboptimal subproblem solution and paste in a better one, yielding a better overall solution, which is a contradiction

• Usually uses overlapping subproblems
• Example: Fib(5) depends on Fib(4) and Fib(3), and Fib(4) depends on Fib(3) and Fib(2).
• In this case, Fib(3) overlaps as part of the solution of both Fib(5) and Fib(4)
• Divide and conquer: subproblems usually not overlapping

• Two approaches:
• Top down: memoize recursion
• Bottom up: find and store optimal solutions to subproblems, then use stored solutions to find optimal solution to problem
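The top-down approach applied to rod cutting, for contrast with the bottom-up table above: the recurrence is identical, but subproblem results are cached only as the recursion actually reaches them. The price-table convention (`prices[i]` is the price of a piece of length `i + 1`) is the same assumption as before.

```python
def cut_rod_memoized(prices, n, memo=None):
    # Top-down rod cutting: memoized recursion over the same
    # recurrence as the bottom-up version.
    if memo is None:
        memo = {0: 0}       # base case: an empty rod earns nothing
    if n not in memo:
        memo[n] = max(prices[i - 1] + cut_rod_memoized(prices, n - i, memo)
                      for i in range(1, n + 1))
    return memo[n]
```

Both approaches solve each subproblem once; top-down can skip subproblems the recursion never needs, while bottom-up avoids recursion overhead entirely.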