Is this all that's needed for the definition? No, also need ...
Initial Conditions: $F(0)=F(1)=1$
Recursive algorithm:
function fibonacci(n: Natural) is
    if n = 0 then
        ans := 1
    elsif n = 1 then
        ans := 1
    else
        ans1 := fibonacci(n - 1)
        ans2 := fibonacci(n - 2)
        ans := ans1 + ans2
    return ans
Notice that we have taken a recursive definition and produced a recursive algorithm
Where are the initial conditions? They are the two base cases, n = 0 and n = 1
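The recursive pseudocode above can be sketched as runnable Python (a direct transcription, using the course's initial conditions $F(0)=F(1)=1$):

```python
def fibonacci(n: int) -> int:
    if n == 0:
        return 1                  # initial condition F(0) = 1
    elif n == 1:
        return 1                  # initial condition F(1) = 1
    else:
        ans1 = fibonacci(n - 1)   # solve the two smaller subproblems
        ans2 = fibonacci(n - 2)
        return ans1 + ans2
```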
Performance: Number of calls of Fibonacci:
$C(n) = C(n - 1) + C(n - 2) + 1$
Is this all that's needed for the definition? No, also need ...
Initial Conditions: $C(0)=C(1)=1$
Performance: Number of Pluses:
$P(n) = P(n - 1) + P(n - 2) + 1$
Initial Conditions: $P(0)=P(1)=0$
Performance: Number of assignments:
$A(n) = A(n - 1) + A(n - 2) + 3$
Initial Conditions: $A(0)=0, A(1)=1$
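A quick sanity check (a sketch; the names fib, C, and P are ours) that the call-count and plus-count recurrences above match an instrumented run of the recursive algorithm:

```python
calls = 0   # counts invocations of fib
pluses = 0  # counts additions performed by fib

def fib(n):
    global calls, pluses
    calls += 1
    if n <= 1:
        return 1
    pluses += 1
    return fib(n - 1) + fib(n - 2)

def C(n):
    # C(n) = C(n-1) + C(n-2) + 1, with C(0) = C(1) = 1
    return 1 if n <= 1 else C(n - 1) + C(n - 2) + 1

def P(n):
    # P(n) = P(n-1) + P(n-2) + 1, with P(0) = P(1) = 0
    return 0 if n <= 1 else P(n - 1) + P(n - 2) + 1

fib(10)  # after this, calls == C(10) and pluses == P(10)
```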
What's the Point? And Where Are We Going?
The point: These are all related to each other:
Recursive Functions and Procedures
Recurrence Equations describing performance
Recursive definitions of functions
Where are we going: DP involves taking a recursively defined function
And implementing it ... Recursively?
No, ... with an EFFICIENT Iterative Solution!
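An efficient iterative solution for Fibonacci looks like this (a sketch): fill a table of subproblem solutions upward from the base cases, so each value is computed once. This is O(n) time, versus exponential time for the naive recursion.

```python
def fib_bottom_up(n: int) -> int:
    table = [1, 1]  # initial conditions F(0) = F(1) = 1
    for i in range(2, n + 1):
        # solution to problem i built from stored subproblem solutions
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```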
DP Intro
Dynamic Programming:
Efficient (ie non-recursive) solution to recursively defined problem
Typically used for optimization problems
Optimization means ... Find the BEST solution
DP approach: Time/Space tradeoff
Use a table to store solutions to subproblems
Use solutions to subproblems to solve larger problems
For simplicity: Start with non-optimization problem: stick with Fibonacci
Look at three kinds of solutions:
Recursive
Memoized
Dynamic Programming
Important: Three general approaches to implementing recursive problems!
Start by looking at improving Fibonacci
Improving Fibonacci
Why is Fibonacci slow?
Answer: Many subproblems are computed many times
Let's look at the tree of recursive calls (in the PPTs)
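The top-down fix for the repeated subproblems is memoization: cache each subproblem's answer so it is computed only once. A minimal sketch using Python's standard `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remembers fib_memo(k) after the first call
def fib_memo(n: int) -> int:
    if n <= 1:
        return 1  # initial conditions F(0) = F(1) = 1
    return fib_memo(n - 1) + fib_memo(n - 2)
```

With the cache, each of the n + 1 distinct subproblems is solved once, so the exponential call tree collapses to linear work.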
To have a Dynamic Programming solution, a problem must have the Principle of Optimality
This means that an optimal solution to a problem can be broken into one or more subproblems
that are solved optimally
Example - Rod Cutting Problem: an optimal solution to $n=7$, (ie 2, 2, 3)
can be broken into a first cut and a subproblem of $n=5$, whose optimal solution is (2, 3).
Key to finding a DP solution is to express the solution to a larger problem in terms of smaller
problems.
General Approach: first solve all of the subproblems, then choose the best subproblems to make
up a solution to the original, larger problem
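The rod-cutting example above can be sketched bottom-up. The price table here is an assumption for illustration (the notes give only the cuts, not prices); under these prices the optimal value for $n=7$ is achieved by cuts (2, 2, 3), and its $n=5$ subproblem by (2, 3), matching the example.

```python
# Assumed prices for rod lengths 1..7 (illustrative only)
prices = {1: 1, 2: 5, 3: 8, 4: 9, 5: 10, 6: 17, 7: 17}

def best_revenue(n: int) -> int:
    # best[k] = maximum revenue obtainable from a rod of length k;
    # solve all smaller subproblems first, then combine
    best = [0] * (n + 1)
    for k in range(1, n + 1):
        best[k] = max(prices[cut] + best[k - cut]
                      for cut in range(1, k + 1))
    return best[n]
```

Note how `best[7]` is expressed in terms of stored solutions to smaller problems: a first cut plus the optimal revenue for the remaining length.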
Dynamic Programming - Steps
Express a solution mathematically
Express a solution recursively
Then develop a bottom up algorithm:
First, a bottom up algorithm to find the optimal value
Then, extend it to construct an optimal solution
Optimal substructure: optimal solution to a problem uses optimal solutions to related
subproblems, which may be solved independently
First find optimal solutions to the smallest subproblems, then use those in the solution to the
next largest subproblem
Use stored solutions of smaller problems in solutions to larger problems
Cut-and-paste proof: an optimal solution to the problem must use optimal solutions to its
subproblems: otherwise we could remove the suboptimal subproblem solution and replace it with
a better one, yielding a better overall solution, which is a contradiction
Usually uses overlapping subproblems
Example: Fib(5) depends on Fib(4) and Fib(3), and Fib(4) depends on Fib(3) and Fib(2).
In this case, Fib(3) overlaps: it is part of the solutions of both Fib(5) and Fib(4)
Divide and conquer: subproblems usually not overlapping
Two approaches:
Top down: memoize recursion
Bottom up: find and store optimal solutions to subproblems,
then use stored solutions to find optimal solution to problem
Trades space for time: store subproblem solutions to avoid recomputing them
Often gives polynomial time rather than the exponential time of a brute force algorithm
Does not solve all optimization problems
Dynamic Programming: Name
Origin: Richard Bellman, 1957
Not related to computer programming
Programming referred to a series of choices
Dynamic: choices are made on the fly, not all at the beginning