Title: Chapter 15 Dynamic Programming
Slide 1: Chapter 15 Dynamic Programming
Slide 2: Introduction
- In an optimization problem there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value.
- Dynamic programming vs. divide-and-conquer
- Both solve problems by combining the solutions to sub-problems
- The sub-problems of divide-and-conquer do not overlap; the sub-problems of dynamic programming overlap: sub-problems share sub-sub-problems
- Divide-and-conquer solves the common sub-sub-problems repeatedly; dynamic programming solves every sub-sub-problem once and stores its answer in a table
- "Programming" here refers to a tabular method (a minimal illustration follows this list)
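As a quick illustration of these last two points (this example is not in the slides; Fibonacci numbers stand in for any recurrence whose sub-problems share sub-sub-problems), compare a naive recursion against a bottom-up table:

```python
def fib_naive(n):
    # Recomputes the same sub-problems over and over: exponential time.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_table(n):
    # Solves every sub-problem once and stores its answer in a table: Theta(n) time.
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib_naive(10) == fib_table(10) == 55
```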
Slide 3: Development of a Dynamic-Programming Algorithm
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.
Slide 4: Assembly-Line Scheduling
Slide 5: Problem Definition
- 2^n possible solutions
- Transfer time between adjacent stations on the same line is 0
- t_{i,j}: time to transfer from assembly line 1 → 2 or 2 → 1 after station j
- a_{i,j}: processing time at station S_{i,j}
- e_1, e_2: time to enter assembly lines 1 and 2
- x_1, x_2: time to exit assembly lines 1 and 2
Slide 7: Step 1
- The structure of the fastest way through the factory (from the starting point)
- The fastest possible way through S_{1,1} (similar for S_{2,1}):
- Only one way → takes time e_1
- The fastest possible way through S_{1,j} for j = 2, 3, ..., n (similar for S_{2,j}):
- Via S_{1,j-1} → S_{1,j}: the fastest time through S_{1,j-1}, plus a_{1,j}
- If the fastest way through S_{1,j} goes through S_{1,j-1}, it must use a fastest way through S_{1,j-1}
- Via S_{2,j-1} → S_{1,j}: the fastest time through S_{2,j-1}, plus t_{2,j-1} + a_{1,j}
- Same argument as above
- An optimal solution contains optimal solutions to sub-problems → optimal substructure (contrast with divide-and-conquer)
Slide 9: Example (computing f_1[2] and f_2[2])
- f_1[1] = e_1 + a_{1,1} = 2 + 7 = 9; f_2[1] = e_2 + a_{2,1} = 4 + 8 = 12
- f_1[2]: via S_{1,1}: 9 + 9 = 18; via S_{2,1}: 12 + 2 + 9 = 23 → f_1[2] = 18
- f_2[2]: via S_{1,1}: 9 + 2 + 5 = 16; via S_{2,1}: 12 + 5 = 17 → f_2[2] = 16
Slide 10: Example (computing f_1[3] and f_2[3])
- f_1[2] = 18, f_2[2] = 16
- f_1[3]: via S_{1,2}: 18 + 3 = 21; via S_{2,2}: 16 + 1 + 3 = 20 → f_1[3] = 20
- f_2[3]: via S_{1,2}: 18 + 3 + 6 = 27; via S_{2,2}: 16 + 6 = 22 → f_2[3] = 22
Slide 11: Step 2
- A recursive solution
- Define the value of an optimal solution recursively in terms of the optimal solutions to sub-problems
- Sub-problem here: finding the fastest way through station j on both lines
- f_i[j]: fastest possible time to go from the starting point through S_{i,j}
- The fastest time to go all the way through the factory: f*
- f* = min(f_1[n] + x_1, f_2[n] + x_2)
- Boundary conditions
- f_1[1] = e_1 + a_{1,1}
- f_2[1] = e_2 + a_{2,1}
Slide 12: Step 2 (Cont.)
- A recursive solution (cont.)
- The fastest time to go through S_{i,j} (for j = 2, ..., n):
- f_1[j] = min(f_1[j-1] + a_{1,j}, f_2[j-1] + t_{2,j-1} + a_{1,j})
- f_2[j] = min(f_2[j-1] + a_{2,j}, f_1[j-1] + t_{1,j-1} + a_{2,j})
- l_i[j]: the line number whose station j-1 is used in a fastest way through S_{i,j} (i = 1, 2 and j = 2, 3, ..., n)
- l*: the line whose station n is used in a fastest way through the entire factory
Slide 13: Step 3
- Computing the fastest times
- You could write a divide-and-conquer recursive algorithm now...
- But its running time is Θ(2^n)
- r_i(j): the number of references made to f_i[j] in such a recursive algorithm
- r_1(n) = r_2(n) = 1
- r_1(j) = r_2(j) = r_1(j+1) + r_2(j+1) → r_i(j) = 2^(n-j)
- Observe that for j ≥ 2, f_i[j] depends only on f_1[j-1] and f_2[j-1]
- Compute f_i[j] in order of increasing station number j and store each f_i[j] in a table → Θ(n) (see the sketch below)
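A minimal bottom-up sketch in Python, assuming the inputs are given as Python lists (the function name and input layout are mine; the recurrences are exactly the ones from Step 2). It also records l_i[j] and l*, which Step 4 below uses to reconstruct the route:

```python
def fastest_way(a, t, e, x, n):
    """Bottom-up assembly-line DP: Theta(n) time.

    a[i][j-1]: processing time at station S_{i+1,j}
    t[i][j-1]: transfer time away from line i+1 after station j
    e[i], x[i]: entry/exit times for line i+1
    """
    f1 = [0] * (n + 1)
    f2 = [0] * (n + 1)
    l1 = [0] * (n + 1)  # l1[j]: line whose station j-1 is used to reach S_{1,j}
    l2 = [0] * (n + 1)

    # Boundary conditions: f_1[1] = e_1 + a_{1,1}, f_2[1] = e_2 + a_{2,1}.
    f1[1] = e[0] + a[0][0]
    f2[1] = e[1] + a[1][0]

    for j in range(2, n + 1):
        # Fastest way through S_{1,j}: stay on line 1 or transfer from line 2.
        stay, switch = f1[j - 1] + a[0][j - 1], f2[j - 1] + t[1][j - 2] + a[0][j - 1]
        f1[j], l1[j] = (stay, 1) if stay <= switch else (switch, 2)
        # Fastest way through S_{2,j}: stay on line 2 or transfer from line 1.
        stay, switch = f2[j - 1] + a[1][j - 1], f1[j - 1] + t[0][j - 2] + a[1][j - 1]
        f2[j], l2[j] = (stay, 2) if stay <= switch else (switch, 1)

    # f* = min(f_1[n] + x_1, f_2[n] + x_2), with l* recording the winning line.
    if f1[n] + x[0] <= f2[n] + x[1]:
        return f1[n] + x[0], (l1, l2), 1
    return f2[n] + x[1], (l1, l2), 2
```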
Slide 16: Step 4
- Constructing the fastest way through the factory, printed from station n down to station 1 (the usage sketch below reproduces this list):
  line 1, station 6
  line 2, station 5
  line 2, station 4
  line 1, station 3
  line 2, station 2
  line 1, station 1
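Continuing the sketch above, and assuming the instance from CLRS Figure 15.2 (whose f-values match the ones computed on slides 9 and 10), a backward walk over l_i[j] reproduces exactly this list:

```python
def print_stations(l, l_star, n):
    # Walk backwards from station n using the recorded line choices.
    l1, l2 = l
    i = l_star
    print(f"line {i}, station {n}")
    for j in range(n, 1, -1):
        i = l1[j] if i == 1 else l2[j]
        print(f"line {i}, station {j - 1}")

# Instance assumed from CLRS Figure 15.2 (consistent with slides 9-10).
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
e, x = [2, 4], [3, 2]

f_star, l, l_star = fastest_way(a, t, e, x, n=6)
print(f"f* = {f_star}")      # f* = 38
print_stations(l, l_star, 6) # line 1, station 6 ... line 1, station 1
```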
Slide 17: Matrix-Chain Multiplication
Slide 18: Overview
- Given a sequence (chain) <A_1, A_2, ..., A_n> of n matrices to be multiplied, compute the product A_1 A_2 ... A_n in a way that minimizes the number of scalar multiplications
- Matrix multiplication
- Two matrices A and B can be multiplied only if they are compatible: the number of columns of A must equal the number of rows of B
- A (p×q) times B (q×r) → C (p×r)
- The number of scalar multiplications is p·q·r
- Example: <A_1, A_2, A_3> with dimensions (10×100, 100×5, 5×50)
- ((A_1 A_2) A_3) → 10·100·5 + 10·5·50 = 5000 + 2500 = 7500
- (A_1 (A_2 A_3)) → 100·5·50 + 10·100·50 = 25000 + 50000 = 75000 (checked below)
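A throwaway check of the arithmetic above (the helper name is mine, not from the slides):

```python
def mult_cost(p, q, r):
    # Multiplying a (p x q) matrix by a (q x r) matrix takes p*q*r scalar multiplications.
    return p * q * r

# ((A1 A2) A3): A1*A2 is 10x5, then (10x5)(5x50)
print(mult_cost(10, 100, 5) + mult_cost(10, 5, 50))    # 7500
# (A1 (A2 A3)): A2*A3 is 100x50, then (10x100)(100x50)
print(mult_cost(100, 5, 50) + mult_cost(10, 100, 50))  # 75000
```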
Slide 19: Matrix-Chain Multiplication Problem
- Given a sequence (chain) <A_1, A_2, ..., A_n> of n matrices to be multiplied, where for i = 1, 2, ..., n matrix A_i has dimension p_{i-1}×p_i, fully parenthesize the product A_1 A_2 ... A_n in a way that minimizes the number of scalar multiplications
- That is, determine an order for multiplying the matrices that has the lowest cost
- Counting the number of parenthesizations: Ω(2^n)
- Impractical to check all possible parenthesizations
Slide 20: Matrix Multiplication
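This slide likely showed the standard MATRIX-MULTIPLY routine, which the transcript lost; a Python sketch of that textbook triple loop as a stand-in:

```python
def matrix_multiply(A, B):
    # Standard triple loop: for A (p x q) and B (q x r), this performs
    # exactly p*q*r scalar multiplications.
    p, q, r = len(A), len(B), len(B[0])
    assert all(len(row) == q for row in A), "incompatible dimensions"
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C
```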
Slide 21: Step 1
- The structure of an optimal parenthesization
- Notation: A_{i..j} is the matrix that results from evaluating A_i A_{i+1} ... A_j (i ≤ j)
- Any parenthesization of A_i A_{i+1} ... A_j must split the product between A_k and A_{k+1} for some integer k in the range i ≤ k < j
- Cost = cost of computing A_{i..k} + cost of computing A_{k+1..j} + cost of multiplying A_{i..k} and A_{k+1..j} together
- Suppose that an optimal parenthesization of A_i A_{i+1} ... A_j splits the product between A_k and A_{k+1}
- The parenthesization of the prefix sub-chain A_i A_{i+1} ... A_k must be an optimal parenthesization of A_i A_{i+1} ... A_k
- The parenthesization of the suffix sub-chain A_{k+1} A_{k+2} ... A_j must be an optimal parenthesization of A_{k+1} A_{k+2} ... A_j
Slide 22: Illustration of Optimal Substructure
Chain: A_1 A_2 A_3 A_4 A_5 A_6 A_7 A_8 A_9
Suppose the optimal parenthesization splits the product between A_6 and A_7, say as ((A_1 A_2)(A_3 ((A_4 A_5) A_6))) · ((A_7 A_8) A_9).
Then the minimal cost = cost of A_{1..6} + cost of A_{7..9} + p_0 p_6 p_9, so ((A_7 A_8) A_9) must be optimal for A_{7..9}, and likewise ((A_1 A_2)(A_3 ((A_4 A_5) A_6))) must be optimal for A_{1..6}.
Slide 23: Step 2
- A recursive solution
- Sub-problem: determine the minimum cost of a parenthesization of A_i A_{i+1} ... A_j (1 ≤ i ≤ j ≤ n)
- m[i, j]: the minimum number of scalar multiplications needed to compute the matrix A_{i..j}
- m[i, j] = 0 if i = j; otherwise m[i, j] = min over i ≤ k < j of (m[i, k] + m[k+1, j] + p_{i-1} p_k p_j)
- s[i, j]: a value of k such that m[i, j] = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
- We need to compute m[1, n]
- A direct recursive solution takes exponential time
- It encounters each sub-problem many times in different branches of its recursion tree → overlapping sub-problems
Slide 24: Step 3
- Computing the optimal costs
- How many sub-problems are there in total?
- One for each choice of i and j satisfying 1 ≤ i ≤ j ≤ n → Θ(n^2)
- MATRIX-CHAIN-ORDER(p)
- Input: a sequence p = <p_0, p_1, ..., p_n> (length[p] = n + 1)
- Fill in the table m in a manner that corresponds to solving the parenthesization problem on matrix chains of increasing length
- Lines 4-12 compute m[i, i+1], m[i, i+2], ... in turn (a Python sketch follows below)
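A Python sketch of MATRIX-CHAIN-ORDER under these conventions (0-indexed Python lists emulate the book's tables; the helper names are mine):

```python
import math

def matrix_chain_order(p):
    """Bottom-up matrix-chain DP. p = [p0, p1, ..., pn]; A_i is p[i-1] x p[i].

    Returns (m, s): m[i][j] = min scalar multiplications for A_i..A_j,
    s[i][j] = optimal split point k (tables are 1-indexed for readability).
    """
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length
        for i in range(1, n - length + 2):  # chain start
            j = i + length - 1              # chain end
            m[i][j] = math.inf
            for k in range(i, j):           # split between A_k and A_{k+1}
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3], s[1][3])  # 7500 2 (matches the example on slide 18)
```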
Slide 25: Running Time and Space
O(n^3) and Ω(n^3) → Θ(n^3) running time; Θ(n^2) space
Slide 27: Step 4
- Constructing an optimal solution
- Each entry s[i, j] records the value of k such that the optimal parenthesization of A_i A_{i+1} ... A_j splits the product between A_k and A_{k+1}
- A_{1..n} → A_{1..s[1,n]} · A_{s[1,n]+1..n}
- A_{1..s[1,n]} → A_{1..s[1,s[1,n]]} · A_{s[1,s[1,n]]+1..s[1,n]}
- Proceed recursively (see the sketch below)
Slide 28: Elements of Dynamic Programming
- Optimal substructure
- Overlapping subproblems
Slide 29: Optimal Substructure
- A problem exhibits optimal substructure if an optimal solution contains within it optimal solutions to subproblems
- Build an optimal solution from optimal solutions to subproblems
- Examples
- Assembly-line scheduling: the fastest way through station j of either line contains within it the fastest way through station j-1 on one line
- Matrix-chain multiplication: an optimal parenthesization of A_i A_{i+1} ... A_j that splits the product between A_k and A_{k+1} contains within it optimal solutions to the problems of parenthesizing A_i A_{i+1} ... A_k and A_{k+1} A_{k+2} ... A_j
Slide 30: Common Pattern in Discovering Optimal Substructure
- Show that a solution to the problem consists of making a choice; making the choice leaves one or more subproblems to be solved
- Suppose that for a given problem, the choice that leads to an optimal solution is available
- Given this optimal choice, determine which subproblems ensue and how to best characterize the resulting space of subproblems
- Show that the solutions to the subproblems used within the optimal solution must themselves be optimal, using a cut-and-paste argument and proof by contradiction
Slide 31: Illustration of Optimal Substructure
(Repeats the illustration from slide 22.)
Slide 32: Characterize the Space of Subproblems
- Rule of thumb: keep the space of subproblems as small as possible, and then expand it as necessary
- Assembly-line scheduling: S_{1,j} and S_{2,j} are enough
- Matrix-chain multiplication: how about subproblems of the form A_1 A_2 ... A_j?
- A split produces A_1 A_2 ... A_k and A_{k+1} A_{k+2} ... A_j → we need to vary at both ends
- Therefore, the subproblems should have the form A_i A_{i+1} ... A_j
Slide 33: Characteristics of Optimal Substructure
- How many subproblems are used in an optimal solution to the original problem?
- Assembly-line scheduling: 1 (S_{1,j-1} or S_{2,j-1})
- Matrix-chain multiplication: 2 (A_i ... A_k and A_{k+1} ... A_j)
- How many choices do we have in determining which subproblems to use in an optimal solution?
- Assembly-line scheduling: 2 (S_{1,j-1} or S_{2,j-1})
- Matrix-chain multiplication: j - i (choices for k)
- Informally, the running time of a dynamic-programming algorithm is the number of subproblems overall times the number of choices examined per subproblem
- Assembly-line scheduling: Θ(n) subproblems × 2 choices = Θ(n)
- Matrix-chain multiplication: Θ(n^2) subproblems × O(n) choices = O(n^3)
Slide 34: Dynamic Programming vs. Greedy Algorithms
- Dynamic programming uses optimal substructure in a bottom-up fashion
- First find optimal solutions to subproblems; having solved the subproblems, find an optimal solution to the problem
- Greedy algorithms use optimal substructure in a top-down fashion
- First make a choice (the choice that looks best at the time) and then solve the resulting subproblem
Slide 35: Subtleties (Need Experience)
- Sometimes optimal substructure does not hold
- Consider the following two problems, in which we are given a directed graph G = (V, E) and vertices u, v ∈ V
- Unweighted shortest path: find a path from u to v consisting of the fewest edges; such a path must be simple (no cycles)
- Optimal substructure? YES
- We can find a shortest path from u to v by considering all intermediate vertices w, finding a shortest path from u to w and a shortest path from w to v, and choosing an intermediate vertex w that yields the overall shortest path
- Unweighted longest simple path: find a simple path from u to v consisting of the most edges
- Optimal substructure? NO. Why?
Slide 36: Unweighted Shortest Path
(Figure: a directed graph on vertices A, B, C, D, E, F, G, H, I; the edges did not survive the transcript.)
A→B→E→G→H is optimal from A to H. Therefore A→B→E must be optimal from A to E, and G→H must be optimal from G to H.
Slide 37: No Optimal Substructure in Unweighted Longest Simple Path
Unweighted longest simple path is NP-complete: it is unlikely that it can be solved in polynomial time.
Sometimes we cannot assemble a legal solution to the problem from solutions to subproblems: q→s→t→r and r→q→s→t are longest simple paths for their endpoints, but splicing them yields q→s→t→r→q→s→t, which is not simple.
Slide 38: Independent Subproblems
- In dynamic programming, the solution to one subproblem must not affect the solution to another subproblem
- The subproblems in finding the longest simple path are not independent
- Example: decompose a longest simple path q→t into subproblems q→r and r→t
- If the solution to the first subproblem is q→s→t→r, we can no longer use s and t in the second subproblem... sigh!
Slide 39: Overlapping Subproblems
- The space of subproblems must be "small" in the sense that a recursive algorithm for the problem solves the same subproblems over and over, rather than always generating new subproblems
- Typically, the total number of distinct subproblems is polynomial in the input size
- A problem for which divide-and-conquer is suitable usually generates brand-new subproblems at each step of the recursion
- Dynamic-programming algorithms take advantage of overlapping subproblems by solving each subproblem once and then storing the solution in a table where it can be looked up when needed, using constant time per lookup
Slide 40: m[3, 4] is computed twice
Slide 41: Comparison
Slide 42: Recursive Procedure for Matrix-Chain Multiplication
- The time to compute m[1, n] by direct recursion is at least exponential in n
- Prove T(n) = Ω(2^n) using the substitution method
- Show that T(n) ≥ 2^(n-1) (the recurrence is sketched below)
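For reference, the recurrence the substitution method is applied to (from the chapter's analysis of RECURSIVE-MATRIX-CHAIN; not transcribed on the slide):

T(1) ≥ 1
T(n) ≥ 1 + Σ_{k=1}^{n-1} (T(k) + T(n-k) + 1) = n + 2 Σ_{i=1}^{n-1} T(i)

Guess T(i) ≥ 2^(i-1) and substitute:
T(n) ≥ n + 2 Σ_{i=1}^{n-1} 2^(i-1) = n + 2 (2^(n-1) - 1) = n + 2^n - 2 ≥ 2^(n-1)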
Slide 43: Reconstructing an Optimal Solution
- As a practical matter, we often store which choice we made in each subproblem in a table, so that we do not have to reconstruct this information from the stored costs
- Self study: the cost of reconstructing optimal solutions for assembly-line scheduling and matrix-chain multiplication without l_i[j] and s[i, j] (page 347)
Slide 44: Memoization
- A variation of dynamic programming that often offers the efficiency of the usual dynamic-programming approach while maintaining a top-down strategy
- Memoize the natural, but inefficient, recursive algorithm
- Maintain a table of subproblem solutions, but the control structure for filling in the table is more like that of the recursive algorithm
- Memoization for matrix-chain multiplication
- Calls in which m[i, j] = ∞: Θ(n^2) calls
- Calls in which m[i, j] < ∞: O(n^3) calls
- Turns an Ω(2^n)-time algorithm into an O(n^3)-time algorithm
Slide 46: LOOKUP-CHAIN(p, i, j)
- if m[i, j] < ∞
-     then return m[i, j]
- if i = j
-     then m[i, j] ← 0
-     else for k ← i to j - 1
-         do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
-             if q < m[i, j]
-                 then m[i, j] ← q
- return m[i, j]
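A Python rendering of this memoized scheme (the MEMOIZED-MATRIX-CHAIN wrapper that initializes the table is included; variable names follow the pseudocode, the list scaffolding is mine):

```python
import math

def memoized_matrix_chain(p):
    # Initialize every m[i][j] to infinity, then solve top-down.
    n = len(p) - 1
    m = [[math.inf] * (n + 1) for _ in range(n + 1)]
    return lookup_chain(m, p, 1, n)

def lookup_chain(m, p, i, j):
    # Return a cached answer if this subproblem was already solved.
    if m[i][j] < math.inf:
        return m[i][j]
    if i == j:
        m[i][j] = 0
    else:
        for k in range(i, j):  # split between A_k and A_{k+1}
            q = (lookup_chain(m, p, i, k) + lookup_chain(m, p, k + 1, j)
                 + p[i - 1] * p[k] * p[j])
            if q < m[i][j]:
                m[i][j] = q
    return m[i][j]

print(memoized_matrix_chain([10, 100, 5, 50]))  # 7500
```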
Slide 48: Dynamic Programming vs. Memoization
- If all subproblems must be solved at least once, a bottom-up dynamic-programming algorithm usually outperforms a top-down memoized algorithm by a constant factor
- No overhead for recursion and less overhead for maintaining the table
- For some problems, the regular pattern of table accesses in the dynamic-programming algorithm can be exploited to reduce the time or space requirements even further
- If some subproblems in the subproblem space need not be solved at all, the memoized solution has the advantage of solving only those subproblems that are definitely required
Slide 49: Self-Study
- Two more dynamic-programming problems
- Section 15.4: Longest common subsequence
- Section 15.5: Optimal binary search trees