Title: Design and Analysis of Computer Algorithms, Lecture 6-1
1 Design and Analysis of Computer Algorithms, Lecture 6-1
- Pradondet Nilagupta
- Department of Computer Engineering
These lecture notes have been adapted from notes by Prof. Somchai Prasitjutrakul and Prof. Dimitris Papadias.
2 Dynamic Programming
3 Dynamic Programming
- An algorithm design method that can be used when the solution to a problem may be viewed as the result of a sequence of decisions.
- A dynamic programming algorithm stores results, or solutions, for small subproblems and looks them up, rather than recomputing them, when it later needs them to solve larger subproblems.
- Typically applied to optimization problems.
4 Topics Covered
- Matrix-Chain Multiplication
- Longest Common Subsequence
- Optimal Binary Search Tree
5 Principle of Optimality
- An optimal sequence of decisions has the property that whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.
- Essentially, this principle states that the optimal solution for a larger subproblem contains an optimal solution for a smaller subproblem.
6 Dynamic Programming vs. Greedy Method
- Greedy Method
  - only one decision sequence is ever generated.
- Dynamic Programming
  - many decision sequences may be generated.
7 Dynamic Programming vs. Divide-and-Conquer
- Divide-and-Conquer
  - partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem.
- Dynamic Programming
  - applicable when the subproblems are not independent, that is, when subproblems share subsubproblems.
8 Dynamic Programming vs. Divide-and-Conquer (cont.)
- Dynamic programming solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.
9 Four Steps of Developing a Dynamic Programming Algorithm
- Characterize the structure of an optimal solution.
- Recursively define the value of an optimal solution.
- Compute the value of an optimal solution in a bottom-up fashion.
- Construct an optimal solution from computed information.
10 Dynamic Programming
- It is used when the solution can be recursively described in terms of solutions to subproblems (optimal substructure).
- The algorithm finds solutions to subproblems and stores them in memory for later use.
- More efficient than brute-force methods, which solve the same subproblems over and over again.
11 Example: C(n,k)
- C(n,k) = C(n-1,k-1) + C(n-1,k)   if 0 < k < n
-        = 1                       if k = 0 or k = n
-        = 0                       otherwise
- C(n,k)
  {
    if k = 0 or k = n then return 1
    if k < 0 or k > n then return 0
    return C(n-1,k-1) + C(n-1,k)
  }
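A minimal Python sketch of this recursive definition (the function name and the test call are illustrative, not from the slides); note how it re-solves the same subproblems many times, which the following slides address:

    def C(n, k):
        # Base cases of the recurrence.
        if k == 0 or k == n:
            return 1
        if k < 0 or k > n:
            return 0
        # Recursive case: the same subproblems recur many times.
        return C(n - 1, k - 1) + C(n - 1, k)

    print(C(6, 3))  # 20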
12 Top-Down Recursive
- C(n,k) = C(n-1,k-1) + C(n-1,k)   if 0 < k < n
-        = 1                       if k = 0 or k = n
-        = 0                       otherwise
(Figure: table of C(n,k) for n = 0..6, k = 0..3; the top-down recursion for C(6,3) visits entries of this table.)
13 Top-Down Recursive Tree Structure
(Figure: recursion tree for C(6,3): C(6,3) calls C(5,2) and C(5,3); C(5,2) calls C(4,1) and C(4,2); C(5,3) calls C(4,2) and C(4,3); the subtree rooted at C(4,2) appears twice, and lower-level calls such as C(3,1), C(3,0), C(2,0), C(2,1), C(1,0), C(1,1) repeat similarly.)
14 Top-Down Recursive Memoization
- C(n,k) = C(n-1,k-1) + C(n-1,k)   if 0 < k < n
-        = 1                       if k = 0 or k = n
-        = 0                       otherwise
(Figure: table of C(n,k) for n = 0..6, k = 0..3, filled by the memoized top-down calls for C(6,3); the computed entries include 1, 2, 3, 3, 4, 6, 10, 10, and finally C(6,3) = 20.)
15 Top-Down Memoization Tree Structure
(Figure: the same recursion tree for C(6,3); when C(4,2) is encountered the second time its stored value is looked up, so there is no need to recalculate it. Memoize every computed value.)
16 Pseudocode of Top-Down Memoization
- Lookup_C( n, k )
  {
    if k = 0 or k = n then return 1
    if k < 0 or k > n then return 0
    if c[n,k] < 0 then
      c[n,k] = Lookup_C(n-1,k-1) + Lookup_C(n-1,k)
    return c[n,k]
  }
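A possible Python rendering of this memoized lookup, using a dictionary in place of a table pre-filled with a negative sentinel (names are illustrative):

    def lookup_C(n, k, memo=None):
        # memo maps (n, k) to already-computed values of C(n, k).
        if memo is None:
            memo = {}
        if k == 0 or k == n:
            return 1
        if k < 0 or k > n:
            return 0
        if (n, k) not in memo:
            memo[(n, k)] = lookup_C(n - 1, k - 1, memo) + lookup_C(n - 1, k, memo)
        return memo[(n, k)]

    print(lookup_C(6, 3))  # 20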
17 Bottom-Up Dynamic Programming
- C(n,k) = C(n-1,k-1) + C(n-1,k)   if 0 < k < n
-        = 1                       if k = 0 or k = n
-        = 0                       otherwise
(Figure: the full table of C(n,k) for n = 0..6 computed bottom-up, row by row: 1; 1 1; 1 2 1; 1 3 3 1; 1 4 6 4 1; 1 5 10 10 5 1; 1 6 15 20 ...; the entry C(6,3) = 20 is read off the last row.)
18 Bottom-Up Dynamic Programming (cont.)
(Figure: order of computation for the bottom-up table: C(0,0) first, then C(1,1), then C(2,1) and C(2,2), then C(3,1) and C(3,2), and so on, so every entry is available before it is needed.)
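One way to write the bottom-up version in Python, filling the table row by row so every entry is ready before it is needed (a sketch, not from the slides):

    def binomial_bottom_up(n, k):
        # c[i][j] will hold C(i, j) for 0 <= j <= min(i, k).
        c = [[0] * (k + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            for j in range(min(i, k) + 1):
                if j == 0 or j == i:
                    c[i][j] = 1
                else:
                    c[i][j] = c[i - 1][j - 1] + c[i - 1][j]
        return c[n][k]

    print(binomial_bottom_up(6, 3))  # 20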
19 Top-Down vs. Bottom-Up
- Bottom-up dynamic programming
  - all subproblems must be solved
  - the regular pattern of table access can be exploited to reduce time or space
- Top-down memoization
  - solves only the subproblems that are definitely required
  - incurs recursion overhead
20 Dynamic Programming
- Generally used for solving optimization problems
- Two key ingredients
  - optimal substructure (principle of optimality)
  - overlapping subproblems
21 Optimal Substructure
- An optimal solution to the problem contains within it optimal solutions to subproblems.
(Figure: shortest path problem; a shortest path from S to T that passes through X contains a shortest path from S to X and a shortest path from X to T.)
22-25 Longest Simple Path Problem
(Figures: the same graph with nodes S, X, Y, T, used over four slides to illustrate the longest simple path problem; unlike shortest paths, a longest simple path from S to T through X need not contain a longest simple path from S to X, so optimal substructure fails here.)
26 Overlapping Subproblems
- The space of subproblems must be small (polynomial in the input size).
- A recursive algorithm solves the same subproblems over and over.
- Dynamic programming solves all subproblems, but solves each only once and then stores the solution in some data structure.
27 Matrix-Chain Multiplication
- Input: matrices A1, A2, ..., An, where Ai has dimensions d_{i-1} x d_i.
- Goal: determine the order of multiplication that minimizes the number of scalar multiplications.
- Assumption: multiplying a p x q matrix by a q x r matrix requires p x q x r scalar multiplications.
28 Example
- A = A1 A2 A3 A4
-     10x20  20x50  50x1  1x100
- Order 1: A1 x (A2 x (A3 x A4))
  - Cost(A3 x A4) = 50 x 1 x 100 = 5000
  - Cost(A2 x (A3 x A4)) = 20 x 50 x 100 = 100000
  - Cost(A1 x (A2 x (A3 x A4))) = 10 x 20 x 100 = 20000
  - Total cost = 125000
29 Example (cont.)
- Order 2: (A1 x (A2 x A3)) x A4
  - Cost(A2 x A3) = 20 x 50 x 1 = 1000
  - Cost(A1 x (A2 x A3)) = 10 x 20 x 1 = 200
  - Cost((A1 x (A2 x A3)) x A4) = 10 x 1 x 100 = 1000
  - Total cost = 2200
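The two totals above can be checked with a few lines of Python, under the stated assumption that multiplying a p x q matrix by a q x r matrix costs p*q*r scalar multiplications (the helper name is illustrative):

    def mult_cost(p, q, r):
        # Scalar multiplications for a (p x q) times (q x r) product.
        return p * q * r

    # Order 1: A1 x (A2 x (A3 x A4))
    order1 = mult_cost(50, 1, 100) + mult_cost(20, 50, 100) + mult_cost(10, 20, 100)
    # Order 2: (A1 x (A2 x A3)) x A4
    order2 = mult_cost(20, 50, 1) + mult_cost(10, 20, 1) + mult_cost(10, 1, 100)
    print(order1, order2)  # 125000 2200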
30 Principle of Optimality
- Principle of Optimality
  - If (A1 x (A2 x A3)) x A4 is optimal for A1 x A2 x A3 x A4,
  - then (A1 x (A2 x A3)) is optimal for A1 x A2 x A3.
- Reason
  - If there were a better solution to the subproblem, we could use it instead, contradicting the optimality of (A1 x (A2 x A3)) x A4.
31 Example
- The product A1 A2 A3 A4 can be fully parenthesized in 5 distinct ways:
  - (A1 (A2 (A3 A4)))
  - (A1 ((A2 A3) A4))
  - ((A1 A2) (A3 A4))
  - ((A1 (A2 A3)) A4)
  - (((A1 A2) A3) A4)
32 Counting the Number of Parenthesizations
- How many full parenthesizations are there for a matrix chain of length n?
- The outermost split ( (A1 ... Ak) (Ak+1 ... An) ) divides the chain into a prefix of length k and a suffix of length n - k, so
- P(n) = sum over k = 1 to n-1 of P(k) x P(n-k), with P(1) = 1.
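A short Python sketch of this counting recurrence (the function name is illustrative); P(4) = 5 matches the five parenthesizations listed on the previous slide:

    def num_parenthesizations(n):
        # p[m] = number of full parenthesizations of a chain of m matrices.
        p = [0] * (n + 1)
        p[1] = 1
        for m in range(2, n + 1):
            p[m] = sum(p[k] * p[m - k] for k in range(1, m))
        return p[n]

    print(num_parenthesizations(4))  # 5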
33 Optimal Parenthesization
- If the optimal parenthesization of A1 x A2 x ... x An is split between Ak and Ak+1, then
  - optimal parenthesization for A1 x A2 x ... x An =
    - optimal parenthesization for A1 x ... x Ak
    - + optimal parenthesization for Ak+1 x ... x An
- The only uncertainty is the value of k. Try all possible values of k; the one that returns the minimum is the right choice.
34 Define
- A1 has dimension p0 x p1
- A2 has dimension p1 x p2
- Ai has dimension p_{i-1} x p_i
- The product Ai ... Aj has dimension p_{i-1} x p_j
- Let m[i,j] be the minimum number of scalar multiplications for Ai ... Aj
- m[1,n] = the solution
35 Optimal Substructure
- ( A1 A2 A3 A4 A5 A6 )
- Split as ( A1 A2 A3 ) ( A4 A5 A6 ): the left part costs m[1,3] and has dimension p0 x p3; the right part costs m[4,6] and has dimension p3 x p6.
- Total cost = m[1,3] + m[4,6] + p0 x p3 x p6
36 Optimal Substructure
- (A1 A2 A3 A4 A5 A6): m[1,6] = ?
- ((A1) (A2 A3 A4 A5 A6)):    m[1,1] + m[2,6] + p0 x p1 x p6
- ((A1 A2) (A3 A4 A5 A6)):    m[1,2] + m[3,6] + p0 x p2 x p6
- ((A1 A2 A3) (A4 A5 A6)):    m[1,3] + m[4,6] + p0 x p3 x p6
- ((A1 A2 A3 A4) (A5 A6)):    m[1,4] + m[5,6] + p0 x p4 x p6
- ((A1 A2 A3 A4 A5) (A6)):    m[1,5] + m[6,6] + p0 x p5 x p6
37 A Recursive Formulation
- m[i,j] = 0                                                      if i = j
- m[i,j] = min over i <= k < j of { m[i,k] + m[k+1,j] + p_{i-1} x p_k x p_j }   if i < j
- m[i,k]: optimal cost for Ai x ... x Ak
- m[k+1,j]: optimal cost for Ak+1 x ... x Aj
- p_{i-1} x p_k x p_j: cost of (Ai x ... x Ak) x (Ak+1 x ... x Aj)
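A direct (and deliberately exponential-time) Python sketch of this recurrence, where p[i-1] x p[i] is the dimension of A_i; the next slides replace it with a table:

    def m_naive(p, i, j):
        # Minimum scalar multiplications for the chain Ai ... Aj.
        if i == j:
            return 0
        return min(m_naive(p, i, k) + m_naive(p, k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    p = [10, 20, 50, 1, 100]   # A1: 10x20, A2: 20x50, A3: 50x1, A4: 1x100
    print(m_naive(p, 1, 4))    # 2200, as in the worked example that follows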
38 Time Complexity of the Recursive Algorithm (Top-Down)
- The plain recursion takes exponential time, which is unacceptable.
- The cause: overlapping subproblems.
39 Example: Overlapping Subproblems
- m[1,6] --> m[1,1], m[2,6], m[1,2], m[3,6], m[1,3], m[4,6], m[1,4], m[5,6], m[1,5], m[6,6]
- m[2,6] --> m[2,2], m[3,6], m[2,3], m[4,6], m[2,4], m[5,6], m[2,5], m[6,6]
40 Solve the Problem
- How many subproblems are there?
- Or: how many m[i,j] are there? O(n^2)
- Use bottom-up dynamic programming.
41 Create the Table
- Form a 2-dimensional table m[i,j] with rows corresponding to i and columns to j (here i, j = 1..5).
- Fill the table in order of increasing values of j - i: first all m[i,j] with j - i = 0, then those with j - i = 1, then those with j - i = 2, and so on.
42 Finding m[1,6]
(Figure: 6 x 6 table of m[i,j]; the diagonal entries m[i,i] are 0, and the entries above the diagonal are filled diagonal by diagonal, in order of increasing j - i, until the solution m[1,6] in the upper-right corner is reached.)
43 Example
- A = A1 x A2 x A3 x A4
-     10x20  20x50  50x1  1x100
- Case j - i = 1:
  - m[1,2] = m[1,1] + m[2,2] + 10 x 20 x 50 = 10000
  - m[2,3] = m[2,2] + m[3,3] + 20 x 50 x 1 = 1000
  - m[3,4] = m[3,3] + m[4,4] + 50 x 1 x 100 = 5000
44 Example (cont.)
- Case j - i = 2:
  - m[1,3] = min { m[1,1] + m[2,3] + 10 x 20 x 1 = 1200,
                   m[1,2] + m[3,3] + 10 x 50 x 1 = 10500 }
           = min { 1200, 10500 } = 1200
  - m[2,4] = min { m[2,2] + m[3,4] + 20 x 50 x 100 = 105000,
                   m[2,3] + m[4,4] + 20 x 1 x 100 = 3000 }
           = min { 105000, 3000 } = 3000
45 Example (cont.)
- Case j - i = 3:
  - m[1,4] = min { m[1,1] + m[2,4] + 10 x 20 x 100 = 23000,
                   m[1,2] + m[3,4] + 10 x 50 x 100 = 65000,
                   m[1,3] + m[4,4] + 10 x 1 x 100 = 2200 }
           = min { 23000, 65000, 2200 } = 2200
46 How to Construct an Optimal Solution?
- Keep another 2-dimensional table Split[1..n][1..n] such that Split[i,j], i < j, records the value of k used in splitting Ai x ... x Aj.
- Previous example again:
  - j - i = 1: Split[1,2] = 1, Split[2,3] = 2, Split[3,4] = 3
  - j - i = 2: Split[1,3] = 1, Split[2,4] = 3
  - j - i = 3: Split[1,4] = 3
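As an illustration (not from the slides), the Split table of this worked example can be turned into the actual parenthesization with a small recursive routine:

    # split[(i, j)] = k means the outermost split of Ai ... Aj is (Ai..Ak)(Ak+1..Aj).
    split = {(1, 2): 1, (2, 3): 2, (3, 4): 3,
             (1, 3): 1, (2, 4): 3, (1, 4): 3}

    def parens(i, j):
        if i == j:
            return f"A{i}"
        k = split[(i, j)]
        return f"({parens(i, k)} {parens(k + 1, j)})"

    print(parens(1, 4))  # ((A1 (A2 A3)) A4), the optimal order found earlier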
47 Matrix-Chain Multiplication: Dynamic Programming
- Matrix-Chain-Order( p, n )
  {
    for i = 1 to n
      m[i,i] = 0
    for len = 2 to n
      for i = 1 to n - len + 1
        j = i + len - 1
        m[i,j] = infinity
        for k = i to j - 1
          q = m[i,k] + m[k+1,j] + p[i-1] * p[k] * p[j]
          if q < m[i,j] then
            m[i,j] = q
            s[i,j] = k
    return s
  }
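A Python version of Matrix-Chain-Order might look like the sketch below, where p[0..n] holds the dimensions (A_i is p[i-1] x p[i]); both the cost table m and the split table s are returned for convenience:

    import math

    def matrix_chain_order(p):
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: minimum cost for Ai..Aj
        s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: best split point k
        for length in range(2, n + 1):              # chain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                m[i][j] = math.inf
                for k in range(i, j):
                    q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j] = q
                        s[i][j] = k
        return m, s

    m, s = matrix_chain_order([10, 20, 50, 1, 100])
    print(m[1][4], s[1][4])  # 2200 3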
48 Longest Common Subsequence
49 Subsequence
- X = < s, o, m, c, h, a, i >
- Subsequences of X:
  - < s, o, m, c, h, a, i >  ->  < s, o, m >
  - < s, o, m, c, h, a, i >  ->  < c, h, a, i >
  - < s, o, m, c, h, a, i >  ->  < s, o, h, a, i >
50 Common Subsequence
- X = < s, o, m, c, h, a, i >,  Y = < c, h, u, a, n >
- Common subsequences of X and Y:
  - < c >  (appears in both < s, o, m, c, h, a, i > and < c, h, u, a, n >)
  - < c, a >  (appears in both < s, o, m, c, h, a, i > and < c, h, u, a, n >)
51 Longest Common Subsequence
- Instance: two sequences X and Y.
- Question: what is a longest common subsequence of X and Y?
- Example:
  - If X = < A, B, C, B, D, A, B > and Y = < B, D, C, A, B, A >,
  - then a longest common subsequence is either < B, C, B, A > or < B, D, A, B >.
52 What Is the LCS?
- Brute-force algorithm: for every subsequence of X, check whether it is a subsequence of Y.
- How many subsequences of X are there?
- What will be the running time of the brute-force algorithm?
53 LCS Algorithm
- If |X| = m and |Y| = n, then there are 2^m subsequences of X; we must compare each with Y (n comparisons each).
- So the running time of the brute-force algorithm is O(n 2^m).
- Notice that the LCS problem has optimal substructure: solutions of subproblems are parts of the final solution.
- Subproblems: find the LCS of pairs of prefixes of X and Y.
54 Longest Common Subsequence
- Suppose that
  - X = < x1, x2, ..., xm >
  - Y = < y1, y2, ..., yn >
- and that they have a longest common subsequence
  - Z = < z1, z2, ..., zk >
- If xm = yn, then zk = xm = yn and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
- Otherwise Z is either an LCS of X_{m-1} and Y, or an LCS of X and Y_{n-1}.
55 A Recursive Solution (1/2)
- If x_i = y_j: LCS(X_i, Y_j) = LCS(X_{i-1}, Y_{j-1}) + x_i
- When we calculate c[i,j] we consider two cases.
- First case, x_i = y_j: one more symbol in strings X and Y matches, so the length of the LCS of X_i and Y_j equals the length of the LCS of the smaller strings X_{i-1} and Y_{j-1}, plus 1.
56 A Recursive Solution (2/2)
- If x_i != y_j: LCS(X_i, Y_j) = LCS(X_{i-1}, Y_j) or LCS(X_i, Y_{j-1})
- Second case, x_i != y_j: since the symbols don't match, the solution is not improved, and the length of LCS(X_i, Y_j) is the same as before (i.e. the maximum of LCS(X_i, Y_{j-1}) and LCS(X_{i-1}, Y_j)).
57 LCS Recursive Solution
- l(i,j) = 0                              if i = 0 or j = 0
- l(i,j) = l(i-1, j-1) + 1                if i, j > 0 and x_i = y_j
- l(i,j) = max( l(i-1, j), l(i, j-1) )    if i, j > 0 and x_i != y_j
- We start with i = j = 0 (empty substrings of X and Y).
- Since X_0 and Y_0 are empty strings, their LCS is always empty (i.e. l[0,0] = 0).
- The LCS of an empty string and any other string is empty, so for every i and j, l[0,j] = l[i,0] = 0.
58 LCS Length Algorithm
- LCS-Length(X, Y)
  1. m = length(X)                 // get the number of symbols in X
  2. n = length(Y)                 // get the number of symbols in Y
  3. for i = 1 to m: l[i,0] = 0    // special case: Y_0
  4. for j = 1 to n: l[0,j] = 0    // special case: X_0
  5. for i = 1 to m                // for all X_i
  6.   for j = 1 to n              // for all Y_j
  7.     if ( X_i = Y_j )
  8.       l[i,j] = l[i-1,j-1] + 1
  9.     else l[i,j] = max( l[i-1,j], l[i,j-1] )
  10. return l
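A Python sketch of LCS-Length (returning the whole table, so the trace-back on the later slides can use it):

    def lcs_length(X, Y):
        m, n = len(X), len(Y)
        # l[i][j] = length of an LCS of the prefixes X[:i] and Y[:j].
        l = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    l[i][j] = l[i - 1][j - 1] + 1
                else:
                    l[i][j] = max(l[i - 1][j], l[i][j - 1])
        return l

    table = lcs_length("01101001", "110110")   # the sequences of the worked example
    print(table[8][6])  # 5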
59 Example
- For our worked example we will use the sequences
  - X = < 0, 1, 1, 0, 1, 0, 0, 1 >
  - and
  - Y = < 1, 1, 0, 1, 1, 0 >
- Then our initial table is empty.
60 The First Table
- First we fill in the border of the table with zeros.
61
- If x_i = y_j, then put the symbol in the square, together with the value l(i-1, j-1) + 1.
- Otherwise put the greater of the values l(i-1, j) and l(i, j-1) into the square, with the appropriate arrow.
62
- It is easy to compute the first row, starting at position (1, 1).
63 The Final Array
- After filling it in row by row, we eventually reach the final array.
64 Finding the LCS
- The LCS can be found (in reverse) by tracing the path of the arrows back from l(m, n). Each diagonal arrow encountered gives us another element of the LCS.
- LCS(8,6) = 5
- Proceeding in this way, we find that the LCS is 11010.
- Notice that if at the very final stage of the algorithm (where we had a free choice) we had chosen to make l(8, 6) point to l(8, 5), we would have found a different LCS: 11011.
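A sketch of this trace-back in Python, reusing the lcs_length table from the sketch after slide 58; ties are resolved toward l[i-1, j], which reproduces the answer 11010 (taking the other choice at the last free step gives 11011, as noted above):

    def lcs_backtrack(X, Y, l):
        i, j = len(X), len(Y)
        out = []
        while i > 0 and j > 0:
            if X[i - 1] == Y[j - 1]:
                out.append(X[i - 1])   # a "diagonal" step emits one LCS symbol
                i, j = i - 1, j - 1
            elif l[i - 1][j] >= l[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return "".join(reversed(out))

    table = lcs_length("01101001", "110110")
    print(lcs_backtrack("01101001", "110110", table))  # 11010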