Title: Algorithms
1. Algorithms
2. Dynamic Programming
- General approach: combine solutions to subproblems to get the solution to a problem
- Unlike Divide and Conquer in that
- subproblems are dependent rather than independent
- bottom-up approach
- save values of subproblems in a table and use them more than once
3. Usual Approach
- Start with the smallest, simplest subproblems
- Combine appropriate subproblem solutions to get a solution to the bigger problem
- If you have a Divide and Conquer algorithm that does a lot of duplicate computation, you can often find a DP solution that is more efficient
4. A Simple Example
- Calculating binomial coefficients
- n choose k is the number of different combinations of n things taken k at a time
- These are also the coefficients of the binomial expansion (x + y)^n
5. Two Definitions
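The two definitions referred to here are presumably the closed (factorial) form and the recursive form used on the next slide:

  \binom{n}{k} = \frac{n!}{k!\,(n-k)!}
  \qquad
  \binom{n}{k} =
  \begin{cases}
    1 & \text{if } k = 0 \text{ or } k = n \\
    \binom{n-1}{k-1} + \binom{n-1}{k} & \text{if } 0 < k < n
  \end{cases}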
6. Algorithm for Recursive Definition
function C(n, k)
   if k = 0 or k = n then
      return 1
   else
      return C(n-1, k-1) + C(n-1, k)
7. C(5,3)
[Recursion tree: C(5,3) calls C(4,2) and C(4,3); these in turn call C(3,1), C(3,2), C(3,2), and C(3,3). Note that C(3,2) already appears twice.]
8. Complexity of the Divide and Conquer Algorithm
- Time complexity is Ω(n!)
- But we did a lot of duplicate computation
- Dynamic programming approach
- Store solutions to sub-problems in a table
- Bottom-up approach
9. The DP Table
[Table of C(n, k) values: rows indexed by n = 0, 1, 2, 3, 4, 5 and columns by k = 0, 1, 2, 3, filled in bottom-up.]
10. Analysis of DP Version
- Time complexity
- O(nk)
- Storage requirements
- full table: O(nk)
- better solution: keep only the previous row of the table, O(k)
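A minimal Python sketch of the bottom-up computation just described, keeping only one row of the table (the function name binomial is a choice made here, not taken from the slides):

import math  # not required; shown only to keep the sketch self-contained

def binomial(n, k):
    # Bottom-up computation of C(n, k), keeping only one row of the table
    # (the "better solution" for storage: O(k) instead of O(nk)).
    if k < 0 or k > n:
        return 0
    row = [1] + [0] * k              # row for n = 0: C(0, 0) = 1
    for i in range(1, n + 1):
        # Walk right-to-left so each C(i-1, j-1) is still available.
        for j in range(min(i, k), 0, -1):
            row[j] = row[j] + row[j - 1]
    return row[k]

# e.g. binomial(5, 3) == 10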
11. Typical Problem
- Dynamic Programming is often used for optimization problems that satisfy the principle of optimality
- Principle of optimality
- In an optimal sequence of decisions or choices, each subsequence must be optimal
12. Optimization Problems
- Problem has many possible solutions
- Each solution has a value
- Goal is to find the solution with the optimal value
13. Steps in DP Solutions to Optimization Problems
- 1. Characterize the structure of an optimal solution
- 2. Recursively define the value of an optimal solution
- 3. Compute the value of an optimal solution in a bottom-up manner
- 4. Construct an optimal solution from computed information
14. Matrix Chain Multiplication
- A1 × A2 × A3 × · · · × An
- Matrix multiplication is associative
- All ways that a sequence can be parenthesized give the same answer
- But some are much less expensive to compute
15. Matrix Chain Multiplication Problem
- Given a chain <A1, A2, . . ., An> of n matrices, where for i = 1, 2, . . ., n matrix Ai has dimension p_{i-1} × p_i, fully parenthesize the product A1 × A2 × · · · × An in a way that minimizes the number of scalar multiplications
16. Example
- Matrix Dimensions
- A: 13 × 5
- B: 5 × 89
- C: 89 × 3
- D: 3 × 34
- M = A · B · C · D is 13 × 34
17. Parenthesization costs
  Parenthesization    Scalar multiplications
  1. ((A B) C) D              10,582
  2. (A B) (C D)              54,201
  3. (A (B C)) D               2,856
  4. A ((B C) D)               4,055
  5. A (B (C D))              26,418
Cost breakdown for parenthesization 1:
- 13 × 5 × 89 to get (A B), a 13 × 89 result
- 13 × 89 × 3 to get ((A B) C), a 13 × 3 result
- 13 × 3 × 34 to get (((A B) C) D), a 13 × 34 result
18. Number of Parenthesizations
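Presumably this slide gave the standard count: if P(n) is the number of ways to fully parenthesize a chain of n matrices, then

  P(n) =
  \begin{cases}
    1 & \text{if } n = 1 \\
    \sum_{k=1}^{n-1} P(k)\,P(n-k) & \text{if } n \ge 2
  \end{cases}

These are the Catalan numbers, which grow exponentially in n; the next slide tabulates the same quantity as T(n).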
19. T(n): Ways to Parenthesize
  n      1  2  3  4   5     10         15
  T(n)   1  1  2  5  14  4,862  2,674,440
20. Steps in DP Solutions to Optimization Problems
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up manner
4. Construct an optimal solution from computed information
21. Step 1
- Show that the principle of optimality applies
- An optimal solution to the problem contains within it optimal solutions to sub-problems
- Let Ai..j be the optimal way to parenthesize Ai Ai+1 . . . Aj
- Suppose the optimal solution has the first split at position k: A1..k · Ak+1..n
- Each of these sub-problems must be optimally parenthesized
22. Step 2
- Define the value of the optimal solution recursively in terms of optimal solutions to sub-problems
- Consider the problem Ai..j
- Let m[i,j] be the minimum number of scalar multiplications needed to compute matrix Ai..j
- The cost of the cheapest way to compute A1..n is m[1,n]
23. Step 2 continued
- Define m[i,j]
- If i = j, the chain has one matrix and the cost is m[i,i] = 0 for all i ≤ n
- If i < j
- Assume the optimal split is at position k, where i ≤ k < j
- m[i,j] = m[i,k] + m[k+1,j] + p_{i-1} p_k p_j
- where m[i,k] and m[k+1,j] are the costs to compute Ai..k and Ak+1..j, and p_{i-1} p_k p_j is the cost to multiply the two results together
24. Step 2 continued
- Problem: we don't know what the value of k is, but there are only j - i possibilities
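Combining the two cases and minimizing over the j - i candidate split points gives the recurrence implied above:

  m[i,j] =
  \begin{cases}
    0 & \text{if } i = j \\
    \min_{i \le k < j} \{\, m[i,k] + m[k+1,j] + p_{i-1} p_k p_j \,\} & \text{if } i < j
  \end{cases}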
25. Step 3
- Compute the optimal cost by using a bottom-up approach
- Example: 4-matrix chain
- Matrix Dimensions
- A1: 100 × 1
- A2: 1 × 100
- A3: 100 × 1
- A4: 1 × 100
- p0 p1 p2 p3 p4 = 100, 1, 100, 1, 100
26. The m and s Tables
[Figure: the tables m and s for the 4-matrix example, indexed by i, j = 1, 2, 3, 4, with p0 p1 p2 p3 p4 = 100, 1, 100, 1, 100.]
27. MATRIX-CHAIN-ORDER(p)
   n ← length[p] - 1
   for i ← 1 to n
       do m[i,i] ← 0
   for l ← 2 to n
       do for i ← 1 to n - l + 1
              do j ← i + l - 1
                 m[i,j] ← ∞
                 for k ← i to j - 1
                     do q ← m[i,k] + m[k+1,j] + p_{i-1} p_k p_j
                        if q < m[i,j]
                           then m[i,j] ← q
                                s[i,j] ← k
   return m and s
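A runnable Python sketch of the same bottom-up computation (the name matrix_chain_order and the 1-based table layout with an unused row/column 0 are choices made for this sketch):

import math

def matrix_chain_order(p):
    # p[0..n] are the dimensions: matrix Ai is p[i-1] x p[i].
    n = len(p) - 1
    # m[i][j] = min scalar multiplications for Ai..Aj; s[i][j] = optimal split.
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):           # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):            # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

# Example from the slides: m, s = matrix_chain_order([100, 1, 100, 1, 100])
# m[1][4] == 10200 for this chain.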
28. Time and Space Complexity
- Time complexity
- Triply nested loops
- O(n³)
- Space complexity
- n × n two-dimensional table
- O(n²)
29. Step 4: Constructing the Optimal Solution
- MATRIX-CHAIN-ORDER determines the optimal number of scalar multiplications
- Does not directly compute the product
- Step 4 of the dynamic programming paradigm is to construct an optimal solution from computed information
- The s matrix contains the optimal split for every level
30. MATRIX-CHAIN-MULTIPLY(A, s, i, j)
   if j > i
      then X ← MATRIX-CHAIN-MULTIPLY(A, s, i, s[i,j])
           Y ← MATRIX-CHAIN-MULTIPLY(A, s, s[i,j] + 1, j)
           return MATRIX-MULTIPLY(X, Y)
      else return Ai
Initial call: MATRIX-CHAIN-MULTIPLY(A, s, 1, n) where A = <A1, A2, . . ., An>
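Rather than multiplying actual matrices, the short Python sketch below just builds the parenthesization string; it follows the same recursion on s as MATRIX-CHAIN-MULTIPLY (optimal_parens is a name chosen for illustration):

def optimal_parens(s, i, j):
    # Build the optimal parenthesization of Ai..Aj as a string,
    # using the split table s from matrix_chain_order above.
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

# With m, s = matrix_chain_order([100, 1, 100, 1, 100]):
# optimal_parens(s, 1, 4) -> "(A1((A2A3)A4))"  (ties broken toward the smaller k)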
31. Algorithms
- Dynamic Programming Continued
32. Comments on Dynamic Programming
- Use when the problem can be reduced to several overlapping sub-problems
- All possible sub-problems are computed
- Computation is done by maintaining a large matrix
- Usually has large space requirements
- Running times are usually at least quadratic
33. Memoization
- Variation on dynamic programming
- Idea: memoize the natural but inefficient recursive algorithm
- As each sub-problem is solved, store its value in a table
- Initialize the table with values that indicate whether a value has been computed yet
34. MEMOIZED-MATRIX-CHAIN(p)
   n ← length[p] - 1
   for i ← 1 to n
       do for j ← i to n
              do m[i,j] ← ∞
   return LOOKUP-CHAIN(p, 1, n)
35. LOOKUP-CHAIN(p, i, j)
   if m[i,j] < ∞
      then return m[i,j]
   if i = j
      then m[i,j] ← 0
      else for k ← i to j - 1
               do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
                  if q < m[i,j]
                     then m[i,j] ← q
   return m[i,j]
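A Python sketch of the same memoized, top-down computation; using a dictionary as the table (rather than a table initialized to ∞) is a choice of this sketch, and the names are illustrative:

def memoized_matrix_chain(p):
    # Top-down computation with memoization; matrix Ai is p[i-1] x p[i].
    n = len(p) - 1
    memo = {}                      # table of solved sub-problems

    def lookup_chain(i, j):
        if (i, j) in memo:         # value already computed
            return memo[(i, j)]
        if i == j:
            q = 0                  # a single matrix needs no multiplications
        else:
            q = min(lookup_chain(i, k) + lookup_chain(k + 1, j)
                    + p[i - 1] * p[k] * p[j]
                    for k in range(i, j))
        memo[(i, j)] = q
        return q

    return lookup_chain(1, n)

# memoized_matrix_chain([100, 1, 100, 1, 100]) == 10200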
36. Space and Time Requirements of Memoized Version
- Running time: O(n³)
- Storage: O(n²)
37. Longest Common Subsequence
- Definition 1: Subsequence
- Given a sequence X = <x1, x2, . . ., xm>, another sequence Z = <z1, z2, . . ., zk> is a subsequence of X if there exists a strictly increasing sequence <i1, i2, . . ., ik> of indices of X such that for all j = 1, 2, . . ., k we have x_{i_j} = z_j
38. Example
- X = <A, B, D, F, M, Q>
- Z = <B, F, M>
- Z is a subsequence of X with index sequence <2, 4, 5>
39. More Definitions
- Definition 2: Common subsequence
- Given two sequences X and Y, we say Z is a common subsequence of X and Y if Z is a subsequence of X and a subsequence of Y
- Definition 3: Longest common subsequence problem
- Given X = <x1, x2, . . ., xm> and Y = <y1, y2, . . ., yn>, find a maximum-length common subsequence of X and Y
40. Example
- X = <A, B, C, B, D, A, B>
- Y = <B, D, C, A, B, A>
41. Brute Force Algorithm
   for every subsequence of X
       check whether it is also a subsequence of Y
       if yes, is it longer than the longest common subsequence found so far?
Complexity? There are 2^m subsequences of X, and checking each one against Y takes O(n) time, so the brute-force approach is exponential.
42. Yet More Definitions
- Definition 4: Prefix of a sequence
- If X = <x1, x2, . . ., xm>, the ith prefix of X for i = 0, 1, . . ., m is Xi = <x1, x2, . . ., xi>
- Example
- if X = <A, B, C, D, E, F, H, I, J, L> then
- X4 = <A, B, C, D> and X0 = <>
43. Optimal Substructure
- Theorem 16.1: Optimal Substructure of an LCS
- Let X = <x1, x2, . . ., xm> and Y = <y1, y2, . . ., yn> be sequences and let Z = <z1, z2, . . ., zk> be any LCS of X and Y
- 1. if xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1
- 2. if xm ≠ yn and zk ≠ xm, then Z is an LCS of Xm-1 and Y
- 3. if xm ≠ yn and zk ≠ yn, then Z is an LCS of Xm and Yn-1
44. Sub-problem Structure
- Case 1
- if xm = yn, then there is one sub-problem to solve
- find an LCS of Xm-1 and Yn-1 and append xm
- Case 2
- if xm ≠ yn, then there are two sub-problems
- find an LCS of Xm and Yn-1
- find an LCS of Xm-1 and Yn
- pick the longer of the two
45. Cost of Optimal Solution
- Cost is the length of the common subsequence
- We want to pick the longest one
- Let c[i,j] be the length of an LCS of the prefixes Xi and Yj
- Base case: one of the prefixes is empty; then c[i,j] = 0 because the LCS is empty
46. The Recurrence
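Presumably the recurrence shown here was the standard one implied by the case analysis on the previous slides:

  c[i,j] =
  \begin{cases}
    0 & \text{if } i = 0 \text{ or } j = 0 \\
    c[i-1,j-1] + 1 & \text{if } i, j > 0 \text{ and } x_i = y_j \\
    \max(c[i-1,j],\, c[i,j-1]) & \text{if } i, j > 0 \text{ and } x_i \ne y_j
  \end{cases}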
47. Dynamic Programming Solution
- Table c[0..m, 0..n] stores the length of an LCS of Xi and Yj in c[i,j]
- Table b[1..m, 1..n] stores pointers to optimal sub-problem solutions
48. Matrix c
[Figure: the table c for X = <A, B, D, F, M, Q> (rows i = 0..6) and Y = <C, B, R, F, T, S, Q> (columns j = 0..7).]
49. Matrix b
[Figure: the table b for the same X and Y (rows i = 1..6, columns j = 1..7).]
50. LCS-LENGTH(X, Y)
   m ← length[X]
   n ← length[Y]
   for i ← 1 to m
       do c[i,0] ← 0
   for j ← 0 to n
       do c[0,j] ← 0
   for i ← 1 to m
       do for j ← 1 to n
              do if xi = yj
                    then c[i,j] ← c[i-1,j-1] + 1
                         b[i,j] ← "↖"
                    else if c[i-1,j] ≥ c[i,j-1]
                            then c[i,j] ← c[i-1,j]
                                 b[i,j] ← "↑"
                            else c[i,j] ← c[i,j-1]
                                 b[i,j] ← "←"
   return c and b
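A compact Python sketch of the same computation; it fills the c table bottom-up and then walks back through c to recover one LCS, so it plays the role of both LCS-LENGTH and PRINT-LCS without a separate b table (the name lcs is a choice made here):

def lcs(X, Y):
    # c[i][j] = length of an LCS of X[:i] and Y[:j]; row/column 0 are the base cases.
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Walk back through the table, mirroring the arrows stored in b.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

# Example from slide 40: lcs("ABCBDAB", "BDCABA") has length 4; this walk yields "BCBA".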
51. PRINT-LCS(b, X, i, j)
   if i = 0 or j = 0
      then return
   if b[i,j] = "↖"
      then PRINT-LCS(b, X, i-1, j-1)
           print xi
   else if b[i,j] = "↑"
      then PRINT-LCS(b, X, i-1, j)
   else PRINT-LCS(b, X, i, j-1)
52. Code Complexity
- Time complexity: O(mn)
- Space complexity: O(mn) for the c and b tables
- Improved space complexity? Only two rows of c are needed if just the length of the LCS is required
53. Optimal Polygon Triangulation
- Turns out to be very similar to matrix chain multiplication
- Mapping one problem to another kind of problem for which you already have a good solution is a useful concept
54. Lots of Definitions
- A polygon is a piecewise-linear, closed curve in the plane
- The pieces are called sides
- A point joining two consecutive sides is called a vertex
- The set of points in the plane enclosed by a simple polygon forms the interior of the polygon
- The set of points on the polygon itself forms its boundary
- The set of points surrounding the polygon forms its exterior
55. Convex Polygons
- A simple polygon is convex if, given any two points on its boundary or in its interior, all points on the line segment drawn between them are contained in the polygon's boundary or interior
56. Labeling a Convex Polygon
[Figure: a convex polygon with vertices labeled v0, v1, . . ., v5 in order around the boundary.]
P = <v0, v1, . . ., vn-1>
57. Chords
[Figure: two copies of the polygon v0 . . . v5, each showing a different set of chords.]
58. Triangulation
- A triangulation of a polygon is a set T of chords of the polygon that divide the polygon into disjoint triangles
- No chords intersect
- The set of chords T is maximal: every chord not in T intersects some chord in T
- Every triangulation of an n-vertex convex polygon
- has n-3 chords
- divides the polygon into n-2 triangles
59. Optimal (Polygon) Triangulation Problem
- Given a convex polygon P = <v0, v1, . . ., vn-1> and a weight function w defined on triangles formed by sides and chords of P
- Find a triangulation that minimizes the sum of the weights of the triangles in the triangulation
- One possible weight function is the sum of the lengths of the triangle's sides
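For instance, the weight function mentioned in the last bullet can be written as the perimeter of each triangle:

  w(\triangle v_i v_j v_k) = |v_i v_j| + |v_j v_k| + |v_k v_i|

where |v_i v_j| denotes the Euclidean distance between v_i and v_j.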
60. Correspondence of Parenthesization
- We will show how to view both parenthesization and triangulation as parse trees and show the correspondence between the two views
- The result will be that a slight modification of the matrix chain problem can be used to solve the triangulation problem
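Concretely, the "slight modification" is presumably the following: let t[i,j] be the weight of an optimal triangulation of the sub-polygon v_{i-1}, v_i, . . ., v_j; then

  t[i,j] =
  \begin{cases}
    0 & \text{if } i = j \\
    \min_{i \le k < j} \{\, t[i,k] + t[k+1,j] + w(\triangle v_{i-1} v_k v_j) \,\} & \text{if } i < j
  \end{cases}

which is the matrix chain recurrence with the multiplication cost p_{i-1} p_k p_j replaced by a triangle weight.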
61. ((A1 (A2 A3)) (A4 (A5 A6)))
[Figure: parse tree of this parenthesization, with internal nodes A1 (A2 A3) and A4 (A5 A6) and leaves A1, A2, A3, A4, A5, A6.]
62. [Figure: a convex polygon with seven vertices v0, v1, . . ., v6.]
63. ((A1 (A2 A3)) (A4 (A5 A6)))
[Figure: the same seven-vertex polygon v0 . . . v6, with the triangulation corresponding to this parenthesization.]
64. A1 A2 A3 A4 A5 A6
- 6 matrices, 7 dimensions: p0 p1 p2 p3 p4 p5 p6
- 6 sides, 7 vertices: v0 v1 v2 v3 v4 v5 v6
[Figure: the seven-vertex polygon, illustrating the correspondence between matrix dimensions and polygon vertices.]