1
Advanced Algorithms
Feodor F. Dragan, Department of Computer Science,
Kent State University
2
Textbook: Thomas Cormen, Charles Leiserson, Ronald Rivest, and Clifford Stein,
Introduction to Algorithms, McGraw-Hill Publishing Company and MIT Press,
2001 (2nd Edition).
  • Grading
  • Homework 40%
  • Midterm Exam 30%
  • Final Exam 30%

All the course slides will be available at
http://www.cs.kent.edu/dragan/CS6-76101-AdvAlg.html
3
  • Course Outline
  • We will cover the following topics (the topics and order listed are
    tentative and subject to change; some topics may only be quickly surveyed
    to add breadth, while others will be covered in reasonable depth).
  • Dynamic Programming
  • Optimal greedy algorithms
  • Amortized analysis
  • Parallel and circuit algorithms
  • Network flow algorithms
  • Randomized algorithms
  • Number theoretic cryptographic algorithms
  • String matching algorithms
  • Computational geometry algorithms
  • Algorithms for NP-hard and NP-complete problems
  • Approximation algorithms
  • Online algorithms
  • Linear programming algorithms

4
What is a computer program exactly?
Input -> Some mysterious processing -> Output
Programs = Data Structures + Algorithms
5
CHAPTER 15: Dynamic Programming
  • We begin discussion of an important algorithm
    design technique, called dynamic programming (or
    DP for short).
  • It is not a particular algorithm; it is a metatechnique.
  • Programming = a tableau method, not writing code.
  • The technique is among the most powerful for
    designing algorithms for optimization problems.
    This is true for two reasons.
  • Dynamic programming solutions are based on a few
    common elements.
  • Dynamic programming problems are typical optimization problems (find the
    minimum or maximum cost solution, subject to various constraints).
  • The technique is related to divide-and-conquer, in the sense that it
    breaks problems down into smaller problems that it solves recursively.
    However, because of the somewhat different nature of dynamic programming
    problems, standard divide-and-conquer solutions are not usually efficient.

6
The basic elements that characterize a dynamic programming algorithm are:
  • Substructure: Decompose your problem into smaller (and hopefully simpler)
    subproblems. Express the solution of the original problem in terms of
    solutions for smaller problems. (Unlike divide-and-conquer problems, it is
    not usually sufficient to consider one decomposition, but many different
    ones.)
  • Table structure: Store the answers to the subproblems in a table. This is
    done because (typically) subproblem solutions are reused many times, and
    we do not want to repeatedly solve the same problem.
  • Bottom-up computation: Combine solutions of smaller subproblems to solve
    larger subproblems, and eventually arrive at a solution to the complete
    problem. (Our text also discusses a top-down alternative, called
    memoization; a small illustrative sketch of both appears after this list.)
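As a minimal illustration of these elements (the example is not from the slides), the Python sketch below fills a table bottom-up for the Fibonacci recurrence F(n) = F(n-1) + F(n-2) and also shows the top-down memoized alternative. The same reuse of stored subproblem answers is what the LCS and chain-matrix algorithms later in the course exploit.

    def fib_bottom_up(n):
        # Table structure + bottom-up computation: each subproblem F(i)
        # is solved once, stored in table[i], and reused by larger ones.
        if n < 2:
            return n
        table = [0] * (n + 1)
        table[1] = 1
        for i in range(2, n + 1):      # solve subproblems smallest-first
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    def fib_memoized(n, memo=None):
        # Top-down alternative (memoization): recurse, but cache answers.
        if memo is None:
            memo = {}
        if n < 2:
            return n
        if n not in memo:
            memo[n] = fib_memoized(n - 1, memo) + fib_memoized(n - 2, memo)
        return memo[n]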

7
  • The most important question in designing a DP
    solution to a problem is how to set up the
    subproblem structure. This is called the
    formulation of the problem.
  • Dynamic programming is not applicable to all
    optimization problems.

There are two important elements that a problem
must have in order for DP to be applicable.
  • Optimal substructure: The optimal solution to the problem contains within
    it optimal solutions to subproblems. This is sometimes called the
    principle of optimality. It states that for the global problem to be
    solved optimally, each subproblem should be solved optimally.
  • Polynomially many subproblems: An important aspect of the efficiency of
    DP is that the total number of distinct subproblems to be solved should
    be at most a polynomial number.
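  • For example, in the longest common subsequence problem treated below, a
    subproblem is a pair of prefixes of the two input strings, so for strings
    of lengths m and n there are only (m+1)(n+1) distinct subproblems, a
    polynomial number, even though a naive recursion that stores nothing can
    regenerate the same pair of prefixes exponentially many times.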

8
Strings
  • One important area of algorithm design is the
    study of algorithms for character strings.
  • There are a number of important problems here. Among the most important
    is efficiently searching for a substring or, more generally, a pattern in
    a large piece of text. (This is what text editors do when you perform a
    search.)
  • In many instances you do not want to find a piece of text exactly, but
    rather something that is similar. This arises, for example, in genetics
    research. Genetic codes are stored as long DNA molecules. A DNA strand
    consists of a string of molecules of four basic types: C, G, T, A. Exact
    matches rarely occur in biology because of small changes in DNA
    replication. For this reason, it is of interest to compute similarities
    between strings that do not match exactly.
  • One common method of measuring the degree of
    similarity between two strings is to compute
    their longest common subsequence.

9
Longest Common Subsequence
10
Dynamic Programming Approach
11
Dynamic Programming Approach (cont.)
12
Implementing the Rule
13
Extracting the Actual Sequence
14
Time and Space Bounds and an Example
  • The running time of the algorithm LCS is clearly O(mn), since there are
    two nested loops with m and n iterations, respectively. The running time
    of the algorithm getLCS, which extracts the actual sequence by following
    back pointers, is O(m + n). Both algorithms use O(mn) space.

[Figure: LCS length table with back pointers for X = BACDB and Y = BDCB; the longest common subsequence is BCB.]
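Since the slides implementing the rule are images not preserved in this transcript, here is a minimal Python sketch of the standard LCS table-filling rule and the back-pointer extraction; the names lcs_length and get_lcs are illustrative, and the extraction below walks the length table c itself rather than a separate pointer array.

    def lcs_length(X, Y):
        # c[i][j] = length of an LCS of the prefixes X[:i] and Y[:j],
        # filled bottom-up in O(mn) time and space.
        m, n = len(X), len(Y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                else:
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])
        return c

    def get_lcs(X, Y, c):
        # Walk back from c[m][n] to recover one LCS in O(m + n) time.
        i, j = len(X), len(Y)
        out = []
        while i > 0 and j > 0:
            if X[i - 1] == Y[j - 1]:
                out.append(X[i - 1])
                i, j = i - 1, j - 1
            elif c[i - 1][j] >= c[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return "".join(reversed(out))

    c = lcs_length("BACDB", "BDCB")
    print(get_lcs("BACDB", "BDCB", c))   # prints BCB, as in the example above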
READ Ch. 15.4 in CLRS.
15
Chain Matrix Multiplication
  • This problem involves the question of
    determining the optimal sequence for performing a
    series of operations.
  • This general class of problems is important in compiler design for code
    optimization and in databases for query optimization.
  • We will study the problem in a very restricted
    instance, where the dynamic programming issues
    are easiest to see.
  • Suppose that we wish to multiply a series of matrices A1 A2 ... An.
  • Matrix multiplication is an associative but not
    a commutative operation. This means that we are
    free to parenthesize the above multiplication
    however we like, but we are not free to rearrange
    the order of the matrices.
  • Also recall that when two (non-square) matrices are being multiplied,
    there are restrictions on the dimensions. A p x q matrix has p rows and
    q columns. You can multiply a p x q matrix A times a q x r matrix B, and
    the result will be a p x r matrix C. The number of columns of A must
    equal the number of rows of B.
  • In particular, for 1 <= i <= p and 1 <= j <= r, C[i, j] is the sum over
    k = 1, ..., q of A[i, k] * B[k, j].
  • There are p * r total entries in C and each takes O(q) time to compute,
    thus the total time (e.g. number of multiplications) to multiply these
    two matrices is p * q * r.
[Figure: a p x q matrix A times a q x r matrix B yields a p x r matrix C.]
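As a concrete illustration of why the parenthesization matters (the dimensions here are illustrative, not taken from the slides): for matrices A1 of size 10 x 100, A2 of size 100 x 5, and A3 of size 5 x 50, computing (A1 A2) A3 costs 10*100*5 + 10*5*50 = 5000 + 2500 = 7500 scalar multiplications, while computing A1 (A2 A3) costs 100*5*50 + 10*100*50 = 25000 + 50000 = 75000, ten times as many.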
16
Problem Formulation
17
Naive Algorithm
18
Dynamic Programming Approach
19
Dynamic Programming Formulation
20
Implementing the Rule
21
Extracting Optimum Sequence
22
Chain Matrix Multiplication Example
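The example slide itself is an image that is not preserved in this transcript. Below is a minimal Python sketch of the standard bottom-up chain-matrix dynamic program; the names matrix_chain_order, m, s, and the dimension list are illustrative, with matrix Ai having dimensions p[i-1] x p[i].

    def matrix_chain_order(p):
        # m[i][j] = minimum number of scalar multiplications to compute Ai ... Aj;
        # s[i][j] = the split point k achieving that minimum.
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]
        s = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):              # chain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                m[i][j] = float("inf")
                for k in range(i, j):               # try every split Ai..Ak | A(k+1)..Aj
                    cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if cost < m[i][j]:
                        m[i][j], s[i][j] = cost, k
        return m, s

    def print_optimal_parens(s, i, j):
        # Recover the optimal parenthesization from the split table s.
        if i == j:
            return "A" + str(i)
        k = s[i][j]
        return "(" + print_optimal_parens(s, i, k) + " " + print_optimal_parens(s, k + 1, j) + ")"

    m, s = matrix_chain_order([10, 100, 5, 50])     # illustrative dimensions
    print(m[1][3], print_optimal_parens(s, 1, 3))   # 7500 ((A1 A2) A3)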
23
Recursive Implementation
24
Recursive Implementation (cont.)
25
Memoization
26
Memoization (cont.)
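The memoization slides are likewise images; as a hedged sketch of the top-down alternative, the same recurrence can be written recursively with a cache so that each (i, j) subproblem is still solved only once (matrix Ai again assumed to have dimensions p[i-1] x p[i]).

    import functools

    def memoized_matrix_chain(p):
        n = len(p) - 1

        @functools.lru_cache(maxsize=None)
        def lookup(i, j):
            # Answer for the subchain Ai ... Aj, cached after the first computation.
            if i == j:
                return 0
            return min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))

        return lookup(1, n)

    print(memoized_matrix_chain([10, 100, 5, 50]))   # 7500, matching the bottom-up version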
27
Polygons and Triangulations
Polygon
28
Triangulations (cont.)
29
Minimum-Weight Convex Polygon Triangulation
30
Correspondence to Binary Trees
31
Correspondence to Binary Trees (cont.)
32
Dynamic Programming Solution
33
Dynamic Programming Solution (cont.)
  • This subdivides the polygon into two subpolygons, on vertices
    v_(i-1), ..., v_k and v_k, ..., v_j, whose minimum weights are already
    known to us as t[i, k] and t[k+1, j].
  • In addition we should consider the weight of the newly added triangle,
    the one with vertices v_(i-1), v_k, and v_j.
  • Thus, we have the following recursive rule: t[i, j] = 0 if i = j, and
    for i < j, t[i, j] = min over i <= k < j of ( t[i, k] + t[k+1, j] +
    weight of the triangle v_(i-1) v_k v_j ).
  • Note that this has exactly the same structure as the recursive definition
    used in the chain-matrix multiplication algorithms.
  • The same algorithm can be applied with only minor changes; a sketch of
    this adaptation follows the reading note below.


READ Ch. 15.3 in CLRS and again all slides about
triangulations (this part was taken from the
first edition of the book (Ch. 16.4), you will
not find this in the second edition).
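A minimal Python sketch of how the chain-matrix table-filling adapts to minimum-weight triangulation, assuming the weight of a triangle is its perimeter (one common choice; the slides' actual weight function is not preserved in this transcript). The names min_weight_triangulation and tri_weight are illustrative; t[i][j] holds the minimum triangulation weight of the subpolygon v_(i-1), ..., v_j.

    import math

    def min_weight_triangulation(vertices):
        # vertices = [v0, v1, ..., vn] as (x, y) points of a convex polygon.
        n = len(vertices) - 1

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def tri_weight(a, b, c):
            # Assumed weight function: the triangle's perimeter.
            return dist(a, b) + dist(b, c) + dist(c, a)

        t = [[0.0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):              # same table order as chain-matrix
            for i in range(1, n - length + 2):
                j = i + length - 1
                t[i][j] = min(
                    t[i][k] + t[k + 1][j] +
                    tri_weight(vertices[i - 1], vertices[k], vertices[j])
                    for k in range(i, j))
        return t[1][n]

    # Example: a unit square split by one diagonal into two triangles,
    # total weight 4 + 2*sqrt(2)
    print(min_weight_triangulation([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]))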
Homework 1: See the class web-page.