Chapter 4: Basic Graph Algorithms and Computational Complexity
1
Chapter 4: Basic Graph Algorithms and Computational Complexity
Massoud Pedram, Dept. of EE, University of Southern California
2
Outline
  • Analysis of Algorithms
  • Graph Algorithms
  • Dynamic Programming
  • Mathematical Programming
  • Stochastic Search
  • Greedy Algorithms
  • Divide and Conquer
  • Floorplanning

3
Analysis of Algorithms
  • Big-O notation
  • O(f(n)) means that as n → ∞, the execution time t
    is at most c·f(n) for some constant c
  • Big-Omega notation
  • Ω(f(n)) means that as n → ∞, the execution time t
    is at least c·f(n) for some constant c
  • Theta notation
  • Θ(f(n)) means that as n → ∞, the execution time t
    is at most c1·f(n) and at least c2·f(n) for some
    constants c1 and c2, where n = input size

4
Big-O Notation
  • Express the execution time as a function of the
    input size n. Since only the growth rate matters,
    we can ignore the multiplicative constants and
    the lower-order terms, e.g.,
  • 3n^2 + 6n + 2.7 is O(n^2)
  • n^1.1 + 10000000000n is O(n^1.1)
  • n^1.1 is O(n^1.1)

5
Effect of Multiplicative Constant
6
Growth Rate of Some Functions
  • O(log n), O(log^2 n), O(n^0.5), O(n),
  • O(n log n), O(n^1.5), O(n log^2 n), O(n^2),
  • O(n^3), O(n^4)
  • O(n^log n), O(2^n), O(3^n), O(4^n), O(n^n),
  • O(n!)

Polynomial Functions
Exponential Functions
7
Exponential Functions
  • Exponential functions increase rapidly, e.g., 2^n
    doubles whenever n is increased by 1

8
NP-Complete
  • The class NP-Complete is a set of problems for
    which, we believe, no polynomial-time algorithms
    exist
  • Therefore, they are hard problems
  • If a problem is NP-complete, there is little hope
    of solving it efficiently (unless P = NP)

9
Solution Type for NP-Complete Problems
  • Exponential time algorithms
  • Special case algorithms
  • Approximation algorithms
  • Heuristic algorithms

10
Graph Algorithms
11
Basic Graph Definitions
  • A graph G(V,E) is a collection of vertices in V
    connected by a set of edges E
  • The dual of a planar graph G is obtained by
    replacing each face of G with one vertex and
    connecting two such vertices by an edge exactly if
    the two corresponding faces share a common edge
    in G

12
Types of Graph Commonly Used in VLSI CAD
  • Interval graphs
  • Permutation graphs
  • Comparability graphs
  • Vertical and horizontal constraint graphs
  • Neighborhood graphs and rectangular dualization

13
Interval Graphs
(Figure: intervals A–G and the corresponding interval graph)
V = set of intervals; E = {(vi, vj) | vi and vj intersect}
14
Permutation Graphs
(Figure: lines A–E between two ordered rows and the corresponding permutation graph)
V = set of lines; E = {(vi, vj) | vi and vj intersect}
15
Comparability Graphs
  • Orientable property: each edge can be assigned a
    one-way direction in such a way that the
    resulting directed graph G(V,F) is transitive,
    i.e., (a,b) ∈ F and (b,c) ∈ F imply (a,c) ∈ F
  • An undirected graph which is transitively
    orientable is called a comparability graph

16
Horizontal Constraint Graphs
(Figure: modules A–D in a placement and the corresponding graph)
Weighted directed graph: V = set of modules;
E = {(vi, vj) | vj is to the right of vi};
weight(vi, vj) = width of module vi
17
Vertical Constraint Graphs
(Figure: modules A–D in a placement and the corresponding graph)
Weighted directed graph: V = set of modules;
E = {(vi, vj) | vj is above vi};
weight(vi, vj) = height of module vi
18
Neighborhood Graphs
(Figure: modules A–D in a placement and the corresponding graph)
V = set of modules; E = {(vi, vj) | vi and vj share an edge}
19
Rectangular Dualization
Rectangular dualization is the inverse of the
neighborhood-graph construction: each rectangle
represents a vertex, and vi and vj share an edge
exactly if e(vi, vj) exists. Some graphs have no
rectangular duals.
20
Basic Graph Algorithms Commonly Used in VLSI CAD
  • Minimum Spanning Tree (MST) Problems
  • Steiner Minimum Tree (SMT) Problems
  • Rectilinear SMT Problems
  • Partitioning Problems
  • Network Flow Problems

21
Minimum Spanning Tree (MST)
  • Problem: Given a connected undirected weighted
    graph G(V,E), construct a minimum-weight tree T
    connecting all vertices in V
  • Note: a graph is connected if and only if there is
    at least one path between every pair of vertices

22
Minimum Spanning Tree (MST)
  • Kruskal's algorithm
  • O(m log n)
  • Prim's algorithm
  • O(m log n)
  • Can be improved to O(m + n log n) using Fibonacci
    heaps
  • A greedy algorithm
  • Note: m = no. of edges, n = no. of vertices

23
Kruskal's Algorithm
  • Sort the edges
  • T ← ∅
  • Consider the edges in ascending order of their
    weights until T contains n-1 edges
  • Add an edge e to T exactly if adding e to T does
    not create any cycle in T
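The steps above can be sketched in Python with a union-find structure for the cycle test; the 4-vertex example graph and its weights are illustrative, not from the slides.

```python
# A sketch of Kruskal's algorithm: sort edges, add each edge that joins two
# different components (i.e., creates no cycle), stop at n-1 tree edges.

def kruskal(n, edges):
    """n = no. of vertices (0..n-1); edges = list of (weight, u, v)."""
    parent = list(range(n))

    def find(x):                           # root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    T = []
    for w, u, v in sorted(edges):          # ascending order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding (u, v) creates no cycle
            parent[ru] = rv
            T.append((u, v, w))
        if len(T) == n - 1:                # T has n-1 edges: done
            break
    return T

mst = kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)])
# picks the three lightest cycle-free edges, total weight 1 + 2 + 3 = 6
```

With the edges pre-sorted, the union-find operations are nearly constant time, giving the O(m log n) bound dominated by sorting.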

24
Kruskal's Algorithm Example
25
Kruskal's Algorithm Example (Cont'd)
26
Kruskal's Algorithm Example (Cont'd)
27
Prim's Algorithm
  • T ← ∅
  • S ← {v} for an arbitrary vertex v
  • Repeat until S contains all the vertices:
  • Add the lightest edge e(v1, v2) to T where v1 ∈ S
    and v2 ∈ V-S; add v2 to S
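The loop above can be sketched with a binary heap holding the edges that leave S, which gives the O(m log n) bound; the 3-vertex adjacency list is an illustrative example, not from the slides.

```python
import heapq

# A sketch of Prim's algorithm: repeatedly take the lightest edge e(v1, v2)
# with v1 in S and v2 outside S, and move v2 into S.

def prim(adj, start=0):
    """adj: {u: [(weight, v), ...]} for an undirected graph. Returns MST weight."""
    S = {start}                            # vertices already in the tree
    heap = list(adj[start])                # candidate edges leaving S
    heapq.heapify(heap)
    total = 0
    while heap and len(S) < len(adj):
        w, v = heapq.heappop(heap)         # lightest candidate edge
        if v in S:
            continue                       # both endpoints already in S: skip
        S.add(v)
        total += w
        for edge in adj[v]:
            if edge[1] not in S:
                heapq.heappush(heap, edge)
    return total

adj = {0: [(1, 1), (4, 2)], 1: [(1, 0), (2, 2)], 2: [(4, 0), (2, 1)]}
# MST uses edges 0-1 (weight 1) and 1-2 (weight 2): total weight 3
```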

28
Prim's Algorithm Example
29
Prim's Algorithm Example (Cont'd)
30
Steiner Minimum Tree (SMT)
  • Problem: Given an undirected weighted graph
    G(V,E) and a demand set D ⊆ V, find a minimum-cost
    tree T which spans a set of vertices V' ⊆ V
    such that D ⊆ V'
  • Elements of V'-D which have a degree larger than
    2 in T are called the Steiner points

31
Steiner Minimum Tree (SMT)
  • When D = V, SMT = MST
  • When |D| = 2, SMT = single-pair shortest path
    problem
  • The SMT problem is NP-Complete

32
Rectilinear Steiner Tree (RST)
  • A Steiner tree whose edges are constrained to be
    vertical or horizontal

33
Rectilinear Steiner Minimum Tree (RSMT)
  • Similar to the SMT problem, except that the tree
    T is a minimum cost rectilinear Steiner tree
  • This problem is NP-Complete but we can get an
    approximate solution to this problem by making
    use of the MST

34
Approximate Solution to RSMT
  • Construct an MST T
  • Obtain an RST T' from T by connecting the vertices
    in T using vertical and horizontal edges

35
Approximate Solution to RSMT
  • It is proved that
  • W_MST ≤ 1.5 · W_RSMT
  • Let W be the weight of the Steiner tree obtained
    by this method, i.e., by rectilinearizing the
    MST
  • What is the smallest b such that
  • W ≤ b · W_MST ?

36
Other Graph Problems
  • Minimum Clique Covering Problem
  • Maximum Independent Set Problem
  • Minimum Coloring Problem
  • Maximum Clique Problem
  • Network Flow Problem

37
Minimum Clique Covering
  • A clique of a graph G(V,E) is a set of vertices
    V' ⊆ V such that every pair of vertices in V' is
    joined by an edge
  • The clique cover number, k, of a graph G(V,E) is
    the minimum no. of cliques needed to cover all of
    the vertices of G
  • The problem of finding the clique cover number is
    NP-complete for general graphs

38
Maximum Independent Set (MIS)
  • An independent set of a graph G(V,E) is a set of
    vertices V' ⊆ V such that no pair of vertices in
    V' is joined by an edge
  • The MIS problem is to find the independence
    (stability) number, α, of a graph, i.e., the size
    of the largest independent set in the graph
  • This problem is NP-complete for general graphs
  • What is the relationship between the clique cover
    number and the independence number? Answer: α ≤ k,
    since each clique in a cover can contain at most
    one vertex of any independent set

39
Graph Problems in Interval Graphs
  • What is the meaning of a clique cover in an
    interval graph?
  • What is the meaning of a MIS in an interval
    graph?
  • The clique covering and MIS in an interval graph
    can be found in O(n log n) time where n is the
    no. of intervals. How?
  • Answer for MIS:
  • Sort the 2n endpoints in ascending order
  • Scan the list from left to right until a right
    endpoint is encountered
  • Output the interval having this right endpoint as
    a member of the MIS and delete all intervals
    containing this point
  • Repeat until done
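The sweep above can be sketched as follows: scanning intervals in order of right endpoint, the first right endpoint seen joins the MIS and every interval containing that point is deleted. Intervals are (left, right) pairs; the data is illustrative, not from the slides.

```python
# A sketch of the greedy MIS sweep for interval graphs: O(n log n) for the
# sort, then a single linear scan.

def interval_mis(intervals):
    chosen = []
    last_right = float('-inf')
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left > last_right:              # does not contain the last output point
            chosen.append((left, right))
            last_right = right             # overlapping intervals are now skipped
    return chosen

ivs = [(1, 3), (2, 5), (4, 7), (6, 9), (8, 10)]
# (1,3), (4,7), (8,10) form a maximum independent set of size 3
```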

40
Minimum Chromatic Number
  • The chromatic number, χ, of a graph G(V,E) is the
    minimum no. of colors needed to color the
    vertices such that every pair of vertices joined
    by an edge are colored differently
  • The problem of finding the chromatic number is
    NP-complete for general graphs

41
Maximum Clique
  • The maximum clique problem is to find the clique
    number, ω, of a graph, i.e., the size of the
    largest clique in the graph
  • This problem is NP-complete for general graphs
  • What is the relationship between the chromatic
    number and the clique number? Answer: ω ≤ χ, since
    the vertices of a clique must all receive
    different colors

42
Graph Problems in Interval Graphs (Contd)
  • What is the meaning of the chromatic number and
    the clique number in an interval graph?
  • The chromatic number and the clique number of an
    interval graph can be found in O(n log n) time.
    How?
  • Answer for maximum clique (a simple sweep after
    sorting):
  • Sort the 2n endpoints of the intervals into a
    list A
  • cliq ← 0; max_cliq ← 0
  • for i = 1 to 2n do
  •   if A[i] is a left endpoint then cliq++
  •     if cliq > max_cliq then max_cliq ← cliq
  •   else cliq--
  • Return max_cliq
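The sweep above can be sketched directly: sort the 2n endpoints and track how many intervals are simultaneously open. At equal coordinates, left endpoints are processed first, so touching closed intervals count as intersecting (an assumption); the data is illustrative.

```python
# A sketch of the endpoint sweep: the maximum number of simultaneously open
# intervals is the clique number of the interval graph.

def interval_clique_number(intervals):
    events = []
    for left, right in intervals:
        events.append((left, 0))       # kind 0 = left endpoint (sorts first)
        events.append((right, 1))      # kind 1 = right endpoint
    events.sort()
    cliq = max_cliq = 0
    for _, kind in events:
        if kind == 0:                  # one more interval open
            cliq += 1
            max_cliq = max(max_cliq, cliq)
        else:                          # an interval closes
            cliq -= 1
    return max_cliq

ivs = [(1, 4), (2, 6), (3, 5), (7, 8)]
# (1,4), (2,6), (3,5) pairwise intersect, so the clique number is 3
```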

43
Perfect Graphs
  • A graph G(V,E) is called perfect if all of its
    induced sub-graphs satisfy the following two
    properties:
  • P1: the chromatic number equals the clique number
    (χ = ω)
  • P2: the independence number equals the clique
    cover number (α = k)
  • Interval graphs, permutation graphs and
    transitively orientable graphs are examples of
    perfect graphs; a cycle of odd length > 3 is not
    a perfect graph
  • The four key graph problems can be solved in
    polynomial time for the class of perfect graphs

44
Partitioning Problem
  • Problem: Given an undirected graph G(V,E),
    partition V into two sets V1 and V2 of equal
    size such that the number of edges |E1| between
    V1 and V2 is minimized
  • E1 is called the cut

45
Partitioning Problem
  • More general versions of this problem
  • Specify the sizes of V1 and V2
  • Partition V into k sets where k > 2
  • All these problems are NP-complete

46
Network Flow Problem
  • Problem: Given a weighted directed graph G(V,E),
    one vertex s ∈ V is designated as the source and
    one vertex t ∈ V as the sink. Find the maximum
    flow from s to t such that the flow along each
    edge does not exceed the capacity (weight) of the
    edge

47
Network Flow Problem
  • For example
  • What is the maximum flow in this network? (see
    next slide)
  • Network flow problem can be solved in polynomial
    time

48
Network Flow Problem
  • The maximum flow is 9
  • A well-known theorem:
  • Maximum Flow = Minimum Cut

49
Network Flow Problem
  • The minimum cut is 9
  • Notice that we only count the edges going from the
    s-side to the t-side

50
Network Flow Problem
  • Unfortunately, we cannot use this network flow
    method to solve the partitioning problem. Why?

51
Dynamic Programming
52
Dynamic Programming
  • In dynamic programming, an array A[1..n] (it can
    be of higher dimension) of data is defined and
    stored. The computation of A[k] depends on the
    values of A[j] where j < k, so we can compute
    the values one by one, i.e., A[1], then A[2],
    then A[3], ...

53
Dynamic Programming
  • For example
  • How many shortest multi-bend routes can there be
    from s to t?

54
Dynamic Programming
  • We define an array A1..8, 1..6 where Ax,y is
    the number of routes from s to (x,y)
  • How about if we only allow 1-bend?

A1,k 1 Aj,1 1 Aj,k Aj-1,k
Aj,k-1
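The recurrence above can be filled in row by row; the sketch below uses 0-based indices instead of the slides' 1-based ones.

```python
# A[j][k] is the number of shortest (monotone) routes from the corner to
# cell (j, k): one route along each border, and otherwise the sum of the
# counts from the two neighboring cells.

def count_routes(width, height):
    A = [[1] * height for _ in range(width)]   # first row/column: one route
    for j in range(1, width):
        for k in range(1, height):
            A[j][k] = A[j - 1][k] + A[j][k - 1]
    return A[width - 1][height - 1]

# For the 8 x 6 grid of the example there are C(12, 5) = 792 shortest routes
```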
55
Dynamic Programming in Maze Routing
  • Criterion: follow the shortest Manhattan distance
  • Always route towards t with minimum distance

(Figure: maze grid from s to t)
A[1,k] = 1;  A[j,1] = 1;  A[j,k] = A[j-1,k] + A[j,k-1]
56
Dynamic Programming in Maze Routing
  • Allow detours, i.e., allow routing away from t
    (routes do not follow the shortest Manhattan
    distance)

57
Mathematical Programming
  • In mathematical programming, the problem is
    expressed as an objective function and a set of
    constraints. The following are common problem
    classes that are solvable in polynomial time:
  • Linear Programming
  • Quadratic Programming
  • Geometric Programming

58
Mathematical Programming
  • For Example

59
Mathematical Programming
Objective: minimize W · H
Constraints:
W ≥ w1 + w2
W ≥ w3
H ≥ A3/w3 + A1/w1
H ≥ A3/w3 + A2/w2
(Figure: modules 1, 2, 3 in a floorplan)
60
Linear Programming (LP)
  • Both the objective and constraint functions are
    linear
  • LP can be solved optimally
  • Many solvers are available that can efficiently
    solve problems with a large number of variables
  • http://www-fp.mcs.anl.gov/otc/Guide/SoftwareGuide/Categories/linearprog.html

61
Integer Linear Programming (ILP)
  • ILP is a variant of LP in which all of the
    variables are integers
  • ILP is NP-complete
  • An LP that contains both integer and real
    variables is called mixed integer linear
    programming

62
Stochastic Search
  • Local Search
  • Tabu Search
  • Genetic Algorithm
  • Simulated Annealing

63
Local Search
  • Local search is a simple search algorithm that
    always moves downhill in the solution space
  • The notion of a neighborhood N(f) of a feasible
    solution f is used. Given f, N(f) is a set of
    feasible solutions that are close to f based on
    some criterion

64
Local Search
  • Algorithm:
  • Initialize a feasible solution f
  • Do
  •   G ← {g | g ∈ N(f) and cost(g) < cost(f)}
  •   If G ≠ ∅, assign f to be one element of G
  • while G ≠ ∅
  • Return f
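The loop above can be sketched on a toy 1-D integer problem: the cost has a single minimum at x = 3 and the neighborhood of x is {x-1, x+1}. Both functions are illustrative assumptions, not from the slides.

```python
# A minimal steepest-descent local search sketch.

def cost(x):
    return (x - 3) ** 2

def neighborhood(x):
    return [x - 1, x + 1]

def local_search(f):
    while True:
        G = [g for g in neighborhood(f) if cost(g) < cost(f)]
        if not G:
            return f                 # no cheaper neighbor: local minimum
        f = min(G, key=cost)         # steepest descent: cheapest element of G
```

Taking `min(G, key=cost)` makes this the steepest-descent variant; taking `G[0]` instead would give first improvement.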

65
Local Search
  • It is called first improvement if we always
    assign f the first element found with a lower
    cost in the do-while loop
  • It is called steepest descent if we always assign
    f the element with the lowest cost in the
    do-while loop
  • The biggest problem with local search is getting
    stuck in a local minimum

66
Tabu Search
  • Tabu search allows uphill moves
  • Given a neighborhood set N(f) of a feasible
    solution f, the principle of tabu search is to
    move to the cheapest element g ∈ N(f) even when
    cost(g) > cost(f)
  • A tabu list containing the k last visited
    solutions is maintained to avoid circular search
    patterns of length ≤ k

67
Tabu Search
  • Algorithm:
  • Initialize a feasible solution f
  • best ← f, Q ← an empty queue
  • Do
  •   G ← {g | g ∈ N(f) and g ∉ Q}
  •   If G ≠ ∅
  •     Assign f to be the cheapest element in G
  •     If |Q| < k, enqueue(Q, f); else dequeue(Q),
      enqueue(Q, f)
  •     If cost(best) > cost(f), best ← f
  • while (G ≠ ∅) and not stop()
  • Return best
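The loop above can be sketched on the same kind of toy 1-D problem (minimum at x = 3). Q is the tabu list, a bounded queue of the k last visited solutions, and a fixed iteration budget stands in for stop(); all constants are illustrative assumptions.

```python
from collections import deque

# A minimal tabu-search sketch: always move to the cheapest non-tabu
# neighbor, even uphill, and remember the best solution seen so far.

def tabu_search(f, k=5, iterations=30):
    cost = lambda x: (x - 3) ** 2
    Q = deque([f], maxlen=k)             # appending to a full deque drops oldest
    best = f
    for _ in range(iterations):
        G = [g for g in (f - 1, f + 1) if g not in Q]
        if not G:
            break
        f = min(G, key=cost)             # cheapest neighbor, even if uphill
        Q.append(f)
        if cost(f) < cost(best):
            best = f
    return best
```

Because recently visited points are tabu, the search is pushed out of the minimum and keeps exploring, but `best` retains the cheapest solution encountered.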

68
Genetic Algorithms (GA)
  • Instead of keeping just one current solution, GA
    keeps track of a set P of feasible solutions,
    called the population
  • In each iteration, the current population Pi is
    replaced by the next one, Pi+1
  • To generate a feasible solution h ∈ Pi+1, two
    solutions f and g are selected from Pi and h is
    generated such that it inherits properties from
    both parents f and g. This is called crossover

69
Genetic Algorithms (GA)
  • A lower cost solution in a population will have a
    higher chance of being selected as the parent
  • Mutation may occur to avoid getting stuck in a
    local minimum

70
Genetic Algorithms (GA)
  • Algorithm:
  • Initialize P with k feasible solutions
  • Do
  •   newP ← ∅
  •   For (i = 1; i ≤ k; i++)
  •     Select two low-cost solutions f and g from P
  •     h ← crossover(f, g)
  •     Add h to newP
  •   P ← newP
  • while not stop()
  • Return the best solution in P
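The loop above can be sketched on a toy problem where solutions are bit lists and the cost is the number of zero bits (so the optimum is all ones). The population size, tournament-style parent selection, single-point crossover, and mutation rate are all illustrative assumptions, not from the slides.

```python
import random

# A toy genetic-algorithm sketch: each generation builds a new population of
# children produced by crossover of two low-cost parents, with occasional
# mutation to avoid getting stuck in a local minimum.

def ga(bits=12, pop_size=8, generations=40, seed=1):
    rng = random.Random(seed)
    cost = lambda s: bits - sum(s)                     # zero bits still missing
    P = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):                       # stands in for stop()
        newP = []
        for _ in range(pop_size):
            # lower-cost solutions are more likely to be picked as parents
            f, g = sorted(rng.sample(P, 3), key=cost)[:2]
            cut = rng.randrange(1, bits)
            h = f[:cut] + g[cut:]                      # crossover
            if rng.random() < 0.2:                     # mutation
                i = rng.randrange(bits)
                h[i] ^= 1
            newP.append(h)
        P = newP
    return min(P, key=cost)
```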

71
Markov Chain
  • Let G(V,E) be a directed graph with
    V = {S1, S2, ..., Sn}
  • Let C: V → Z be a vertex-weighting function
  • We write Ci for C(Si)
  • Let P: E → [0,1] be an edge-weighting function
    (notation: Pij = P(Si, Sj)) such that
  • Σ_{Sj ∈ N(Si)} Pij = 1 for all Si ∈ V
  • N(Si) is the set of next states of Si
  • We call (G,P) a (finite) time-homogeneous Markov
    chain
  • Pij is called the conditional probability of
    edge (Si, Sj)
  • Assume Pij > 0; if Pij = 0, simply delete
    edge (Si, Sj) from E

72
Irreducible Markov Chain
  • A Markov chain (G,P) is irreducible if G is
    strongly connected
  • P = [pij] is the matrix whose (i,j) entry is pij
  • Stationary distribution:
  • πi is the probability of being in state i
  • A state distribution π of a Markov chain (G,P) is
    stationary if π = πP
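A stationary distribution (π = πP) can be sketched numerically by repeated vector-matrix multiplication from the uniform distribution; the 2-state transition matrix below is an illustrative example, not from the slides.

```python
# A sketch of power iteration for the stationary distribution of a small
# irreducible Markov chain.

def stationary(P, steps=200):
    n = len(P)
    pi = [1.0 / n] * n                 # start from the uniform distribution
    for _ in range(steps):
        # one step of pi <- pi P
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.9, 0.1],      # row i holds the conditional probabilities Pij, summing to 1
     [0.5, 0.5]]
pi = stationary(P)    # converges to (5/6, 1/6) for this chain
```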

73
Markov Chain Example
74
Simulated Annealing (SA)
  • Simulated annealing is a powerful technique for
    providing high-quality solutions to some difficult
    combinatorial problems
  • It keeps a variable temperature (T) which
    determines the behavior of the annealing process.
    T is initialized to a very large value at the
    beginning and is gradually decreased (cooled
    down)

75
Simulated Annealing
  • Algorithm:
  • Initialize T and a feasible solution f
  • While (T ≥ a threshold)
  •   Make a slight modification to f to get g
  •   Check whether g is better than f, i.e., is
      cost(g) ≤ cost(f)?
  •   If yes, accept g, i.e., f ← g; else, compute
      p = e^(-k(cost(g)-cost(f))/T) where k is a
      positive constant, and then accept g with
      probability p
  •   Update T
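The loop above can be sketched on a toy 1-D cost function with its minimum at x = 3; the cost function, starting temperature, geometric cooling schedule, and constants are illustrative assumptions, not from the slides.

```python
import math
import random

# A minimal simulated-annealing sketch: always accept improving moves, accept
# worsening moves with probability exp(-k * delta / T), and cool T each step.

def simulated_annealing(f=8, T=10.0, threshold=1e-3, k=1.0, seed=7):
    rng = random.Random(seed)
    cost = lambda x: (x - 3) ** 2
    best = f
    while T >= threshold:
        g = f + rng.choice((-1, 1))                  # slight modification of f
        delta = cost(g) - cost(f)
        if delta <= 0:
            f = g                                    # better: always accept
        elif rng.random() < math.exp(-k * delta / T):
            f = g                                    # worse: accept with prob. p
        if cost(f) < cost(best):
            best = f
        T *= 0.95                                    # update (cool down) T
    return best
```

Early on, when T is large, uphill moves are often accepted and the search roams; as T shrinks, the acceptance probability collapses and the behavior approaches pure local search.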

76
Basic Ingredients of Simulated Annealing
  • Solution Space
  • Neighboring Structure
  • Cost Function
  • Annealing Schedule
  • Moves are selected randomly, and the probability
    that an uphill move is accepted increases with
    the system's current temperature

77
Simulated Annealing
(Figure: a floorplanning result for 500 modules
obtained by simulated annealing)
78
Floorplanning
79
Hierarchical Design
  • Several blocks after partitioning
  • Need to
  • Put the blocks together.
  • Design each block.
  • Which step to go first?

80
Hierarchical Design
  • How to put the blocks together without knowing
    their shapes and the positions of the I/O pins?
  • If we design the blocks first, those blocks may
    not be able to form a tight packing

81
Floorplanning
  • The floorplanning problem is to plan the
    positions and shapes of the modules at the
    beginning of the design cycle to optimize the
    circuit performance
  • chip area
  • total wire length
  • delay of critical path
  • routability
  • others, e.g., noise, heat dissipation, etc.

82
Floorplanning vs. Placement
  • Both determine block positions to optimize the
    circuit performance.
  • Floorplanning
  • Details like shapes of blocks, I/O pin positions,
    etc. are not yet fixed (blocks with flexible
    shape are called soft blocks)
  • Placement
  • Details like module shapes and I/O pin positions
    are fixed (blocks with no flexibility in shape
    are called hard blocks)

83
Floorplanning Problem
  • Input
  • n Blocks with areas A1, ... , An
  • Bounds ri and si on the aspect ratio of block Bi
  • Output
  • Coordinates (xi, yi), width wi and height hi for
    each block such that hi · wi = Ai and
    ri ≤ hi/wi ≤ si
  • Objective
  • To optimize the circuit performance

84
Bounds on Aspect Ratios
  • If there is no bound on the aspect ratios, we can
    surely pack very tightly
  • However, we don't want to lay out blocks as long
    strips, so we require ri ≤ hi/wi ≤ si for each i

85
Bounds on Aspect Ratios
  • We can also allow several shapes for each block
  • For hard blocks, the orientations can be changed

86
Objective Function
  • A commonly used objective function is a weighted
    sum of area and wire length:
  • cost = a·A + b·L
  • where A is the total area of the packing, L is
    the total wire length, and a and b are constants

87
Wire length Estimation
  • Exact wire length of each net is not known until
    routing is done
  • In floorplanning, even pin positions are not
    known yet
  • Some possible wire length estimations
  • Center-to-center estimation
  • Half-perimeter estimation
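The half-perimeter estimate can be sketched as follows: for one net, take half the perimeter of the smallest box enclosing all its pins. Block centers stand in for the unknown pin positions (an assumption, since pin positions are not fixed at floorplanning time); the coordinates are illustrative.

```python
# A sketch of the half-perimeter wire length (HPWL) estimate for one net.

def hpwl(positions):
    """positions: list of (x, y) points of the blocks on one net."""
    xs = [x for x, _ in positions]
    ys = [y for _, y in positions]
    # width plus height of the bounding box = half its perimeter
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# Bounding box of (0,0), (3,1), (2,4) is 3 wide and 4 tall: HPWL = 7
```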

88
Dead space
  • Dead space is the space that is wasted
  • Minimizing area is the same as minimizing dead
    space
  • The dead space percentage is computed as
  • (A - Σi Ai) / A × 100%
89
Mixed Integer Linear Program
  • A mathematical program such that
  • The objective is a linear function
  • All constraints are linear functions
  • Some variables are real numbers and some are
    integers, i.e., mixed integer
  • It is almost like a linear program, except that
    some variables are integers

90
Problem Formulation
  • Minimize the packing area
  • Assume that one dimension W is fixed
  • Minimize the other dimension Y
  • Need to have constraints
  • so that blocks do not overlap
  • Associate each block Bi with 4 variables
  • xi and yi coordinates of its lower left corner
  • wi and hi width and height

W
Y
91
Non-overlapping Constraints
  • For two non-overlapping blocks Bi and Bj, at
    least one of the following four linear
    constraints must be satisfied

hi
hj
Bi
Bj
(xi, yi)
wi
(xj, yj)
wj
92
Integer Variables
  • Use integer (0 or 1) variables xij and yij
  • xij = 0 and yij = 0 if (1) is true
  • xij = 0 and yij = 1 if (2) is true
  • xij = 1 and yij = 0 if (3) is true
  • xij = 1 and yij = 1 if (4) is true
  • Let W and H be upper bounds on the total width
    and height. Non-overlapping constraints:
  • xi + wi ≤ xj + W(xij + yij)
  • yi + hi ≤ yj + H(1 + xij - yij)
  • xi ≥ xj + wj - W(1 - xij + yij)
  • yi ≥ yj + hj - H(2 - xij - yij)

93
Formulation
94
Formulation with Hard Blocks
  • If the blocks can be rotated, use a 0-1 integer
    variable zi for each block Bi s.t. zi = 0 if Bi
    is in its original orientation and zi = 1 if Bi
    is rotated 90°

95
Formulation with Soft Blocks
  • If Bi is a soft block, wi · hi = Ai. But this
    constraint is quadratic!
  • It is linearized by taking the first two terms of
    the Taylor expansion of hi = Ai/wi at wi_max (the
    maximum width of block Bi):
  • hi ≥ hi_min + li(wi_max - wi)
  • where hi_min = Ai/wi_max and li = Ai/wi_max^2

96
Formulation with Soft Blocks
  • If Bi is soft and Bj is hard
  • If both Bi and Bj are soft

97
Solving Linear Program
  • Linear programming (LP) can be solved by
    classical optimization techniques in polynomial
    time.
  • Mixed integer LP (MILP) is NP-Complete.
  • The run time of the best known algorithm is
    exponential in the no. of variables and equations

98
Complexity
  • For a problem with n blocks, in the simplest
    case, i.e., all blocks are hard:
  • 4n continuous variables (xi, yi, wi, hi)
  • n(n-1) integer variables (xij, yij)
  • 2n^2 linear constraints
  • In practice, this method can only solve small
    problems.

99
Successive Augmentation
  • A classical greedy approach to keep the problem
    size small: repeatedly pick a small subset of
    blocks and formulate an MILP, solving it together
    with the previously picked blocks, whose
    locations and shapes are fixed

(Figure: a partial solution and the next subset of
blocks)