Design and Analysis of Algorithms

1
Design and Analysis of Algorithms
  • Wattana Jindaluang

2
Outline
1. Introduction
2. Searching and Sorting
3. Divide-and-Conquer Methods
4. Greedy Methods
3
Outline (Cont.)
5. Dynamic Programming Methods
6. NP-Complete Problems and their Heuristics
7. Meta-heuristic Methods
4
Introduction
1. Algorithms
2. Insertion sort
3. Analysis of insertion sort
4. Growth of functions and asymptotic notation
5
Algorithms
  • Informally, an algorithm is any well-defined
    computational procedure that takes some value, or
    set of values, as input and produces some value,
    or set of values, as output. An algorithm is thus
    a sequence of computational steps that transform
    the input into the output.

6
Algorithms (cont.)
Input: A sequence of n numbers ⟨a1, a2, …, an⟩.
Output: A permutation (reordering) ⟨a1′, a2′, …, an′⟩
of the input sequence such that a1′ ≤ a2′ ≤ … ≤ an′.
Given an input sequence such as ⟨31, 41, 59, 26, 41, 58⟩,
a sorting algorithm returns as output the sequence
⟨26, 31, 41, 41, 58, 59⟩. Such an input sequence is
called an instance of the sorting problem.
7
Algorithms (cont.)
In general, an instance of a problem consists of
all the inputs (satisfying whatever constraints
are imposed in the problem statement) needed to
compute a solution to the problem. An algorithm
is said to be correct if, for every input
instance, it halts with the correct output. An
incorrect algorithm might not halt at all on some
input instances, or it might halt with other than
the desired answer.
8
Insertion sort
Our pseudocode for insertion sort is presented
as a procedure called INSERTION-SORT, which
takes as a parameter an array A[1..n] containing
a sequence of length n that is to be sorted. (In
the code, the number n of elements in A is
denoted by length[A].) The input numbers are
sorted in place: the numbers are rearranged
within the array A, with at most a constant
number of them stored outside the array at any
time.
9
Insertion sort (cont.)
Input: Array A[1..n].
Output: Sorted array A[1..n].
10
Insertion sort (cont.)
INSERTION-SORT(A)
1. for j ← 2 to length[A]
2.     key ← A[j]
3.     // insert A[j] into the sorted sequence A[1..j-1]
4.     i ← j - 1
5.     while i > 0 and A[i] > key
6.         A[i+1] ← A[i]
7.         i ← i - 1
8.     A[i+1] ← key
11
Insertion sort (cont.)
The index j indicates the current card being
inserted into the hand. Array elements
A[1..j-1] constitute the currently sorted hand,
and elements A[j+1..n] correspond to the pile of
cards still on the table. The index j moves left
to right through the array. At each iteration of
the outer for loop, the element A[j] is picked
out of the array (line 2). Starting at position
j-1, elements are successively moved one position
to the right until the proper position for A[j]
is found (lines 4-7), at which point it is
inserted (line 8).
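The pseudocode translates into Python almost line for line. The sketch below is not part of the slides; it keeps the slides' line numbers as comments so the description above can be followed (Python's 0-based indexing shifts j to run over 1..n-1):

```python
def insertion_sort(a):
    """Sort the list a in place, mirroring INSERTION-SORT(A)."""
    for j in range(1, len(a)):           # line 1
        key = a[j]                       # line 2: pick out A[j]
        i = j - 1                        # line 4
        while i >= 0 and a[i] > key:     # line 5: scan the sorted prefix
            a[i + 1] = a[i]              # line 6: shift one slot right
            i = i - 1                    # line 7
        a[i + 1] = key                   # line 8: insert key
    return a

print(insertion_sort([31, 41, 59, 26, 41, 58]))  # [26, 31, 41, 41, 58, 59]
```

Running it on the slides' sample instance reproduces ⟨26, 31, 41, 41, 58, 59⟩.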
12
Analyzing algorithms
Analyzing an algorithm has come to mean
predicting the resources that the algorithm
requires. Occasionally, resources such as memory,
communication bandwidth, or logic gates are of
primary concern, but most often it is
computational time that we want to measure.
Generally, by analyzing several candidate
algorithms for a problem, a most efficient one
can be easily identified. Such analysis may
indicate more than one viable candidate, but
several inferior algorithms are usually discarded
in the process.
13
Analysis of insertion sort
The time taken by the Insertion-Sort procedure
depends on the input: sorting a thousand numbers
takes longer than sorting three numbers.
Moreover, Insertion-Sort can take different
amounts of time to sort two input sequences of
the same size depending on how nearly sorted they
already are. In general, the time taken by an
algorithm grows with the size of the input, so it
is traditional to describe the running time of a
program as a function of the size of its input.
14
Analysis of insertion sort (Cont.)
The best notion for input size depends on the
problem being studied:
Sorting: the array size n.
Multiplying two integers: the total number of bits
needed to represent the input in ordinary binary
notation.
Graph problems: the numbers of vertices and edges
in the graph.
15
Analysis of insertion sort (Cont.)
The running time of an algorithm on a particular
input is the number of primitive operations or
steps executed. It is convenient to define the
notion of step so that it is as
machine-independent as possible.
16
Analysis of insertion sort (Cont.)
We start by presenting the Insertion-Sort
procedure with the time cost of each statement
and the number of times each statement is
executed. For each j = 2, 3, …, n, where n =
length[A], we let tj be the number of times the
while loop test in line 5 is executed for that
value of j. We assume that comments are not
executable statements, and so they take no time.
17
Analysis of insertion sort (Cont.)
INSERTION-SORT(A)                                          cost   times
1. for j ← 2 to length[A]                                  c1     n
2.     key ← A[j]                                          c2     n - 1
3.     // insert A[j] into the sorted sequence A[1..j-1]   0      n - 1
4.     i ← j - 1                                           c4     n - 1
5.     while i > 0 and A[i] > key                          c5     Σ_{j=2..n} tj
6.         A[i+1] ← A[i]                                   c6     Σ_{j=2..n} (tj - 1)
7.         i ← i - 1                                       c7     Σ_{j=2..n} (tj - 1)
8.     A[i+1] ← key                                        c8     n - 1
18
Analysis of insertion sort (Cont.)
The running time of the algorithm is the sum of
running times for each statement executed: a
statement that takes ci steps to execute and is
executed n times contributes ci·n to the total
running time. To compute T(n), the running time
of Insertion-Sort, we sum the products of the
cost and times columns, obtaining
T(n) = c1·n + c2(n - 1) + c4(n - 1) + c5·Σ_{j=2..n} tj
       + c6·Σ_{j=2..n} (tj - 1) + c7·Σ_{j=2..n} (tj - 1) + c8(n - 1).
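The tj counts can be observed directly by instrumenting the sort. The helper below is a hypothetical variant, not from the slides; it tallies how many times the line-5 test runs for each value of j:

```python
def insertion_sort_counts(a):
    """Return [t_2, ..., t_n]: how many times the while-loop
    test on line 5 runs for each value of j."""
    t = []
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        tests = 1                        # the test that finally fails counts too
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i = i - 1
            tests += 1
        a[i + 1] = key
        t.append(tests)
    return t

# Already-sorted input: every tj is 1, so line 5 runs n - 1 times in total.
print(insertion_sort_counts([1, 2, 3, 4]))   # [1, 1, 1]
# Reverse-sorted input: tj = j, the worst case discussed below.
print(insertion_sort_counts([4, 3, 2, 1]))   # [2, 3, 4]
```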
19
Analysis of insertion sort (Cont.)
Even for inputs of a given size, an algorithm's
running time may depend on which input of that
size is given. For example, in Insertion-Sort,
the best case occurs if the array is already
sorted. For each j = 2, 3, …, n, we then find
that A[i] ≤ key in line 5 when i has its initial
value of j - 1. Thus tj = 1 for j = 2, 3, …, n,
and the best-case running time is
T(n) = c1·n + c2(n - 1) + c4(n - 1) + c5(n - 1) + c8(n - 1)
     = (c1 + c2 + c4 + c5 + c8)n - (c2 + c4 + c5 + c8).
20
Analysis of insertion sort (Cont.)
This running time can be expressed as an + b for
constants a and b that depend on the statement
costs ci; it is thus a linear function of n.
21
Analysis of insertion sort (Cont.)
If the array is in reverse sorted order (that
is, in decreasing order), the worst case results.
We must compare each element A[j] with each
element in the entire sorted subarray A[1..j-1],
and so tj = j for j = 2, 3, …, n. Noting that
Σ_{j=2..n} j = n(n + 1)/2 - 1 and
Σ_{j=2..n} (j - 1) = n(n - 1)/2,
we find that in the worst case, the running time
of Insertion-Sort is
T(n) = (c5/2 + c6/2 + c7/2)n²
       + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8)n
       - (c2 + c4 + c5 + c8).
22
Analysis of insertion sort (Cont.)
This worst-case running time can be expressed as
an² + bn + c for constants a, b, and c that again
depend on the statement costs ci; it is thus a
quadratic function of n.
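The quadratic growth can be checked empirically. This sketch (not from the slides) counts executions of line 6, the element shift; for a reverse-sorted input of size n the count should be Σ_{j=2..n} (j - 1) = n(n - 1)/2, while a sorted input needs no shifts at all:

```python
def count_shifts(a):
    """Count how many times line 6 (the element shift) executes."""
    shifts = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i = i - 1
            shifts += 1
        a[i + 1] = key
    return shifts

for n in (10, 100, 1000):
    worst = count_shifts(list(range(n, 0, -1)))   # reverse-sorted input
    best = count_shifts(list(range(1, n + 1)))    # already-sorted input
    assert worst == n * (n - 1) // 2              # quadratic in n
    assert best == 0                              # no shifts in the best case
```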
23
Growth of functions asymptotic notation
We shall now make one more simplifying
abstraction. It is the rate of growth, or order
of growth, of the running time that really
interests us. We therefore consider only the
leading term of a formula (e.g., an²), since the
lower-order terms are relatively insignificant
for large n. We also ignore the leading term's
constant coefficient, since constant factors are
less significant than the rate of growth in
determining computational efficiency for large
inputs.
24
Asymptotic notation
1. Θ-notation: Let us define what this notation
means. For a given function g(n), we denote by
Θ(g(n)) the set of functions
Θ(g(n)) = { f(n) : there exist positive constants
c1, c2, and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤
c2·g(n) for all n ≥ n0 }.
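As a concrete instance of this definition (an illustrative example, not from the slides): f(n) = n²/2 - 3n is in Θ(n²), witnessed by c1 = 1/14, c2 = 1/2, and n0 = 7. A quick script confirms the sandwich inequality:

```python
def f(n):
    # illustrative function, assumed for this example
    return n * n / 2 - 3 * n

c1, c2, n0 = 1 / 14, 1 / 2, 7

# Verify 0 <= c1*g(n) <= f(n) <= c2*g(n) over a sample of n >= n0,
# with g(n) = n^2. At n = 7 the lower bound is tight: both sides are 3.5.
for n in range(n0, 1000):
    g = n * n
    assert 0 <= c1 * g <= f(n) <= c2 * g
```

Dividing n > 2·... is not needed here; c1 = 1/14 works because n²/2 - 3n ≥ n²/14 reduces to n ≥ 7.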
25
Asymptotic notation (Cont.)
2. O-notation (asymptotic upper bound): Let us
define what this notation means. For a given
function g(n), we denote by O(g(n)) the set of
functions
O(g(n)) = { f(n) : there exist positive constants
c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all
n ≥ n0 }.
26
Asymptotic notation (Cont.)
3. Ω-notation (asymptotic lower bound): Let us
define what this notation means. For a given
function g(n), we denote by Ω(g(n)) the set of
functions
Ω(g(n)) = { f(n) : there exist positive constants
c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all
n ≥ n0 }.
27
Asymptotic notation (Cont.)
[Figure 1.1(a): f(n) = Θ(g(n)) — f(n) lies between
c1·g(n) and c2·g(n) to the right of n0.]
28
Asymptotic notation (Cont.)
Figure 1.1 Graphic examples of the Θ, O, and Ω
notations. In each part, the value of n0 shown is
the minimum possible value; any greater value
would also work. (a) Θ-notation bounds a function
to within constant factors. We write f(n) =
Θ(g(n)) if there exist positive constants n0, c1,
and c2 such that to the right of n0, the value of
f(n) always lies between c1·g(n) and c2·g(n)
inclusive.
29
Asymptotic notation (Cont.)
[Figure 1.1(b): f(n) = O(g(n)) — f(n) lies on or
below c·g(n) to the right of n0.]
30
Asymptotic notation (Cont.)
(b) O-notation gives an upper bound for a function
to within a constant factor. We write f(n) =
O(g(n)) if there are positive constants n0 and c
such that to the right of n0, the value of f(n)
always lies on or below c·g(n).
31
Asymptotic notation (Cont.)
[Figure 1.1(c): f(n) = Ω(g(n)) — f(n) lies on or
above c·g(n) to the right of n0.]
32
Asymptotic notation (Cont.)
(c) Ω-notation gives a lower bound for a function
to within a constant factor. We write f(n) =
Ω(g(n)) if there are positive constants n0 and c
such that to the right of n0, the value of f(n)
always lies on or above c·g(n).
33
Asymptotic notation (Cont.)
4. o-notation: The asymptotic upper bound
provided by O-notation may or may not be
asymptotically tight. The bound 2n² = O(n²) is
asymptotically tight, but the bound 2n = O(n²) is
not. We use o-notation to denote an upper bound
that is not asymptotically tight. We formally
define o(g(n)) as the set
o(g(n)) = { f(n) : for any positive constant
c > 0, there exists a constant n0 > 0 such that
0 ≤ f(n) < c·g(n) for all n ≥ n0 }.
34
Asymptotic notation (Cont.)
For example, 2n = o(n²), but 2n² ∉ o(n²). The
definitions of O-notation and o-notation are
similar. The main difference is that in f(n) =
O(g(n)), the bound 0 ≤ f(n) ≤ c·g(n) holds for
some constant c > 0, but in f(n) = o(g(n)), the
bound 0 ≤ f(n) < c·g(n) holds for all constants
c > 0.
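The "for all c" quantifier can be exercised numerically: since 2n < c·n² whenever n > 2/c, a suitable n0 exists for every positive c, which is exactly what 2n = o(n²) requires. A sketch (the helper name is our own):

```python
import math

def n0_for(c):
    """Smallest integer n0 with 2n < c*n^2 for every n >= n0.

    For n > 0, 2n < c*n^2 is equivalent to n > 2/c.
    """
    return math.floor(2 / c) + 1

# However small c gets, some n0 still works.
for c in (4.0, 1.0, 0.5, 0.25, 0.03125):   # powers of two keep 2/c exact
    start = n0_for(c)
    for n in range(start, start + 200):
        assert 0 <= 2 * n < c * n * n      # the o-notation bound past n0
```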
35
Asymptotic notation (Cont.)
5. ω-notation: By analogy, ω-notation is to
Ω-notation as o-notation is to O-notation. We
use ω-notation to denote a lower bound that is
not asymptotically tight. One way to define it is
by
f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).
Formally, however, we define ω(g(n)) as the set
ω(g(n)) = { f(n) : for any positive constant
c > 0, there exists a constant n0 > 0 such that
0 ≤ c·g(n) < f(n) for all n ≥ n0 }.
36
Reference book
Thomas H. Cormen, Charles E. Leiserson, and
Ronald L. Rivest (1990). Introduction to
Algorithms. The MIT Press.