Chapter 2 Complexity Analysis - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Chapter 2 Complexity Analysis


1
Chapter 2 Complexity Analysis
2
Objectives
  • Discuss the following topics
  • Computational and Asymptotic Complexity
  • Big-O Notation
  • Properties of Big-O Notation
  • Ω and Θ Notations
  • Examples of Complexities
  • Finding Asymptotic Complexity Examples
  • Amortized Complexity
  • The Best, Average, and Worst Cases
  • NP-Completeness

3
Computational and Asymptotic Complexity
  • Computational complexity measures the degree of
    difficulty of an algorithm
  • Indicates how much effort is needed to apply an
    algorithm or how costly it is
  • To evaluate an algorithm's efficiency, use
    logical units that express a relationship such
    as:
  • The size n of a file or an array
  • The amount of time t required to process the data

4
Computational and Asymptotic Complexity
(continued)
  • This measure of efficiency is called asymptotic
    complexity
  • It is used when disregarding certain terms of a
    function
  • To express the efficiency of an algorithm
  • When calculating a function is difficult or
    impossible and only approximations can be found
  • f(n) = n² + 100n·log₁₀n + 1,000

5
Computational and Asymptotic Complexity
(continued)
  • Figure 2-1 The growth rate of all terms of
    function f(n) = n² + 100n·log₁₀n + 1,000

6
Big-O Notation
  • Introduced in 1894, the big-O notation specifies
    asymptotic complexity, which estimates the rate
    of function growth
  • Definition 1: f(n) is O(g(n)) if there exist
    positive numbers c and N such that f(n) ≤ c·g(n)
    for all n ≥ N (a worked choice of c and N follows
    Figure 2-2 below)

Figure 2-2 Different values of c and N for
function f(n) = 2n² + 3n + 1 = O(n²),
calculated according to the definition of big-O
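As a worked instance of Definition 1 (a sketch; the particular constants of Figure 2-2 are not reproduced here), one valid choice is:

\[
f(n) = 2n^2 + 3n + 1 \le 2n^2 + 3n^2 + n^2 = 6n^2 \quad \text{for all } n \ge 1,
\]

so c = 6, N = 1, and g(n) = n² satisfy the definition, giving f(n) = O(n²).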
7
Big-O Notation (continued)
Figure 2-3 Comparison of functions for different
values of c and N from Figure 2-2
8
Properties of Big-O Notation
  • Fact 1 (transitivity): If f(n) is O(g(n)) and
    g(n) is O(h(n)), then f(n) is O(h(n))
  • Fact 2: If f(n) is O(h(n)) and g(n) is O(h(n)),
    then f(n) + g(n) is O(h(n))
  • Fact 3: The function a·n^k is O(n^k)

9
Properties of Big-O Notation (continued)
  • Fact 4: The function n^k is O(n^(k+j)) for any
    positive j
  • Fact 5: If f(n) = c·g(n), then f(n) is O(g(n))
  • Fact 6: The function log_a n is O(log_b n) for any
    positive numbers a and b ≠ 1
  • Fact 7: log_a n is O(lg n) for any positive a ≠ 1,
    where lg n = log₂ n (a change-of-base sketch
    follows below)
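Facts 6 and 7 follow from the change-of-base identity (a standard one-line argument, not spelled out on the slide):

\[
\log_a n = \frac{\log_b n}{\log_b a} = c \cdot \log_b n, \qquad c = \frac{1}{\log_b a},
\]

so by Fact 5, log_a n is O(log_b n); taking b = 2 gives Fact 7.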

10
Ω and Θ Notations
  • Big-O notation refers to the upper bounds of
    functions
  • Big-Ω notation gives a symmetrical definition for
    a lower bound
  • Definition 2: The function f(n) is Ω(g(n)) if
    there exist positive numbers c and N such that
    f(n) ≥ c·g(n) for all n ≥ N

11
Ω and Θ Notations (continued)
  • The difference between this definition and the
    definition of big-O notation is the direction of
    the inequality
  • One definition can be turned into the other by
    replacing ≥ with ≤
  • There is an interconnection between these two
    notations expressed by the equivalence
  • f(n) is Ω(g(n)) iff g(n) is O(f(n))
    (prove?)

12
Ω and Θ Notations (continued)
  • Definition 3: f(n) is Θ(g(n)) if there exist
    positive numbers c1, c2, and N such that
    c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ N
    (a worked example follows below)
  • When applying any of these notations (big-O, Ω,
    and Θ), remember they are approximations that
    hide some detail that in many cases may be
    considered important
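For example (a sketch continuing the function of Figure 2-2), f(n) = 2n² + 3n + 1 is Θ(n²):

\[
2n^2 \le 2n^2 + 3n + 1 \le 6n^2 \quad \text{for all } n \ge 1,
\]

so Definition 3 holds with c1 = 2, c2 = 6, N = 1, and g(n) = n².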

13
Examples of Complexities
  • Algorithms can be classified by their time or
    space complexities
  • An algorithm is called constant if its execution
    time remains the same for any number of elements
  • It is called quadratic if its execution time is
    O(n²)

14
Examples of Complexities (continued)
Figure 2-4 Classes of algorithms and their
execution times on a computer executing 1
million operations per second (1 sec = 10⁶ µsec = 10³ msec)
15
Examples of Complexities (continued)
Figure 2-4 Classes of algorithms and their
execution times on a computer executing 1
million operations per second (1 sec = 10⁶ µsec = 10³ msec) (continued)
16
Examples of Complexities (continued)
Figure 2-5 Typical functions applied in big-O
estimates
17
Finding Asymptotic Complexity Examples
  • Asymptotic bounds are used to estimate the
    efficiency of algorithms by assessing the amount
    of time and memory needed to accomplish the task
    for which the algorithms were designed
  • for (i = sum = 0; i < n; i++)
  •     sum += a[i];
  • Initialize two variables (i and sum)
  • Each of the n iterations executes two assignments:
  • Update sum
  • Update i
  • Total: 2 + 2n assignments for the complete execution
  • Asymptotic complexity is O(n) (an instrumented
    check is sketched below)
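The 2 + 2n count can be verified empirically; the sketch below (hypothetical code, not from the slides) mirrors the loop and counts assignments explicitly:

    class AssignmentCount {
        public static void main(String[] args) {
            int n = 10;
            int[] a = new int[n];
            long assignments = 0;

            int i, sum;
            i = 0; sum = 0;          // two initial assignments
            assignments += 2;
            while (i < n) {          // same work as: for (i = sum = 0; i < n; i++) sum += a[i];
                sum += a[i];         // one assignment per iteration
                i++;                 // one assignment per iteration
                assignments += 2;
            }

            // Prints "22 = 22" for n = 10, matching 2 + 2n.
            System.out.println(assignments + " = " + (2 + 2 * n));
        }
    }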

18
Finding Asymptotic Complexity Examples
  • Printing sums of all the subarrays that begin
    with position 0
  • for (i = 0; i < n; i++) {
  •     for (j = 1, sum = a[0]; j <= i; j++)
  •         sum += a[j];
  •     System.out.println("sum for subarray 0 through " + i + " is " + sum);
  • }
  • Number of assignments: 1 + 3n + 2·(1 + 2 + ··· + (n−1))
  • = 1 + 3n + n(n−1) = O(n) + O(n²) = O(n²)

19
Examples Continued
  • Printing sums of numbers in the last five cells
    of the subarrays starting in position 0
  • for (i = 4; i < n; i++) {
  •     for (j = i-3, sum = a[i-4]; j <= i; j++)
  •         sum += a[j];
  •     System.out.println("sum for subarray " + (i - 4) + " through " + i + " is " + sum);
  • }
  • The outer loop executes n−4 times
  • For each i, the inner loop executes only four times
  • Roughly 1 + 8·(n−4) assignments from the inner
    loops, plus the outer-loop overhead, so the
    algorithm is O(n)

20
Finding Asymptotic Complexity Examples
  • Finding the length of the longest subarray
    with the numbers in increasing order
  • For example, [1 2 5] in [1 8 1 2 5 0 11 12]
  • for (i = 0, length = 1; i < n-1; i++) {
  •     for (i1 = i2 = k = i; k < n-1 && a[k] < a[k+1]; k++, i2++);
  •     if (length < i2 - i1 + 1)
  •         length = i2 - i1 + 1;
  • }
  • System.out.println("the length of the longest ordered subarray is " + length);

21
  • If all numbers in the array are in decreasing
    order, the outer loop is executed n−1 times
  • But in each iteration, the inner loop terminates
    after just one test. The algorithm is O(n)
  • If the numbers are in increasing order, the outer
    loop is executed n−1 times and the inner loop is
    executed n−1−i times for each i in {0, …, n−2}.
    The algorithm is O(n²) (a runnable version is
    sketched below)
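A complete, runnable version of this fragment (a sketch; the class and method names are hypothetical, and it assumes a non-empty array, as the slide's code does):

    class LongestOrderedSubarray {
        // Returns the length of the longest strictly increasing run in a.
        static int longestRun(int[] a) {
            int n = a.length;
            int length = 1;
            for (int i = 0; i < n - 1; i++) {
                int i1 = i, i2 = i;
                // Extend the run while consecutive elements increase.
                for (int k = i; k < n - 1 && a[k] < a[k + 1]; k++, i2++)
                    ;
                if (length < i2 - i1 + 1)
                    length = i2 - i1 + 1;
            }
            return length;
        }

        public static void main(String[] args) {
            int[] a = {1, 8, 1, 2, 5, 0, 11, 12};
            // Prints 3 (the run 1 2 5; the run 0 11 12 has the same length).
            System.out.println("the length of the longest ordered subarray is " + longestRun(a));
        }
    }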

22
Finding Asymptotic Complexity Examples
  • int binarySearch(int[] arr, int key) {
  •     int lo = 0, mid, hi = arr.length-1;
  •     while (lo <= hi) {
  •         mid = (lo + hi)/2;
  •         if (key < arr[mid])
  •             hi = mid - 1;
  •         else if (arr[mid] < key)
  •             lo = mid + 1;
  •         else return mid; // success
  •     }
  •     return -1; // failure
  • }
  • O(lg n), since the search range is halved in each
    pass of the loop (a usage sketch follows below)
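A short usage sketch (the array values are hypothetical) showing the method on a sorted array; each pass of the loop halves the remaining search range, which is where the O(lg n) bound comes from:

    class BinarySearchDemo {
        static int binarySearch(int[] arr, int key) {
            int lo = 0, mid, hi = arr.length - 1;
            while (lo <= hi) {
                mid = (lo + hi) / 2;
                if (key < arr[mid])
                    hi = mid - 1;          // discard the upper half
                else if (arr[mid] < key)
                    lo = mid + 1;          // discard the lower half
                else
                    return mid;            // success
            }
            return -1;                     // failure
        }

        public static void main(String[] args) {
            int[] sorted = {2, 3, 5, 7, 11, 13, 17, 19};
            System.out.println(binarySearch(sorted, 11));  // prints 4
            System.out.println(binarySearch(sorted, 8));   // prints -1
        }
    }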

23
The Best, Average, and Worst Cases
  • The worst case is when an algorithm requires a
    maximum number of steps
  • The best case is when the number of steps is the
    smallest
  • The average case falls between these extremes
  • Cavg = Σi p(inputi) · steps(inputi)

24
  • The average complexity is established by
    considering possible inputs to an algorithm,
  • determining the number of steps performed by the
    algorithm for each input,
  • adding the number of steps for all the inputs,
    and dividing by the number of inputs
  • This definition assumes that the probability of
    occurrence of each input is the same, which is
    not always the case
  • More generally, the average complexity is defined
    as the average of the number of steps executed
    when processing each input, weighted by the
    probability of occurrence of that input

25
Consider searching an unordered array sequentially
to find a number
  • The best case is when the number is found in the
    first cell
  • The worst case is when the number is in the last
    cell or not in the array at all
  • The average case?

26
  • Assuming the probability distribution is uniform
  • The probability that the number occupies any
    particular cell equals 1/n
  • The probability of finding the number on the
    first try is 1/n
  • The probability of finding it on the second try is 1/n
  • etc.
  • The average number of steps to find the number is
    (1 + 2 + ··· + n)/n = (n + 1)/2

27
  • If the probabilities differ, the average case
    gives a different outcome
  • Suppose the probability of finding the number in
    the first cell is ½, the probability for the
    second cell is ¼, and the remaining probability,
    1 − ½ − ¼ = ¼, is spread evenly over the
    remaining n − 2 cells, i.e., 1/(4(n − 2)) per cell
  • The average number of steps is then
    1·½ + 2·¼ + (3 + 4 + ··· + n)·1/(4(n − 2)) = 1 + (n + 3)/8 = (n + 11)/8
28
Summation Formulas
Let N > 0, let A, B, and C be constants, and let
f and g be any functions. Then:
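The formulas themselves appeared as images on the original slide; the standard summation identities of this kind (stated here as well-known facts, not as the slide's exact list) are:

\[
\sum_{i=1}^{N} C = C \cdot N,
\qquad
\sum_{i=1}^{N} \bigl(A\,f(i) + B\,g(i)\bigr) = A \sum_{i=1}^{N} f(i) + B \sum_{i=1}^{N} g(i),
\]
\[
\sum_{i=1}^{N} i = \frac{N(N+1)}{2},
\qquad
\sum_{i=1}^{N} i^2 = \frac{N(N+1)(2N+1)}{6},
\qquad
\sum_{i=0}^{N} 2^i = 2^{N+1} - 1.
\]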
29
Logarithms
Let b be a real number, b > 0 and b ≠ 1. Then,
for any real number x > 0, the logarithm of x to
base b is the power to which b must be raised to
yield x. That is, y = log_b x if and only if b^y = x.
For example, log₂ 8 = 3 because 2³ = 8.
If the base is omitted, the standard convention
in mathematics is that log base 10 is intended;
in computer science the standard convention is
that log base 2 is intended.
30
Logarithms
Let a and b be real numbers, both positive and
neither equal to 1. Let x > 0 and y > 0 be real
numbers.
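The identities on this slide were also shown as images; the standard logarithm laws for such a, b, x, and y (well-known facts, not necessarily the slide's exact list) include:

\[
\log_b (xy) = \log_b x + \log_b y,
\qquad
\log_b \frac{x}{y} = \log_b x - \log_b y,
\qquad
\log_b (x^y) = y \log_b x,
\]
\[
\log_a x = \frac{\log_b x}{\log_b a},
\qquad
b^{\log_b x} = x.
\]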
31
Limit of a Function
Definition: Let f(x) be a function with domain
(a, b) and let a < c < b. The limit of f(x) as x
approaches c is L if, for every positive real
number ε, there is a positive real number δ such
that whenever |x − c| < δ, then |f(x) − L| < ε.
The definition being cumbersome, the following
theorems on limits are useful. We assume f(x) is
a function with domain as described above and
that K is a constant.
(The theorem statements, labeled through C3, appeared as images.)
32
Limit of a Function
Here assume f(x) and g(x) are functions with
domain as described above, that K is a constant,
and that both of the following limits exist (and
are finite). Then:
(The theorem statements, labeled through C7, appeared as images; the standard limit laws are listed below.)
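Stated as well-known facts (not necessarily the slides' exact wording): if lim_{x→c} f(x) = L and lim_{x→c} g(x) = M, then

\[
\lim_{x \to c} K = K,
\qquad
\lim_{x \to c} K f(x) = K L,
\qquad
\lim_{x \to c} \bigl(f(x) \pm g(x)\bigr) = L \pm M,
\]
\[
\lim_{x \to c} f(x)\,g(x) = L M,
\qquad
\lim_{x \to c} \frac{f(x)}{g(x)} = \frac{L}{M} \quad (M \ne 0).
\]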
33
Limit as x Approaches Infinity
Definition: Let f(x) be a function with domain
[0, ∞). The limit of f(x) as x approaches ∞ is L
if, for every positive real number ε, there is a
positive real number N such that whenever x > N,
then |f(x) − L| < ε.
The definition being cumbersome, the following
theorems on limits are useful. We assume f(x) is
a function with domain [0, ∞) and that K is a
constant.
34
Limit of a Rational Function
Given a rational function, the last two rules are
sufficient if a little algebra is employed (a
worked example follows these steps):
Divide by highest power of x from the denominator.
Take limits term by term.
Apply theorem C3.
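A worked instance of these three steps (the example is illustrative, not taken from the slide):

\[
\lim_{x \to \infty} \frac{2x^2 + 3x + 1}{x^2 + 5}
= \lim_{x \to \infty} \frac{2 + 3/x + 1/x^2}{1 + 5/x^2}
= \frac{2 + 0 + 0}{1 + 0} = 2,
\]

dividing numerator and denominator by x² (the highest power in the denominator) and then taking limits term by term.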
35
Infinite Limits
In some cases, the limit may be infinite.
Mathematically, this means that the limit does
not exist.
(Theorem C13 and the accompanying example appeared as images.)
36
l'Hôpital's Rule
In some cases, the reduction trick shown for
rational functions does not apply; in such cases,
l'Hôpital's Rule is often useful.
If f(x) and g(x) are differentiable functions
such that both tend to ∞ (the rule also applies
if both tend to 0), then
lim f(x)/g(x) = lim f′(x)/g′(x),
provided the limit on the right-hand side exists.
37
l'Hôpital's Rule Examples
(The worked examples on this slide, applying l'Hôpital's Rule, appeared as images; a representative calculation is sketched below.)
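A representative ∞/∞ calculation in the spirit of this slide (not its actual example):

\[
\lim_{x \to \infty} \frac{\ln x}{x}
= \lim_{x \to \infty} \frac{(\ln x)'}{(x)'}
= \lim_{x \to \infty} \frac{1/x}{1} = 0,
\]

recalling that the derivative of ln x is 1/x; this shows that a logarithm grows more slowly than any linear function, which is why O(lg n) algorithms scale so well.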
38
Mathematical Induction
Mathematical induction is a technique for proving
that a statement is true for all integers in the
range from N0 to ∞, where N0 is typically 0 or 1.
First (or Weak) Principle of Mathematical
Induction
Let P(N) be a proposition regarding the integer
N, and let S be the set of all integers k for
which P(k) is true. If 1) N0 is in S,
and 2) whenever N is in S then N+1 is also in
S, then S contains all integers in the range [N0, ∞).
To apply the PMI, we must first establish that a
specific integer, N0, is in S (establishing the
basis), and then we must establish that if an
arbitrary integer N ≥ N0 is in S, then its
successor, N+1, is also in S.
39
Induction Example
Theorem: For all integers n ≥ 1, n² + n is a
multiple of 2.
Proof: Let S be the set of all integers n for
which n² + n is a multiple of 2. If n = 1, then
n² + n = 2, which is obviously a multiple of 2.
This establishes the basis, that 1 is in S. Now
suppose that some integer k ≥ 1 is an element of
S. Then k² + k is a multiple of 2. We need to
show that k+1 is an element of S; in other words,
we must show that (k+1)² + (k+1) is a multiple of
2. Performing simple algebra:
(k+1)² + (k+1) = (k² + 2k + 1) + (k + 1) = k² + 3k + 2
Now we know k² + k is a multiple of 2, and the
expression above can be grouped to show:
(k+1)² + (k+1) = (k² + k) + (2k + 2) = (k² + k) + 2(k + 1)
The last expression is the sum of two multiples
of 2, so it is also a multiple of 2. Therefore,
k+1 is an element of S. Therefore, by the PMI, S
contains all integers in [1, ∞). QED
40
Inadequacy of the First Form of Induction
Theorem: Every integer greater than 3 can be
written as a sum of 2's and 5's. (That is, if
N > 3, then there are nonnegative integers x and
y such that N = 2x + 5y.)
This is not (easily) provable using the First
Principle of Induction. The problem is that the
way to write N+1 in terms of 2's and 5's has
little to do with the way N is written in terms
of 2's and 5's. For example, if we know that
N = 2x + 5y, we can say that
N + 1 = 2x + 5y + 1 = 2x + 5(y − 1) + 5 + 1 = 2(x + 3) + 5(y − 1),
but we have no reason to believe that y − 1 is
nonnegative. (Suppose for example that N is 9.)
41
"Strong" Form of Induction
There is a second statement of induction,
sometimes called the "strong" form, that is
adequate to prove the result on the preceding
slide
Second (or Strong) Principle of Mathematical
Induction
Let P(N) be a proposition regarding the integer
N, and let S be the set of all integers k for
which P(k) is true. If 1) N0 is in S,
and 2) whenever N0 through N are in S then N+1
is also in S, then S contains all integers in the
range [N0, ∞).
Interestingly, the "strong" form of induction is
logically equivalent to the "weak" form stated
earlier; so in principle, anything that can be
proved using the "strong" form can also be proved
using the "weak" form.
42
Using the Second Form of Induction
Theorem: Every integer greater than 3 can be
written as a sum of 2's and 5's.
Proof: Let S be the set of all integers n > 3
for which n = 2x + 5y for some nonnegative
integers x and y. If n = 4, then n = 2·2 + 5·0.
If n = 5, then n = 2·0 + 5·1. This establishes
the basis, that 4 and 5 are in S. Now suppose
that all integers from 4 through k are elements
of S, where k ≥ 5. We need to show that k+1 is an
element of S; in other words, we must show that
k+1 = 2r + 5s for some nonnegative integers r and
s. Now k+1 ≥ 6, so k−1 ≥ 4. Therefore, by our
assumption, k−1 = 2x + 5y for some nonnegative
integers x and y. Then, simple algebra yields
k+1 = (k−1) + 2 = 2x + 5y + 2 = 2(x+1) + 5y,
whence k+1 is an element of S. Therefore, by
the Second PMI, S contains all integers in
[4, ∞). QED
43
Amortized Complexity
  • Amortized analysis
  • Analyzes sequences of operations
  • Can be used to find the average complexity of a
    worst case sequence of operations
  • By analyzing sequences of operations rather than
    isolated operations, amortized analysis takes
    into account interdependence between operations
    and their results

44
Amortized Complexity (continued)
  • Worst case:
  • C(op1, op2, op3, …) = Cworst(op1) +
    Cworst(op2) + Cworst(op3) + …
  • Average case:
  • C(op1, op2, op3, …) = Cavg(op1) + Cavg(op2) +
    Cavg(op3) + …
  • Amortized:
  • C(op1, op2, op3, …) = C(op1) + C(op2) +
    C(op3) + …
  • where C can be the worst, average, or best case
    complexity

45
Amortized Complexity (continued)
  • Figure 2-6 Estimating the
    amortized cost
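Figure 2-6 itself is not reproduced here. As a minimal illustration of the idea (a sketch, not the slide's own example), consider a Java array that doubles its capacity when full: a single add may cost O(n), but any sequence of n adds performs fewer than 3n element writes, so the amortized cost per add is O(1).

    class DoublingArray {
        private int[] data = new int[1];
        private int size = 0;

        // Worst case for one add is O(size) when the array must be copied,
        // but copies happen so rarely that the amortized cost per add is O(1).
        void add(int x) {
            if (size == data.length) {
                int[] bigger = new int[2 * data.length];
                System.arraycopy(data, 0, bigger, 0, size);  // occasional O(size) copy
                data = bigger;
            }
            data[size++] = x;
        }

        public static void main(String[] args) {
            DoublingArray a = new DoublingArray();
            for (int i = 0; i < 1000000; i++)
                a.add(i);                       // O(1) amortized per add
            System.out.println("size = " + a.size);
        }
    }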

46
NP-Completeness
  • A deterministic algorithm is a uniquely defined
    (determined) sequence of steps for a particular
    input
  • There is only one way to determine the next step
    that the algorithm can make
  • A nondeterministic algorithm is an algorithm that
    can use a special operation that makes a guess
    when a decision is to be made

47
NP-Completeness (continued)
  • A nondeterministic algorithm is considered
    polynomial if its running time in the worst case
    is O(n^k) for some k
  • Problems that can be solved in polynomial time by
    a deterministic algorithm are called tractable,
    and such algorithms are considered efficient
  • A problem is called NP-complete if it is in NP (it
    can be solved in polynomial time by a
    nondeterministic algorithm) and every problem in
    NP can be polynomially reduced to it

48
NP-Completeness (continued)
  • The satisfiability problem concerns Boolean
    expressions in conjunctive normal form (CNF)
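For reference, a CNF expression is a conjunction (AND) of clauses, each clause being a disjunction (OR) of literals, and the satisfiability problem asks whether some truth assignment makes the whole expression true. A small illustrative instance (not from the slides):

\[
(x_1 \lor \lnot x_2) \land (\lnot x_1 \lor x_2 \lor x_3) \land (\lnot x_3)
\]

This instance is satisfiable: the assignment x1 = true, x2 = true, x3 = false makes every clause true.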

49
Summary
  • Computational complexity measures the degree of
    difficulty of an algorithm.
  • To evaluate an algorithm's efficiency, use
    logical units that express a relationship between
    the size of the data and the time required to
    process them.
  • This measure of efficiency is called asymptotic
    complexity.

50
Summary (continued)
  • Introduced in 1894, the big-O notation specifies
    asymptotic complexity, which estimates the rate
    of function growth.
  • An algorithm is called constant if its execution
    time remains the same for any number of elements.
  • It is called quadratic if its execution time is
    O(n²).
  • Amortized analysis analyzes sequences of
    operations.

51
Summary (continued)
  • A deterministic algorithm is a uniquely defined
    (determined) sequence of steps for a particular
    input.
  • A nondeterministic algorithm is an algorithm that
    can use a special operation that makes a guess
    when a decision is to be made.
  • A nondeterministic algorithm is considered
    polynomial if its running time in the worst case
    is O(n^k) for some k.