Title: Chapter 2 Complexity Analysis
Objectives

- Discuss the following topics:
  - Computational and Asymptotic Complexity
  - Big-O Notation
  - Properties of Big-O Notation
  - Ω and Θ Notations
  - Examples of Complexities
  - Finding Asymptotic Complexity: Examples
  - Amortized Complexity
  - The Best, Average, and Worst Cases
  - NP-Completeness
Computational and Asymptotic Complexity

- Computational complexity measures the degree of difficulty of an algorithm
- It indicates how much effort is needed to apply an algorithm, or how costly the algorithm is
- To evaluate an algorithm's efficiency, use logical units that express a relationship, such as:
  - The size n of a file or an array
  - The amount of time t required to process the data
Computational and Asymptotic Complexity (continued)

- This measure of efficiency is called asymptotic complexity
- It is used when certain terms of a function are disregarded:
  - To express the efficiency of an algorithm
  - When calculating a function exactly is difficult or impossible and only approximations can be found
- Example: f(n) = n^2 + 100n + log10 n + 1,000
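The claim that lower-order terms can be disregarded can be checked numerically. A minimal sketch for the slides' example function (the percentages printed are my own computation, not taken from any figure):

```java
// The slides' example: f(n) = n^2 + 100n + log10 n + 1,000. As n grows, the
// n^2 term dominates, which is why the lower-order terms are disregarded and
// f(n) is treated as O(n^2).
public class DominantTerm {
    static double f(double n) { return n * n + 100 * n + Math.log10(n) + 1000; }

    public static void main(String[] args) {
        for (double n : new double[] {10, 100, 10_000, 1_000_000}) {
            double share = (n * n) / f(n);   // fraction of f(n) contributed by n^2
            System.out.println("n = " + n + ": n^2 share = " + share);
        }
    }
}
```

At n = 10 the n^2 term is only a small fraction of f(n), but by n = 10,000 it already contributes about 99% of the value.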
Computational and Asymptotic Complexity (continued)

Figure 2-1: The growth rate of all terms of function f(n) = n^2 + 100n + log10 n + 1,000
Big-O Notation

- Introduced in 1894, the big-O notation specifies asymptotic complexity, which estimates the rate of function growth
- Definition 1: f(n) is O(g(n)) if there exist positive numbers c and N such that f(n) ≤ c·g(n) for all n ≥ N

Figure 2-2: Different values of c and N for function f(n) = 2n^2 + 3n + 1 = O(n^2), calculated according to the definition of big-O
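Definition 1 can also be checked numerically. A sketch for f(n) = 2n^2 + 3n + 1 and g(n) = n^2 (the particular constants c and N below are my own illustrative choices, not taken from the figure):

```java
// Check Definition 1 numerically for f(n) = 2n^2 + 3n + 1 and g(n) = n^2:
// with c = 3 and N = 4, f(n) <= c*g(n) holds for every n >= N we try,
// while c = 2 fails already at small n because 3n + 1 is never <= 0.
public class BigOCheck {
    static long f(long n) { return 2 * n * n + 3 * n + 1; }
    static long g(long n) { return n * n; }

    // Does f(n) <= c*g(n) hold for all n in [N, upTo]?
    static boolean holds(long c, long N, long upTo) {
        for (long n = N; n <= upTo; n++)
            if (f(n) > c * g(n)) return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holds(3, 4, 1_000_000));  // c = 3, N = 4 works
        System.out.println(holds(2, 1, 1_000_000));  // c = 2 is too small
    }
}
```

A finite scan cannot prove the bound for all n, but here algebra closes the gap: 2n^2 + 3n + 1 ≤ 3n^2 is equivalent to n^2 - 3n - 1 ≥ 0, which holds for every n ≥ 4.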
Big-O Notation (continued)

Figure 2-3: Comparison of functions for different values of c and N from Figure 2-2
Properties of Big-O Notation

- Fact 1 (transitivity): If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n))
- Fact 2: If f(n) is O(h(n)) and g(n) is O(h(n)), then f(n) + g(n) is O(h(n))
- Fact 3: The function a·n^k is O(n^k)
Properties of Big-O Notation (continued)

- Fact 4: The function n^k is O(n^(k+j)) for any positive j
- Fact 5: If f(n) = c·g(n), then f(n) is O(g(n))
- Fact 6: The function log_a n is O(log_b n) for any positive numbers a and b ≠ 1
- Fact 7: log_a n is O(lg n) for any positive a ≠ 1, where lg n = log_2 n
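Facts 6 and 7 rest on the change-of-base identity log_a n = (log_b n) / (log_b a): any two logarithms of n differ only by a constant factor, so the base is irrelevant inside big-O. A small numeric check (my own illustration):

```java
// The ratio log_10(n) / log_2(n) is the constant log_10(2) ~ 0.30103,
// whatever n is, by the change-of-base identity.
public class LogBase {
    // log of x in an arbitrary base, via change of base through natural log
    static double log(double base, double x) { return Math.log(x) / Math.log(base); }

    public static void main(String[] args) {
        for (double n : new double[] {100, 10_000, 1_000_000}) {
            System.out.println(log(10, n) / log(2, n));  // same constant each time
        }
    }
}
```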
Ω and Θ Notations

- Big-O notation refers to the upper bounds of functions
- There is a symmetrical definition for a lower bound: the definition of big-Ω
- Definition 2: The function f(n) is Ω(g(n)) if there exist positive numbers c and N such that f(n) ≥ c·g(n) for all n ≥ N
Ω and Θ Notations (continued)

- The difference between this definition and the definition of big-O notation is the direction of the inequality
- One definition can be turned into the other by replacing ≥ with ≤
- There is an interconnection between these two notations, expressed by the equivalence:
  - f(n) is Ω(g(n)) iff g(n) is O(f(n))
Ω and Θ Notations (continued)

- Definition 3: f(n) is Θ(g(n)) if there exist positive numbers c1, c2, and N such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ N
- When applying any of these notations (big-O, Ω, and Θ), remember that they are approximations that hide some detail which in many cases may be considered important
Examples of Complexities

- Algorithms can be classified by their time or space complexities
- An algorithm is called constant if its execution time remains the same for any number of elements
- It is called quadratic if its execution time is O(n^2)
Examples of Complexities (continued)

Figure 2-4: Classes of algorithms and their execution times on a computer executing 1 million operations per second (1 sec = 10^6 µsec = 10^3 msec)
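The figure's entries can be approximated directly: on a machine executing 10^6 operations per second, an algorithm performing f(n) operations needs about f(n)/10^6 seconds. A rough recomputation of a few rows (my own arithmetic, not the figure's exact table):

```java
// On a 10^6 ops/sec machine, running time ~ (number of operations) / 10^6 sec.
// At n = 10^6: O(n) takes about 1 second, O(n lg n) about 20 seconds,
// and O(n^2) about 10^6 seconds, i.e. more than 11 days.
public class ExecutionTimes {
    static double seconds(double ops) { return ops / 1e6; }

    public static void main(String[] args) {
        double n = 1e6;
        System.out.println("O(n):      " + seconds(n) + " sec");
        System.out.println("O(n lg n): " + seconds(n * Math.log(n) / Math.log(2)) + " sec");
        System.out.println("O(n^2):    " + seconds(n * n) / 86400 + " days");
    }
}
```

The jump from seconds to days between O(n lg n) and O(n^2) is the practical point of the figure.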
Examples of Complexities (continued)

Figure 2-4: Classes of algorithms and their execution times on a computer executing 1 million operations per second (continued)
Examples of Complexities (continued)

Figure 2-5: Typical functions applied in big-O estimates
Finding Asymptotic Complexity: Examples

- Asymptotic bounds are used to estimate the efficiency of algorithms by assessing the amount of time and memory needed to accomplish the task for which the algorithms were designed
- A simple loop, which runs in O(n) time:

    for (i = sum = 0; i < n; i++)
        sum += a[i];
Finding Asymptotic Complexity: Examples (continued)

- Nested loops, which run in O(n^2) time:

    for (i = 0; i < n; i++) {
        for (j = 1, sum = a[0]; j <= i; j++)
            sum += a[j];
        System.out.println("sum for subarray 0 through " + i + " is " + sum);
    }

    for (i = 4; i < n; i++) {
        for (j = i - 3, sum = a[i - 4]; j <= i; j++)
            sum += a[j];
        System.out.println("sum for subarray " + (i - 4) + " through " + i + " is " + sum);
    }
Finding Asymptotic Complexity: Examples (continued)

- Finding the length of the longest ordered subarray, O(n^2) in the worst case:

    for (i = 0, length = 1; i < n-1; i++) {
        for (i1 = i2 = k = i; k < n-1 && a[k] < a[k+1]; k++, i2++);
        if (length < i2 - i1 + 1)
            length = i2 - i1 + 1;
    }
    System.out.println("the length of the longest ordered subarray is " + length);
Finding Asymptotic Complexity: Examples (continued)

- Binary search, which runs in O(lg n) time:

    int binarySearch(int[] arr, int key) {
        int lo = 0, mid, hi = arr.length - 1;
        while (lo <= hi) {
            mid = (lo + hi) / 2;
            if (key < arr[mid])
                hi = mid - 1;
            else if (arr[mid] < key)
                lo = mid + 1;
            else return mid;   // success
        }
        return -1;             // failure
    }
The Best, Average, and Worst Cases

- The worst case is when an algorithm requires a maximum number of steps
- The best case is when the number of steps is the smallest
- The average case falls between these extremes
- Cavg = Σ_i p(input_i) · steps(input_i)
The Best, Average, and Worst Cases (continued)

- Example: a search in which the key is found at position 1 with probability 1/2, at position 2 with probability 1/4, and at each of the remaining n - 2 positions with probability 1/(4(n-2)):

    Cavg = 1·(1/2) + 2·(1/4) + (3 + 4 + ... + n) · 1/(4(n-2))
         = 1 + (n(n+1) - 6) / (8(n-2))
         = 1 + (n+3)/8
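The average-case formula can be evaluated directly. As a sketch (the probability distribution here, key at position 1 with probability 1/2, position 2 with probability 1/4, and the remaining n - 2 positions sharing the last 1/4 equally, is a common textbook setup and an assumption on my part):

```java
// Average number of steps for a search under the assumed distribution:
// Cavg = 1*(1/2) + 2*(1/4) + sum_{i=3..n} i * 1/(4(n-2)),
// which closes to 1 + (n+3)/8.
public class AvgCase {
    static double cavg(int n) {
        double c = 1 * 0.5 + 2 * 0.25;        // key at position 1 or 2
        for (int i = 3; i <= n; i++)
            c += i / (4.0 * (n - 2));         // remaining n-2 positions
        return c;
    }

    public static void main(String[] args) {
        int n = 10;
        System.out.println(cavg(n));              // direct sum
        System.out.println(1 + (n + 3) / 8.0);    // closed form, same value
    }
}
```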
Amortized Complexity

- Amortized analysis:
  - Analyzes sequences of operations
  - Can be used to find the average complexity of a worst-case sequence of operations
- By analyzing sequences of operations rather than isolated operations, amortized analysis takes into account the interdependence between operations and their results
Amortized Complexity (continued)

- Worst case:
  C(op1, op2, op3, ...) = Cworst(op1) + Cworst(op2) + Cworst(op3) + ...
- Average case:
  C(op1, op2, op3, ...) = Cavg(op1) + Cavg(op2) + Cavg(op3) + ...
- Amortized:
  C(op1, op2, op3, ...) = C(op1) + C(op2) + C(op3) + ...
  where C can be the worst, average, or best case complexity of each operation
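The slides do not fix a concrete example at this point; a classic illustration (my own choice, not taken from the slides) is appending to an array that doubles its capacity when full. A single append can cost O(n) for the copy, but any sequence of n appends costs O(n) in total, so the amortized cost per append is O(1):

```java
// Count the total work done by n appends to a doubling array:
// each append costs 1, plus `size` extra units whenever the array is
// full and must be copied into one twice as large. The total stays
// below 3n, so the amortized cost per append is constant.
public class AmortizedAppend {
    static long totalCopyCost(int appends) {
        int capacity = 1;
        long cost = 0;
        for (int size = 0; size < appends; size++) {
            if (size == capacity) {   // array full: double and copy all elements
                cost += size;
                capacity *= 2;
            }
            cost += 1;                // the append itself
        }
        return cost;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        System.out.println(totalCopyCost(n) < 3L * n);  // amortized O(1) per append
    }
}
```

Analyzing any single append in isolation would give the pessimistic O(n) worst case; it is the interdependence across the sequence (a costly copy makes the next ~size appends cheap) that amortized analysis exploits.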
Amortized Complexity (continued)

Figure 2-6: Estimating the amortized cost
NP-Completeness

- A deterministic algorithm is a uniquely defined (determined) sequence of steps for a particular input
- There is only one way to determine the next step that the algorithm can make
- A nondeterministic algorithm is an algorithm that can use a special operation that makes a guess when a decision is to be made
NP-Completeness (continued)

- A nondeterministic algorithm is considered polynomial if its running time in the worst case is O(n^k) for some k
- Problems that can be solved with such algorithms are called tractable, and the algorithms are considered efficient
- A problem is called NP-complete if it is NP (it can be solved efficiently by a nondeterministic polynomial algorithm) and every NP problem can be polynomially reduced to this problem
NP-Completeness (continued)

- The satisfiability problem concerns Boolean expressions in conjunctive normal form (CNF)
Summary

- Computational complexity measures the degree of difficulty of an algorithm.
- To evaluate an algorithm's efficiency, use logical units that express a relationship.
- This measure of efficiency is called asymptotic complexity.
Summary (continued)

- Introduced in 1894, the big-O notation specifies asymptotic complexity, which estimates the rate of function growth.
- An algorithm is called constant if its execution time remains the same for any number of elements.
- It is called quadratic if its execution time is O(n^2).
- Amortized analysis analyzes sequences of operations.
Summary (continued)

- A deterministic algorithm is a uniquely defined (determined) sequence of steps for a particular input.
- A nondeterministic algorithm is an algorithm that can use a special operation that makes a guess when a decision is to be made.
- A nondeterministic algorithm is considered polynomial if its worst-case running time is O(n^k) for some k.