Title: Unit 1: Algorithmic Fundamentals
1. Unit 1: Algorithmic Fundamentals
- Course contents
  - On algorithms
  - Mathematical foundations
  - Asymptotic notation
  - Growth of functions
  - Complexity
  - Lower vs. upper bounds
  - Recurrences
2. On Algorithms
- Algorithm: A well-defined procedure for transforming some input to a desired output.
- Major concerns
  - Correctness: Does it halt? Is it correct? Is it stable?
  - Efficiency: Time complexity? Space complexity?
  - Worst case? Average case? (Best case?)
- Better algorithms?
  - How: Faster algorithms? Algorithms with less space requirement?
  - Optimality: Prove that an algorithm is best possible/optimal? Establish a lower bound?
3. Example: Traveling Salesman Problem (TSP)
- Input: A set of points (cities) P together with a distance d(p, q) between any pair p, q ∈ P.
- Output: The shortest circular route that starts and ends at a given point and visits all the points.
- Correct and efficient algorithms?
4. Nearest Neighbor Tour
1. pick and visit an initial point p_0
2. p ← p_0
3. i ← 0
4. while there are unvisited points do
5.   visit p_i's closest unvisited point p_{i+1}
6.   i ← i + 1
7. return to p_0 from p_i
- Simple to implement and very efficient, but incorrect!
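The greedy tour above can be sketched in Python. This is an illustrative sketch, not code from the slides; the point set and the use of Euclidean distance are assumptions for the example.

```python
import math

def nearest_neighbor_tour(points):
    """Greedy heuristic: repeatedly visit the closest unvisited point.
    Runs in O(n^2) time but can produce tours far from optimal."""
    unvisited = set(range(1, len(points)))
    tour = [0]                      # start (and implicitly end) at p_0
    while unvisited:
        i = tour[-1]
        # pick p_i's closest unvisited point
        j = min(unvisited, key=lambda k: math.dist(points[i], points[k]))
        unvisited.remove(j)
        tour.append(j)
    return tour

# Collinear points: the greedy tour zig-zags instead of sweeping
pts = [(0, 0), (2, 0), (-1, 0), (5, 0)]
tour = nearest_neighbor_tour(pts)
```

On these collinear points the heuristic jumps back and forth across the start, which is exactly the kind of instance that makes it incorrect as an optimizer.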
5. A Correct, but Inefficient Algorithm
1. d ← ∞
2. for each of the n! permutations π_i of the n points
3.   if cost(π_i) ≤ d then
4.     d ← cost(π_i)
5.     T_min ← π_i
6. return T_min
- Correctness: Tries all possible orderings of the points ⇒ guaranteed to end up with the shortest possible tour.
- Efficiency: Tries n! possible routes!
  - 120 routes for 5 points, 3,628,800 routes for 10 points; 20 points?
- No known efficient, correct algorithm for TSP!
- TSP is NP-complete.
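The exhaustive algorithm can be sketched as follows (a sketch under the slides' assumptions; fixing the start point means only (n-1)! permutations need to be tried):

```python
import itertools
import math

def brute_force_tsp(points):
    """Try every tour that starts and ends at points[0]; keep the
    cheapest. Correct, but the running time grows like (n-1)!."""
    def cost(order):
        tour = [0] + list(order) + [0]
        return sum(math.dist(points[a], points[b])
                   for a, b in zip(tour, tour[1:]))
    best = min(itertools.permutations(range(1, len(points))), key=cost)
    return [0] + list(best) + [0], cost(best)

# Unit square: the optimal tour walks the perimeter, length 4
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
tour, length = brute_force_tsp(pts)
```

Already at 10 points this loop examines 362,880 tours, which is why the slides call the algorithm correct but inefficient.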
6. Example: Sorting
- Input: A sequence of n numbers ⟨a_1, a_2, …, a_n⟩.
- Output: A permutation ⟨a_1', a_2', …, a_n'⟩ such that a_1' ≤ a_2' ≤ … ≤ a_n'.
- Input: ⟨8, 6, 9, 7, 5, 2, 3⟩
- Output: ⟨2, 3, 5, 6, 7, 8, 9⟩
- Correct and efficient algorithms?
7. Insertion Sort
InsertionSort(A)
1. for j ← 2 to length[A] do
2.   key ← A[j]
3.   ▹ Insert A[j] into the sorted sequence A[1..j-1].
4.   i ← j - 1
5.   while i > 0 and A[i] > key do
6.     A[i+1] ← A[i]
7.     i ← i - 1
8.   A[i+1] ← key
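The pseudocode above translates directly to Python; this sketch uses 0-based indexing instead of the slides' 1-based arrays:

```python
def insertion_sort(a):
    """In-place insertion sort, mirroring the InsertionSort pseudocode."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift elements of the sorted prefix a[0..j-1] that exceed key
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

result = insertion_sort([8, 6, 9, 7, 5, 2, 3])
```

The while loop body is exactly step 6 of the pseudocode; counting how often it runs for each j is what the t_j analysis on the next slide formalizes.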
8. Exact Analysis of Insertion Sort
- The for loop is executed (n-1) + 1 times. (Why?)
- t_j = number of times the while-loop test runs for value j (i.e., 1 + the number of elements that have to be slid right to insert the j-th item).
- Step 5 is executed t_2 + t_3 + … + t_n times.
- Step 6 is executed (t_2 - 1) + (t_3 - 1) + … + (t_n - 1) times.
9. Exact Analysis of Insertion Sort (cont'd)
- Best case: If the input is already sorted, all t_j's are 1.
  - Linear: T(n) = (c_1 + c_2 + c_4 + c_5 + c_8)n - (c_2 + c_4 + c_5 + c_8)
- Worst case: If the array is in reverse sorted order, t_j = j for all j.
  - Quadratic: T(n) = (c_5/2 + c_6/2 + c_7/2)n² + (c_1 + c_2 + c_4 + c_5/2 - c_6/2 - c_7/2 + c_8)n - (c_2 + c_4 + c_5 + c_8)
- Exact analysis is often hard!
10. Asymptotic Analysis
- Asymptotic analysis looks at the growth of T(n) as n → ∞.
- Θ notation: Drop low-order terms and ignore the leading constant.
  - E.g., 8n³ - 4n² + 5n - 2 = Θ(n³).
- As n grows large, lower-order Θ algorithms outperform higher-order ones.
- Worst case: input reverse sorted; the while loop is Θ(j).
- Average case: all permutations equally likely; the while loop is Θ(j/2).
11. Merge Sort: A Divide-and-Conquer Algorithm
MergeSort(A, p, r)                    T(n)
1. if p < r then                      Θ(1)
2.   q ← ⌊(p + r)/2⌋                  Θ(1)
3.   MergeSort(A, p, q)               T(n/2)
4.   MergeSort(A, q + 1, r)           T(n/2)
5.   Merge(A, p, q, r)                Θ(n)
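As a concrete companion to the pseudocode, here is a minimal Python sketch of merge sort; list slicing replaces the (p, q, r) index arithmetic, which changes constants but not the Θ(n lg n) behavior:

```python
def merge_sort(a):
    """Divide-and-conquer sort: T(n) = 2T(n/2) + Θ(n) = Θ(n lg n)."""
    if len(a) <= 1:                       # base case: Θ(1)
        return a
    q = len(a) // 2                       # divide: compute midpoint
    left, right = merge_sort(a[:q]), merge_sort(a[q:])  # conquer
    # combine: merge the two sorted halves in Θ(n)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

result = merge_sort([8, 6, 9, 7, 5, 2, 3])
```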
12. Recurrence
- Describes a function recursively in terms of itself.
- Describes the performance of recursive algorithms.
- Recurrence for merge sort:
MergeSort(A, p, r)                    T(n)
1. if p < r then                      Θ(1)
2.   q ← ⌊(p + r)/2⌋                  Θ(1)
3.   MergeSort(A, p, q)               T(n/2)
4.   MergeSort(A, q + 1, r)           T(n/2)
5.   Merge(A, p, q, r)                Θ(n)
13. Recursion Tree for Asymptotic Analysis
- Θ(n lg n) grows more slowly than Θ(n²).
- Thus merge sort asymptotically beats insertion sort in the worst case. (Insertion sort: stable, in-place; merge sort: stable, not in-place.)
14. O: Upper Bounding Function
- Def: f(n) = O(g(n)) if ∃ c > 0 and n_0 > 0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n_0.
- Intuition: f(n) "≤" g(n) when we ignore constant multiples and small values of n.
- How to show O (Big-Oh) relationships?
  - f(n) = O(g(n)) iff lim_{n→∞} f(n)/g(n) = c for some c ≥ 0.
- Remember L'Hôpital's Rule?
15. Ω: Lower Bounding Function
- Def: f(n) = Ω(g(n)) if ∃ c > 0 and n_0 > 0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n_0.
- Intuition: f(n) "≥" g(n) when we ignore constant multiples and small values of n.
- How to show Ω (Big-Omega) relationships?
  - f(n) = Ω(g(n)) iff lim_{n→∞} f(n)/g(n) = c for some c > 0 (possibly ∞).
16. Θ: Tightly Bounding Function
- Def: f(n) = Θ(g(n)) if ∃ c_1, c_2 > 0 and n_0 > 0 such that 0 ≤ c_1·g(n) ≤ f(n) ≤ c_2·g(n) for all n ≥ n_0.
- Intuition: f(n) "=" g(n) when we ignore constant multiples and small values of n.
- How to show Θ relationships?
  - Show both Big-Oh (O) and Big-Omega (Ω) relationships.
  - f(n) = Θ(g(n)) iff lim_{n→∞} f(n)/g(n) = c for some finite c > 0.
17. o, ω: Untight Upper and Lower Bounding Functions
- Little-Oh (o): f(n) = o(g(n)) if ∀ c > 0, ∃ n_0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n_0.
  - Intuition: f(n) < any constant multiple of g(n) when we ignore small values of n.
- Little-Omega (ω): f(n) = ω(g(n)) if ∀ c > 0, ∃ n_0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n_0.
  - Intuition: f(n) > any constant multiple of g(n) when we ignore small values of n.
- How to show o (Little-Oh) and ω (Little-Omega) relationships?
  - f(n) = o(g(n)) iff lim_{n→∞} f(n)/g(n) = 0.
  - f(n) = ω(g(n)) iff lim_{n→∞} f(n)/g(n) = ∞.
18. Properties for Asymptotic Analysis
- An algorithm has worst-case run time O(f(n)): there is a constant c s.t. for every n big enough, every execution on an input of size n takes at most c·f(n) time.
- An algorithm has worst-case run time Ω(f(n)): there is a constant c s.t. for every n big enough, at least one execution on an input of size n takes at least c·f(n) time.
- Transitivity: If f(n) = ◇(g(n)) and g(n) = ◇(h(n)), then f(n) = ◇(h(n)), where ◇ is O, o, Ω, ω, or Θ.
- Rule of sums: ◇(f(n) + g(n)) = ◇(max{f(n), g(n)}), where ◇ is O, o, Ω, ω, or Θ.
- Rule of sums: f(n) + g(n) = ◇(max{f(n), g(n)}), where ◇ is O, Ω, or Θ.
- Rule of products: If f_1(n) = ◇(g_1(n)) and f_2(n) = ◇(g_2(n)), then f_1(n)·f_2(n) = ◇(g_1(n)·g_2(n)), where ◇ is O, o, Ω, ω, or Θ.
- Transpose symmetry: f(n) = O(g(n)) iff g(n) = Ω(f(n)).
- Transpose symmetry: f(n) = o(g(n)) iff g(n) = ω(f(n)).
- Reflexivity: f(n) = ◇(f(n)), where ◇ is O, Ω, or Θ.
- Symmetry: f(n) = Θ(g(n)) iff g(n) = Θ(f(n)).
19. Asymptotic Functions
- Polynomial-time complexity: O(p(n)), where n is the input size and p(n) is a polynomial function of n (p(n) = n^O(1)).
20. Runtime Comparison
- Run-time comparison: Assume 1000 MIPS, 1 instruction/operation.
21. Can't Finish the Assigned Task
"I can't find an efficient algorithm, I guess I'm just too dumb."
22. Mission Impossible
"I can't find an efficient algorithm, because no such algorithm is possible."
23. "I can't find an efficient algorithm, but neither can all these famous people."
24. Easy and Hard Problems
- We argue that the class of problems that can be solved in polynomial time (denoted by P) corresponds well with what we can feasibly compute. But sometimes it is difficult to tell whether a particular problem is in P or not.
- Theoreticians spend a good deal of time trying to determine whether particular problems are in P.
- To demonstrate how difficult this determination can be, we will survey a number of problems, some of which are known to be in P, and some of which we think are (probably) not in P. The difference between the two types of problem can be surprisingly small. Throughout the following, an "easy" problem is one that is solvable in polynomial time, while a "hard" problem is one that we think cannot be solved in polynomial time.
25. Eulerian Tour vs. Hamiltonian Tour
- Eulerian Tours -- Easy
  - INPUT: A graph G = (V, E).
  - DECIDE: Is there a path that crosses every edge exactly once and returns to its starting point?
- Hamiltonian Tours -- Hard
  - INPUT: A graph G = (V, E).
  - DECIDE: Is there a path that visits every vertex exactly once and returns to its starting point?
26. Some Facts
- Eulerian Tours
  - A famous mathematical theorem comes to our rescue: if the graph is connected and every vertex has even degree, then the graph is guaranteed to have such a tour. The algorithm to find the tour is a little trickier, but still doable in polynomial time.
- Hamiltonian Tours
  - No one knows how to solve this problem in polynomial time. The subtle distinction between visiting edges and visiting vertices changes an easy problem into a hard one.
27. Map Colorability
- Map 2-colorability -- Easy
  - INPUT: A graph G = (V, E).
  - DECIDE: Can this map be colored with 2 colors so that no two adjacent countries have the same color?
- Map 3-colorability -- Hard
  - INPUT: A graph G = (V, E).
  - DECIDE: Can this map be colored with 3 colors so that no two adjacent countries have the same color?
- Map 4-colorability -- Easy
28. Some Facts
- Map 2-colorability
  - To solve this problem, we simply color the first country arbitrarily. This forces neighboring countries to take the other color, which in turn forces the color of the countries neighboring those countries, and so on. If we reach a country whose neighbors force it to take two different colors, we know that the map cannot be two-colored; otherwise, we produce a two-coloring. So this problem is easily solvable in polynomial time.
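The forcing argument above is exactly a breadth-first traversal. Here is a sketch (not from the slides) that 2-colors a graph given as an adjacency list, or reports failure:

```python
from collections import deque

def two_color(adj):
    """BFS 2-coloring. adj[u] lists the neighbors of vertex u.
    Returns a list of colors (0/1), or None if not 2-colorable."""
    color = [None] * len(adj)
    for s in range(len(adj)):
        if color[s] is not None:
            continue
        color[s] = 0                    # color the first country arbitrarily
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] is None:    # neighbor is forced to the other color
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return None         # conflict: map cannot be 2-colored
    return color

square = [[1, 3], [0, 2], [1, 3], [2, 0]]   # 4-cycle: 2-colorable
triangle = [[1, 2], [0, 2], [0, 1]]         # odd cycle: not 2-colorable
```

Each vertex and edge is processed a constant number of times, so the run time is linear in the size of the graph, comfortably polynomial.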
29. Some Facts
- Map 3-colorability
  - This problem seems very similar to the problem above; however, it turns out to be much harder. No one knows how this problem can be solved in polynomial time. (In fact, this problem is NP-complete.)
- Map 4-colorability
  - Here we have an easy problem again. By a famous theorem, any map can be four-colored. It turns out that finding such a coloring is not that difficult either.
30. Problem vs. Problem Instance
- When we say that a problem is hard, it means that some instances of the problem are hard. It does not mean that all problem instances are hard.
- For example, the following problem instance is trivially 3-colorable:
31. Longest Path vs. Shortest Path
- Longest Path -- Hard
  - INPUT: A graph G = (V, E), two vertices u, v ∈ V, and a weighting function on E.
  - OUTPUT: The longest path between u and v.
  - No one has come up with a polynomial-time algorithm yet.
- Shortest Path -- Easy
  - INPUT: A graph G = (V, E), two vertices u, v ∈ V, and a weighting function on E.
  - OUTPUT: The shortest path between u and v.
  - A greedy method will solve this problem easily.
32. Multiplication vs. Factoring
- Multiplication -- Easy
  - INPUT: Integers x, y.
  - OUTPUT: The product x·y.
- Factoring (Un-multiplying) -- Hard
  - INPUT: An integer n.
  - OUTPUT: If n is not prime, output two integers x, y such that 1 < x, y < n and x·y = n.
- Again, the problem of factoring is not known to be in P. In this case, the hardness of a problem turns out to be useful: some cryptographic algorithms depend on the assumption that factoring is hard to ensure that a code cannot be broken by a computer.
33. Boolean Formulas
- Formula evaluation -- Easy
  - INPUT: A boolean formula (e.g., (x ∨ y) ∧ (z ∨ ¬x)) and a value for every variable in the formula (e.g., x = 0, y = 1, z = 0).
  - OUTPUT: The value of the formula (e.g., 1, or "true", in this case).
- Satisfiability of a boolean formula -- Hard
  - INPUT: A boolean formula.
  - DECIDE: Do there exist values for all variables that would make the formula true?
- Tautology -- Harder
  - INPUT: A boolean formula.
  - DECIDE: Do all possible assignments of values to variables make the formula true?
34. Facts
- Formula evaluation
  - It's not too hard to think of what the algorithm would be in this case. All we have to do is substitute the values for the various variables, then simplify the formula to a single value in multiple passes (e.g., in one pass simplify 1 ∨ 0 to 1).
- Satisfiability of a boolean formula
  - Given that there are n different variables in the formula, there are 2^n possible assignments of 0/1 to the variables. This gives us an easy exponential-time algorithm: simply try all possible assignments. No one knows if there is a way to be cleverer and cut the running time down to polynomial.
- Tautology
  - It turns out that this problem seems to be even harder than the Satisfiability problem.
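The exponential-time satisfiability algorithm can be sketched directly; this is an illustrative sketch in which a formula is represented as a Python predicate over an assignment (an assumption for the example, not a representation from the slides):

```python
import itertools

def satisfiable(formula, variables):
    """Decide satisfiability by trying all 2^n assignments.
    `formula` maps a dict {variable: bool} to a bool."""
    for bits in itertools.product([False, True], repeat=len(variables)):
        if formula(dict(zip(variables, bits))):
            return True            # found a satisfying assignment
    return False                   # all 2^n assignments falsify it

# (x or y) and (z or not x): the evaluation example from the slides
f = lambda v: (v['x'] or v['y']) and (v['z'] or not v['x'])
```

A tautology check is the mirror image: replace `if formula(...)` with `if not formula(...)` and swap the return values, still 2^n assignments in the worst case.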
35. How Do You Judge an Algorithm?
- Issues related to the analysis of algorithms:
  - How to measure the goodness of an algorithm?
  - How to measure the difficulty of a problem?
  - How do we know that an algorithm is optimal?
36. The Complexity of an Algorithm
- The space complexity of a program is the amount of memory that it needs to run to completion.
  - Fixed space requirement: does not depend on the program's inputs and outputs -- usually ignored.
  - Variable space requirement: size depends on the execution of the program (recursion, dynamically allocated variables, etc.).
- The time complexity of a program is the amount of computer time that it needs to run a computation.
37. Input (Problem) Size
- Input (problem) size and costs of operations: The size of an instance corresponds formally to the number of bits needed to represent the instance on a computer, using some precisely defined and reasonably compact coding scheme.
- Uniform cost function
- Logarithmic cost function
- Example: Compute x = n^n
                          uniform        logarithmic
  x ← 1
  for i ← 1 to n do       T(n) = Θ(n)    T(n) = Θ(n²·log n)
    x ← x·n               S(n) = Θ(1)    S(n) = Θ(n·log n)
38. Complexity of an Algorithm
- Best-case analysis: too optimistic, not really useful.
- Worst-case analysis: usually yields only a rough upper bound.
- Average-case analysis: a probability distribution of the input is assumed, and the average cost over all possible input patterns is calculated. However, it is usually more difficult than worst-case analysis and does not reflect the behavior of some specific data patterns.
- Amortized analysis: similar to average-case analysis except that no probability distribution is assumed; it is applicable to any input pattern (a worst-case result).
- Competitive analysis: used to measure the performance of an on-line algorithm w.r.t. an adversary or an optimal off-line algorithm.
39. Example: Binary Search
- Given a sorted array A[1..n] and an item x in A, what is the index of x in A?
- Usually, the best-case analysis is the easiest, the worst-case the second easiest, and the average-case analysis the hardest.
40. Another Example
- Given a stack S with two operations, push(S, x) and multipop(S, k), the costs of the two operations are 1 and min(k, |S|), respectively. What is the cost of a sequence of n operations on an initially empty stack S?
- Best case: n; 1 per operation.
- Worst case: O(n²); O(n) per operation.
- Average case: complicated and difficult to analyze.
- Amortized analysis: 2n; 2 per operation. (There are at most n push operations and hence at most n items popped off the stack.)
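The argument can be checked mechanically. This sketch (illustrative, not from the slides) executes an operation sequence, charging 1 per push and min(k, |S|) per multipop, so the total can be compared against the 2n amortized bound:

```python
def run_ops(ops):
    """Execute stack operations and return the total actual cost:
    push costs 1, multipop(k) costs min(k, |S|)."""
    stack, cost = [], 0
    for op in ops:
        if op[0] == 'push':
            stack.append(op[1])
            cost += 1
        else:                                  # ('multipop', k)
            k = min(op[1], len(stack))         # can't pop more than |S|
            del stack[len(stack) - k:]
            cost += k
    return cost

ops = [('push', 1), ('push', 2), ('push', 3), ('multipop', 5)]
total = run_ops(ops)       # 3 pushes + 3 actual pops
```

Every item popped was pushed exactly once, so `run_ops(ops) <= 2 * len(ops)` for any sequence, which is the amortized claim.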
41. The Difficulty of a Problem
- Upper bound O(f(n)): for sufficiently large inputs, the running time T(n) is bounded by a multiple of f(n).
  - Established by existing algorithms (upper bounds).
- Lower bound Ω(f(n)): for sufficiently large n, there is at least one input of size n such that the running time is at least a fraction of f(n), for any algorithm that solves the problem.
  - The inherent difficulty of the problem ⇒ a lower bound over all algorithms.
- The lower bound of one method for solving a problem is not necessarily the lower bound of the problem.
42. (No transcript)
43. Examples
- Sorting n elements into ascending order.
  - O(n²), O(n·log n), etc. -- upper bounds.
  - Ω(n), Ω(n·log n), etc. -- lower bounds.
  - The lower bound matches the upper bound.
- Multiplication of two n-by-n matrices.
  - Straightforward algorithm: O(n³).
  - Strassen's algorithm: O(n^2.81).
  - Best known sequential algorithm: O(n^2.376).
  - Best known lower bound: Ω(n²).
  - The best algorithm for this problem is still open.
44. Complexity Classes
- DSPACE(S(n)) / NSPACE(S(n)): The classes of problems that can be solved by deterministic / nondeterministic Turing machines using S(n) space.
- DTIME(T(n)) / NTIME(T(n)): The classes of problems that can be solved by deterministic / nondeterministic Turing machines using T(n) time.
- Tractable problems: problems in P.
- Intractable problems: problems not known to be in P.
- Efficient algorithms: polynomial-time algorithms.
45. Complexity Classes
- (Diagram: containment of P, NP, co-NP, NPC, PSPACE, EXP, NEXP, co-NEXP.)
46. NP-Complete Problems
- M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, 1979.
47. Complexity of Algorithms and Problems
- Notations
  Symbol   Meaning
  P        a problem
  I        a problem instance
  I_n      the set of all problem instances of size n
  A        an algorithm for P
  A_P      the set of algorithms for problem P
  Pr(I)    probability of instance I
  C_A(I)   cost of A with input I
  R_A      the set of all possible versions of a randomized algorithm A
48. Formal Definitions
49. Example: Complexity of the Sorting Problem
- Assume comparison is used to determine the order of keys.
50. Divide-and-Conquer Algorithms Revisited
- The divide-and-conquer paradigm
  - Divide the problem into a number of subproblems.
  - Conquer the subproblems (solve them).
  - Combine the subproblem solutions to get the solution to the original problem.
- Merge sort: T(n) = 2T(n/2) + Θ(n) = Θ(n lg n).
  - Divide: split the n-element sequence to be sorted into two n/2-element sequences.
  - Conquer: sort the subproblems recursively using merge sort.
  - Combine: merge the resulting two sorted n/2-element sequences.
51. Analyzing Divide-and-Conquer Algorithms
- Recurrence for a divide-and-conquer algorithm: T(n) = aT(n/b) + D(n) + C(n)
  - a: number of subproblems
  - n/b: size of each subproblem
  - D(n): time to divide the problem of size n into subproblems
  - C(n): time to combine the subproblem solutions to get the answer for the problem of size n
- Merge sort
  - a = 2: two subproblems
  - n/b = n/2: each subproblem has size ≈ n/2
  - D(n) = Θ(1): compute the midpoint of the array
  - C(n) = Θ(n): merging by scanning sorted subarrays
52. Divide-and-Conquer: Binary Search
- Binary search on a sorted array
  - Divide: Check the middle element.
  - Conquer: Search one subarray.
  - Combine: Trivial.
- Recurrence: T(n) = T(n/2) + Θ(1) = Θ(lg n).
  - a = 1: search one subarray
  - n/b = n/2: each subproblem has size ≈ n/2
  - D(n) = Θ(1): compute the midpoint of the array
  - C(n) = Θ(1): trivial
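The divide/conquer/combine steps map onto a short iterative sketch (same Θ(lg n) recurrence; the loop plays the role of the tail recursion):

```python
def binary_search(a, x):
    """Return an index of x in sorted array a, or -1 if absent.
    T(n) = T(n/2) + Θ(1) = Θ(lg n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # divide: check the middle element
        if a[mid] == x:
            return mid
        if a[mid] < x:
            lo = mid + 1            # conquer: search the right subarray
        else:
            hi = mid - 1            # conquer: search the left subarray
    return -1                       # combine is trivial: nothing to do

idx = binary_search([2, 3, 5, 6, 7, 8, 9], 7)
```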
53. Solving Recurrences
- Three general methods for solving recurrences:
  - Iteration: Convert the recurrence into a summation by expanding some terms, and then bound the summation.
  - Substitution: Guess a solution and verify it by induction.
  - Master Theorem: If the recurrence has the form T(n) = aT(n/b) + f(n), then most likely there is a formula that can be applied.
- Two simplifications that won't affect asymptotic analysis:
  - Ignore floors and ceilings.
  - Assume base cases are constant, i.e., T(n) = Θ(1) for small n.
54. Solving Recurrences: Iteration
55. Iteration by Using Recursion Trees
- Root: the computation (D(n) + C(n)) at the top level of recursion.
- Node at level i: a subproblem at level i in the recursion.
- Height of tree: depth of the recursion.
- T(n) = sum of all nodes in the tree.
- T(1) = 1 ⇒ T(n) = 4T(n/2) + n = n + 2n + 4n + … + 2^{lg n}·n = Θ(n²).
56. Solving Recurrences: Substitution (Guess & Verify)
- Guess the form of the solution.
- Apply mathematical induction to find the constants and verify the solution.
- Use it to find an upper or a lower bound.
- Example: Guess T(n) = 4T(n/2) + n = O(n³) (T(1) = 1).
- Show T(n) ≤ cn³ for some c > 0 (we must find c).
  1. Basis: T(2) = 4T(1) + 2 = 6 ≤ c·2³ (any c ≥ 1 works).
  2. Assume T(k) ≤ ck³ for k < n, and prove T(n) ≤ cn³:
     T(n) = 4T(n/2) + n
          ≤ 4(c(n/2)³) + n
          = cn³/2 + n
          = cn³ - (cn³/2 - n)
          ≤ cn³,
     where c ≥ 2 and n ≥ 1. (Pick c ≥ 2 to satisfy both Steps 1 and 2!)
- Useful tricks: subtract a lower-order term; change variables (e.g., T(n) …).
57. Pitfall in Substitution
- Example: Guess T(n) = 2T(n/2) + n = O(n) (wrong guess!)
- Show T(n) ≤ cn for some c > 0 (we must find c).
- Basis: T(2) = 2T(1) + 2 = 4 ≤ 2c (pick c = 2).
- Assume T(k) ≤ ck for k < n, and prove T(n) ≤ cn:
  T(n) = 2T(n/2) + n
       ≤ 2(cn/2) + n
       = cn + n
       = O(n)        ◁ Wrong!!
- What's wrong? We derived T(n) ≤ (c + 1)n, but the induction requires T(n) ≤ cn with the same constant c.
- How to fix? Subtracting a lower-order term may help!
58. Fixing Wrong Substitution
- Guess: T(n) = 4T(n/2) + n = O(n²) (right guess!)
- Assume T(k) ≤ ck² for k < n, and prove T(n) ≤ cn²:
  T(n) = 4T(n/2) + n
       ≤ 4c(n/2)² + n
       = cn² + n
       = O(n²)       ◁ Wrong!!
- Fix by subtracting a lower-order term.
- Assume T(k) ≤ c_1k² - c_2k for k < n, and prove T(n) ≤ c_1n² - c_2n:
  T(n) = 4T(n/2) + n
       ≤ 4(c_1(n/2)² - c_2(n/2)) + n
       = c_1n² - 2c_2n + n
       ≤ c_1n² - c_2n    (if c_2 ≥ 1)
- Pick c_1 big enough to handle the initial conditions.
59. Solving Recurrence Relations
- In general, we would prefer to have an explicit formula to compute the value of a_n rather than conducting n iterations.
- For one class of recurrence relations, we can obtain such formulas in a systematic way.
- Those are the recurrence relations that express the terms of a sequence as linear combinations of previous terms.
60. Solving Recurrence Relations
- Definition: A linear homogeneous recurrence relation of degree k with constant coefficients is a recurrence relation of the form
  a_n = c_1·a_{n-1} + c_2·a_{n-2} + … + c_k·a_{n-k},
  where c_1, c_2, …, c_k are real numbers and c_k ≠ 0.
- A sequence satisfying such a recurrence relation is uniquely determined by the recurrence relation and the k initial conditions
  a_0 = C_0, a_1 = C_1, a_2 = C_2, …, a_{k-1} = C_{k-1}.
61. Solving Recurrence Relations
- Examples
  - The recurrence relation P_n = (1.05)P_{n-1} is a linear homogeneous recurrence relation of degree one.
  - The recurrence relation f_n = f_{n-1} + f_{n-2} is a linear homogeneous recurrence relation of degree two.
  - The recurrence relation a_n = a_{n-5} is a linear homogeneous recurrence relation of degree five.
62. Solving Recurrence Relations
- Basically, when solving such recurrence relations, we try to find solutions of the form a_n = r^n, where r is a constant.
- a_n = r^n is a solution of the recurrence relation a_n = c_1·a_{n-1} + c_2·a_{n-2} + … + c_k·a_{n-k} if and only if
  r^n = c_1·r^{n-1} + c_2·r^{n-2} + … + c_k·r^{n-k}.
- Divide this equation by r^{n-k} and subtract the right-hand side from the left:
  r^k - c_1·r^{k-1} - c_2·r^{k-2} - … - c_{k-1}·r - c_k = 0.
- This is called the characteristic equation of the recurrence relation.
63. Solving Recurrence Relations
- The solutions of this equation are called the characteristic roots of the recurrence relation.
- Let us consider linear homogeneous recurrence relations of degree two.
- Theorem: Let c_1 and c_2 be real numbers. Suppose that r² - c_1·r - c_2 = 0 has two distinct roots r_1 and r_2.
- Then the sequence {a_n} is a solution of the recurrence relation a_n = c_1·a_{n-1} + c_2·a_{n-2} if and only if a_n = α_1·r_1^n + α_2·r_2^n for n = 0, 1, 2, …, where α_1 and α_2 are constants.
64. Solving Recurrence Relations
- Example: What is the solution of the recurrence relation a_n = a_{n-1} + 2a_{n-2} with a_0 = 2 and a_1 = 7?
- Solution: The characteristic equation of the recurrence relation is r² - r - 2 = 0.
- Its roots are r = 2 and r = -1.
- Hence, the sequence {a_n} is a solution to the recurrence relation if and only if a_n = α_1·2^n + α_2·(-1)^n for some constants α_1 and α_2.
65. Solving Recurrence Relations
- Given the equation a_n = α_1·2^n + α_2·(-1)^n and the initial conditions a_0 = 2 and a_1 = 7, it follows that
  a_0 = 2 = α_1 + α_2
  a_1 = 7 = α_1·2 + α_2·(-1)
- Solving these two equations gives us α_1 = 3 and α_2 = -1.
- Therefore, the solution to the recurrence relation and initial conditions is the sequence {a_n} with
  a_n = 3·2^n - (-1)^n.
66. Solving Recurrence Relations
- a_n = r^n is a solution of the linear homogeneous recurrence relation a_n = c_1·a_{n-1} + c_2·a_{n-2} + … + c_k·a_{n-k} if and only if
  r^n = c_1·r^{n-1} + c_2·r^{n-2} + … + c_k·r^{n-k}.
- Divide this equation by r^{n-k} and subtract the right-hand side from the left:
  r^k - c_1·r^{k-1} - c_2·r^{k-2} - … - c_{k-1}·r - c_k = 0.
- This is called the characteristic equation of the recurrence relation.
67. Solving Recurrence Relations
- The solutions of this equation are called the characteristic roots of the recurrence relation.
- Let us consider linear homogeneous recurrence relations of degree two.
- Theorem: Let c_1 and c_2 be real numbers. Suppose that r² - c_1·r - c_2 = 0 has two distinct roots r_1 and r_2.
- Then the sequence {a_n} is a solution of the recurrence relation a_n = c_1·a_{n-1} + c_2·a_{n-2} if and only if a_n = α_1·r_1^n + α_2·r_2^n for n = 0, 1, 2, …, where α_1 and α_2 are constants.
68. Solving Recurrence Relations
- Example: Give an explicit formula for the Fibonacci numbers.
- Solution: The Fibonacci numbers satisfy the recurrence relation f_n = f_{n-1} + f_{n-2} with initial conditions f_0 = 0 and f_1 = 1.
- The characteristic equation is r² - r - 1 = 0.
- Its roots are r_1 = (1 + √5)/2 and r_2 = (1 - √5)/2.
69. Solving Recurrence Relations
- Therefore, the Fibonacci numbers are given by
  f_n = α_1·((1 + √5)/2)^n + α_2·((1 - √5)/2)^n
  for some constants α_1 and α_2.
- We can determine values for these constants so that the sequence meets the conditions f_0 = 0 and f_1 = 1.
70. Solving Recurrence Relations
- The unique solution to this system of two equations in two variables is α_1 = 1/√5 and α_2 = -1/√5.
- So finally we obtain an explicit formula for the Fibonacci numbers:
  f_n = (1/√5)·((1 + √5)/2)^n - (1/√5)·((1 - √5)/2)^n.
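The explicit formula can be checked against the recurrence directly (a sketch; rounding compensates for floating-point error, which is safe for small n):

```python
from math import sqrt

def fib_closed(n):
    """Explicit formula: f_n = (phi^n - psi^n) / sqrt(5),
    where phi and psi are the roots of r^2 - r - 1 = 0."""
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi**n - psi**n) / sqrt(5))

def fib_iter(n):
    """Reference: iterate f_n = f_{n-1} + f_{n-2}, f_0 = 0, f_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Since |psi| < 1, the psi^n term vanishes as n grows, which is why f_n is simply the nearest integer to phi^n/√5.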
71. Solving Recurrence Relations
- But what happens if the characteristic equation has only one root?
- How can we then match our equation with the initial conditions a_0 and a_1?
- Theorem: Let c_1 and c_2 be real numbers with c_2 ≠ 0. Suppose that r² - c_1·r - c_2 = 0 has only one root r_0. A sequence {a_n} is a solution of the recurrence relation a_n = c_1·a_{n-1} + c_2·a_{n-2} if and only if a_n = α_1·r_0^n + α_2·n·r_0^n for n = 0, 1, 2, …, where α_1 and α_2 are constants.
72. Solving Recurrence Relations
- Example: What is the solution of the recurrence relation a_n = 6a_{n-1} - 9a_{n-2} with a_0 = 1 and a_1 = 6?
- Solution: The only root of r² - 6r + 9 = 0 is r_0 = 3. Hence, the solution to the recurrence relation is
  a_n = α_1·3^n + α_2·n·3^n for some constants α_1 and α_2.
- To match the initial conditions, we need
  a_0 = 1 = α_1
  a_1 = 6 = α_1·3 + α_2·3
- Solving these equations yields α_1 = 1 and α_2 = 1.
- Consequently, the overall solution is given by
  a_n = 3^n + n·3^n.
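The repeated-root solution can be sanity-checked the same way as the distinct-root example (an illustrative check, not part of the slides):

```python
def b_rec(n, memo={0: 1, 1: 6}):
    """a_n = 6*a_{n-1} - 9*a_{n-2}, with a_0 = 1 and a_1 = 6."""
    if n not in memo:
        memo[n] = 6 * b_rec(n - 1) - 9 * b_rec(n - 2)
    return memo[n]

def b_closed(n):
    """Closed form for the repeated root r_0 = 3: a_n = 3^n + n*3^n."""
    return 3**n + n * 3**n

# The closed form reproduces the recurrence on the first 15 terms
assert all(b_rec(n) == b_closed(n) for n in range(15))
```

Note the extra n·3^n term: without it, a single geometric term α·3^n could never satisfy both initial conditions, which is exactly the problem the theorem on the previous slide resolves.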
73. Multiple Roots
- If a is a root of the characteristic equation with multiplicity m, the corresponding part of the general solution is
  (α_0 + α_1·n + α_2·n² + … + α_{m-1}·n^{m-1})·a^n.
74. Getting a Particular Solution
- Mainly trial and error. However, some very good suggestions do exist.
- Sometimes your initial guess "degenerates" and gives contradictory conditions. Then try a solution of higher degree.
75. Examples
76. A Total Example
- Once the general and particular solutions have been found, they are added together to give the total solution.
- The initial conditions are then used to determine the constants of the general solution.
77. A Total Example
78. Pseudo-Nonlinear Recurrences
- Range transformations: on the values of the sequences.
- Domain transformations: on the values of the indices.
79. Example
80. A Practical Example
- Mergesort
  - Split the list in half.
  - Sort both halves.
  - Merge the two halves.
- Analysis: The recurrence relation is
  T(0) = 0
81. Example Continued
82. Example Continued
83. Example Continued
- (3) Impose the initial conditions on A·2^n + n·2^n + 1:
  (n = 0): A + 0 + 1 = U(0) = 0 ⇒ A = -1
- So U(n) = n·2^n - 2^n + 1 = T(2^n)
- Replacing n by log n gives
  T(n) = n·log n - n + 1 = Θ(n·log n)
84. Master Theorem
- Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by
  T(n) = aT(n/b) + f(n).
- Then T(n) can be bounded asymptotically as follows:
  1. If f(n) = O(n^{log_b a - ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).
  2. If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a}·lg n).
  3. If f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
85. Solving Recurrences by Using the Master Method