Title: Speedup and Compression
1. Speed-up and Compression
- Theoretical Models 1999
- Peter van Emde Boas
References available at http://turing.wins.uva.nl/peter/teaching/thmod99.html
Most papers will be made available in the library.
2. When is More Better?
- More time or space will allow you to compute more
- This is not always true
- Constant factor speed-up
- Non-constructible time/space bounds: Gap theorems
- Compression theorems for constructible bounds
3. Constant Factor Speed-up
A Turing Machine alphabet Σ is easily compressed by coding k symbols in one symbol of a larger alphabet: Σ^k → Σ' (for example Σ' = Σ^3).
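A minimal sketch of this packing (Python, my illustration, not from the slides; block size k = 3 is just an example):

# Pack k tape symbols into one symbol of the larger alphabet Sigma^k.
# Toy illustration: the tape is a Python list and '_' stands for the blank.
K = 3

def compress(tape, k=K):
    """Code k consecutive cells as a single super-symbol (a tuple)."""
    padding = (-len(tape)) % k                 # pad with blanks up to a multiple of k
    cells = tape + ["_"] * padding
    return [tuple(cells[i:i + k]) for i in range(0, len(cells), k)]

def decompress(packed):
    """Recover the original cells from the super-symbols."""
    return [c for block in packed for c in block]

tape = list("0110100")
packed = compress(tape)
print(packed)                                  # three super-symbols, the last padded with blanks
print(decompress(packed)[:len(tape)] == tape)  # True: nothing is lost

Each worktape cell now holds k original cells, so the space used drops by a factor k; the snag on the next slide is that the input itself arrives uncompressed.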
4. Constant Factor Speed-up
This yields automatic constant factor speed-up in space: Space( S(n) ) ⊆ Space( S(n)/k ).
Snags: the input is not compressed! This may require additional steps and another worktape. It shows space speed-up for the single-tape model only for ω(n) bounds. And what about Time?
5. The k for 6 solution
PREPARE - COMPUTE - UPDATE
Snags: you must compress the alphabet 6 times more densely than you expected. The input must be preprocessed, so it only works nicely for time t(n) = ω(n) (even ω(n²) in the single-tape model).
6. The direct solution
[Figure: a tape with consecutive cells 1172-1177 grouped into blocks, two of which sit in the simulator's finite control]
Encode two blocks in the finite control of the simulating Turing Machine. Externally scan the block adjacent to the block scanned internally in the finite control. Now one can always simulate k steps in 1 step and still preserve this invariant after every step.
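The fact doing the work here is a simple locality bound; a runnable check (Python, my illustration, not from the slides):

# Locality check: with block size k, k single-cell moves of the head starting
# anywhere in block b can only touch blocks b-1, b and b+1.
def blocks_touched(start, k):
    """Blocks reachable from cell `start` by at most k single-cell moves."""
    return {cell // k for cell in range(start - k, start + k + 1)}

k = 8
ok = all(blocks_touched(cell, k) <= {cell // k - 1, cell // k, cell // k + 1}
         for cell in range(k, 10 * k))       # try many starting cells
print(ok)                                    # True: three consecutive blocks always suffice

Since the two internally held blocks and the externally scanned adjacent block cover such a window, the k original steps fit inside the finite control, after which the invariant is restored with the single external step.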
7. Time speed-up
THEOREM: Time( t(n) ) ⊆ Time( t(n)/k ) for fixed k, as long as t(n)/k > (1+ε)·n.
This doesn't work for the single-tape model: there the input compression already requires time Ω(n²).
So in order that Time( t(n) ) ⊊ Time( G(t(n)) ) it is necessary that G(m) = ω(m). This is however not sufficient...
8. Borodin-Trakhtenbrot Gap Theorem
For every G(m) > m one can invent a pathological time bound u(n) such that Time( u(n) ) = Time( G(u(n)) ).
u(m) = min { k | no machine computation Mi(x) with |x| = m and i ≤ m has a runtime Ti(x) with k ≤ Ti(x) < G(k) }
This is well defined: there are at most m·c^m runtimes which have to be excluded from the interval [ k, G(k) ), so sooner or later the sequence 0, G(0), G(G(0)), ... will provide us with an example.
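A toy sketch of this well-definedness argument (Python; G and the runtime set below are invented data, not from the slides):

# Walk the sequence 0, G(0), G(G(0)), ... until an interval [k, G(k)) contains
# none of the finitely many runtimes that have to be avoided.
def gap_point(G, runtimes):
    k = 0
    while any(k <= t < G(k) for t in runtimes):
        k = G(k)                 # this interval is polluted: jump past it
    return k                     # some k with [k, G(k)) runtime-free (not necessarily the minimum)

G = lambda m: 2 * m + 2          # any G with G(m) > m will do
runtimes = {0, 1, 3, 7, 12, 100} # stand-in for the at most m*c^m runtimes Ti(x)
k = gap_point(G, runtimes)
print(k, [t for t in runtimes if k <= t < G(k)])   # 14 [] : an interval with no runtime inside

Consecutive intervals [k, G(k)) are disjoint, so each failed attempt rules out at least one runtime for good and the loop stops after finitely many jumps; u(m) then exists as the least such k.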
9. Constructible Bounds
t(n) is time constructible when some TM, on input n (in binary), can initialize a binary counter with value t(n) in time < t(n).
s(n) is space constructible when some TM, on input x of length n, can mark a block of s(n) tape cells without ever exceeding this block.
Against constructible bounds, effective diagonalization is possible.
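To see what constructibility buys, here is a toy clock (Python, my illustration, not from the slides): since t(n) can be produced within the budget itself, a simulator can initialise a counter with t(n) and tick it down next to the simulated steps, which is what the diagonalizations on the following slides rely on.

# Toy clock built from a time-constructible bound t(n). `one_step` stands in
# for one step of the simulated machine; it returns True when that machine halts.
def clocked_run(one_step, t, n):
    budget = t(n)                          # constructibility: t(n) is available in time < t(n)
    while budget > 0:
        if one_step():
            return "halted within the time bound"
        budget -= 1
    return "time bound exceeded: cut the simulation off"

steps = iter(range(1000))
print(clocked_run(lambda: next(steps) == 30, t=lambda n: n * n, n=10))   # halts at step 31 < 100
print(clocked_run(lambda: False, t=lambda n: n * n, n=10))               # runs out after 100 ticks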
10. SPACE COMPRESSION: Downward Diagonalization
If S1(n) > log(n) is space constructible and S2(n) = o(S1(n)) then Space( S2(n) ) ⊊ Space( S1(n) ).
On input i·x:
1) mark S1(|i·x|) tape cells
2) simulate Mi( i·x ) within this block; if the simulation leaves the block, accept; if the simulation cycles, accept (counting is OK since S1(n) > log(n))
3) if the simulation terminates, do the opposite: if it accepts, reject, and accept otherwise.
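The same program as a hedged Python sketch (my reconstruction, not from the slides; `bounded_sim(i, w, cells, steps)` is a hypothetical helper that runs Mi on w within the given space and step limits and reports how the run ended, and `toy_sim` below only exercises the control flow):

import math

def diagonalize(i, x, S1, bounded_sim, c=4):
    w = f"{i}#{x}"                        # input of the form i·x
    cells = S1(len(w))                    # 1) mark S1(|i·x|) tape cells
    steps = len(w) * (c ** cells)         #    more steps than configurations => it must be cycling
    verdict = bounded_sim(i, w, cells, steps)
    if verdict in ("out_of_space", "cycled"):
        return True                       # 2) leaves the block or cycles: accept
    return verdict == "reject"            # 3) terminates: do the opposite

# Toy stand-in: "machine i accepts iff i is even"; real bounded simulation is assumed elsewhere.
toy_sim = lambda i, w, cells, steps: "accept" if i % 2 == 0 else "reject"
S1 = lambda n: max(1, math.ceil(math.log2(n + 1)) ** 2)     # some bound growing faster than log(n)
print(diagonalize(2, "0101", S1, toy_sim))   # False: the opposite of M_2's accept
print(diagonalize(3, "0101", S1, toy_sim))   # True:  the opposite of M_3's reject

The step counter only needs O(S1(n)) bits, which is why S1(n) > log(n) makes the cycle detection in case 2 affordable.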
11. SPACE COMPRESSION: Downward Diagonalization
This program runs in space S1(n) by construction. The result can't be computed by any device in space S2(n): assume Mj does it; then on input j·x, for x sufficiently large, cases 1 and 2 won't occur, and therefore Mj( j·x ) accepts iff it rejects...
CONTRADICTION!
12. TIME COMPRESSION: Downward Diagonalization
A similar result for Time Compression is affected by the overhead required for maintaining the counter ticking down from T1(n) to 0. If we assume that this overhead is logarithmic, the result becomes:
If T1(n) > n is time constructible and T2(n) = o(T1(n)) then Time( T2(n) ) ⊊ Time( T1(n)·log(T1(n)) ).
13. TIME COMPRESSION: Downward Diagonalization
Improvements:
- Add an extra tape: storing the clock on it makes the overhead vanish...
- At least two tapes: divide the clock into a head and a tail, and move the head only when the tail underflows. This reduces the overhead to loglog(n); the trick extends, yielding log*(n) overhead (W. Paul).
- Use a distributed, super-redundant clock: the overhead vanishes (Fürer 1982).
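A toy model of the head/tail idea (Python, my illustration, not Paul's actual construction): the short tail absorbs almost every tick, and the long, expensive head part is touched only when the tail underflows.

# Split clock: only the tail is decremented on a normal tick; the head is
# moved only on underflow, so it is updated rarely.
class SplitClock:
    def __init__(self, total, tail_size):
        self.head, self.tail = divmod(total, tail_size)
        self.tail_size = tail_size
        self.head_updates = 0            # how often the expensive part is moved
    def tick(self):
        if self.tail == 0:
            if self.head == 0:
                return False             # the time budget is exhausted
            self.head -= 1               # rare: refill the tail from the head
            self.head_updates += 1
            self.tail = self.tail_size
        self.tail -= 1
        return True

clock = SplitClock(total=1000, tail_size=32)
ticks = sum(1 for _ in iter(clock.tick, False))
print(ticks, clock.head_updates)         # 1000 31 : a thousand ticks, only 31 head updates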
14. Compression in General
The diagonalization argument is generic; the minimal overhead determining the size of the separation gaps is fully machine dependent. It extends to the world of nondeterministic computation, but the proofs become rather complex (Seiferas et al. for the TM time measure). For the RAM world the diagonalization results are similar; constant factor speed-up is difficult.
15. Constructibility?
- Reasonable bounds turn out to be constructible
- polynomials,
- simple exponentials,
- polylog functions
- closed under sum and product
- not closed under difference!
16. Constructibility?
- Many theorems are proven assuming constructibility of bounds
- Some theorems extend to the general case, using the trick of incremental resources
- Savitch's Theorem
- Hopcroft, Paul, Valiant Theorem
- Resulting bounds are weak (terminating computations only)