Unconditional weak derandomization of weak algorithms: Explicit versions of Yao's lemma - PowerPoint PPT Presentation

1
Unconditional weak derandomization of weak algorithms: Explicit versions of Yao's lemma
  • Ronen Shaltiel, University of Haifa

2
Derandomization: The goal
  • Main open problem: Show that BPP=P. (There is evidence that this is hard [IKW, KI].)
  • More generally: convert a randomized algorithm A(x,r), where x is an n-bit input and r is an
    m-bit string of coin tosses, into a deterministic algorithm B(x).
  • We'd like to:
  • Preserve complexity: complexity(B) ≈ complexity(A) (known: BPP ⊆ EXP).
  • Preserve uniformity: the transformation A → B is explicit (known: BPP ⊆ P/poly).
3
Strong derandomization is sometimes impossible
  • Setting: Communication complexity.
  • x=(x1,x2), where x1 and x2 are split between two players.
  • There exist randomized algorithms A(x,r) (e.g. for Equality) with logarithmic communication
    complexity s.t. any deterministic algorithm B(x) requires linear communication.
  • Impossible to derandomize while preserving complexity.

4
(The easy direction of) Yao's lemma: A straightforward averaging argument
  • Given a randomized algorithm that computes a function f with success 1-ε in the worst case,
    namely:
  • Given A:{0,1}^n × {0,1}^m → {0,1} s.t.
  • ∀x: Pr_{R←U_m}[A(x,R)=f(x)] ≥ 1-ε,
  • ∃r ∈ {0,1}^m s.t. the deterministic algorithm
  • B(x)=A(x,r)
  • computes f well on average, namely:
  • Pr_{X←U_n}[B(X)=f(X)] ≥ 1-ε.
  • Useful tool in bounding randomized algorithms.
  • Can also be viewed as weak derandomization.
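The averaging argument can be checked by brute force on a toy example. Everything below (the algorithm A, the parameters n, m and the choice of "bad" coins) is a hypothetical illustration, chosen only so that every input has worst-case error exactly ε = 1/8:

```python
from itertools import product

n, m = 4, 3  # toy input length and coin length (hypothetical)

def f(x):
    return sum(x) % 2  # target function: parity of the input bits

def A(x, r):
    # toy randomized algorithm: for each input x exactly one coin
    # string is "bad" and makes A answer incorrectly
    bad = int("".join(map(str, x)), 2) % 2 ** m
    return f(x) if int("".join(map(str, r)), 2) != bad else 1 - f(x)

inputs = list(product([0, 1], repeat=n))
coins = list(product([0, 1], repeat=m))

# worst-case error over all inputs: exactly 1/8 here
eps = max(sum(A(x, r) != f(x) for r in coins) / len(coins) for x in inputs)

# the easy direction of Yao's lemma: some fixed coin string r* does
# at least as well on average over a uniform input
best = max(coins, key=lambda r: sum(A(x, r) == f(x) for x in inputs))
avg = sum(A(x, best) == f(x) for x in inputs) / len(inputs)
assert avg >= 1 - eps
```

The final assertion is exactly the conclusion of the lemma: averaging over (x, r) shows some fixed r achieves average success at least 1-ε.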

5
Yao's lemma as weak derandomization
  • Advantages:
  • Applies to any family of algorithms and any complexity measure:
  • Communication complexity.
  • Decision tree complexity.
  • Circuit complexity classes.
  • The construction B(x)=A(x,r) preserves complexity:
  • e.g. if A has low communication complexity then B has low communication complexity.
  • Drawbacks:
  • Weak derandomization is weak: the deterministic algorithm B succeeds on most, but not all,
    inputs.
  • Let's not be too picky: in some scenarios (e.g. communication complexity) strong
    derandomization is impossible.
  • The argument doesn't give an explicit way to find r and produce B(x)=A(x,r).
  • Uniformity is not preserved: even if A is uniform we only get that B(x)=A(x,r) is
    nonuniform (B is a circuit).

6
The goal: Explicit versions of Yao's lemma
  • Given a randomized algorithm that computes a function f with success 1-ε in the worst case,
    namely:
  • Given A:{0,1}^n × {0,1}^m → {0,1} s.t.
  • ∀x: Pr_{R←U_m}[A(x,R)=f(x)] ≥ 1-ε,
  • give an explicit construction of a deterministic algorithm B(x) s.t.:
  • B computes f well on average, namely
  • Pr_{X←U_n}[B(X)=f(X)] ≥ 1-ε.
  • Complexity is preserved: complexity(B) ≈ complexity(A).
  • We refer to this as explicit weak derandomization.

7
Adleman's theorem (BPP ⊆ P/poly) follows from Yao's lemma
  • Given a randomized algorithm A that computes a function f with success 1-ε in the worst case.
  • (Amplification) Amplify the success probability to 1-δ for δ=2^{-(n+1)}.
  • (Yao's lemma) ∃ a deterministic circuit B(x) such that
  • b := Pr_{X←U_n}[B(X)≠f(X)] < δ ≤ 2^{-(n+1)}; since b is a multiple of 2^{-n}, b=0
  • ⇒ B succeeds on all inputs.
  • Corollary: An explicit version of Yao's lemma for general poly-time algorithms ⇒ BPP=P.
  • Remainder of talk: Explicit versions of Yao's lemma for weak algorithms: communication games,
    decision trees, streaming algorithms, AC0 algorithms.
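The final step is a counting (union-bound) argument, which can be written out explicitly:

```latex
\Pr_{X \leftarrow U_n}[B(X) \neq f(X)] < 2^{-(n+1)}
\;\Longrightarrow\;
\#\{x : B(x) \neq f(x)\}
= 2^n \cdot \Pr_{X \leftarrow U_n}[B(X) \neq f(X)]
< 2^n \cdot 2^{-(n+1)} = \tfrac{1}{2},
```

and since the number of failing inputs is a nonnegative integer strictly below 1, it must be 0.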

8
Related work: Extracting randomness from the input
  • Idea [Goldreich and Wigderson]: Given a randomized algorithm A(x,r) with |r|=|x|, consider
    the deterministic algorithm
  • B(x)=A(x,x).
  • Intuition: If the input x is chosen at random then the coin string r=x is chosen at random.
  • Problem: Input and coins are correlated. (E.g. consider A s.t. ∀ input x, the coin string x
    is bad for x.)
  • [GW]: This does work if A has the additional property that whether or not a coin toss is good
    does not depend on the input.
  • [GW]: It turns out that there are A's with this property.

9
The role of extractors in [GW]
  • In their paper Goldreich and Wigderson actually use
  • B(x) = maj_{seeds y} A(x, E(x,y)),
  • where E(x,y) is a seeded extractor.
  • Extractors are only used for deterministic amplification (that is, to amplify the success
    probability).
  • Alternative view of the argument:
  • Set A'(x,r) = maj_{seeds y} A(x, E(r,y)).
  • Apply the construction B(x)=A'(x,x).
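The majority-over-seeds construction can be sketched generically. The arguments A, E and the seed set below are placeholders for illustration, not the actual algorithm and extractor from the [GW] paper:

```python
def maj_over_seeds(A, E, x, seeds):
    # B(x) = maj over seeds y of A(x, E(x, y)):
    # deterministic amplification by majority vote over all seeds
    votes = [A(x, E(x, y)) for y in seeds]
    return int(2 * sum(votes) >= len(votes))

# tiny smoke test with placeholder A and E: A just outputs its coin bit,
# E(x, y) returns the parity of the seed
out = maj_over_seeds(lambda x, r: r, lambda x, y: y % 2, x=0, seeds=[0, 1, 2, 3, 5])
```

Since a majority vote is taken, B is correct on x whenever A(x, E(x,y)) is correct for more than half of the seeds y.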

10
Randomness extractors
Do we have to tell that same old story again?
Daddy, how do computers get random bits?
11
Randomness extractors: Definition and two flavors
  • C is a class of distributions over n-bit strings "containing" k bits of (min-)entropy.
  • A deterministic (seedless) C-extractor is a function E such that for every X ∈ C, E(X) is
    ε-close to uniform on m bits.
  • A seeded extractor has an additional (short, i.e. O(log n)-bit) independent random seed as
    input.
  • For seeded extractors, C is the class of all X with min-entropy ≥ k.
  • A distribution X has min-entropy ≥ k if ∀x: Pr[X=x] ≤ 2^{-k}.
  • Two distributions are ε-close if the probabilities they assign to any event differ by at
    most ε.
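The two definitions in the box translate directly into code; a minimal sketch over explicit probability tables:

```python
import math

def min_entropy(dist):
    # X has min-entropy >= k iff every outcome has probability <= 2^-k;
    # equivalently, the min-entropy is -log2 of the largest probability
    return -math.log2(max(dist.values()))

def stat_distance(p, q):
    # two distributions are eps-close iff their statistical (total
    # variation) distance -- half the L1 distance -- is at most eps
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys) / 2

uniform = {s: 0.25 for s in ("00", "01", "10", "11")}
skewed = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
assert min_entropy(uniform) == 2.0
assert min_entropy(skewed) == 1.0
assert stat_distance(uniform, skewed) == 0.25
```

The maximum over events of |Pr_p[E] - Pr_q[E]| equals half the L1 distance, which is what `stat_distance` computes.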

12
Zimand: an explicit version of Yao's lemma for decision trees
  • Zimand defines and constructs a stronger variant of seeded extractors E(x,y) called
    "exposure-resilient extractors". He considers
  • B(x) = maj_{seeds y} A(x, E(x,y)).
  • Thm [Zimand07]: If A is a randomized decision tree with q queries that tosses q random coins,
    then:
  • B succeeds on most inputs (a (1-ε)-fraction).
  • B can be implemented by a deterministic decision tree with q^{O(1)} queries.
  • Zimand states his result a bit differently.

(We improve the number of queries to O(q).)
13
Our results
  • We develop a general technique to prove explicit versions of Yao's lemma (that is, weak
    derandomization results).
  • We use deterministic (seedless) extractors: that is, B(x)=A(x,E(x)) where E is a seedless
    extractor.
  • The technique applies to any class of algorithms with |r| ≤ |x|. We can sometimes handle
    |r| > |x| using PRGs.
  • More precisely: Every class of randomized algorithms defines a class C of distributions. An
    explicit construction of an extractor for C immediately gives an explicit version of Yao's
    lemma (as long as |r| ≤ |x|).

14
Explicit version of Yao's lemma for communication games
  • Thm: If A is a randomized (public-coin) communication game with communication complexity q
    that tosses m < n random coins, then set B(x)=A(x,E(x)) where E is a 2-source extractor.
  • B succeeds on most inputs:
    a (1-ε)-fraction (or even
    a (1-2^{-(m+q)})-fraction).
  • B can be implemented by a deterministic communication game with communication complexity
    O(m+q).
  • Dfn: A communication game is explicit if each party can compute its next message in poly-time
    (given its input, the history and its random coins).
  • Cor: Given an explicit randomized communication game with complexity q and m coins, there is
    an explicit deterministic communication game with complexity O(m+q) that succeeds on a
    (1-2^{-(m+q)})-fraction of the inputs.

Both complexity and uniformity are preserved
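As a concrete illustration of B(x)=A(x,E(x)): inner product mod 2 is a classical one-bit two-source extractor (for sources of min-entropy rate above ½). The communication game A below and the single-coin setting are hypothetical simplifications, not the construction from the talk:

```python
from itertools import product

def ip_extract(x1, x2):
    # <x1, x2> mod 2: a classical one-bit 2-source extractor
    return sum(a & b for a, b in zip(x1, x2)) % 2

def derandomize(A, x1, x2):
    # B(x) = A(x, E(x)): replace the single random coin by a bit
    # extracted from the two players' inputs
    return A(x1, x2, ip_extract(x1, x2))

# placeholder one-coin game that just reveals its coin
b_out = derandomize(lambda x1, x2, coin: coin, (1, 0, 1, 1), (0, 1, 1, 0))

# sanity check: over uniform input pairs the extracted bit is near-balanced
bits = [ip_extract(a, b) for a in product([0, 1], repeat=4)
        for b in product([0, 1], repeat=4)]
bias = abs(sum(bits) / len(bits) - 0.5)
```

Note that computing ip_extract on the full inputs would itself need n bits of communication; the slides below resolve this by running the extractor only on short prefixes of the inputs.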
15
Explicit weak derandomization results
Extractors                                                         Algorithms
Extractors for bit-fixing sources [KZ03, GRS04, R07]               Decision trees (improved alternative proof of Zimand's result)
2-source extractors [CG88, Bou06]                                  Communication games
We construct from 2-source extractors; inspired by [KM06, KRVZ06]  Streaming algorithms (can handle |r| > |x|)
We construct, inspired by PRGs for AC0 [N, NW]                     AC0 (constant depth) (can handle |r| > |x|)
We construct using low-end hardness assumptions                    Poly-time algorithms (can handle |r| > |x|)
16
Constant-depth algorithms
  • Consider randomized algorithms A(x,r) that are computable by uniform families of poly-size
    constant-depth circuits.
  • [NW, K]: Strong derandomization in quasi-poly time. Namely, there is a uniform family of
    quasi-poly-size circuits that succeed on all inputs.
  • Our result: Weak derandomization in poly-time. Namely, there is a uniform family of poly-size
    circuits that succeed on most inputs. (We can also preserve constant depth.)
  • High-level idea:
  • Reduce the number of random coins of A from n^c to (log n)^{O(1)} using a PRG (based on the
    hardness of the parity function [H, RS]).
  • Extract random coins from the input x using an extractor for sources recognizable by AC0
    circuits.
  • Construct such extractors using the hardness of the parity function and ideas from [NW, TV].

17
High level overview of the proof
  • To be concrete, we consider communication games.

18
Preparations
  • Thm: If A is a randomized communication game with communication complexity q that tosses m
    random coins, then set B(x)=A(x,E(x)) where E is a 2-source extractor.
  • B succeeds on most inputs: a (1-ε)-fraction.
  • B can be implemented by a deterministic communication game with communication complexity
    O(m+q).
  • Define independent random variables X, R by X←U_n, R←U_m.
  • We have that ∀x: Pr[A(x,R)=f(x)] ≥ 1-ε.
  • It follows that a := Pr[A(X,R)=f(X)] ≥ 1-ε.
  • We need to show b := Pr[A(X,E(X))=f(X)] ≥ 1-ε-(2ε+2^{-2m}).
  • The plan is to show that b ≥ a - (2ε+2^{-2m}).

19
High-level intuition

[Figure: the input space of pairs (x1,x2) partitioned into rectangles; Qr(x1,x2) marks the rectangle containing (x1,x2).]
  • For every choice of random coins r, the game A(·,r) is deterministic with complexity q.
  • It divides the set of strings x of length n into 2^q rectangles.
  • Let Qr(x) denote the rectangle of x.
  • At the end of the protocol, all inputs in a rectangle answer the same way.
  • Consider the entropy in the variable X conditioned on its rectangle (X | Qr(X)=v). This is
    independent of the answer.
  • Idea: extract the randomness from this entropy.
  • This doesn't quite make sense: the rectangle is defined only after the random coins r are
    fixed.

A rectangle is a 2-source.
20
Averaging over random coins and rectangles
  • For every choice of random coins r, the game A(·,r) is deterministic with complexity q.
  • It divides the set of strings x of length n into 2^q rectangles.
  • Let Qr(x) denote the rectangle of x.

a = Pr[A(X,R)=f(X)]
  = Σ_r Pr[A(X,R)=f(X) ∧ R=r]
  = Σ_r Σ_v Pr[A(X,R)=f(X) ∧ R=r ∧ Qr(X)=v]
  = Σ_r Σ_v Pr[A(X,r)=f(X) ∧ R=r ∧ Qr(X)=v]
  = Σ_r Σ_v Pr[Qr(X)=v] · Pr[R=r | Qr(X)=v] · Pr[A(X,r)=f(X) | R=r ∧ Qr(X)=v]

21
Averaging over random coins and rectangles
b = Pr[A(X,E(X))=f(X)]
  = Σ_r Pr[A(X,E(X))=f(X) ∧ E(X)=r]
  = Σ_r Σ_v Pr[A(X,E(X))=f(X) ∧ E(X)=r ∧ Qr(X)=v]
  = Σ_r Σ_v Pr[A(X,r)=f(X) ∧ E(X)=r ∧ Qr(X)=v]
  = Σ_r Σ_v Pr[Qr(X)=v] · Pr[E(X)=r | Qr(X)=v] · Pr[A(X,r)=f(X) | E(X)=r ∧ Qr(X)=v]

22
Proof (continued)
a = Pr[A(X,R)=f(X)]
  = Σ_r Σ_v Pr[Qr(X)=v] · Pr[R=r | Qr(X)=v] · Pr[A(X,r)=f(X) | R=r ∧ Qr(X)=v]

b = Pr[A(X,E(X))=f(X)]
  = Σ_r Σ_v Pr[Qr(X)=v] · Pr[E(X)=r | Qr(X)=v] · Pr[A(X,r)=f(X) | E(X)=r ∧ Qr(X)=v]
23
Proof (continued)
a = Pr[A(X,R)=f(X)]
  = Σ_r Σ_v Pr[Qr(X)=v] · Pr[R=r | Qr(X)=v] · Pr[A(X,r)=f(X) | R=r ∧ Qr(X)=v]

b = Pr[A(X,E(X))=f(X)]
  = Σ_r Σ_v Pr[Qr(X)=v] · Pr[E(X)=r | Qr(X)=v] · Pr[A(X,r)=f(X) | E(X)=r ∧ Qr(X)=v]

  • R is uniform and independent of X, so Pr[R=r | Qr(X)=v] = 2^{-m}.
  • E is a 2-source extractor and Qr(X)=v is a rectangle, so Pr[E(X)=r | Qr(X)=v] ≈ 2^{-m}.
  • Problem: It could be that A(·,r) does well on the rectangle but poorly on E(X)=r. Note that
    A(·,r) is constant over the rectangle. Things would be fine if f were also constant over the
    rectangle.
24
Modifying the argument
  • We have that Pr[A(X,R)=f(X)] ≥ 1-ε.
  • By Yao's lemma, ∃ a deterministic game F with complexity q s.t.
  • Pr[F(X)=f(X)] ≥ 1-ε.
  • Consider the randomized algorithm A'(x,r) which:
  • simulates A(x,r), and
  • simulates F(x).
  • Let Q'r(x) denote the rectangle of A', and note that:
  • A(·,r) is constant on the rectangle Q'r(X)=v.
  • F(x) is constant on the rectangle Q'r(X)=v.

26
Proof (replace f → F)

We now have that F is constant over the rectangle!

a' = Pr[A(X,R)=F(X)],   |a - a'| ≤ ε
   = Σ_r Σ_v Pr[Q'r(X)=v] · Pr[R=r | Q'r(X)=v] · Pr[A(X,r)=F(X) | R=r ∧ Q'r(X)=v]

b' = Pr[A(X,E(X))=F(X)],   |b - b'| ≤ ε
   = Σ_r Σ_v Pr[Q'r(X)=v] · Pr[E(X)=r | Q'r(X)=v] · Pr[A(X,r)=F(X) | E(X)=r ∧ Q'r(X)=v]

  • R is uniform and independent of X, so Pr[R=r | Q'r(X)=v] = 2^{-m}.
  • E is a 2-source extractor and Q'r(X)=v is a rectangle, so Pr[E(X)=r | Q'r(X)=v] ≈ 2^{-m}.
  • Both A(·,r) and F are constant over the rectangle, so the conditional success probabilities
    in the two sums agree; hence b' ≈ a', and since |a - a'| ≤ ε and |b - b'| ≤ ε we get
    b ≥ a - (2ε + 2^{-2m}).
27
Finishing up
  • Thm: If A is a randomized communication game with communication complexity q that tosses m
    random coins, then set B(x)=A(x,E(x)) where E is a 2-source extractor.
  • B succeeds on most inputs: a (1-ε)-fraction.
  • B can be implemented by a deterministic communication game with communication complexity
    O(m+q).
  • 2-source extractors cannot be computed by communication games.
  • However, we only need extractors for relatively large rectangles, namely 2-source extractors
    for min-entropy n-(m+q).
  • Each of the two parties can send the first 3(m+q) bits of its input. The sent strings have
    entropy rate at least ½.
  • Run an explicit 2-source extractor on these substrings.

q.e.d.
28
Generalizing the argument
  • Consider e.g. randomized decision trees A(x,r).
  • Define Qr(x) to be the leaf the decision tree A(·,r) reaches when reading x.
  • Simply repeat the argument, noting that X | Qr(X)=v is a bit-fixing source.
  • More generally, for any class of randomized algorithms we can set Qr(x)=A(x,r).
  • We can carry out the argument if we can explicitly construct extractors for distributions
    that are uniform over {x : Qr(x)=v} = {x : A(x,r)=v}.
  • Loosely speaking, we need extractors for sources recognizable by functions of the form
    A(·,r).
  • There is a generic way to construct them from a function that cannot be approximated by
    functions of the form A(·,r).
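For the decision-tree case, the classical one-bit extractor for bit-fixing sources is simply the XOR of all bits: conditioned on a leaf, the queried positions are fixed and the remaining bits are uniform, so the parity of the string is a uniform bit. A minimal check (the fixed positions and values below are arbitrary):

```python
from itertools import product

def parity_extract(x):
    # XOR of all bits: a deterministic one-bit extractor for bit-fixing
    # sources (the output is uniform as soon as one free bit remains)
    return sum(x) % 2

# a bit-fixing source on 4 bits: positions 0 and 2 fixed to 1 and 0,
# positions 1 and 3 free (uniform)
outputs = [parity_extract([1, b1, 0, b3]) for b1, b3 in product([0, 1], repeat=2)]
assert sorted(outputs) == [0, 0, 1, 1]  # exactly balanced: the output bit is uniform
```

Extracting many bits from bit-fixing sources requires the stronger constructions cited in the table ([KZ03, GRS04, R07]); parity only illustrates the one-bit case.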

29
Conclusions and open problems
  • Loosely speaking: whenever we have a function that is hard on average against a nonuniform
    version of a computational model, we get an explicit version of Yao's lemma (that is,
    explicit weak derandomization) for that model.
  • We can handle AC0 using the hardness of parity.
  • This gives a conditional weak derandomization for general poly-time algorithms. The
    assumption is incomparable to [NW, GW].
  • Open problems:
  • Other ways to handle |r| > |x|.
  • Distributions that aren't uniform.

30
That's it