HARDNESS OF APPROXIMATIONS - PowerPoint PPT Presentation
1
HARDNESS OF APPROXIMATIONS
2
Gap Introducing Reduction
  • For simplicity we assume that we are always
    reducing from SAT (or any other NP-hard problem).
  • Let Π be a minimization problem.
  • A gap-introducing reduction from SAT to Π comes
    with two parameters, f and α.
  • f is a function of the instance.
  • α is a function of the size of the instance.

Given an instance F of SAT, it outputs, in
polynomial time, an instance x of Π such that
  • if F is satisfiable, OPT(x) ≤ f(x), and
  • if F is not satisfiable, OPT(x) > α(|x|)·f(x).

The gap, α(|x|), represents the hardness factor
established by the gap-introducing reduction for
the NP-hard optimization problem Π.
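The decision logic hidden in this definition can be sketched in code: a gap-introducing reduction combined with a factor-α approximation algorithm would decide SAT. Everything below is a hypothetical stand-in (the reduction, the approximation algorithm, and the toy "problem" are all assumptions for illustration), a minimal Python sketch:

```python
def decide_sat(F, reduce_gap, approx, f, alpha):
    # Hypothetical ingredients: reduce_gap is a gap-introducing
    # reduction, approx a factor-alpha approximation algorithm for
    # the minimization problem Pi.
    x = reduce_gap(F)
    val = approx(x)          # OPT(x) <= val <= alpha * OPT(x)
    # F satisfiable   => OPT(x) <= f(x)         => val <= alpha * f(x)
    # F unsatisfiable => OPT(x) >  alpha * f(x) => val >  alpha * f(x)
    return val <= alpha * f(x)

# Toy stand-in: "formulas" are booleans; OPT is 1 when satisfiable,
# 3 (> alpha * 1 with alpha = 2) when not; approx returns 1.5 * OPT,
# a valid factor-2 approximation.
alpha = 2
opt = lambda sat_flag: 1 if sat_flag else 3
print(decide_sat(True,  opt, lambda x: 1.5 * x, lambda x: 1, alpha))   # True
print(decide_sat(False, opt, lambda x: 1.5 * x, lambda x: 1, alpha))   # False
```

Since the approximate value lands below α·f(x) exactly when F is satisfiable, such a pair of algorithms would decide SAT in polynomial time.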
3
Gap Preserving Reduction
Once we have obtained a gap-introducing reduction
from SAT to an optimization problem, say Π1, a
hardness result for another optimization problem,
say Π2, can be proved by a gap-preserving
reduction from Π1 to Π2.
  • We assume
  • Π1 is a minimization problem, and
  • Π2 is a maximization problem.

A gap-preserving reduction G from Π1 to Π2 comes
with 4 parameters (functions), f1, α, f2, and
β. Given an instance x of Π1, it computes in
polynomial time an instance y of Π2 such that
  • if OPT(x) ≤ f1(x), then OPT(y) ≥ f2(y), and
  • if OPT(x) > α(|x|)·f1(x), then OPT(y) < β(|y|)·f2(y).

(The inequalities point in opposite directions since Π1
is a minimization problem and Π2 is a maximization
problem.)

The composed reduction shows that there is no β(|y|)
factor approximation algorithm for Π2, assuming P ≠ NP.
4
Remarks on reductions
  • The gap β can, in general, be bigger or smaller
    than α. In this sense, "gap-preserving" is a
    slight misnomer.
  • An approximation algorithm for Π2 together with a
    gap-preserving reduction G from Π1 to Π2 does not
    necessarily yield an approximation algorithm for
    Π1.
  • A gap-preserving reduction G together with an
    appropriate gap-introducing reduction from SAT to
    Π1 does suffice for proving a hardness of
    approximation result for Π2.

5
Probabilistically Checkable Proof Systems
A verifier is a polynomial-time probabilistic
Turing machine containing
  • An input tape.
  • A work tape.
  • A tape that contains a random string.
  • A tape that contains the proof string, denoted π.
    The proof should be thought of as an array of
    bits out of which the verifier will examine a
    few.

6
Definitions on PCP I
  • A verifier is (r(n), q(n))-restricted if on
    each input of size n it
  • uses at most O(r(n)) random bits for its
    computation, and
  • queries at most O(q(n)) bits of the proof.
  • In other words, an (r(n), q(n))-restricted
    verifier has two associated integer constants c and k:
  • the random string has length c·r(n), and
  • the verifier reads the random string R, computes a
    sequence of k·q(n) locations, and queries these
    locations in π.

7
Definitions on PCP II
  • A verifier M probabilistically checks membership
    proofs for language L if
  • for every input x in L, there is a proof πx that
    causes M to accept for every random string, i.e.,
    with probability 1:
    Pr[M accepts (x, πx)] = 1;
  • for every input x not in L, every proof π is rejected
    with probability at least 1/2:
    Pr[M rejects (x, π)] ≥ 1/2.

The probability of accepting in the case x ∉ L
is called the error probability.
Observation: The choice of probability 1/2 in the
second part is arbitrary. By repeating the
verifier's program O(1) times, and rejecting if
the verifier rejects even once, the error probability
can be reduced to any arbitrary positive constant.
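The repetition count behind this observation is easy to make concrete: t independent runs, rejecting if any run rejects, push the error to (1/2)^t. A small sketch (the function name is mine):

```python
import math

def repetitions_for_error(target):
    """Independent runs each erring with probability 1/2, reject if any
    run rejects: error after t runs is (1/2)**t, so we need the least t
    with (1/2)**t <= target."""
    return math.ceil(math.log2(1 / target))

print(repetitions_for_error(0.01))   # 7, since 2**-7 = 1/128 <= 0.01
```

So any constant error target costs only a constant number of repetitions, which is why the 1/2 in the definition is not essential.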
8
A language L is in PCP(r(n), q(n)), written
L ∈ PCP(r(n), q(n)), if there is an
(r(n), q(n))-restricted verifier M that probabilistically
checks membership proofs for L.
Note that NP = PCP(0, poly(n)),
since NP is the set of languages for which
membership proofs can be checked in deterministic
polynomial time.
9
PCP Theorem
Gives an alternative characterization of NP in
terms of PCP.
Theorem (PCP theorem): NP = PCP(log n, 1).
The proof is divided into two parts:
  • PCP(log n, 1) ⊆ NP (easy to prove), and
  • NP ⊆ PCP(log n, 1) (difficult to prove).

In terms of the 3SAT problem, the interesting and
difficult part of the PCP theorem is decreasing
the error probability to < 1/2 (i.e., maximizing
the acceptance probability of M), even though the
verifier M is allowed to read only a constant
number of bits.
Use of the PCP Theorem
The PCP theorem directly gives an optimization
problem, in particular a maximization problem, for
which there is no factor 1/2 approximation
algorithm, assuming P ≠ NP.
10
Maximizing Accept Probability
Maximization Problem: Let M be a PCP(log n, 1)
verifier for SAT. On input F, a SAT formula, find
a proof that maximizes the probability of
acceptance of M.
Theorem: Assuming P ≠ NP, there is no factor
1/2 approximation algorithm for the above problem.
Proof: Suppose there is a factor 1/2 approximation
algorithm. If F is satisfiable, then this
algorithm must provide a proof on which M's
acceptance probability is ≥ 1/2; if F is not
satisfiable, M's acceptance probability is < 1/2
on every proof.
But the acceptance probability can be computed in
polynomial time, by simply simulating M for all
random strings of length O(log n). Thus SAT
can be decided in polynomial
time, contradicting the assumption that P ≠ NP.
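The key step, computing the acceptance probability exactly by enumerating all 2^{O(log n)} = poly(n) random strings, can be sketched directly. The toy verifier below is an assumption for illustration, not the PCP verifier itself:

```python
from itertools import product

def acceptance_probability(verifier, proof, r_bits):
    # Enumerate all 2**r_bits random strings; polynomially many
    # when r_bits = O(log n).
    strings = list(product((0, 1), repeat=r_bits))
    accepted = sum(1 for r in strings if verifier(proof, r))
    return accepted / len(strings)

# Toy verifier (hypothetical): interpret the random string as an
# index and accept iff that proof bit is 1.
def toy_verifier(proof, r):
    idx = int("".join(map(str, r)), 2)
    return proof[idx] == 1

print(acceptance_probability(toy_verifier, [1, 0, 1, 1], 2))  # 0.75
```

Because the count of random strings is polynomial, this exact computation runs in polynomial time, which is exactly what the contradiction in the proof exploits.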
11
Reductions Using PCP-theorem
12
Inapproximability Results
Recent inapproximability results divide into four
broad classes based on the approximation ratio
that is provably hard to achieve
13
Hardness of MAX-3SAT
MAX-3SAT is the restriction of MAX-SAT to
instances in which each clause has at most 3
literals. In the MAX-3SAT optimization problem,
feasible solutions are truth assignments and the
objective function is the number (or fraction) of
satisfied clauses. MAX-3SAT plays a similar role
in hardness of approximation as 3SAT plays in the
theory of NP-hardness.
We will prove the next theorem.
Theorem: There is a constant εM > 0 for which
there is a gap-introducing reduction from SAT to
MAX-3SAT that transforms a boolean formula F to ψ
such that
  • if F is satisfiable, OPT(ψ) = m, and
  • if F is not satisfiable, OPT(ψ) < (1 − εM)m,

where m denotes the number of clauses in ψ.
14
Strategy
Proof will be accomplished in two stages.
Definition of MAX k-function Problem
  • Given
  • n boolean variables x1, x2, …, xn, and
  • m functions f1, f2, …, fm, each of which is a
    function of k of the boolean variables.
  • Find
  • A truth assignment to x1, …, xn that maximizes the
    number of functions satisfied. Here k is assumed
    to be a fixed constant.

15
From SAT to MAX k-FUNCTION SAT
Lemma: There is a constant k for which there is a
gap-introducing reduction from SAT to MAX
k-FUNCTION SAT that transforms a boolean
formula F to an instance I of MAX k-FUNCTION SAT
such that
  • if F is satisfiable, OPT(I) = m, and
  • if F is not satisfiable, OPT(I) < (1/2)m,

where m is the number of functions in I.
Proof
  • F: instance of SAT of length n.
  • M: PCP(log n, 1) verifier for SAT, with associated
    constants c and q. Corresponding to each random
    string r of length c·log n, M reads q bits of
    the proof. Thus M reads a total of at most
    q·n^c bits of the proof.
  • B: the set of boolean variables corresponding
    to each of these bits.
  • fr: a boolean function of q variables from B
    corresponding to each random string r; fr is
    satisfied by exactly those settings of the queried
    bits that cause M to accept. There is a
    polynomial-time algorithm which, given input F,
    outputs the m = n^c functions fr.

If F is satisfiable, there is a proof π that
makes M accept with probability 1. The
corresponding truth assignment to B satisfies all
n^c functions fr.
If F is not satisfiable, then on every proof π, M
accepts with probability < 1/2. Thus every truth
assignment satisfies fewer than (1/2)·n^c of the
functions fr.
16
Proof of Hardness of MAX-3SAT
We show how to obtain a 3SAT formula from the n^c
functions.
  • ψ: boolean formula defined as the conjunction of
    the formulas ψr.
  • ψr: the boolean function fr written as a CNF
    formula containing at most 2^q clauses, each
    clause containing at most q literals.
  • ψ': a 3SAT formula, obtained by using the
    standard trick of introducing new variables in
    every clause of ψ containing more than 3
    literals. The resulting formula contains at
    most q·2^q·n^c clauses.

If a truth assignment satisfies the function fr, then
it must satisfy all clauses of ψr. On the other
hand, if it does not satisfy fr, then it must
leave at least one clause of ψr
unsatisfied. Thus if F is not satisfiable, any
truth assignment must leave > (1/2)·n^c clauses of ψ
unsatisfied.
If F is satisfiable, then there is a truth
assignment satisfying all clauses of ψ'. If F is not
satisfiable, > (1/2)·n^c clauses of ψ' remain
unsatisfied under any truth assignment; since ψ' has
at most q·2^q·n^c clauses, taking εM = 1/(2·q·2^q)
proves the theorem.
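The "standard trick" of splitting a long clause into 3-literal clauses with fresh variables can be sketched as follows (function name and DIMACS-style literal encoding are my choices):

```python
def split_clause(clause, next_var):
    """Standard trick: a clause (l1 v ... v lq) with q > 3 becomes
    q - 2 three-literal clauses using fresh variables. Literals are
    DIMACS-style ints (-v means "not v"); next_var is the first
    unused variable index. The result is satisfiable exactly when
    the original clause is."""
    if len(clause) <= 3:
        return [list(clause)], next_var
    lits = list(clause)
    y = next_var
    out = [[lits[0], lits[1], y]]          # (l1 v l2 v y1)
    for l in lits[2:-2]:                   # (~yi v l v y(i+1)) chain
        out.append([-y, l, y + 1])
        y += 1
    out.append([-y, lits[-2], lits[-1]])   # (~y v l(q-1) v lq)
    return out, y + 1

clauses, nv = split_clause([1, 2, 3, 4, 5], next_var=6)
print(clauses)   # [[1, 2, 6], [-6, 3, 7], [-7, 4, 5]]
```

A q-literal clause thus turns into q − 2 clauses, which is where the q·2^q blowup bound in the slide comes from.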
17
MAX-3SAT with bounded occurrences of variables
Useful notions and some notation
  • Expander graph G(V, E):
  • every vertex has the same degree, and
  • for any nonempty subset S ⊂ V,
    |E(S, S̄)| > min(|S|, |S̄|),
    where E(S, S̄) is the set of edges in the cut (S, S̄).
  • Gx: a degree-14 expander graph on k vertices.
    Label the vertices with distinct boolean
    variables x1, x2, …, xk.
  • ψx: a CNF formula, defined below.
  • B: the set of boolean variables occurring in F.
  • Consistent truth assignments to x1, …, xk: all the
    variables are set to true or all are set to false.

The purpose of the expander graph is to ensure
that in any optimal truth assignment, a given set
of boolean variables must have a consistent
assignment, i.e., all true or all false.
Corresponding to each edge (xi, xj) of Gx, ψx
includes the clauses (xi ∨ ¬xj) and (¬xi ∨ xj);
both are satisfied exactly when xi and xj receive
the same value.
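The two clauses per expander edge, and the fact that an inconsistent assignment pays one unsatisfied clause per cut edge, can be sketched in a few lines (helper names are mine):

```python
def consistency_clauses(edges):
    """For each expander edge (xi, xj): (xi v ~xj) and (~xi v xj),
    both satisfied exactly when xi and xj get the same value."""
    return [c for i, j in edges for c in ([i, -j], [-i, j])]

def unsatisfied(clauses, assignment):
    # assignment maps variable -> bool; a clause is a list of
    # DIMACS-style literals.
    return sum(1 for c in clauses
               if not any(assignment[abs(l)] == (l > 0) for l in c))

# Triangle x1-x2-x3: an inconsistent assignment violates exactly one
# clause per cut edge.
psi = consistency_clauses([(1, 2), (2, 3), (1, 3)])
print(unsatisfied(psi, {1: True, 2: True, 3: True}))    # 0
print(unsatisfied(psi, {1: True, 2: True, 3: False}))   # 2 (cut edges (2,3),(1,3))
```

Since the cut of an expander is large, this ties the cost of inconsistency to the size of the flipped set, which the claim on the next slide exploits.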

18
Description of reduction
  • W.l.o.g. it is assumed that every variable occurs
    in F a sufficiently large number of times. If not,
    we can replicate clauses.
  • For each variable x in B which occurs k
    times in F:
  • Vx = {x1, …, xk} is a set of k completely new
    variables.
  • Let Gx be a degree-14 expander on k vertices.
    Label its vertices with the variables from Vx and
    obtain the formula ψx.
  • Replace each occurrence of the variable x in F by
    a distinct variable from Vx.
  • Each variable of Vx then occurs exactly 29 times:
    28 times in clauses of ψx and once in a clause of F.

After the end of the above loop, every
occurrence of a variable in F is replaced by a
distinct variable from the set of new variables.
ψ denotes the new formula: the replaced F together
with the formulas ψx.
  • Type I clauses: the clauses of the replaced F.
  • Type II clauses: the remaining clauses of ψ
    (those of the formulas ψx).

19
Proof (continued)
An inconsistent truth assignment partitions the
vertices of Gx into two sets, S and S̄.
Corresponding to each edge in the cut (S, S̄), ψx
has an unsatisfied clause (see example).
Since, by the expansion property,
|E(S, S̄)| > min(|S|, |S̄|),
the number of unsatisfied clauses in ψx is at
least |S| + 1, assuming w.l.o.g. that S is the
smaller of the two sets.
Claim: An optimal truth assignment for ψ must
satisfy all type II clauses, and therefore must
be consistent for each set Vx.
Proof: By contradiction.
Let t be an optimal assignment for ψ that is not
consistent for some Vx, with x in B. Thus t
partitions the vertices of Gx into two disjoint
sets, S and S̄, with |S| ≤ |S̄|. Flip the truth
assignment to the variables in
S, keeping the rest of the assignment the same as
t. As a result, some type I clauses that were
satisfied under t may now be unsatisfied. Each of
these must contain a variable of S, so their
number is at most |S|. On the other hand, we get
at least |S| + 1 newly satisfied clauses
corresponding to the edges in the
cut. Consequently the flipped assignment
satisfies more clauses than t. (Contradiction.)
20
Gap analysis
  • m: number of clauses in F.
  • m': number of clauses in ψ.
  • 3m: the total number of occurrences of all
    variables in F is at most 3m.

Each occurrence participates in 28 type II
two-literal clauses; since each such clause is
shared by two occurrences, this gives a total of at
most 42m type II clauses. In addition, ψ has m type
I clauses.
Therefore m' ≤ m + 42m = 43m.
If F is satisfiable, then by construction ψ is
satisfiable, i.e., OPT(ψ) = m'. If F is
unsatisfiable, then more than εM·m clauses of F
remain unsatisfied under any truth assignment,
hence
OPT(ψ) < m' − εM·m ≤ (1 − εM/43)·m'.
Thus the reduction introduces a gap of εM/43.
21
Hardness of Vertex Cover
  • Input
  • An undirected graph G(V, E).
  • A cost function on vertices.
  • Find
  • A minimum cost vertex cover, i.e., a subset
    V' ⊆ V such that every edge has at least one
    endpoint in V'.

Cardinality vertex cover is the special case in
which all vertices are of unit cost. VC(d):
restriction of the cardinality vertex cover
problem to instances in which each vertex has
degree at most d.
Theorem: There is a gap-preserving reduction from
MAX-3SAT(29) to VC(30) that transforms a boolean
formula F to a graph G(V, E) such that
  • if OPT(F) = m, then OPT(G) ≤ (2/3)·|V|, and
  • if OPT(F) < (1 − ε)m, then
    OPT(G) > (2/3)(1 + ε/2)·|V|,

where m is the number of clauses in F and ε is the
constant from the MAX-3SAT(29) hardness result.

22
Description of reduction
  • W.l.o.g. it is assumed that each clause of F has
    exactly three literals.
  • Corresponding to each clause of F, graph G has 3
    vertices.
  • Each of these vertices is labeled with one
    literal of the clause. Thus |V| = 3m.
  • Graph G has two types of edges:
  • for each clause of F, G has 3 edges connecting
    its vertices, and
  • for each u, v in V, if the literals labeling u
    and v are negations of each other, then (u, v) is
    an edge of G.

By construction each vertex of G has two edges of
the first type and at most 28 edges of the second
type. Hence G has degree at most 28 + 2 = 30.
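The two edge types can be built directly from a clause list. A minimal sketch (vertex encoding as (clause index, literal) pairs is my choice):

```python
def build_graph(clauses):
    """One vertex per literal occurrence, encoded (clause_index, lit)
    with DIMACS-style int literals; triangle edges within each clause,
    plus edges between occurrences of complementary literals."""
    V = [(ci, lit) for ci, cl in enumerate(clauses) for lit in cl]
    E = set()
    for ci, cl in enumerate(clauses):
        for a in range(len(cl)):
            for b in range(a + 1, len(cl)):
                E.add(((ci, cl[a]), (ci, cl[b])))     # clause triangle
    for u in V:
        for v in V:
            if u < v and u[1] == -v[1]:
                E.add((u, v))                         # x vs. not-x
    return V, E

V, E = build_graph([[1, 2, 3], [-1, 2, 4]])
print(len(V), len(E))   # 6 7: two triangles plus one edge for 1 / -1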
23
Proof (continued)
Claim: The size of a maximum independent set in G
is precisely OPT(F).
Proof
  • Consider an optimal truth assignment for F.
  • Pick one vertex, corresponding to a satisfied
    literal, from each satisfied clause. Clearly the
    picked vertices form an independent set.

Conversely
  • Consider an independent set I in G.
  • Set the literals corresponding to its vertices to
    true. Any extension of this truth setting to all
    variables must satisfy at least |I| clauses of F.

Gap Analysis: Note that the complement of a
maximum independent set in G is a minimum vertex
cover, so OPT(G) = |V| − OPT(F) = 3m − OPT(F).

24
Hardness of Steiner Tree
  • Input
  • An undirected graph G(V, E).
  • Nonnegative edge costs.
  • The vertices are partitioned into two sets,
    required and Steiner.
  • Find
  • A minimum cost tree in G that contains all the
    required vertices and any subset of the Steiner
    vertices.

Theorem: There is a gap-preserving reduction from
VC(30) to the Steiner tree problem. It transforms
an instance G(V, E) of VC(30) to an instance
H(R, S, cost) of Steiner tree, where R and S are
the required and Steiner vertices of H, and cost
is a metric on R ∪ S. It satisfies: G has a vertex
cover of size c iff H has a Steiner tree of cost
|R| + c − 1.

25
Description of reduction (Construction of graph H)
  • Construct a graph H(R, S, cost) such that G(V, E)
    has a vertex cover of size c iff H has a Steiner
    tree of cost |R| + c − 1.
  • H has a required vertex re corresponding to
    each edge e in E.
  • H has a Steiner vertex su corresponding to
    each vertex u in V.
  • Edge costs in H: cost(re, su) = 1 if edge e is
    incident at vertex u; all other pairs of vertices
    are connected by edges of cost 2.
26
Proof
Claim: G(V, E) has a vertex cover of size c iff H
has a Steiner tree of cost |R| + c − 1.
Proof
(⇒)
  • Let G have a vertex cover of size c.
  • Let Sc be the set of Steiner vertices in H
    corresponding to the c vertices in the cover.

There is a Steiner tree in H spanning R ∪ Sc
using cost-1 edges only, since every edge e in E
must be incident at a vertex in the cover.
The cost of this Steiner tree is |R| + c − 1
(a tree on |R| + c vertices has |R| + c − 1 edges).
(⇐)
Let T be a Steiner tree in H of cost |R| + c − 1.
  • We will show that
  • T can be transformed into a Steiner tree of the
    same cost that uses edges of cost 1 only. If so,
    the latter must contain exactly c Steiner
    vertices.
  • Every required vertex of H must have a unit cost
    edge to one of these Steiner vertices.
    (Therefore, the corresponding c vertices of G
    form a cover.)

27
Proof (continued)
Let (u, v) be an edge of cost 2 in T. (W.l.o.g.
vertices u and v are both required.)
  • Let eu be the edge in G corresponding to the
    required vertex u in T.
  • Let ev be the edge in G corresponding to the
    required vertex v in T.

Since G is connected there is a path, p, from one
of the endpoints of eu to one of the endpoints of
ev in G.
Removing edge (u, v) from T gives two connected
components.
  • Let R1 be the set of required vertices in the
    first connected component.
  • Let R2 be the set of required vertices in the
    second connected component.

Vertices u, v lie in different sets, so path p in
G must have two adjacent edges, say (a, b) and
(b, c), such that their corresponding required
vertices in H, say w and w', lie in R1 and R2
respectively.
28
Proof (continued)
Let sb be the Steiner vertex in H corresponding
to b. Now adding the edges (w, sb) and (w', sb)
reconnects the two components.
These two edges are of unit cost, so the total cost
does not increase. Repeating this for every cost-2
edge yields the desired Steiner tree.
Gap Analysis
Since OPT(H) = |R| + OPT(G) − 1, any gap in OPT(G)
translates into a (smaller, but constant) gap in
OPT(H).
29
30
HARDNESS OF CLIQUE PROBLEM
31
A First Approach On Hardness of Clique Problem
  • Input
  • An undirected graph G(V, E).
  • Nonnegative weights on vertices. In the
    cardinality version all weights are equal to 1.
  • Find
  • A clique of maximum weight. A clique in G is a
    subset of vertices S ⊆ V such that for each
    pair u, v in S, (u, v) is in E. Its weight is the
    sum of the weights of its vertices.

Theorem: For fixed constants b and q, there is a
gap-introducing reduction from SAT to Clique that
transforms a boolean formula F of size n to a graph
G(V, E), where |V| = 2^q·n^b, such that
  • if F is satisfiable, OPT(G) ≥ n^b, and
  • if F is not satisfiable, OPT(G) < n^b / 2.

32
Preparation for the proof
  • F: a PCP(log n, 1) verifier for SAT that requires
    b·log n random bits and queries q bits of the
    proof.
  • r: a binary string of b·log n bits.
  • t: a truth assignment to q boolean variables.
  • Q(r): the q positions in the proof that F queries
    when it is given r as the random string.
  • p(r): the truth setting assigned by a proof p to
    the positions Q(r).
  • u(r, t): a vertex in G for each choice of random
    string r of b·log n bits and each truth
    assignment t to q boolean variables. Vertex
    u(r, t) is accepting if F accepts when its random
    string is r and the bits in Q(r) are set
    according to t.

33
Proof
  • Vertices u(r1, t1) and u(r2, t2) are consistent
    if t1 and t2 agree at each position at which
    Q(r1) and Q(r2) overlap.
  • Two distinct vertices u(r1, t1) and u(r2, t2) are
    connected by an edge in G iff they are consistent
    and they are both accepting.
  • If F is satisfiable, there is a proof, p, on
    which the verifier accepts for each choice r of
    the random string. There are 2^(b·log n) = n^b
    possible random strings. The vertices u(r, p(r)),
    one per random string r, are pairwise consistent
    and all accepting, so they
    form a clique of size n^b.
  • If F is not satisfiable, suppose that C is a
    clique in G.

Since the vertices of C are pairwise consistent,
there is a proof p such that the Q(r) positions
of p contain the truth assignment t for each
vertex u(r, t) of the clique. On this proof the
verifier accepts on at least |C| of the n^b random
strings, so the probability of acceptance is
at least |C| / n^b. Since this probability is
< 1/2, we get |C| < n^b / 2.
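This vertex-and-consistency construction (the FGLSS graph) can be sketched directly; the toy query pattern and acceptance predicate below are assumptions for illustration:

```python
from itertools import product

def fglss_graph(queries, accepts, q):
    """queries[r]: the q proof positions Q(r); accepts(r, t): does the
    verifier accept answers t on random string r. Vertices are the
    accepting pairs (r, t); edges join consistent pairs (answers agree
    wherever Q(r1) and Q(r2) overlap)."""
    V = [(r, t) for r in range(len(queries))
         for t in product((0, 1), repeat=q) if accepts(r, t)]
    E = []
    for i in range(len(V)):
        for j in range(i + 1, len(V)):
            (r1, t1), (r2, t2) = V[i], V[j]
            consistent = all(
                t1[a] == t2[b]
                for a, pa in enumerate(queries[r1])
                for b, pb in enumerate(queries[r2]) if pa == pb)
            if consistent:
                E.append((V[i], V[j]))
    return V, E

# Toy instance: 2 random strings querying overlapping positions.
Q = [(0, 1), (1, 2)]
acc = lambda r, t: t == (1, 1) or (r == 1 and t == (1, 0))
V, E = fglss_graph(Q, acc, q=2)
print(len(V), len(E))   # 3 2
```

Note that two vertices sharing the same r but with different t are automatically inconsistent (they disagree somewhere on Q(r)), so a clique picks at most one answer per random string, which is exactly what makes a clique correspond to a proof.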
34
Generalization of definition of PCP
In the previous proof the hardness factor
established is precisely the bound on the error
probability of the PCP verifier for SAT.
Question: Why the need for generalizing the
definition of PCP?
Answer: The error probability needs to be made
inverse polynomial.
Definition of PCP_{c,s}(r(n), q(n))
  • We introduce two parameters:
  • parameter c stands for completeness, and
  • parameter s stands for soundness.

Definition: L ∈ PCP_{c,s}(r(n), q(n)) if
there is a verifier V which, on input x of length
n, obtains a random string of length O(r(n)),
queries O(q(n)) bits of the proof, and satisfies:
  • if x is in L, there is a proof y that makes V
    accept with probability ≥ c;
  • if x is not in L, for every proof y, V accepts
    with probability < s.

According to the previous definition,
PCP(r(n), q(n)) = PCP_{1,1/2}(r(n), q(n)).
35
How to reduce parameter s
Two ways:
  • Obvious way: simulate a PCP(log n, 1) verifier
    k times and accept iff the
    verifier accepts each time.
  • Clever way: use a constant degree expander graph
    to generate O(log n) strings of b·log n bits
    each, using only O(log n) truly random bits.

Simulating k times will reduce the soundness to
1/2^k. However this will increase the number of
random bits needed to O(k·log n) and the number of
query bits to O(k).
The verifier will be simulated using these
O(log n) strings as the random strings. Clearly
these are not truly random strings, but properties
of expanders show that they are almost random: the
probability of error still drops exponentially in
the number of times the verifier is simulated.
36
A useful theorem
Let H be a graph with the following properties:
  • a constant degree expander on n^b vertices, and
  • each vertex has a unique b·log n bit label.

Theorem: Let S be any set of vertices of H with
|S| < n^b / 2. There is a constant k such that the
probability that a random walk of length k·log n
stays entirely inside S is < 1/n.
Question 1: Why do we introduce graph H?
Answer: We will use it to generate O(log n)
strings of b·log n bits, using only O(log n) truly
random bits. The verifier will be simulated
using these O(log n) strings as random strings.
Question 2: How do we construct a random walk on
graph H of length O(log n)?
  • Answer: We use only O(log n) random bits:
  • b·log n bits to pick the starting vertex, and
  • a constant number of bits to pick each successive
    vertex.
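The bit count behind the answer to Question 2 is easy to check. A small sketch (function name and parameters are mine) counting the truly random bits consumed by such a walk:

```python
import math

def walk_random_bits(n, b, degree, k):
    """Truly random bits for a k*log(n)-step walk on a degree-`degree`
    expander with n**b labeled vertices: b*log n bits pick the start,
    ceil(log2(degree)) bits pick each successive neighbor. O(log n)
    total, versus O(k * b * log(n)**2) bits for independent strings."""
    log_n = math.ceil(math.log2(n))
    return b * log_n + (k * log_n) * math.ceil(math.log2(degree))

print(walk_random_bits(n=16, b=2, degree=8, k=3))   # 8 + 12*3 = 44
```

Since degree is a constant, the per-step cost is constant, and the whole walk costs O(log n) random bits while producing k·log n + 1 pseudo-random strings of b·log n bits each.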

37
Theorem: PCP(log n, 1) ⊆ PCP_{1, 1/n}(log n, log n).
Proof
Let L ∈ PCP(log n, 1), which means
there is a verifier F for L which requires b·log n
random bits and queries q bits of the proof,
where b and q are constants.
We give a verifier F'
for language L which
  • constructs an expander graph H as above,
  • constructs a random walk of length k·log n using
    only O(log n) random bits (by construction of H,
    the label of each vertex on this path specifies a
    b·log n bit string), and
  • uses these k·log n + 1 strings as the
    random strings on which it simulates verifier F.

38
Proof
  • Consider x in L. Let p be a proof that makes
    verifier F accept with probability 1.
    Given proof p, the new verifier F' also accepts x
    with probability 1. Hence the completeness
    parameter is 1 (c = 1).
  • Consider x not in L. Let p be an arbitrary
    proof supplied to F'.
    Given proof p, verifier F accepts on fewer than
    half of the random strings of length b·log n. Let
    S denote the corresponding set of vertices of H,
    so |S| < n^b / 2. Since F' accepts x iff F
    accepts x on all k·log n + 1 random strings, F'
    accepts x only if the random walk remains
    entirely in S. But the probability of this event
    is < 1/n. Thus the soundness of F' is 1/n.

Observe that F' requires only O(log n) random
bits and queries O(log n) bits of the proof.
39
Attack On Hardness of Clique Problem
Theorem: For fixed constants b and q, there is a
gap-introducing reduction from SAT to Clique that
transforms a boolean formula F of size n to a graph
G(V, E), where |V| = n^(b+q), such that
  • if F is satisfiable, OPT(G) ≥ n^b, and
  • if F is not satisfiable, OPT(G) < n^(b−1).

Proof: Let F be a PCP_{1, 1/n}(log n, log n)
verifier for SAT that requires b·log n random bits
and queries q·log n bits of the proof, and build
the graph G on the vertices u(r, t) as before.
  • If F is satisfiable and p is a good proof, choose
    the n^b vertices u(r, p(r)) of G; the q·log n
    positions of p associated with each chosen vertex
    contain its assignment t. These vertices form a
    clique.
  • If F is not satisfiable, suppose that C is a
    clique in G.

We have shown that any clique C in G gives rise
to a proof that is accepted by F with probability
at least |C| / n^b.
Since the soundness of F is 1/n, the largest
clique in G is of size < n^(b−1).
40
Attack On Hardness of Clique Problem II
Corollary: There is no factor n^ε
approximation algorithm for the
cardinality clique problem, assuming
P ≠ NP, where ε = 1/(b + q).
41
42
HARDNESS OF SET-COVER PROBLEM
43
A known approximation algorithm
  • Input
  • A universe U of n elements,
  • a collection of subsets of U,
    S = {S1, …, Sk}, and
  • a cost function on the sets.
  • Find
  • A minimum cost subcollection of S that covers all
    elements of U.

A greedy approximation algorithm with factor
H_n = 1 + 1/2 + … + 1/n:

    C ← ∅
    while C ≠ U do
        find the set S whose cost-effectiveness
        cost(S) / |S − C| is smallest
        pick S, and for each e in S − C,
        set price(e) = cost(S) / |S − C|
        C ← C ∪ S
    end while
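The greedy loop above can be sketched in a few lines of Python (indexing sets by position is my choice):

```python
def greedy_set_cover(universe, sets, cost):
    """Repeatedly pick the set minimizing cost-effectiveness
    cost(S) / |S - C|, where C is the set of covered elements,
    until everything is covered. Returns the picked set indices."""
    C, picked = set(), []
    while C != universe:
        best = min((i for i, s in enumerate(sets) if s - C),
                   key=lambda i: cost[i] / len(sets[i] - C))
        picked.append(best)
        C |= sets[best]
    return picked

U = {1, 2, 3, 4}
print(greedy_set_cover(U, [{1, 2, 3}, {3, 4}, {4}], [3, 2, 1]))  # [0, 2]
```

In the example, all three sets start with cost-effectiveness 1, so the first set is picked; afterwards {4} covers the remaining element at cost-effectiveness 1 versus 2 for {3, 4}.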
44
Two-prover one-round proof System (Introduction)
So far, for the purpose of showing hardness of
MAX-3SAT and Clique, we did not require a detailed
description of the kinds of queries made by the
verifier; the only restriction was a bound on the
number of queries made to the proof.
Question: What is the notion behind a two-prover
one-round proof system?
Answer: Think of the proof system as a game
between the provers and the verifier. The provers
are trying to cheat, in the sense that they are
trying to convince the verifier that a no instance
for language L is actually in L.
Question: Is there a verifier that can ensure
that the probability of getting cheated is < 1/2
for every no instance?
45
Two-prover one-round proof System
Two-prover model
  • The verifier is allowed to query two
    non-communicating provers, denoted P1 and P2.
  • The verifier can cross-check the provers'
    answers. Thus the provers' ability to cheat gets
    restricted in this model.

One-round proof system
  • The verifier is allowed one round of
    communication with each prover. The simplest way
    of formalizing this is as follows:
  • proof P1 is written in an alphabet Σ1; the size
    of the alphabet, |Σ1|, may be unbounded;
  • proof P2 is written in an alphabet Σ2; the size
    of the alphabet, |Σ2|, may be unbounded;
  • the verifier is allowed to query one position of
    each of the two proofs.

46
Two-prover one-round proof System
  • Comes with 3 parameters:
  • completeness (c),
  • soundness (s), and
  • number of random bits provided to the verifier
    (r(n)).

The two-prover one-round model defines the class
2P1R_{c,s}(r(n)):
there is a polynomial time bounded verifier M
that receives O(r(n)) truly random bits and
satisfies
  • for every input x ∈ L, there is a pair of
    proofs y1 and y2 that makes
    M accept with probability ≥ c;
  • for every input x ∉ L, every pair of
    proofs y1 and y2 makes
    M accept with probability < s.

47
Theorem
Theorem: There is a constant ε > 0 such that
SAT ∈ 2P1R_{1, 1−ε}(log n).
Proof: Divided into two parts; here we prove the
harder part, the soundness bound. We use the
gap-introducing reduction from SAT to MAX-3SAT(5),
for which we know that
  • if F is satisfiable, OPT(ψ) = m, and
  • if F is not satisfiable, OPT(ψ) < (1 − ε5)m.
48
Proof (Continued)
The two-prover one-round verifier, M, for SAT
works as follows:
  • Given a SAT formula F, it uses the aforementioned
    reduction to obtain a MAX-3SAT(5) instance ψ.
  • M assumes that P1 contains an optimal truth
    assignment, t, for ψ (|Σ1| = 2), and that
    P2 contains, for each clause, the assignment to
    its three boolean variables under t
    (|Σ2| = 2^3 = 8).
  • It uses the O(log n) random bits to pick
  • a random clause C from ψ, and
  • a random boolean variable x occurring in clause
    C.
  • M obtains the truth assignments to x and to the
    three variables of C by querying P1 and P2,
    respectively.
  • M accepts iff C is satisfied and the two proofs
    agree on their assignment for variable x.
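One round of this verifier can be sketched directly (the proof encodings below, a dict for P1 and a list of per-clause dicts for P2, are my choices):

```python
import random

def verifier_round(clauses, t_proof, clause_proof, rng):
    """One round: pick a random clause C and a random variable x in C;
    query P1 (t_proof: global assignment) for x and P2 (clause_proof:
    per-clause assignments) for C; accept iff C is satisfied and the
    two answers agree on x. Literals are DIMACS-style ints."""
    ci = rng.randrange(len(clauses))
    clause = clauses[ci]
    x = abs(rng.choice(clause))
    local = clause_proof[ci]                      # P2's answer for C
    satisfied = any(local[abs(l)] == (l > 0) for l in clause)
    return satisfied and local[x] == t_proof[x]   # consistency check

t = {1: True, 2: False, 3: False}
print(verifier_round([[1, 2, 3]], t, [dict(t)], random.Random(0)))  # True
```

When the two proofs are consistent and the assignment satisfies every clause, M accepts with probability 1, matching the completeness claim on the next slide; a cheating P2 must either leave C unsatisfied or disagree with P1 on some variable.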

49
Proof (Continued)
  • If F is satisfiable, then so is ψ. Clearly there
    are proofs y1 and y2 such that M accepts with
    probability 1.
  • If F is unsatisfiable, any truth assignment must
    leave strictly more than an ε5 fraction of the
    clauses unsatisfied.

Consider any pair of proofs (y1, y2), and view
y1 as a truth assignment, say t. The random
clause C picked by M is not satisfied by t with
probability > ε5. If so, and if the assignment for
C contained in y2 is satisfying, then y1 and y2
must be inconsistent on at least one of the three
variables of C.
Let A, B be two events. A: the random clause C is
not satisfied by t. B: the random clause C is
satisfied by the assignment contained in y2.
If A occurs and B does not, M rejects because C is
unsatisfied; if A and B both occur, the proofs are
inconsistent on some variable of C, which M
detects with probability at least 1/3.
Hence, overall, verifier M must reject with
probability > ε5/3.
50
Main Reduction
Theorem: There is a constant c > 0 for which there
is a randomized gap-introducing reduction G,
requiring time n^{O(log log n)}, from SAT to the
cardinality set cover problem. It transforms a
boolean formula F to a set system S over a
universal set of size n^{O(log log n)},
establishing a gap of a factor of c·log N in the
optimal cover size, where N is the size of the
universal set, and where
  • n is the length of each of the two proofs for SAT
    under the two-prover one-round model (polynomial
    in the size of F), and
  • k = O(log log n).

Observation: This is a slight abuse of notation,
since gap-introducing reductions were defined to
run in polynomial time.
51
What we need for the proof I
  • Uniformity conditions for the MAX-3SAT(5)
    formula:
  • each boolean variable occurs in exactly 5
    clauses, and
  • each clause contains 3 distinct variables
    (negated or unnegated).
  • As a result of the uniformity conditions, if ψ
    has n variables, then it has 5n/3 clauses.
    Therefore, the two proofs are of length n and
    5n/3.
  • This modification changes the constant ε5 to some
    other constant.
  • The two proofs should have equal length.

Equality of length can be easily achieved by
repeating the first proof 5 times and the second
proof 3 times. The verifier will query a random
copy of each proof.
52
What we need for the proof II
  • A gadget (set system)
  • A set system (U; C1, …, Cm, C̄1, …, C̄m),
    where
  • U is the universal set, and
  • C1, …, Cm are subsets of U.

Good cover: U is covered by picking some set Ci
and its complement C̄i. Bad cover: a cover that
does not include any set together with its
complement.
  • A useful theorem for efficiently constructing
    set systems as above:

Theorem: There exists a polynomial p(·, ·) such
that there is a randomized algorithm which
generates, for each m and l, a set system with
|U| = p(m, 2^l). With probability > 1/2 the
gadget produced satisfies that every bad cover is
of size > l. Moreover, the running time is
polynomial in |U|.
53
What we need for the proof III
  • Reduce the error probability of the two-prover
    one-round proof system.

Two ways:
  • Parallel repetition (naive analysis): the
    verifier picks k clauses randomly and
    independently, and a random boolean variable from
    each of the clauses.
  • Verifier queries P1 on the k variables.
  • Verifier queries P2 on the k clauses.
  • Accepts iff all k answers are accepting.

Under this scheme, one might hope that the
probability that the provers manage to cheat drops
to < (1 − ε)^k. This is true only if the provers
are required to answer each question before being
given the next question.
Two major drawbacks:
  • Each prover is allowed to look at all k questions
    before providing its k answers, and is thus able
    to coordinate its answers.
  • If the provers are required to answer each
    question before being given the next question,
    the error probability drops in the usual fashion.
    However, this requires k rounds of communication.
54
What we need for the proof III (continued)
  • Parallel repetition (analyzed by the following
    theorem)

Theorem (Raz): Let the error probability of a
two-prover one-round proof system be δ < 1. Then
the error probability on k parallel repetitions is
at most δ^{dk}, where d is a constant that depends
only on the length of the answers of the original
proof system.