The PCP starting point
Transcript and Presenter's Notes
1
Quadratic Solvability
  • The PCP starting point

2
Overview
  • In this lecture we'll present the Quadratic
    Solvability problem.
  • We'll see that this problem is closely related to PCP.
  • And we'll even use it to prove a (very, very weak...)
    PCP characterization of NP.

3
Quadratic Solvability
or, equivalently, a set of dimension-D, total-degree-2
polynomials
  • Def (QS_{D,Σ}):
  • Instance: a set of n quadratic equations over Σ
    with at most D variables each.
  • Problem: to decide if there is a common solution.

[figure: an example system of quadratic equations]
4
Solvability
  • A generalization of this problem:
  • Def (Solvability_{D,Σ}):
  • Instance: a set of n polynomials over Σ with at
    most D variables each. Each polynomial has
    degree-bound n in each one of the variables.
  • Problem: to decide if there is a common root.

5
Solvability is Reducible to QS
Repeatedly replace a product of two variables by a
fresh variable, adding the quadratic equation that
defines it: e.g. introduce w1 = y², w2 = x², w3 = t·l,
and substitute until every polynomial has total
degree at most 2.
the parameters (D,Σ) don't change (assuming D>2)!
Could we use the same trick to show Solvability
is reducible to Linear Solvability?
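The substitution step can be sketched in a few lines; this is a minimal sketch, with illustrative helper and variable names not taken from the lecture:

```python
# Degree-reduction trick: repeatedly replace a product of two variables
# by a fresh variable w, adding the quadratic defining equation w = a*b.

def reduce_monomial(factors):
    """Reduce a monomial (a list of variable names) to degree <= 2.
    Returns (reduced_factors, defs), where each def (w, a, b) stands
    for the quadratic equation w - a*b = 0 in at most 3 variables."""
    factors = list(factors)
    defs = []
    fresh = 0
    while len(factors) > 2:
        a, b = factors.pop(), factors.pop()
        w = f"w{fresh}"
        fresh += 1
        defs.append((w, a, b))  # new quadratic equation: w = a*b
        factors.append(w)       # w replaces the product a*b
    return factors, defs

reduced, defs = reduce_monomial(["y", "y", "x", "x", "t", "l", "z", "z"])
```

Each defining equation mentions only 3 variables, which is why (D,Σ) is preserved for D>2; the trick cannot reach Linear Solvability, since the defining equations are themselves quadratic.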
6
QS is NP-hard
  • Let us prove that QS is NP-hard by reducing 3-SAT
    to it.

Def (3-SAT): Instance: a 3CNF formula
(λ1∨λ2∨λ3)∧...∧(λm-2∨λm-1∨λm), where each
literal λi ∈ {xj, ¬xj} for some 1≤j≤n.
Problem: to decide if this formula is satisfiable.
7
QS is NP-hard
  • Given an instance of 3-SAT, use the following
    transformation on each literal:

Tr(xi) = 1 - xi
Tr(¬xi) = xi

and translate each clause ( λi ∨ λi+1 ∨ λi+2 ) into
the equation Tr(λi)·Tr(λi+1)·Tr(λi+2) = 0.
The corresponding instance of Solvability is the
set of all resulting polynomials (which, assuming
the variables are only assigned Boolean values,
is equivalent).
8
QS is NP-hard
  • In order to remove the assumption, we add,
    for every variable xi, the equation
  • xi ( 1 - xi ) = 0
  • This concludes the description of a reduction
    from 3-SAT to Solvability_{O(1),Σ} for any field Σ.

What is the maximal degree of the resulting
equations? (Each clause polynomial is a product of
three linear terms, so the degree is at most 3.)
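The whole reduction is easy to simulate; a minimal sketch with illustrative names (not the lecture's code):

```python
# 3-SAT -> Solvability: Tr(x) = 1-x, Tr(~x) = x; one product per clause,
# plus the booleanity equation x*(1-x) = 0 per variable.

def tr(literal, assignment):
    """Evaluate a translated literal: Tr(x)=1-x, Tr(~x)=x."""
    var, negated = literal
    return assignment[var] if negated else 1 - assignment[var]

def reduction_values(clauses, assignment):
    """Values of all polynomials of the reduced instance at `assignment`.
    The assignment is a common root iff every value is 0."""
    vals = [tr(a, assignment) * tr(b, assignment) * tr(c, assignment)
            for (a, b, c) in clauses]
    vals += [x * (1 - x) for x in assignment.values()]  # booleanity
    return vals

# (x1 v x2 v ~x3) & (~x1 v x3 v x3)
clauses = [(("x1", False), ("x2", False), ("x3", True)),
           (("x1", True), ("x3", False), ("x3", False))]
sat = {"x1": 1, "x2": 0, "x3": 1}    # satisfies the formula
unsat = {"x1": 1, "x2": 0, "x3": 0}  # violates the second clause
```

Note each clause polynomial is a product of three linear terms (degree 3); the Solvability-to-QS trick then brings the degree down to 2.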
9
QS is NP-hard
  • According to the two previous reductions:

3-SAT → Solvability_{O(1),Σ} → QS_{O(1),Σ}
10
Gap-QS
  • Def (Gap-QS_{D,Σ,ε}):
  • Instance: a set of n quadratic equations over Σ
    with at most D variables each.
  • Problem: to distinguish between the following two
    cases:
  • YES: there is a common solution.
  • NO: no more than an ε-fraction of the equations
    can be satisfied simultaneously.
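To make the promise concrete, here is a brute-force sketch over a small field (the instance and names are illustrative; Gap-QS only asks to distinguish the two cases, not to compute this quantity):

```python
# Brute force over all assignments: Gap-QS promises either a common
# solution (fraction 1) or no assignment above an epsilon-fraction.
from itertools import product

def max_satisfied_fraction(equations, num_vars, p):
    """Largest fraction of equations any single assignment over Z_p satisfies."""
    best = 0.0
    for a in product(range(p), repeat=num_vars):
        sat = sum(1 for eq in equations if eq(a) % p == 0)
        best = max(best, sat / len(equations))
    return best

# two quadratic equations over Z_3: x0*x1 - 1 = 0 and x0 - x1 = 0
eqs = [lambda a: a[0] * a[1] - 1, lambda a: a[0] - a[1]]
frac = max_satisfied_fraction(eqs, 2, 3)  # x0 = x1 = 1 satisfies both
```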
11
Gap-QS and PCP
quadratic equations system → instance of Gap-QS_{D,Σ,ε}

Def: L∈PCP_{D,V,ε} if there is a polynomial-time
algorithm which, for any input x, produces a set
of efficient Boolean functions over variables of
range 2^V, each depending on at most D variables,
so that: x∈L iff there exists an assignment to
the variables which satisfies all the
functions, and x∉L iff no assignment can satisfy more
than an ε-fraction of the functions.
For each quadratic polynomial pi(x1,...,xD), add
the Boolean function φi(a1,...,aD) ≡ [pi(a1,...,aD) = 0],
whose variables (those of the input system) take
values in Σ.
  • Gap-QS_{D,Σ,ε} ∈ PCP_{D,log|Σ|,ε}

12
Gap-QS and PCP
  • Therefore, every language which is efficiently
    reducible to Gap-QS_{D,Σ,ε} is also in
    PCP_{D,log|Σ|,ε}.
  • Thus, proving Gap-QS_{D,Σ,ε} is NP-hard also
    proves the PCP_{D,log|Σ|,ε} characterization of
    NP.
  • And indeed our goal henceforth will be proving
    Gap-QS_{D,Σ,ε} is NP-hard for the best D, Σ and ε
    we can.

13
Gap-QS_{n,Σ,2/|Σ|} is NP-hard
  • Proof: by reduction from QS_{O(1),Σ}

Instance of QS_{O(1),Σ}: p1,...,pn
Satisfying assignment → value vector: 0 0 0 . . . 0
Non-satisfying assignment → value vector, e.g.: 0 3 7 . . . 0
14
Gap-QS_{n,Σ,2/|Σ|} is NP-hard
  • In order to have a gap, we need an efficient
    degree-preserving transformation on the
    polynomials, so that any non-satisfying assignment
    results in few satisfied polynomials:

p1 p2 p3 . . . pm
Non-satisfying assignment → values: 0 2 4 . . . 3
15
Gap-QS_{n,Σ,2/|Σ|} is NP-hard
  • For such an efficient degree-preserving
    transformation E it must hold that, for any two
    distinct value vectors v ≠ v', the images E(v)
    and E(v') differ on most of their coordinates.

Thus E is an error-correcting code!
We shall now see examples of degree-preserving
transformations which are also error-correcting
codes.
16
The linear transformation: multiplication by a
matrix

Each column of a matrix A∈Σ^{n×m} combines the n
polynomials into one new polynomial: the inner
product of the polynomial vector with a column of
scalars is a linear combination of polynomials.
This is poly-time if m ≤ n^c.
17
The linear transformation: multiplication by a
matrix

Under any fixed assignment, the vector v of the
polynomials' values is mapped to vA, the values of
the new polynomials under the same assignment; vA
is the zero vector if v = 0^n.
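A minimal sketch of this mapping over GF(p) (the matrix and names are illustrative):

```python
# vA over GF(p): v holds the polynomials' values under some assignment;
# vA holds the values of the combined polynomials under that assignment.

p = 5

def mat_vec(v, A):
    """Row vector v (length n) times matrix A (n x m), entrywise mod p."""
    n, m = len(A), len(A[0])
    return [sum(v[i] * A[i][j] for i in range(n)) % p for j in range(m)]

A = [[1, 2, 0],
     [0, 1, 3],
     [4, 0, 1]]

zero_in = mat_vec([0, 0, 0], A)   # satisfying assignment: v = 0^n
some_in = mat_vec([1, 0, 0], A)   # a non-satisfying assignment
```

Linearity is exactly what sends v = 0^n to the all-zero output; the whole question is how few zeroes vA can have when v ≠ 0^n.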
18
What's Ahead
  • We proceed with several examples of linear
    error-correcting codes:
  • Reed-Solomon code
  • Random matrix
  • And finally even a code which suits our needs...

19
Using Reed-Solomon Codes
  • Define the matrix by A_{i,x} = L_i(x), the i-th
    Lagrange basis polynomial (with respect to the
    points 0,...,n-1) evaluated at x∈Σ.

That's really Lagrange's formula in disguise...
  • One can prove that for any 0≤i≤|Σ|-1, (vA)_i is
    P(i), where P is the unique degree-(n-1) univariate
    polynomial for which P(i) = v_i for all 0≤i≤n-1.
  • Therefore for any v≠0^n the fraction of zeroes in vA
    is bounded by (n-1)/|Σ|.

using multivariate polynomials we can even get
ε = O(log n/|Σ|)
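A small sketch of this matrix over Z_11 (the parameters are illustrative): column x holds the Lagrange basis values L_i(x), so (vA)_x = P(x), where P interpolates v at the points 0..n-1.

```python
# Reed-Solomon based matrix over GF(p): A[i][x] = L_i(x), the i-th
# Lagrange basis polynomial (points 0..n-1) evaluated at x, so that
# (vA)_x = P(x) where P is the degree-(n-1) interpolant of v.
from itertools import product

p, n = 11, 3
inv = [0] + [pow(a, p - 2, p) for a in range(1, p)]  # inverses mod p

def lagrange_basis(i, x):
    num, den = 1, 1
    for j in range(n):
        if j != i:
            num = num * ((x - j) % p) % p
            den = den * ((i - j) % p) % p
    return num * inv[den] % p

A = [[lagrange_basis(i, x) for x in range(p)] for i in range(n)]

def encode(v):
    """vA: evaluations of the interpolating polynomial at all of Z_p."""
    return [sum(v[i] * A[i][x] for i in range(n)) % p for x in range(p)]

# a nonzero degree-(n-1) polynomial has at most n-1 roots
worst = max(encode(list(v)).count(0)
            for v in product(range(p), repeat=n) if any(v))
```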
20
Using a Random Matrix
Lemma: a random matrix A∈Σ^{n×m} satisfies w.h.p.:
for every v≠0^n, the fraction of zeros in the output
vector vA is at most 2|Σ|^{-1}.
21
Using a Random Matrix
Proof (by the probabilistic method): Let
v≠0^n ∈ Σ^n. Because the inner product of v and a
random vector is uniformly random over Σ,
|{i : (vA)_i = 0}| (denoted X_v) is a
binomial random variable with parameters m and
|Σ|^{-1}.
For this random variable we can compute the
probability Pr[ X_v ≥ 2m|Σ|^{-1} ] (the probability
that the fraction of zeros exceeds 2|Σ|^{-1}).
22
Using a Random Matrix
The Chernoff bound: for a binomial random
variable X with parameters m and |Σ|^{-1}
(so E[X] = m|Σ|^{-1}),
Pr[ X ≥ 2m|Σ|^{-1} ] ≤ e^{-m/(3|Σ|)}.
Hence Pr[ X_v ≥ 2m|Σ|^{-1} ] ≤ e^{-m/(3|Σ|)}.
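A numeric sanity check of this step, using the standard multiplicative Chernoff bound Pr[X ≥ 2μ] ≤ e^{-μ/3} with μ = m/|Σ| (the concrete numbers are illustrative):

```python
# Exact binomial tail Pr[X >= 2*mu] vs. the Chernoff bound e^{-mu/3}
# for X ~ Bin(m, 1/|Sigma|), mu = m/|Sigma|.
from math import comb, exp, ceil

def tail(m, q, t):
    """Pr[Bin(m, q) >= t], computed exactly."""
    return sum(comb(m, k) * q**k * (1 - q)**(m - k) for k in range(t, m + 1))

m, sigma = 200, 8
mu = m / sigma
exact = tail(m, 1 / sigma, ceil(2 * mu))
bound = exp(-mu / 3)
```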
23
Using a Random Matrix
Overall, the number of different vectors v is at
most |Σ|^n.
Hence, according to the union bound, we can
multiply the previous probability by the number
of different vectors v to obtain a bound on the
probability that the lemma fails:
|Σ|^n · e^{-m/(3|Σ|)}.
(The union bound: the probability of a union of
events is smaller than or equal to the sum of
their probabilities.)
And this probability is smaller than 1 for
m = O(n·|Σ|·log|Σ|). Hence, for such m, a random
matrix satisfies the lemma with positive
probability. ∎
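Numerically, with the Chernoff estimate e^{-m/(3|Σ|)} per vector and |Σ|^n vectors, an m just above 3n|Σ|ln|Σ| already drives the failure probability below 1 (the numbers are illustrative):

```python
# Union bound: |Sigma|^n vectors, each failing w.p. <= e^{-m/(3|Sigma|)};
# the total is < 1 once m = O(n * |Sigma| * log|Sigma|).
from math import exp, log

n, sigma = 20, 8
m = int(3 * n * sigma * log(sigma)) + 1   # just above 3*n*|Sigma|*ln|Sigma|
failure = sigma**n * exp(-m / (3 * sigma))
```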
24
Deterministic Construction
Define a matrix A∈Σ^{n×m}:
Assume Σ = Z_p. Let k = log_p(n) + 1. (Assume w.l.o.g.
k∈ℕ.) Let GF(p^k) ≅ Z_p^k be the dimension-k extension
field of Σ.
Associate each row with a power 1≤i≤p^{k-1};
hence n = p^{k-1}.
Associate each column with a pair (x,y)∈GF(p^k)×GF(p^k);
hence m = p^{2k}.
25
Deterministic Construction
And define A(i,(x,y)) = <x^i, y> (the inner product
of x^i and y, viewed as vectors in Z_p^k).
26
Analysis
  • For any vector v∈Σ^n and any column
    (x,y)∈GF(p^k)×GF(p^k), (vA)_{(x,y)} = <G(x),y>,
    where G(x) = Σ_{i=1..n} v_i·x^i.
  • The number of zeroes in vA, where v≠0^n, is at most

|{(x,y) : G(x)≠0 ∧ <G(x),y> = 0}| + |{(x,y) : G(x) = 0}|
≤ p^{2k}·p^{-1} + n·p^k

  • And thus the fraction of zeroes is at most
    p^{-1} + n·p^{-k} = 2p^{-1} = 2|Σ|^{-1}.
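The whole construction fits in a few lines for p=3, k=2, so Σ=Z_3, the extension field is GF(9)=Z_3[t]/(t²+1), n=p^{k-1}=3 and m=p^{2k}=81; this is an illustrative instantiation, not the lecture's code:

```python
# Tiny instance of the deterministic construction: Sigma = Z_3,
# GF(9) = Z_3[t]/(t^2+1), rows i = 1..3, columns (x,y) in GF(9)^2,
# A(i,(x,y)) = <x^i, y> over Z_3.  Check the 2/|Sigma| zero bound.
from itertools import product

p, n = 3, 3

def gf9_mul(a, b):
    """Multiply a = (a0,a1), b = (b0,b1) in GF(9), using t^2 = -1."""
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 - a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def inner(u, w):
    return (u[0] * w[0] + u[1] * w[1]) % p

field = list(product(range(p), repeat=2))   # the 9 elements of GF(9)
m = len(field) ** 2                         # 81 columns

worst = 0.0
for v in product(range(p), repeat=n):
    if not any(v):
        continue
    zeros = 0
    for x in field:
        g, xi = (0, 0), (1, 0)
        for i in range(n):                  # G(x) = sum_i v_i * x^i
            xi = gf9_mul(xi, x)
            g = ((g[0] + v[i] * xi[0]) % p, (g[1] + v[i] * xi[1]) % p)
        zeros += sum(1 for y in field if inner(g, y) == 0)
    worst = max(worst, zeros / m)           # fraction of zeroes in vA
```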

27
Summary of the Reduction
  • Given an instance p1,...,pn of QS_{O(1),Σ},
  • we found a matrix A which satisfies
  • ∀v≠0^n∈Σ^n: |{i : (vA)_i = 0}| / m < 2|Σ|^{-1}

Hence p1,...,pn ∈ QS_{O(1),Σ} if and only
if (p1,...,pn)·A ∈ Gap-QS_{O(n),Σ,2|Σ|^{-1}}
(each combined polynomial is a linear combination
of n polynomials with O(1) variables each, hence
depends on O(n) variables).
This proves Gap-QS_{O(n),Σ,2|Σ|^{-1}} is NP-hard
!!
28
Hitting the Road
  • This proves a PCP characterization with D=O(n)
    (hardly a local test...).
  • Eventually we'll prove a characterization with
    D=O(1) (DFKRS), using the results presented here
    as our starting point.