Crash Course on Cryptography - PowerPoint PPT Presentation
1
Crash Course on Cryptography
SSoRC 2008
  • Jesper Buus Nielsen

2
Preliminaries
  • Security parameter k
  • Specifies the security level, e.g. the key length
    of cryptosystems and signature schemes
  • Probabilistic poly-time (PPT) algorithm
  • Runs in time polynomial in k on all inputs and is
    allowed to flip random coins
  • Zp = {0, …, p-1}
  • Group of residues modulo p
  • N = {1, 2, 3, …}
  • R = the real numbers
  • ε: N → R is called negligible if for all c ∈ N
    there exists k0 such that ε(k) < 1/k^c for all
    k > k0
  • Goes to 0 faster than any inverse polynomial

3
One-way Functions
  • Key generator K ← gen(k)
  • For each key K a function fK: X → Y
  • Inverting game for an algorithm A:
  • Sample random key K ← gen(k)
  • Sample uniformly random element x ←R X
  • Compute y = fK(x)
  • Compute x' = A(K,y)
  • f is called one-way if one can compute fK(x) in
    PPT and Pr[fK(x') = y] is negligible in k for all
    PPT A

4
Discrete Logarithm Problem
  • gen(k) → (p,g)
  • Random k-bit prime p = 2q+1 where q is prime
  • g an element of Zp of order q
  • f: {0,1,…,q-1} → Zp
  • f(a) = g^a mod p
  • Discrete logarithm (DL) problem:
  • Given y = g^a mod p (and p,g) compute a
  • Is believed to be hard, i.e., f is believed to
    be a one-way function
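A minimal sketch of this candidate one-way function with toy parameters (the values p = 23, q = 11, g = 4 are illustrative assumptions, not from the slides; real instances use k-bit primes with k around 2048):

```python
# Toy instance of the DL one-way function f(a) = g^a mod p with p = 2q+1.
p, q = 23, 11        # safe prime: 23 = 2*11 + 1
g = 4                # 4 has order q = 11 in the group of residues mod 23

def f(a):
    return pow(g, a, p)   # f(a) = g^a mod p

y = f(7)
# Inverting f by brute force is feasible only at this toy size; for
# cryptographic sizes no PPT inversion algorithm is known.
a = next(a for a in range(q) if f(a) == y)
```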

5
Computational Indistinguishability
  • Distribution ensemble X = {Xk}k∈N
  • One random variable Xk for each value of the
    security parameter k
  • We call X = {Xk}k∈N and Y = {Yk}k∈N
    computationally indistinguishable if
    |Pr[B(Xk)=1] - Pr[B(Yk)=1]| is negligible for all
    PPT B
  • B cannot tell the two distributions apart with
    probability more than negligibly better than ½
6
Decisional Diffie-Hellman Assumption
  • p = 2q+1 as for the DL problem
  • g,h random elements of order q in Zp
  • X: sample random r ∈ {0,1,…,q-1} and let X =
    (g, g^r, h^r)
  • Y: sample random r,s ∈ {0,1,…,q-1} and let Y =
    (g, g^r, h^s)
  • The DDH assumption is that these X and Y are
    computationally indistinguishable

7
Public-Key Encryption
  • Key generator (pk,sk) ← gen(k)
  • pk the public key, sk the secret key
  • Encryption C = Epk(m; r) for randomness r
  • Decryption m = Dsk(C)
  • Indistinguishability game for algorithm A:
  • Sample random (pk,sk) ← gen(k)
  • Run A on pk to produce messages (m0,m1)
  • Compute Cb = Epk(mb; r) for uniformly random r
  • Run A on Cb to produce a bit c
  • We call the cryptosystem chosen plaintext attack
    (CPA) secure if |Pr[c=1|b=1] - Pr[c=1|b=0]| is
    negligible for all PPT A
  • A cannot guess b with probability more than
    negligibly better than ½
8
ElGamal Encryption
  • pk = (p,g,h) as for the DDH assumption
  • sk = x such that h = g^x mod p
  • Messages m are from the group generated by h
  • Epk(m; r) = (A, B) = (g^r, m·h^r) mod p
  • Dsk(A,B) = A^(-x)·B mod p = (g^r)^(-x)·m·h^r mod
    p = (g^x)^(-r)·h^r·m mod p = h^(-r)·h^r·m mod
    p = m
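The scheme above can be sketched with toy parameters (p = 23, q = 11, g = 4 are illustrative assumptions; A^(-x) is computed as A^(p-1-x) via Fermat's little theorem):

```python
# Toy ElGamal over the order-q subgroup of Z_p*, p = 2q+1 (illustrative sizes only).
import random

p, q, g = 23, 11, 4          # g has order q = 11

def keygen():
    x = random.randrange(q)          # secret key
    return (pow(g, x, p), x)         # public h = g^x, secret x

def encrypt(h, m):                   # m must lie in the subgroup <g>
    r = random.randrange(q)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def decrypt(x, A, B):
    return pow(A, p - 1 - x, p) * B % p   # A^(-x) * B mod p

h, x = keygen()
m = pow(g, 5, p)                     # encode the message as a group element
A, B = encrypt(h, m)
assert decrypt(x, A, B) == m
```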

9
A Reduction Proof
  • We prove CPA security of ElGamal assuming the DDH
    assumption
  • We take any PPT A attacking ElGamal
  • We show how to massage it into a PPT B which
    attacks the DDH assumption with half the success
    rate of A
  • Since the success rate of B is negligible under
    the DDH assumption, it follows that the success
    rate of A is negligible under the DDH assumption

10
B uses A as a sub-routine:
  • B gets input (p,g,h,G,H), which is either
    a0 = (p,g,h,g^r,h^r) or a1 = (p,g,h,g^r,h^s)
  • B runs A on the public key (p,g,h); A outputs
    (m0,m1)
  • B picks b ←R {0,1} and hands A the ciphertext
    (G, mb·H); A outputs a bit c
  • B outputs d = b⊕c
Under a0, (G, mb·H) is a correct encryption of mb;
under a1, H is uniform, so the ciphertext is
independent of b and Pr[d=1|a1] = ½
Half the success rate of A:
  |Pr[d=1|a0] - Pr[d=1|a1]| = |Pr[d=1|a0] - ½|
  Pr[d=1|a0] = ½Pr[c=1|a0,b=0] + ½Pr[c=0|a0,b=1]
             = ½Pr[A(E(m0))=1] + ½Pr[A(E(m1))=0]
             = ½Pr[A(E(m0))=1] + ½ - ½Pr[A(E(m1))=1]
The success rate of B:
  |Pr[d=1|a0] - Pr[d=1|a1]|
    = ½|Pr[A(E(m0))=1] - Pr[A(E(m1))=1]|
11
Coin-Flip
  • Alice and Bob want to flip a random bit c
  • Alice should be sure c is uniformly random even
    if Bob cheats and vice versa
  • Alice: Send uniformly random a to Bob
  • Bob: Send uniformly random b to Alice
  • Both: Let c = a⊕b = a+b mod 2
  • Idea is that a+b mod 2 is uniform no matter how b
    is chosen, as long as Alice picks a uniformly
  • Bob can, however, cheat by waiting to see a and
    then picking b = c'-a mod 2 for his preferred
    choice c' of c

12
Commitment Scheme
  • Key generator K ← gen(k)
  • Commitment to message m: C = commitK(m; r) for
    randomness r
  • Binding: Computationally hard or impossible to
    compute (m,r) and (m',r') with m ≠ m' and
    commitK(m; r) = commitK(m'; r'), even given K
  • Hiding: commitK(m0) and commitK(m1) are
    computationally or perfectly indistinguishable,
    even given K
  • CPA secure public-key encryption directly gives a
    perfectly binding, computationally hiding
    commitment scheme

13
Commitment Scheme
  • Computationally binding, perfectly hiding
  • K = (p,g,h), where h = g^x mod p as for DL
  • The committer does not know x
  • commitK(m; r) = g^m·h^r mod p
  • Perfect hiding:
  • h^r mod p is uniform in the group generated by h
  • g^m mod p is an element of this group
  • Therefore g^m·h^r mod p is just a uniformly
    random element of this group, which leaks no
    information
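This commitment can be sketched with toy parameters (p = 23, q = 11, g = 4, h = 9 are illustrative assumptions; both g and h have order q, and the committer is assumed not to know log_g(h)):

```python
# Toy Pedersen-style commitment: commit_K(m; r) = g^m * h^r mod p.
import random

p, q = 23, 11
g, h = 4, 9                  # two elements of order q = 11 mod 23

def commit(m, r):
    return pow(g, m, p) * pow(h, r, p) % p

m = 7
r = random.randrange(q)      # fresh randomness hides m perfectly
C = commit(m, r)
# Opening: reveal (m, r); the receiver recomputes and compares
assert C == commit(m, r)
```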

14
Computationally Binding
  • commitK(m; r) = g^m·h^r mod p, p = 2q+1, g,h of
    order q
  • Assume that some PPT A with high probability can
    compute commitK(m; r) = commitK(m'; r') for
    m ≠ m'. Then:
  • g^m·h^r = g^m'·h^r' (mod p)
  • g^(m-m') = h^(r'-r) (mod p)
  • g = h^((r'-r)·(m-m')^(-1) mod q) (mod p)
  • g = h^x (mod p) for x = (r'-r)·(m-m')^(-1) mod q
  • This contradicts the DL assumption, under which
    no PPT A can compute such an x with high
    probability

15
Coin-Flip
  • Alice:
  • Pick uniformly random a
  • Sample a random key K for a commitment scheme
  • Compute C = commitK(a; r) for uniformly random r
  • Send C to Bob
  • Bob: Send uniformly random b to Alice
  • Alice: Send (a,r) to Bob, who checks that
    C = commitK(a; r)
  • Both: Let c = a⊕b = a+b mod 2
  • If the commitment is computationally hiding, then
    Bob must pick b without knowing a
  • If it is perfectly binding, Alice cannot change
    her mind on a after seeing b
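A run of this protocol can be sketched using a Pedersen-style commitment with toy parameters (p, q, g, h are illustrative assumptions):

```python
# Sketch of the commitment-based coin-flip (toy parameters).
import random

p, q, g, h = 23, 11, 4, 9    # g, h of order q mod p

def commit(m, r):
    return pow(g, m, p) * pow(h, r, p) % p

# Alice: commit to a random bit a
a, r = random.randrange(2), random.randrange(q)
C = commit(a, r)             # sent to Bob; hides a
# Bob: reply with his bit, chosen without seeing a
b = random.randrange(2)
# Alice opens (a, r); Bob verifies before accepting
assert C == commit(a, r)
c = (a + b) % 2              # the shared coin
```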

16
Zero-Knowledge Proofs of Knowledge
  • A prover P proves knowledge of a secret s to a
    verifier V
  • P leaks no information about s
  • Yet V accepts only if P knows s

17
ZK PoK of DL
  • We look at a discrete logarithm (DL) example
  • P and V both know (p,g,h) for the DL problem
  • P knows s such that h = g^s mod p
  • P wants to convince V that she knows s but does
    not want to show s to V
  • If V trusts P she can just ask P to tell her
    whether P knows s, but the proof must convince V
    only if P really knows s, even if P cheats (is
    corrupted)

18
ZK PoK of DL
H = g^r for random r
Challenge: pick random e ∈ Zq
Response: z = s·e + r mod q
Accept if h^e·H = g^z
  • Completeness:
  • If P is honest then the proof must be accepted
  • Follows from h^e·H = (g^s)^e·g^r = g^(s·e+r)
    = g^z
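One run of this protocol can be sketched with toy parameters (p = 23, q = 11, g = 4 and the secret s = 5 are illustrative assumptions):

```python
# Toy run of the ZK PoK of a discrete log (Schnorr-style protocol).
import random

p, q, g = 23, 11, 4
s = 5                          # prover's secret
h = pow(g, s, p)               # public value h = g^s

# Prover: commitment
r = random.randrange(q)
H = pow(g, r, p)
# Verifier: random challenge
e = random.randrange(q)
# Prover: response
z = (s * e + r) % q
# Verifier: accept iff h^e * H == g^z (mod p)
assert pow(h, e, p) * H % p == pow(g, z, p)
```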

19
ZK PoK of DL
P: (p,g,h,s), h = g^s          V: (p,g,h)
A corrupted prover P* knows only (p,g,h)
??? (some H)
Pick random e ∈ Zq
??? (some z)
Accept if h^e·H = g^z
  • Soundness:
  • If P* makes the protocol accept with
    non-negligible probability then P* knows s
  • Should hold even for a corrupted P*, i.e., no
    matter how the prover's messages are computed
  • "Knows" is defined to mean that P* could compute
    s in polynomial time

20
ZK PoK of DL
Rewinding: run P* twice from the same state
??? (same H in both runs)
Pick random e ∈ Zq          Pick random e' ∈ Zq
??? (reply z)               ??? (reply z')
Accept if h^e·H = g^z       Accept if h^e'·H = g^z'
  • All we know is that h^e·H = g^z
  • But, since P* is a program, we can record its
    state after it sent H and try running it on a new
    challenge e' and hope to get a new good reply z'
    for which h^e'·H = g^z'
  • We can keep trying until both proofs accept

21
ZK PoK of DL
  • We showed that P* can compute H, z, z' s.t.
    h^e·H = g^z and h^e'·H = g^z'
  • It can be seen that h = g^x for x =
    (z-z')·(e-e')^(-1) mod q
  • This gives a method, known as the extractor,
    which computes the secret given access to some P*
    which makes the proof accept
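The extraction formula can be checked on two accepting transcripts with the same H (toy parameters p = 23, q = 11, g = 4, s = 5 are illustrative assumptions):

```python
# Extractor sketch: from two accepting transcripts (H, e, z) and (H, e', z')
# with e != e', recover the secret as s = (z - z') * (e - e')^(-1) mod q.
p, q, g = 23, 11, 4
s = 5
h = pow(g, s, p)

r = 3                               # same commitment H in both runs
H = pow(g, r, p)
e, e2 = 2, 7                        # two distinct challenges
z, z2 = (s * e + r) % q, (s * e2 + r) % q
# Both transcripts verify, so the quotient gives h^(e-e') = g^(z-z')
extracted = (z - z2) * pow(e - e2, -1, q) % q
assert extracted == s
```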

22
ZK PoK of DL
  • The expected running time of the extractor must
    be polynomial in k and 1/p, where p is the
    probability that P* makes the verifier accept
  • This in particular gives an expected PPT
    algorithm for computing the secret if p ≥ 1/k^c
  • In the above case the time is about
    poly(k)·(1/p)^2

23
ZK PoK of DL
P: (p,g,h,s), h = g^s          V*: (p,g,h)
H = g^r for random r
??? (arbitrary challenge e)
z = s·e + r mod q
Accept if h^e·H = g^z
  • Zero-knowledge:
  • Not even a corrupted verifier V* gets to know any
    new information during the protocol
  • I.e., everything V* sees she could have computed
    herself in poly-time
  • In the above case: H, e, z

24
ZK PoK of DL
P: (p,g,h,s), h = g^s          V: (p,g,h)
H = g^r for random r
Pick random e ∈ Zq
z = s·e + r mod q
View: (H,e,z) with h^e·H = g^z
  • Honest Verifier ZK:
  • Same as before, but only for honest V
  • We give the argument for an honest verifier only
  • Above this just means that V picks e uniformly at
    random

25
ZK PoK of DL
P: (p,g,h,s), h = g^s          V: (p,g,h)
H = g^r for random r
Pick random e ∈ Zq
z = s·e + r mod q
View: (H,e,z) with h^e·H = g^z
The method which computes the view of the verifier
without the secret is called the simulator
  • Here is how V could compute (H,e,z) herself:
  • Pick e,z ∈ Zq uniformly at random
  • Let H = g^z·(h^e)^(-1) mod p
  • Then h^e·H = g^z mod p and (H,e,z) has the same
    distribution as in the protocol (i.e., perfect
    ZK)

By letting V only send 0/1 challenges the system
becomes fully ZK (as opposed to honest verifier
ZK)
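The simulator can be sketched as follows (toy parameters are illustrative assumptions; the inverse mod the prime p is taken as x^(p-2)):

```python
# Simulator sketch: sample a transcript (H, e, z) without the secret.
import random

p, q, g = 23, 11, 4
h = pow(g, 5, p)                 # public value; the simulator never uses s = 5

e = random.randrange(q)          # pick challenge and response first...
z = random.randrange(q)
# ...then solve for the commitment: H = g^z * (h^e)^(-1) mod p
H = pow(g, z, p) * pow(pow(h, e, p), p - 2, p) % p
# The simulated transcript verifies exactly like a real one
assert pow(h, e, p) * H % p == pow(g, z, p)
```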
26
Possibility of ZK PoK
  • Everything that can be proved can be proved in ZK
    with at most a polynomial blowup in running time

27
Person-in-the-Middle Attacks
P (knows s)   ↔   M   ↔   V
M to V: "I, M, know s!"   V to M: "Prove it!"
M to P: "Prove it!"   P to M: "I, P, know s!"
M relays P's proof to V as its own
Does M know s?  No!
  • Non-malleable protocols:
  • Prevent that values in one execution of the
    protocol can be misused in other executions of
    the protocol
  • Non-trivial to construct, but possible

28
Person-in-the-Middle Attacks
  • Non-malleable protocols
  • Non-trivial to construct, but possible
  • Must let the values sent by the prover depend on
    the identity of the prover, e.g. her public key

29
Threshold Secret Sharing
  • Dealer D has a secret s ∈ Zp
  • Splits it into shares (s1,…,sn)
  • Gives si to party Pi
  • And only Pi
  • t-security:
  • Any t shares leak no information on s
  • Any t+1 shares allow computing s

(Figure: n = 3, t = 1; D splits s into s1, s2, s3,
and no single share reveals s)
30
Threshold Secret Sharing
  • Can be done for any n and any t < n
  • Pick uniformly random a1,…,at ∈ Zp
  • Let f(X) = s + a1X + a2X^2 + … + atX^t mod p
  • Let s1 = f(1), …, sn = f(n)
  • Reconstruction: Given t+1 shares on a polynomial
    f(X) of degree at most t one can compute f(X) by
    interpolation and then f(0)
  • Privacy: Given only t points, any possible secret
    s = f(0) acts as point number t+1, so there is
    exactly one polynomial which makes that s possible
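Sharing and reconstruction can be sketched over a toy field (p = 23, n = 3, t = 1 are illustrative assumptions matching the upcoming example):

```python
# Toy Shamir secret sharing over Z_p with Lagrange reconstruction at X = 0.
import random

p, n, t = 23, 3, 1

def share(s):
    coeffs = [s] + [random.randrange(p) for _ in range(t)]
    return [(i, sum(c * i**j for j, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def reconstruct(points):
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % p          # (0 - xj)
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

shares = share(7)
assert reconstruct(shares[:t + 1]) == 7        # any t+1 shares suffice
```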

31
Threshold Secret Sharing
  • Will show two examples
  • n = 3 and t = 1
  • f(X) = a + a1X (a line)
  • n = 3 and t = 2
  • f(X) = a + a1X + a2X^2 (a parabola)

32
Secret Sharing, t = 1
(Figure: the line f(X) = 7 - X shares s = 7 as
s1 = 6, s2 = 5, s3 = 4)
33
Security, t = 1
(Figure: a single share lies on a line for every
possible secret, so it leaks nothing)
34
Reconstruction, t = 1
(Figure: two shares determine the line and hence
the secret f(0))
35
Security, t = 2
(Figure: two shares of a parabola are consistent
with every possible secret)
36
Reconstruction, t = 2
(Figure: three shares determine the parabola and
hence the secret f(0))
37
Secure Multiparty Computation
  • n parties P1, …, Pn
  • Each party Pi has some secret input si
  • The parties want to learn some function
    y = F(s1,…,sn) of their inputs
  • Secure MPC allows computing y without leaking
    any other information
  • t-security: Any set of t colluding parties learns
    only y (and their own secret inputs) by taking
    part in the protocol
  • Formalized by requiring that everything else they
    receive in the protocol can be computed from
    these values in PPT, as for ZK

38
Examples
  • Many interesting tasks can be phrased as a secure
    MPC of an appropriate function, e.g.:
  • Yes/no election:
  • Each Pi has an input vi ∈ {0,1}
  • F(v1,…,vn) = Σvi or F(v1,…,vn) = (Σvi > n/2)
  • Vickrey auction:
  • Each Pi has an input bi ∈ N
  • F(b1,…,bn) = (i, bj)
  • i is the index of the Pi with the highest bid bi
  • bj is the 2nd highest bid

39
Possibility of Secure MPC
  • It is possible to t-securely compute any function
    F among n parties as long as t < n/2
  • The running time of the protocol is polynomial in
    the time to compute F (insecurely)
  • Will sketch an approach based on secret sharing

40
MPC via Secret Sharing
  • The first step is to write the function to be
    computed as an arithmetic circuit over Zp for a
    prime p > n
  • Need p > n to do secret sharing, as each party
    needs its own point
  • Can be done via well-known techniques
  • Would typically be handled by a compiler

41
Computing with Shared Values
  • Input phase:
  • Each Pi secret shares each of its inputs si with
    degree t and sends the shares to the other
    parties over secure channels
  • Secure as long as at most t of the other parties
    collude
  • Computation phase:
  • Now the parties securely compute the circuit by
    iteratively computing t-sharings of additions and
    multiplications of other t-shared values
  • Done without leaking the shared intermediary
    values
  • Reconstruction phase:
  • In the end the desired results are reconstructed
  • All parties send their shares of the result to
    the other parties

42
Addition
All parties add their two shares
No communication ⇒ perfect security
Two secrets s = 1 and t = 2 are secret shared
using f(X) = 1 + 2X and g(X) = 2 - X
Now s+t = 3 is secret shared using h(X) = 3 + X
(note h(X) = f(X) + g(X))
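This local addition of shares can be sketched directly with the polynomials from the example (p = 23 is an assumed toy modulus):

```python
# Each party adds its two shares; the sums lie on f(X) + g(X),
# which shares s + t. No messages are exchanged.
p = 23
f = lambda X: (1 + 2 * X) % p      # shares s = 1
g = lambda X: (2 - X) % p          # shares t = 2
shares_sum = [(f(i) + g(i)) % p for i in (1, 2, 3)]
h = lambda X: (3 + X) % p          # = f(X) + g(X), shares s + t = 3
assert shares_sum == [h(i) for i in (1, 2, 3)]
```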
43
Multiplication
Multiplication makes the degree of the sharing
polynomial go up: f(X) = 1 + X, g(X) = 4 - X,
h(X) = f(X)·g(X) = 4 + 3X - X^2
44
Degree Reduction
Idea is now to securely generate a degree-t
sharing of the value shared by the degree-2t
sharing and use that sharing as input to the
rest of the secure computation
45
Degree Reduction
  • By magic a uniformly random 2t-sharing is dealt
  • No party knows anything but its own share
  • Along with a uniformly random t-sharing of the
    same value as the 2t-sharing
  • The parties subtract the random 2t-sharing from
    the troublesome one, make their shares of the
    difference public, and compute the shared value
    (reconstruction is possible as 2t < n)
  • They now know the distance between the
    troublesome 2t-sharing and the random 2t-sharing
    (on the y-axis)
  • They all add this number to their share of the
    random t-sharing
  • This gives a t-sharing of the same secret as the
    troublesome 2t-sharing
  • Secure as only uniformly random points were
    revealed
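The step above can be sketched as follows (n = 5, t = 2, p = 23 are illustrative assumptions; the "magic" matched random sharings are produced here by a local trusted dealer for simplicity):

```python
# Degree reduction sketch: turn a 2t-sharing of v into a t-sharing of v.
import random

p, n, t = 23, 5, 2

def share(s, deg):
    c = [s] + [random.randrange(p) for _ in range(deg)]
    return [sum(cj * i**j for j, cj in enumerate(c)) % p
            for i in range(1, n + 1)]

def reconstruct(shares):               # Lagrange interpolation at X = 0
    total = 0
    for i in range(n):
        num = den = 1
        for j in range(n):
            if i != j:
                num = num * (-(j + 1)) % p
                den = den * (i - j) % p
        total = (total + shares[i] * num * pow(den, -1, p)) % p
    return total

v = 4                                  # value held in a troublesome 2t-sharing
bad = share(v, 2 * t)
r = random.randrange(p)
r2t, rt = share(r, 2 * t), share(r, t) # the "magic" matched sharings of r
# Parties open the difference (reconstruction possible since 2t < n) ...
d = reconstruct([(b - x) % p for b, x in zip(bad, r2t)])
# ... and each adds it to its share of the random t-sharing
new_t = [(x + d) % p for x in rt]
assert reconstruct(new_t) == v         # now a t-sharing of the same v
```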
46
The Missing Piece
  • Each Pi picks a uniformly random ri and deals
    both a t-sharing and a 2t-sharing of ri
  • The parties sum their shares of the t-sharings,
    giving them a t-sharing of r = Σri
  • The parties sum their shares of the 2t-sharings,
    which gives them a 2t-sharing of the same r = Σri
  • Each party which uses a random sharing is
    guaranteed that the sums of the sharings are
    uniformly random and unknown to all the other
    parties

47
Active Security
  • So far we only have passive security, i.e.,
    against parties that follow the protocol but of
    which t might pool their views to learn more
  • We often need active security, i.e., security
    against up to t parties which might deviate from
    the protocol in some evil coordinated manner

48
Forcing Honest Behavior (1/2)
  • We can transform the passively secure protocol
    into an actively secure one using a generic
    technique
  • Use commitments to commit all parties to their
    secret inputs and all received values
  • Use ZK proofs to let the parties show that all
    shares they send are correctly computed from the
    values committed to
  • The commitments are hiding and the proofs are ZK,
    so no new information is leaked

49
Forcing Honest Behavior (2/2)
  • First all parties send commitments to their
    inputs to the other parties and give
    non-malleable zero-knowledge proofs of knowledge
    of the committed values
  • Including commitments to the random values they
    are going to use when sharing
  • Whenever Pi should send a secret message m to Pj
    (in the passive protocol) it does the following:
  • Compute C = commit(m; r) for uniformly random r
  • Send (m,r) securely to Pj
  • Send C to all other parties along with a ZK PoK
    of (m,r) for which C = commit(m; r) and for which
    m was computed as it should be from the other
    values to which Pi is committed
  • Maintains that all parties are committed to all
    internal values at all times, and that they are
    correct

Independence of Inputs: Non-malleability
guarantees that it is not possible to copy
another party's input commitment
50
Example Sharing
  • Passively secure protocol:
  • Pick uniformly random a1,…,at ∈ Zp
  • Let f(X) = s + a1X + a2X^2 + … + atX^t mod p
  • Securely send si = f(i) to Pi
  • Actively secure protocol (additional steps):
  • Send S = commit(s; r) to all parties
  • Send Aj = commit(aj; rj) for j = 1,…,t to all
    parties
  • Send Si = commit(si; ui) for i = 1,…,n to all
    parties
  • And send (si,ui) to Pi
  • For each i = 1,…,n prove in ZK to all parties
    that S contains a value s, Si contains a value si
    and A1,…,At contain values a1,…,at such that
    si = s + a1·i + a2·i^2 + … + at·i^t mod p

51
Example Addition
  • Passively secure protocol:
  • Pj has a share sj of s and a share tj of t
  • Pj computes the share uj = sj + tj mod p of
    u = s + t mod p
  • Actively secure protocol (additional steps):
  • All parties know a commitment Sj to sj and a
    commitment Tj to tj, and Pj knows the openings
  • Pj sends Uj = commit(uj; v) to all parties
  • Pj proves in ZK to all parties that Sj contains a
    value sj, Tj contains a value tj and Uj contains
    a value uj such that uj = sj + tj mod p

52
Example Opening
  • Passively secure protocol:
  • Each Pj has a share yj of the output y
  • Pj sends yj to all other parties, who reconstruct
    y from the shares
  • Actively secure protocol (additional steps):
  • All parties know a commitment Yj to yj
  • Pj proves in ZK to all parties that yj is what
    was committed to by Yj
  • Since there are n-t ≥ t+1 honest parties, all
    parties will receive at least t+1 good shares

53
Handling Errors (1/2)
  • Now the only power of a corrupted party Pi is to
    refuse to send the required (m,r) to some Pj
  • A failed proof corresponds to not sending
  • If this happens, then Pj sends a complaint
    message to all parties and Pi must then send
    (m,r) to all parties and prove that it is correct
  • Secure to reveal m, as one of the two parties
    involved is corrupted
  • If Pi fails to do so, then all parties now know
    that Pi is corrupted

54
Handling Errors (2/2)
  • Detection of a corrupted Pi can be handled in
    several ways
  • Restart the computation without the cheating
    party
  • All intermediary values can be shared at all
    steps, and if a party refuses, then all her
    previously shared values are reconstructed and
    the other servers compute the withheld value
    themselves

55
Definition of Security (1/3)
  • Security is defined by requiring that whatever t
    corrupted parties can obtain by attacking the
    protocol, they could also obtain by attacking an
    ideal-world implementation

56
Definition of Security (2/3)
  • The ideal world:
  • Each Pi securely sends its input xi to a
    perfectly trusted party called the ideal
    functionality (IF)
  • The IF computes y = F(x1,…,xn) and sends y to all
    parties
  • In the ideal world the only power of the
    corrupted parties is to send alternative inputs
    xi' and to see the output y

57
Definition of Security (3/3)
  • Active t-security:
  • For any set of t PPT corrupted parties for the
    real world there exists t PPT corrupted parties
    for the ideal world such that the outputs of the
    parties in the ideal world are computationally
    indistinguishable from the outputs of the parties
    in the real world
  • This implies that in the real world any t
    corrupted parties only have inevitable powers,
    like giving alternative inputs

58
Sketch of Security Proof
We run the corrupted parties to see their
commitments to their inputs and use the
non-malleable proof of knowledge to extract the
committed values xi
  • Our protocol is proved secure via a simulation
    proof
  • For any t corrupted parties attacking the
    protocol we show how we can simulate the attack
    in the ideal world
  • We compute values xi which correspond to the
    inputs the corrupted parties use in the protocol
    and input these to the IF, which returns the
    result y
  • Given only y and the xi we then simulate the
    values that the corrupted parties would have seen
    in the protocol, show them these values, and
    output whatever values they output in the end

Possible as the corrupted parties see only t
shares of the sharings sent by honest parties, so
these shares can be simulated by shares of a
random value
59
Consensus Broadcast (1/3)
  • During the actively secure MPC protocol we
    several times asked a party to send a value V to
    all other parties
  • A corrupted party can, however, cheat and send
    different values to different parties
  • This can be used to break the protocol
  • A protocol ensuring that even a cheating party
    sends the same message to all parties is called a
    consensus broadcast protocol

60
Consensus Broadcast (2/3)
  • Input: The sender has some message m
  • Output: Each Pi outputs some mi
  • Validity: If the sender is honest, then all
    honest parties output mi = m
  • Agreement: Two honest parties always output the
    same mi
  • No matter which other parties are corrupted

61
Consensus Broadcast (3/3)
  • Consensus broadcast can be implemented for any
    t < n corrupted parties if the parties are able
    to sign and verify messages
  • Example for n = 3, t = 1:
  • P1 signs m and sends it to P2 and P3
  • P2 and P3 relay the message from P1 to the other
    party
  • P2 and P3 output m if they saw a signature on m
    and no signature on another value

62
Simultaneous Broadcast
  • In a simultaneous broadcast all Pi broadcast a
    message mi at the same time
  • No Pi can pick mi to depend on the message mj of
    another party
  • Can be implemented in Internet-like networks if
    and only if t < n/2

63
Possibility of Secure MPC (1/4)
  • We showed above:
  • If t < n/2 and the corrupted parties are
    restricted to poly-time computation then secure
    MPC is possible
  • Need the corrupted parties to be restricted to
    poly-time so they cannot break the commitments
    and the signatures used by the broadcast protocol

64
Possibility of Secure MPC (2/4)
  • If t < n/3 and we assume that the parties are
    connected by perfectly secure channels, then
    perfectly secure MPC is possible
  • No information is leaked to t parties
  • The result is correct with probability 1
  • Tolerates corrupted parties with unbounded
    computing power

65
Possibility of Secure MPC (3/4)
  • If t < n/2 and we assume that the parties are
    connected by perfectly secure channels and have a
    perfectly reliable broadcast channel, then
    statistically secure MPC is possible
  • Only negligible information is leaked to t
    parties
  • The result is correct except with negligible
    probability
  • Tolerates corrupted parties with unbounded
    computing power

66
Possibility of Secure MPC (4/4)
  • If t ≥ n/2 then secure MPC of all functions is
    not possible
  • It is, however, possible to get unfair secure MPC
    which tolerates poly-time corrupted parties
  • In an unfair protocol the corrupted parties can
    cheat such that they learn the result and the
    honest parties do not

67
Unfair Coin-Flip
  • An example of an unfair protocol is our coin-flip
    protocol, where Alice learns the result first and
    then can just stop
  • Alice:
  • Pick uniformly random a
  • Sample a random key K for a commitment scheme
  • Compute C = commitK(a; r)
  • Send C to Bob
  • Bob: Send uniformly random b to Alice
  • Alice: Send (a,r) to Bob, who checks that
    C = commitK(a; r)
  • Both: Let c = a⊕b

Fair coin-flip is provably impossible between two
parties given only normal point-to-point
communication
68
Unfair MPC
  • From the impossibility of fair coin-flip for two
    parties (n = 2, t = 1) it follows that fair
    coin-flip is impossible for any n when t ≥ n/2
  • From this it follows that many (most?) functions
    cannot be implemented fairly when t ≥ n/2
  • Since e.g. simultaneous broadcast easily allows
    implementing fair coin-flip, we can conclude that
    simultaneous broadcast is impossible when t ≥ n/2

69
Crash Course on Cryptography
SSoRC 2008
  • Jesper Buus Nielsen