Techniques in Synchronous Networks - PowerPoint PPT Presentation
1
Techniques in Synchronous Networks
  • What is a Synchronous Network
  • Techniques in Synchronous Networks
  • Variable Speed
  • Communicators
  • Waiting
  • Guessing

2
Synchronous Networks
  • Restrictions
  • Synchronized Clocks
  • tick simultaneously
  • Bounded-delay Transmissions
  • corollary: messages must be of bounded length
  • Lemma: In the absence of failures, any synchronous system with unitary delays can be simulated by a bounded-delays system.
  • Consequence: We will study the unitary-delay systems, as these are much easier to reason about (and design algorithms for).

3
LE in Synchronous Rings
  • Main idea: As-Far, but each ID travels at speed 1/f(ID), where f is a sufficiently fast-growing function (e.g. 2^ID).
  • Issues
  • how to implement slow speeds
  • wake-up
  • Complexity
  • messages: circa 2n (beats the lower bound for asynchronous systems)
  • bits: O(n log i)
  • time: O(n·2^i), where i is the minimal ID (terrible!)
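The variable-speed idea can be sketched as a small event-driven simulation, assuming f(ID) = 2^ID as on the slide (the function and variable names here are my own):

```python
import heapq

def variable_speed_election(ids):
    """Each ID circulates as a token that needs f(ID) = 2**ID time steps
    per edge; the smallest ID is the fastest and returns home first."""
    n = len(ids)
    smallest_seen = list(ids)                     # smallest ID node u has seen
    # (arrival_time, token_id, destination): every node launches its own ID
    events = [(2 ** ids[u], ids[u], (u + 1) % n) for u in range(n)]
    heapq.heapify(events)
    while events:
        t, tid, v = heapq.heappop(events)
        if tid == ids[v]:                         # own ID came back: leader
            return tid, t
        if tid < smallest_seen[v]:                # forward only promising IDs
            smallest_seen[v] = tid
            heapq.heappush(events, (t + 2 ** tid, tid, (v + 1) % n))
    return None
```

On a ring of n nodes with minimal ID i, the returned election time is n·2^i, matching the time bound above.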

4
Bit-efficient LE in Synchronous Rings
  • Main idea: Communicate the ID by sending start and stop bits
  • Straightforward implementation
  • messages: circa 2n
  • bits: O(n) (no longer dependent on i)
  • time: O(n·i·2^i)
  • Improvements
  • pipeline the transmission (be careful how to handle it)
  • time: O(n·2^i)

5
Generalization Communicator
  • How to send a message i using 2, 3, ..., k+1 bits in as little time as possible?
  • 2-bit communicator
  • 3-bit communicator
  • (k+1)-bit communicator
  • using the bits to reduce the time
  • (k+1)-bit communicator:
  • encode the message into a k-tuple of wait intervals (q1, q2, ..., qk)
  • decode the message from the received k-tuple
  • use a bijection between I and all tuples:
  • order tuples first according to the sum Σ qi, then lexicographically
  • map i to the i-th tuple
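The bijection above can be sketched directly: enumerate k-tuples of wait intervals ordered first by their sum, then lexicographically (a brute-force sketch for illustration, not an efficient ranking formula; the function names are mine):

```python
from itertools import count

def tuples_with_sum(s, k):
    """All k-tuples of non-negative integers summing to s, in lexicographic order."""
    if k == 1:
        yield (s,)
        return
    for first in range(s + 1):
        for rest in tuples_with_sum(s - first, k - 1):
            yield (first,) + rest

def encode(i, k):
    """Map message i (0, 1, 2, ...) to the i-th k-tuple of wait intervals,
    ordering tuples first by sum, then lexicographically."""
    for s in count(0):
        for tup in tuples_with_sum(s, k):
            if i == 0:
                return tup
            i -= 1

def decode(tup):
    """Inverse of encode: the position of tup in the same ordering."""
    k, s = len(tup), sum(tup)
    i = sum(1 for lower in range(s) for _ in tuples_with_sum(lower, k))
    for cand in tuples_with_sum(s, k):
        if cand == tup:
            return i
        i += 1
```

Small messages get tuples with a small interval sum, i.e. a short transmission time.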

6
Optimizing Communicator
  • For a fixed upper bound on time t and number of bits k+1, how many different messages can we transmit?
  • the number of tuples with sum exactly t: choose k-1 out of t+k-1 positions, i.e. C(t+k-1, k-1)
  • summed over all sums up to t: choose k out of t+k, i.e. C(t+k, k)
  • What if k is not fixed?
  • use a terminator bit pattern
  • first, communicate k
  • does not save a lot
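The two counts can be checked by brute force against the binomial formulas (a small sketch; the helper name is mine):

```python
from math import comb
from itertools import product

def count_tuples(t, k, exact=True):
    """Brute-force count of k-tuples of non-negative wait intervals
    with sum exactly t (or, with exact=False, at most t)."""
    tups = product(range(t + 1), repeat=k)
    if exact:
        return sum(1 for tup in tups if sum(tup) == t)
    return sum(1 for tup in tups if sum(tup) <= t)
```

For example, with k = 3 wait intervals and time bound t = 5 there are C(7,2) = 21 tuples of sum exactly 5 and C(8,3) = 56 tuples of sum at most 5.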

7
Asynchronous to Synchronous Transformation
  • Any asynchronous algorithm can be transformed into a synchronous one using a communicator.
  • each send(message) is replaced by communicate(message)
  • let the asynchronous algorithm have message complexity M and causal time complexity T
  • transmitting a value I costs T_C(I) time and P_C(I) packets, depending on the communicator C used
  • the total simulation time is therefore T × T_C(2^m) and the total number of packets is M × P_C(2^m), where m is the number of bits in the largest message transmitted

8
Application Better Leader Election
  • Apply the transformation to Franklin's LE algorithm
  • use just the 2-bit communicator
  • resulting complexity
  • bits: O(n log n)
  • time: O(i·n log n)
  • much better time than Variable Speed
  • time still depends on i

9
Another Technique Waiting
  • Algorithm: assume n is known and wake-up is simultaneous; wait n·i time steps, then send your ID
  • Observations
  • only the smallest ID will circle the ring!
  • messages: n
  • bits: n
  • time: n·i
  • if the start is non-simultaneous, use a wake-up phase and wait 2·n·i
  • works in arbitrary networks as well: use flooding and the diameter
  • crucially depends on (an upper bound on) n being known
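The timing argument behind "only the smallest ID circles the ring" can be sketched as follows (counting the final circulation, the completion time is n·i + n = O(n·i); the function name is mine):

```python
def waiting_election(ids):
    """n known, simultaneous wake-up: the node with ID d stays silent for
    n*d steps, then sends its ID around the ring.  Any other node's timer
    (n*d' >= n*(i+1) = n*i + n) expires only after the minimum i has already
    circled, so only n messages are ever sent."""
    n, i = len(ids), min(ids)
    # every non-minimal node's timer expires after the minimum has returned
    assert all(n * d >= n * i + n for d in ids if d != i)
    return i, n * i + n          # leader's ID, completion time
```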

10
Application of Waiting
  • Computing AND, OR
  • wait for a 0 (for AND) or a 1 (for OR)
  • Reducing the time complexity of Variable Speed
  • first, each processor executes Waiting using 2·ID² as the waiting function
  • if waiting worked, we have efficiently elected the leader
  • if not (e.g. I did not receive my ID back in time, or received another's ID), execute Variable Speed
  • if the minimal ID i is at least n, Waiting will succeed and the time is O(i²)
  • otherwise the time is determined by Variable Speed: O(n·2^i) < O(n·2^n)
  • summary: O(i² + n·2^n) vs O(n·2^i)

11
Application of Waiting II
  • Randomized Leader Election
  • does not require unique IDs
  • needs to know n
  • Las Vegas algorithm
  • when it terminates, the result is correct
  • there is no guarantee of termination (but it terminates with high probability)
  • each processor selects 0 with probability 1/n, 1 with probability (n-1)/n
  • terminates when exactly 1 processor selects 0
  • probability of terminating in a given round:
  • n × 1/n × ((n-1)/n)^(n-1) = ((n-1)/n)^(n-1)
  • with growing n, this tends to 1/e, i.e. the expected number of rounds is constant
  • expected time O(n), expected bits O(n)
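The per-round success probability can be checked numerically (a small sketch; the function name is mine):

```python
from math import comb, exp

def success_probability(n):
    """Probability that exactly one of n processors draws 0 when each
    draws 0 independently with probability 1/n."""
    # n * (1/n) * ((n-1)/n)**(n-1) simplifies to ((n-1)/n)**(n-1)
    return comb(n, 1) * (1 / n) * (1 - 1 / n) ** (n - 1)
```

For large n the value approaches 1/e ≈ 0.368, so the expected number of rounds is a constant (about e).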

12
Guessing
  • Minimum Finding as a Guessing Game
  • if you are unhappy with the O(n·i) time complexity, but are willing to pay more communication
  • in round i, with guess g_i:
  • if your ID > g_i, send high to the right
  • else wait n time steps or until high is received
  • if high was received, g_i was an overestimate
  • otherwise g_i was an underestimate

13
Guessing II
  • Guessing game: Given an interval 1..M, find x by asking questions, with answers smaller, higher, hit.
  • How to guess
  • when no overestimates are allowed?
  • when 1 overestimate is allowed?
  • when k overestimates are allowed?
  • Towards optimal guessing: What is the minimal number of questions q needed to win in an interval of size M using at most k overestimates?
  • Alternatively: With q questions of which at most k are overestimates, what is the largest M so that we can win?
  • Denote the answer by h(q,k)

14
Guessing III
Let the first guess be p. If it was an overestimate, we need to win in an interval of size p using q-1 questions and at most k-1 overestimates. If it was an underestimate, we need to win in an interval of size h(q,k)-p using at most q-1 questions and k overestimates. Hence
h(q,0) = q
p ≤ h(q-1,k-1) and h(q,k) - p ≤ h(q-1,k), therefore
h(q,k) = h(q-1,k) + h(q-1,k-1) + 1
Solving it gives h(q,k) = Σ_{i=1..k+1} C(q,i).
Gives us the optimal guessing strategy!
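The recurrence is easy to evaluate directly; this sketch counts the winning guess itself as a "+1" term (my reading of the derivation) and checks the binomial closed form against it:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def h(q, k):
    """Largest interval winnable with q questions and at most k overestimates:
    one guess either hits, or leaves (q-1, k-1) for the part below the guess,
    or (q-1, k) for the part above it."""
    if q == 0:
        return 0
    if k == 0:
        return q                  # guess 1, 2, 3, ...: never overestimate
    return 1 + h(q - 1, k) + h(q - 1, k - 1)
```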
15
Guessing IV
  • Unbounded interval
  • guess progressively bigger values until an overestimate is achieved
  • e.g. in round j guess 2^j; this leads to log i searching rounds and one overestimate
  • in total log i + q(2i, k-1) rounds, where q(M,k) is the minimal number of questions for an interval of size M with at most k overestimates
  • if k overestimates are allowed also while searching for the upper bound, the total can be brought down to 2·q(2i,k) - 1 = O(q(i,k)) = O(k·i^{1/k}) rounds
  • Reapplying to Leader Election
  • guessing the smallest ID i costs O(n·k) bits and O(n·k·i^{1/k}) time
  • an upper bound on n is necessary
  • generalizes well to arbitrary networks
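The doubling phase can be sketched in a few lines (the function name is mine; the last guess is the single overestimate, or an exact hit):

```python
def rounds_to_first_overestimate(x):
    """Guess 2, 4, 8, ... until the guess reaches the unknown x:
    about log2(x) underestimates followed by a single overestimate."""
    rounds, guess = 1, 2
    while guess < x:               # answer "higher": guess was an underestimate
        rounds += 1
        guess *= 2
    return rounds                  # ceil(log2 x) guesses in total
```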

16
Synchronous Leader Election
Can we improve the results for n unknown?
17
Double Waiting
Combines waiting and guessing in order to efficiently elect the leader even if n is unknown.
Overall structure:
  • wake-up
  • loop: guess n and apply the waiting technique; check whether everything was OK; if yes, terminate; otherwise reset and guess a larger n
Tricky part: how to verify that everything was OK?
18
Double Waiting II
  • Actions of node with ID i in round r
  • guess that n ≤ g(r)
  • wait 2·g(r)·i steps, then send wait1 to the right and start counting until a wait1 or reset message is received, or g(r) time steps have passed
  • let n_i(r) be the time between sending and receiving the wait1 message
  • if n_i(r) ≤ g(r-1) or n_i(r) > g(r), then
  • send reset to the right and proceed to round r+1
  • if a reset message was received, proceed to round r+1
  • otherwise perform a second waiting
  • wait h(r,i) steps, then send wait2 to the right and start counting
  • if a wait2 message arrives after exactly n_i(r) time steps and nothing was received meanwhile, declare yourself the leader and send a terminate message to the right
  • else send reset to the right and proceed to round r+1
  • forward messages you did not send

19
Double Waiting III
  • How to choose g(r) and h(r,i) so that the algorithm is not fooled?
  • let t_i(r) be the time when i started/joined round r
  • it may happen that i received wait1 from j in round r:
  • t_j(r) + 2·g(r)·j + d(j,i) = t_i(r) + 2·g(r)·i + n_i(r)
  • it should not happen that i receives wait2 from somebody else in round r after exactly n_i(r) steps
  • if i received both wait1 and wait2 messages, they were sent by the same node j; assume that indeed happened:
  • t_j(r) + 2·g(r)·j + n_j(r) + h(r,j) + d(j,i) = t_i(r) + 2·g(r)·i + n_i(r) + h(r,i) + n_i(r)
  • subtracting these equations we get
  • n_j(r) + h(r,j) = n_i(r) + h(r,i)
  • set h(r,i) = 2·g(r)·i + g(r) - n_i(r)
  • substituting, we get that the left and right sides are equal only if i = j
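A quick numeric check of this cancellation (names are mine): with the chosen h, the measured ring time n_i(r) drops out, leaving a quantity that depends only on the ID.

```python
def second_wait(g, ident, n_measured):
    """The slide's choice h(r, i) = 2*g(r)*i + g(r) - n_i(r)."""
    return 2 * g * ident + g - n_measured

# n_i(r) cancels: n_i(r) + h(r, i) = (2*i + 1) * g(r), which depends only on
# the ID, so n_j(r) + h(r, j) == n_i(r) + h(r, i) forces i == j
```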

20
Double Waiting IV
  • What about complexity?
  • 3n messages in each round + n for wake-up
  • number of rounds: g⁻¹(n)
  • time complexity of round r: g(r)·i + h(r,i) = O(g(r)·i)
  • if g(r) grows fast enough (i.e. g(r+1) > a·g(r)), T = O(g(g⁻¹(n))·i)
  • some possible choices

21
Unison Problem
All clocks should be synchronized.
Algorithm:
  • Spontaneous wake-up: set your clock to 0, send your clock to all your neighbours
  • After receiving a message containing a counter: if the counter reads more than the local clock, set the local clock to the counter and send (counter+1) to all neighbours
After diam(G) steps, all clocks are synchronized.
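A round-based simulation of this algorithm can be sketched as below. One assumption of the sketch: messages carry clock+1 to compensate the one-step transmission delay, matching the slide's "send (counter+1)" forwarding rule (names are mine):

```python
def unison(adj, wake, steps):
    """adj: adjacency lists; wake[u]: the step at which u wakes spontaneously
    (all other nodes wake upon their first message)."""
    clock = {u: None for u in adj}
    inbox = {u: [] for u in adj}
    for t in range(steps):
        outbox = {u: [] for u in adj}
        for u in adj:
            if wake.get(u) == t and clock[u] is None:     # spontaneous wake-up
                clock[u] = 0
                outbox[u].append(clock[u] + 1)
            for c in inbox[u]:
                if clock[u] is None or c > clock[u]:      # counter runs ahead
                    clock[u] = c
                    outbox[u].append(c + 1)
        for u in adj:                                     # synchronous tick
            if clock[u] is not None:
                clock[u] += 1
        inbox = {u: [] for u in adj}
        for u in adj:                                     # 1-step delivery
            for c in outbox[u]:
                for v in adj[u]:
                    inbox[v].append(c)
    return clock
```

On a path A-B-C with only A waking spontaneously, all three clocks agree from diam(G) = 2 steps onwards.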
22
Synchronizers (from Tel's book)
  • Goal: Simulate a synchronous algorithm on an asynchronous network
  • Consequence of successful simulation: Every problem solvable in the synchronous setting is solvable also in the asynchronous setting (assuming no faults).
  • We will see
  • simple synchronizer
  • α, β and γ synchronizers
  • application to Breadth-First Search

23
Synchronizers
  • Simple synchronizer
  • send a tick message in each pulse (simulation of a time step) to each of your neighbours
  • piggyback tick messages on the original messages of the protocol, but send tick even if the original algorithm did not send a message over that link
  • proceed to simulating the next time step only after you have received ticks from all your neighbours
  • Complexity of simulation
  • Messages: 2·|E| ticks every time step
  • Time: diam(G) for wake-up, 1 step per pulse
  • Overall simulation time: diam(G) + T + 1
  • Overall simulation communication: 2·|E|·T
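The key safety property, that waiting for all neighbours' ticks keeps adjacent nodes at most one pulse apart, can be checked with an abstract run in which random scheduling stands in for asynchrony (a sketch; names and the scheduling model are mine):

```python
import random

def simple_synchronizer(adj, pulses, schedule_len=4000, seed=1):
    """Abstract run of the simple synchronizer: a node may start pulse p+1
    only once every neighbour has reached pulse p, i.e. only once it holds
    that neighbour's tick for pulse p."""
    pulse = {u: 0 for u in adj}
    rng = random.Random(seed)
    nodes = sorted(adj)
    for _ in range(schedule_len):
        u = rng.choice(nodes)                    # arbitrary async scheduling
        if pulse[u] < pulses and all(pulse[v] >= pulse[u] for v in adj[u]):
            pulse[u] += 1
        # invariant: neighbours never drift more than one pulse apart
        assert all(pulse[a] - pulse[b] <= 1 for a in nodes for b in adj[a])
    return pulse
```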

24
α, β, γ Synchronizers
  • Overall scheme
  • message transmissions are acknowledged
  • a node becomes safe for a given round when all its messages have been acknowledged
  • a node starts executing the next pulse when it learns that all its neighbours are safe
  • the synchronizers differ in how a node learns that all its neighbours are safe
  • α-synchronizer
  • very much like the simple synchronizer
  • once a node becomes safe, it sends "I am safe" to all its neighbours
  • O(|E|) messages per pulse, 3 time steps per pulse

25
α, β, γ Synchronizers II
  • β-synchronizer
  • synchronization via a spanning tree
  • when a node becomes safe, it waits for "we are safe" messages from all its children, then sends "we are safe" to its parent
  • when the root has received "we are safe" from all its children and is itself safe, it broadcasts the "next pulse" message
  • a node receiving the "next pulse" message proceeds to simulate the next time step
  • Complexity
  • precomputing the spanning tree (assumes unique IDs): O(n log n + |E|) messages and O(n) time
  • overhead messages per pulse: 2(n-1)
  • time per pulse: O(n) in the worst case

26
α, β, γ Synchronizers III
  • γ-synchronizer
  • combination of α and β
  • divide G into clusters; use β within clusters, α between clusters
  • for each pair of neighbouring clusters, a connecting edge is selected
  • each cluster has a spanning tree; the tree edges and the incident connecting edges are used for the synchronization
  • Overall structure
  • make sure our cluster is safe (use the β synchronizer)
  • tell the neighbouring clusters that we are safe
  • wait until we learn from all neighbouring clusters that they are safe
  • proceed to the next pulse

27
γ-synchronizer complexity
  • Complexity
  • let H_c be the maximal depth of a cluster tree
  • let E_c be the number of tree and connecting edges in this clustering
  • Messages: O(E_c) per pulse
  • Time: O(H_c) per pulse
  • Lemma: For each k in the range 2 ≤ k < n, there exists a clustering C such that E_c ≤ k·n and H_c ≤ log n / log k
  • The clusters are grown by adding the next layer of BFS while it is of size at least (k-1) times the current size of the cluster. A cluster of i levels is of size at least k^(i-1), therefore its depth is at most log_k n = log n / log k.
  • A cluster of size x is charged at most (k-1)·x connecting edges to later clusters. Therefore, there are at most n·(k-1) connecting edges overall.
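The lemma's growth rule can be sketched directly. Here depth counts the number of added BFS layers, so a cluster of depth d has size at least k^d (the slide's k^(i-1) with i levels counting the root); names and the root-selection rule are mine:

```python
def grow_clusters(adj, k):
    """Grow each cluster by BFS layers while the next layer contains at
    least (k-1) times the nodes gathered so far, as in the lemma's proof."""
    unassigned = set(adj)
    result = []
    while unassigned:
        root = min(unassigned)                    # deterministic choice
        cluster, frontier, depth = {root}, {root}, 0
        while True:
            layer = {v for u in frontier for v in adj[u]
                     if v in unassigned and v not in cluster}
            if not layer or len(layer) < (k - 1) * len(cluster):
                break                             # growth rule violated: stop
            cluster |= layer
            frontier = layer
            depth += 1
        unassigned -= cluster
        result.append((cluster, depth))
    return result
```

Each added layer multiplies the cluster size by at least k, which is exactly why the depth stays below log n / log k.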

28
Synchronizer summary