Caching Queues in Memory Buffers

1
Caching Queues in Memory Buffers
  • Rajeev Motwani (Stanford University)
  • Dilys Thomas (Stanford University)

2
Problem
  • Memory is fast, expensive, and small
  • Disk is large (effectively infinite) and inexpensive, but slow
  • Maintaining queues is motivated by data streams,
    distributed transaction processing, and networks
  • Queues are kept in memory, but may be
    spilled onto disk.

3
Model
  • Queue updates and depletion
  • Single queue / multiple queues
  • Cost model:
    • Unit cost per read/write
    • Extended cost model: c0 + c1 · numtuples
    • Seek time: 5-10 ms
    • Transfer rates: 10-160 MB/s
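The two cost models listed above can be written down as a tiny sketch. The function names and the constants are illustrative (chosen within the hardware ranges quoted on the slide), not parameters from the paper:

```python
def unit_cost(num_tuples):
    """Unit cost model: one unit per tuple read or written."""
    return num_tuples

def extended_cost(num_tuples, c0=5.0, c1=0.001):
    """Extended cost model: c0 + c1 * num_tuples per disk access.
    c0 ~ seek time in ms, c1 ~ per-tuple transfer time in ms (assumed values)."""
    return c0 + c1 * num_tuples
```

The fixed term c0 is what makes small disk accesses expensive and motivates moving tuples in large chunks later in the talk.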

4
Model (contd.)
  • Online algorithms for different cost models.
  • Competitive analysis
  • Acyclicity

5
Algorithm HALF
[Figure: a memory buffer of size M holding HEAD and TAIL, with SPILLED stored on disk between them]

Invariants:
  1. HEAD < SPILLED < TAIL (tuples in HEAD are older than those in SPILLED, which are older than those in TAIL)
  2. SPILLED empty ⇒ TAIL empty
  3. SPILLED nonempty ⇒ |HEAD|, |TAIL| ≤ M/2
6
[Figure: the same layout, with HEAD and TAIL in memory and SPILLED on disk]
7
  • Initially all tuples are in HEAD
  • The first write-out moves the M/2 newest tuples from HEAD
    to SPILLED
  • While SPILLED is nonempty, arriving tuples enter TAIL
  • WRITE-OUT: when |TAIL| > M/2,
    write(M/2): TAIL → SPILLED
  • READ-IN: when HEAD is empty and SPILLED is nonempty,
    read(M/2): SPILLED → HEAD
  • TRANSFER: after a READ-IN, if SPILLED is empty,
    move (rename) TAIL → HEAD
    // to maintain invariant 3
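The rules above can be sketched as a small simulation. The class name, the I/O counters, and the deque representation are my own illustration inferred from the slides, not the paper's code; disk I/O is modeled simply by counting tuples moved:

```python
from collections import deque

class HalfQueue:
    """Sketch of Algorithm HALF for a single queue under the unit-cost model."""

    def __init__(self, M):
        assert M % 2 == 0
        self.M = M
        self.head = deque()     # oldest tuples, in memory
        self.spilled = deque()  # middle tuples, on disk
        self.tail = deque()     # newest tuples, in memory
        self.writes = 0         # tuples written to disk
        self.reads = 0          # tuples read from disk

    def enqueue(self, x):
        if not self.spilled:
            if len(self.head) < self.M:
                self.head.append(x)
                return
            # Memory full: spill the M/2 newest tuples of HEAD to disk.
            for _ in range(self.M // 2):
                self.spilled.appendleft(self.head.pop())
            self.writes += self.M // 2
        # SPILLED is nonempty, so new tuples enter TAIL.
        self.tail.append(x)
        if len(self.tail) > self.M // 2:  # WRITE-OUT
            for _ in range(self.M // 2):
                self.spilled.append(self.tail.popleft())
            self.writes += self.M // 2

    def dequeue(self):
        if not self.head:
            if self.spilled:  # READ-IN: fetch the M/2 oldest spilled tuples
                for _ in range(self.M // 2):
                    self.head.append(self.spilled.popleft())
                self.reads += self.M // 2
            if not self.spilled:  # TRANSFER, maintaining invariant 3
                self.head.extend(self.tail)
                self.tail.clear()
        return self.head.popleft() if self.head else None
```

Because tuples only ever move to disk in M/2-sized chunks and back in M/2-sized chunks, SPILLED always holds a multiple of M/2 tuples, which is what the disjoint-window analysis on the next slide exploits.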

8
Analysis
  • HALF is acyclic (its M/2-windows are disjoint)
  • Alternate M-windows are disjoint.
  • At least one tuple from each M-window has to be
    written to disk by any algorithm, including the
    offline one
  • These have to be distinct writes.
  • Hence HALF is 2-competitive with respect to writes.
  • The analysis for reads is similar.
  • The matching lower bound of 2 follows from a more involved
    argument; see the paper.

9
Multiple(n) Queues
  • Queue additions are adversarial, as in the
    single-queue setting.
  • Queue depletions are either:
    • Round-Robin
    • Adversarial

Static allocation of the buffer among the n queues
cannot be competitive.
10
Multiple Queues BufferedHead
  • Dynamic memory allocation
  • When there is no space in memory for incoming
    tuples, write out the newest M/2n tuples of the
    largest in-memory queue
  • Read-ins are done in chunks of M/2n
  • Analysis: see paper
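The eviction rule just described can be sketched as follows; the function name and the plain-list queue representation are my own illustration, not the paper's code:

```python
def write_out(queues, M):
    """BufferedHead-style eviction sketch: when memory is full, spill the
    newest M/(2n) tuples of the largest in-memory queue in one chunk.
    queues maps a queue name to its in-memory tuples, oldest first."""
    n = len(queues)
    chunk = M // (2 * n)
    # Pick the largest queue currently held in memory.
    victim = max(queues, key=lambda name: len(queues[name]))
    spilled = queues[victim][-chunk:]  # the chunk of newest tuples
    del queues[victim][-chunk:]
    return victim, spilled
```

Evicting the newest tuples of the largest queue (rather than a statically assigned share) is what makes the allocation dynamic, which slide 9 argued is necessary for competitiveness.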

11
Multiple Queues BufferedHead
  • BufferedHead is acyclic
  • Round-Robin depletion: BufferedHead is 2n-competitive
  • √n lower bound for acyclic algorithms
  • Adversarial depletion: no o(M)-competitive algorithm exists
  • However, if given M/2 more memory than the adversary,
    then BufferedHead is 2n-competitive

12
Extended Cost Model: GreedyChunk
  • Cost model: c0 + c1 · t
  • Let the block size be T = c0 / c1
  • All read-ins and write-outs are in chunks of size T
  • T ≈ 100 KB to a few MB
  • Simple algorithm GREEDY-CHUNK:
    • Write out the newest T tuples when there is no space in
      memory
    • Read in the T oldest tuples if the oldest tuple is on
      disk

13
Extended Cost Model: GreedyChunk
  • If M > 2T: run Algorithm GREEDY-CHUNK
  • Else: run Algorithm HALF
  • The resulting algorithm is 4-competitive and acyclic.
  • Analysis: see paper.
  • Extends easily to multiple queues.
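The block-size rule and the case split above amount to a few lines; the helper names are illustrative, not from the paper:

```python
def chunk_size(c0, c1):
    """Natural I/O granularity T = c0 / c1 under the cost c0 + c1 * t:
    at t = T the transfer cost of a chunk matches its fixed seek cost."""
    return c0 / c1

def choose_algorithm(M, c0, c1):
    """Slides 12-13: GREEDY-CHUNK needs room for two T-sized chunks in
    memory; with less memory than that, fall back to Algorithm HALF."""
    return "GREEDY-CHUNK" if M > 2 * chunk_size(c0, c1) else "HALF"
```

For example, with c0 = 5 ms and c1 = 0.001 ms per tuple, T = 5000 tuples, so GREEDY-CHUNK is used whenever the buffer holds more than 10000 tuples.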

14
Summary of competitive ratios (UB = upper bound, LB = lower bound):

Cost model | Single Queue (UB / LB) | Multiple Queues, Round-Robin (UB / LB) | Multiple Queues, Adversarial (UB / LB) | Adversarial, M/2 extra memory (UB)
unit       | 2 / 2                  | 2n / √n                                | M / Ω(M)                               | 2n
linear     | 4 / 4                  | 4 / 4                                  | 4 / 4                                  | 4
15
Practical Significance
  • Gigascope, AT&T's network monitoring tool (SIGMOD
    '03), shows a drastic performance decrease once disk is used
  • For data-stream systems this is a good alternative to
    approximation; no spilling algorithms had previously
    been studied

16
Related Work
  • IBM MQSeries: spilling to disk
  • Related work on network router design using SRAM
    and DRAM memory hierarchies on chip

Open Problems
Remove the acyclicity requirement for multiple queues.
Close the gap between the upper and lower bounds.