1
SEDA: An Architecture for Well-Conditioned
Scalable Internet Services
  • Matt Welsh, David Culler, and Eric Brewer

2
Web Serving: A Systems Challenge
  • Popular sites receive an enormous number of hits
    per day
  • Services are becoming more complex
  • Dynamic content with intensive I/O
  • Novel services are constantly introduced
  • System complexity increases
  • Present solution: use more hardware (clusters)
  • What happens when peak load is several orders of
    magnitude higher than average load?
  • Some sort of load management is needed

3
Load Management
  • Graceful degradation
  • as offered load exceeds capacity, the service
    maintains high throughput with a linear
    response-time penalty
  • impacts all clients equally, or based on some
    policy
  • Currently, this doesn't exist in the Web
  • Reasons
  • Traditional operating system designs
  • transparent resource virtualization → little or
    no control over resources
  • Models of concurrency
  • Processes and threads have high overhead →
    concurrency limit

4
Threads vs. Events
  • Thread-based concurrency: one thread per request
  • Widely used, easy to program
  • When the number of threads grows (several
    hundred), performance degradation is severe, due
    to overhead
  • Bounded thread pools
  • A limit is placed on the number of threads
    allowed per service
  • Avoids throughput degradation in web servers
  • Introduces unfairness
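A bounded thread pool is easy to express with java.util.concurrent (which arrived after this paper); the sketch below is illustrative only, and the class name, pool size, and request count are arbitrary:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BoundedPoolDemo {
    public static void main(String[] args) throws Exception {
        // A bounded pool: at most 4 request-handling threads,
        // regardless of how many requests arrive.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Submit many more "requests" than threads; excess requests
        // wait in the executor's internal queue rather than each
        // spawning a thread (avoiding thread-per-request overhead).
        Future<?>[] results = new Future<?>[100];
        for (int i = 0; i < 100; i++) {
            final int id = i;
            results[i] = pool.submit(() -> id * 2); // stand-in for request work
        }
        for (Future<?> f : results) f.get(); // wait for completion
        pool.shutdown();
        System.out.println("done");
    }
}
```

The unfairness noted above follows directly: requests beyond the thread limit queue up behind earlier ones, so admitted requests see normal latency while the rest wait arbitrarily long.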

5
Threads vs. Events
  • Event-driven concurrency
  • Small number of threads (usually 1 per CPU)
  • Infinite loop processes events from a queue
  • Events are generated by the application or OS
  • Examples: disk/network I/O, timers, etc.
  • Task processing is a finite state machine
  • Events trigger transitions
  • Server maintains its own continuation state for
    each task
  • Scales better than thread-based design
  • Increased complexity in the event scheduler
    (critical component)
  • Assumption: event-handling threads don't block
  • Not always true (file I/O is blocking in most
    OSes)
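The loop-plus-FSM structure can be sketched as follows; this is a toy illustration, not code from the paper, and the state names are invented:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class EventLoopDemo {
    // Each task is a small finite state machine; one event
    // triggers one transition (hypothetical request states).
    enum State { READ_REQUEST, PROCESS, WRITE_RESPONSE, DONE }

    static class Task {
        State state = State.READ_REQUEST;
        void step() {
            switch (state) {
                case READ_REQUEST:   state = State.PROCESS; break;
                case PROCESS:        state = State.WRITE_RESPONSE; break;
                case WRITE_RESPONSE: state = State.DONE; break;
                case DONE:           break;
            }
        }
    }

    public static void main(String[] args) {
        // Single-threaded scheduler loop: dequeue an event, advance
        // the corresponding task's FSM, re-enqueue until finished.
        // The continuation state lives in the Task object, not on a
        // per-request thread stack.
        Deque<Task> events = new ArrayDeque<>();
        for (int i = 0; i < 3; i++) events.add(new Task());

        int processed = 0;
        while (!events.isEmpty()) {
            Task t = events.poll();
            t.step();
            processed++;
            if (t.state != State.DONE) events.add(t);
        }
        System.out.println("events processed: " + processed);
    }
}
```

Note that if `step()` ever blocked (say, on a disk read), the whole loop would stall, which is exactly the non-blocking assumption flagged above.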

6
Event-Driven Server Design
  • Scheduler: the main thread
  • FSM: a request or flow of execution
  • Based on incoming events, the scheduler triggers
    the appropriate FSM(s)
  • Scheduler must control each FSM
  • increased complexity
  • performance degradation when the number of FSMs
    increases

7
Threads vs. Events: Comparison
  • Threaded server: throughput degradation
  • Event-driven server: throughput degradation

8
Staged Event-Driven Architecture
  • Goals
  • Support massive concurrency
  • Event-driven execution where possible
  • Efficient and scalable I/O primitives
  • Simplify construction of well-conditioned
    services
  • Abstractions hide away details of scheduling and
    resource management
  • Modular construction of applications
  • Enable introspection
  • Application can adapt to load conditions based on
    the request stream
  • Support self-tuning resource management
  • System dynamically adjusts parameters to meet
    load requirements

9
SEDA Details
  • Stage: the fundamental unit of SEDA
  • A stage is a self-contained component containing
  • Event handler
  • Processes a batch of events and may generate new
    events, placing them on its own queue or those of
    other stages
  • Incoming event queue
  • May be finite
  • Thread pool
  • Managed by the thread pool controller
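A minimal stage might look like the sketch below. This is not Sandstorm's actual API; the `Stage` class, its method names, and the use of `BlockingQueue`/`ExecutorService` are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class StageDemo {
    // A stage: bounded incoming event queue + event handler + thread pool.
    static class Stage {
        final BlockingQueue<String> queue;
        final ExecutorService pool;
        final Consumer<List<String>> handler;

        Stage(int queueBound, int threads, Consumer<List<String>> handler) {
            this.queue = new ArrayBlockingQueue<>(queueBound); // may be finite
            this.pool = Executors.newFixedThreadPool(threads);
            this.handler = handler;
        }

        // offer() fails when the queue is full, letting callers shed load.
        boolean enqueue(String event) { return queue.offer(event); }

        // One worker iteration: drain a batch and hand it to the handler.
        void runBatch(int batchSize) throws Exception {
            pool.submit(() -> {
                List<String> batch = new ArrayList<>();
                queue.drainTo(batch, batchSize);
                handler.accept(batch);
            }).get();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> seen = Collections.synchronizedList(new ArrayList<>());
        Stage stage = new Stage(2, 1, seen::addAll);
        System.out.println(stage.enqueue("a")); // accepted
        System.out.println(stage.enqueue("b")); // accepted
        System.out.println(stage.enqueue("c")); // rejected: queue full
        stage.runBatch(2);
        System.out.println(seen);
        stage.pool.shutdown();
    }
}
```

The rejected `enqueue` is the key point: a finite queue turns overload into an explicit signal the application can act on, instead of unbounded thread or memory growth.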

10
SEDA Details
  • Controller
  • Observes runtime characteristics of a stage
  • Adjusts resource allocation and scheduling to
    meet demands
  • Can operate with local (stage) or global (system)
    knowledge
  • Examples
  • Thread pool controller
  • Adjusts the number of threads within a stage
  • Samples the input queue and adds a thread when
    queue length > threshold
  • Removes idle threads
  • Batching controller
  • Sets the number of events processed by the event
    handler
  • Observes the output rate of events, decreasing the
    batching factor until throughput begins to
    degrade
  • If throughput degrades, the factor is increased
    slightly
  • Sudden drops in load reset the batching factor to
    its maximum value
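The thread pool controller's grow/shrink policy can be sketched as a pure decision function; the thresholds, cap, and the `adjust` helper here are all illustrative, not values from the paper:

```java
public class PoolControllerDemo {
    // Sketch of the thread pool controller policy described above:
    // sample the queue length, add a thread when it exceeds a
    // threshold, and remove a thread when the stage is idle.
    static int adjust(int currentThreads, int queueLength,
                      int threshold, int maxThreads, boolean idle) {
        if (queueLength > threshold && currentThreads < maxThreads)
            return currentThreads + 1;   // stage is backed up: grow
        if (idle && currentThreads > 1)
            return currentThreads - 1;   // stage is idle: shrink
        return currentThreads;
    }

    public static void main(String[] args) {
        System.out.println(adjust(2, 150, 100, 8, false)); // over threshold: grow
        System.out.println(adjust(8, 500, 100, 8, false)); // capped at max
        System.out.println(adjust(4, 0, 100, 8, true));    // idle: shrink
    }
}
```

A real controller would invoke this periodically from a sampling loop and keep the observed queue lengths smoothed over a window, but the policy itself is this simple.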

11
Thread pool controller performance
12
Batch controller performance
13
SEDA Details
  • Applications
  • Networks of stages linked by event queues
  • Threads are constrained to a single stage
  • Direct communication between threads of different
    stages is not possible
  • Event queues are used for message passing and IPC
  • Improved modularity
  • Downside: latency
  • Stages can manage load independently
  • Since queues can be finite, the application can
    act accordingly in the presence of heavy load
  • Modular/component design facilitates debugging
    and performance analysis
  • One can place stages for inspection purposes
    without interfering with system operation
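Linking stages only through queues can be shown with two threads and two `BlockingQueue`s; this is a toy pipeline with invented stage names, not Sandstorm code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PipelineDemo {
    public static void main(String[] args) throws Exception {
        // Two stages linked by event queues; the only communication
        // channel between them is the queue (no shared call stack,
        // no direct cross-stage method calls).
        BlockingQueue<String> parseQueue = new LinkedBlockingQueue<>();
        BlockingQueue<String> respondQueue = new LinkedBlockingQueue<>();

        Thread parseStage = new Thread(() -> {
            try {
                String req = parseQueue.take();      // consume one event
                respondQueue.put("parsed:" + req);   // emit to next stage
            } catch (InterruptedException ignored) {}
        });
        parseStage.start();

        parseQueue.put("GET /index.html");           // enqueue a request event
        System.out.println(respondQueue.take());     // result from next queue
        parseStage.join();
    }
}
```

Because each hop is a queue operation, an observer stage could be spliced between `parseQueue` and the parse thread for debugging without either neighbor changing, which is the inspection property claimed above.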

14
Sandstorm
  • A SEDA-based Internet services platform
  • Implemented in Java
  • Provides asynchronous socket I/O
  • High performance; exhibits graceful degradation
    and can handle socket connections on the order
    of thousands
  • Made possible since there are many sockets per
    thread (as opposed to 1-to-1)
  • Also provides asynchronous file I/O
  • Since the OS doesn't provide nonblocking file
    I/O, this is obtained using a bounded thread
    pool and blocking I/O
  • Provides a familiar Unix-like interface (read,
    write, etc.)
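The bounded-pool trick for file I/O is easy to sketch with modern Java; `readAsync` and the pool size are hypothetical, not Sandstorm's interface:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncFileDemo {
    // Blocking file reads wrapped in a bounded thread pool: callers
    // get an asynchronous interface even though the underlying OS
    // call blocks, and the pool bound caps concurrent disk requests.
    static final ExecutorService FILE_POOL = Executors.newFixedThreadPool(4);

    static Future<String> readAsync(Path p) {
        return FILE_POOL.submit(() -> new String(Files.readAllBytes(p)));
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("seda", ".txt");
        Files.write(tmp, "hello".getBytes());
        Future<String> result = readAsync(tmp);   // returns immediately
        System.out.println(result.get());         // caller blocks only here
        FILE_POOL.shutdown();
        Files.deleteIfExists(tmp);
    }
}
```

In a full SEDA system the completed read would be delivered as an event on the requesting stage's queue rather than through a `Future`, but the bounded pool over blocking I/O is the same idea.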

15
Asynchronous Sockets Layer
  • Three stages
  • Read: responds to network I/O readiness events
    and reads data from sockets, putting new packets
    on the application event queue
  • Write: accepts outgoing packets and schedules
    them for writing to the appropriate socket; also
    establishes new outgoing connections
  • Listen: accepts new TCP connections and pushes
    connection events to the application event queue
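Readiness-event handling for the Listen and Read roles can be sketched with `java.nio`'s `Selector` (Sandstorm itself predates NIO and used a separate nonblocking I/O library, so this is an analogy, not its implementation):

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ReadStageDemo {
    public static void main(String[] args) throws Exception {
        // Nonblocking server socket registered for readiness events.
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A plain client writes one packet over loopback.
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        Socket client = new Socket("127.0.0.1", port);
        client.getOutputStream().write("ping".getBytes());
        client.getOutputStream().flush();

        StringBuilder received = new StringBuilder();
        while (received.length() < 4) {
            selector.select();                    // wait for readiness events
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {         // Listen: accept connection
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {    // Read: pull bytes off socket
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    SocketChannel ch = (SocketChannel) key.channel();
                    ch.read(buf);
                    received.append(new String(buf.array(), 0, buf.position()));
                }
            }
            selector.selectedKeys().clear();
        }
        System.out.println(received);
        client.close(); server.close(); selector.close();
    }
}
```

One selector thread multiplexes every connection, which is what lets a handful of threads serve thousands of sockets.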

16
Application: the Haboob Web Server
17
Haboob Performance
18
SEDA Conclusions
  • Massive concurrency dictates different approaches
    in server design
  • Event-driven models outperform thread-based
    models in robustness and scalability
  • Application must play a part in system resource
    management
  • SEDA uses the event-driven model in conjunction
    with dynamic resource control to achieve
    robustness, modularity, concurrency and
    scalability
  • Traditional OS design may be insufficient for
    meeting the demands of high-performance Internet
    applications
  • New OSes can provide applications with greater
    control over scheduling and resource usage