1
Scheduler Activations
  • Effective Kernel Support for the User-Level
    Management of Parallelism

2
Presentation Slides
  • A copy of this slide show can be found at the
    following address
  • http://remus.rutgers.edu/sidie

3
Abstract
  • Threads
  • An approach to concurrency
  • Provide multiple sequential streams of execution
    within a single program
  • Supported either by the O/S or by user-level
    libraries
  • Neither approach is fully satisfactory
  • Kernel threads perform worse than user threads
  • User threads have limited support in the kernel

4
Abstract
  • Scheduler Activations proposes a new kernel
    interface and user-level thread package
  • Together they provide the functionality of kernel
    threads without the performance loss

5
Introduction
  • Effectiveness of parallel computing depends on
    the performance of the primitives used to control
    parallelism
  • One approach is to use processes that share
    memory, but process overhead makes this too
    inefficient
  • Led to the use of threads
  • Separate streams of sequential execution

6
The Problem
  • Threads can be supported at either the user level
    or in the kernel
  • User-level threads
  • Have excellent performance
  • Are flexible and can be customized
  • Built without modifications to the O/S

7
The Problem
  • Kernel threads
  • Integrated with the system
  • Scheduled directly by the kernel
  • Performance
  • Better than processes
  • Worse than procedure calls

8
Goals
  • Functionality
  • Mimic the behavior of kernel threads
  • No processor idles while threads are runnable,
    and thread priorities are respected
  • Performance
  • Make the cost of common thread operations within
    an order of magnitude of procedure calls
  • Flexibility
  • Simplify application-specific customization

9
Approach
  • Provide each application with a virtual machine,
    an abstraction of a dedicated physical machine
  • The O/S retains control of the allocation of
    processors to address spaces, including the
    ability to change the number of processors
    assigned to each during execution

10
Approach
  • The kernel notifies a thread scheduler for each
    address space of all events affecting the address
    space.
  • Vector events that influence user-level
    scheduling to the scheduler for that address
    space
  • The thread system notifies the kernel of the
    subset of user-level events that can affect
    processor allocation decisions

11
Scheduler Activation
  • The kernel mechanism used for this communication
    is called a scheduler activation
  • The execution context for an event vectored from
    the kernel to an address space
  • Used by the thread scheduler for an address space
    to handle events
  • Modify user-level data structures
  • Execute user-level threads
  • Make requests to the kernel
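
  The idea of an event vectored from the kernel to an
  address space can be made concrete with a small C
  sketch. The names below (sa_event_kind, sa_event,
  and the fields) are illustrative assumptions, not
  the paper's actual interface.

    /* Hypothetical record describing an event the kernel vectors to an
       address space along with a fresh scheduler activation. */
    enum sa_event_kind {
        SA_PROCESSOR_ADDED,        /* a processor is newly allocated to this address space */
        SA_PROCESSOR_PREEMPTED,    /* an activation's processor has been taken away */
        SA_ACTIVATION_BLOCKED,     /* an activation blocked in the kernel (e.g. on I/O) */
        SA_ACTIVATION_UNBLOCKED    /* a previously blocked activation can continue */
    };

    struct sa_event {
        enum sa_event_kind kind;   /* which of the four events occurred */
        int   activation_id;       /* the activation the event refers to, if any */
        void *saved_user_state;    /* saved registers of a stopped thread, if any */
    };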

12
User-level thread performance
  • Advantages of user-level threads over kernel
    threads
  • Performance inherently better
  • Greater flexibility
  • Cost of accessing thread management operations
  • With kernel threads, programs must cross an extra
    protection boundary on every thread operation

13
User-level thread performance
  • Cost of generality
  • A kernel thread system must supply any feature
    any reasonable application might need
  • This imposes extra overhead on applications that
    do not use those features
  • User-level thread facilities can be closely
    matched to the needs of the application

14
Thread Operation Latencies
15
Integration Issues
  • Difficult to implement user-level threads with
    the same level of integration with system
    services as kernel threads
  • Due to lack of kernel support

16
Integration Issues
  • Kernel events are handled invisibly to the user
  • Processor preemption
  • I/O blocking
  • The kernel schedules threads without regard to
    the user-level thread state

17
Effective Kernel Support
  • Design of a new kernel interface and user-level
    thread system
  • Combines functionality of kernel threads with
    performance of user-level threads

18
Effective Kernel Support
  • Kernel allocates processors to address spaces
  • User-level thread system has control over which
    threads use which processors
  • The kernel notifies the address space's thread
    system of kernel events that affect it
  • The application programmer sees no difference
    from programming with kernel threads

19
Explicit Vectoring of Events
  • Communication between the kernel processor
    allocator and the user-level thread system is
    structured in terms of scheduler activations
  • A scheduler activation serves as the execution
    context for running user-level threads

20
Explicit Vectoring of Events
  • When a program is started, the kernel creates a
    scheduler activation, assigns it to a processor
    and calls the application
  • The user-level thread management system uses the
    scheduler activation as its execution context
  • The user level then selects and executes a thread
    in this context
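
  A minimal C sketch of this startup path, building on
  the hypothetical sa_event record above; sa_upcall
  and run_next_thread are illustrative names, not the
  paper's interface.

    void run_next_thread(void);    /* hypothetical: user-level dispatch routine */

    /* Hypothetical upcall entry point: the kernel starts each new scheduler
       activation here, on the processor assigned to it. */
    void sa_upcall(struct sa_event ev)
    {
        if (ev.kind == SA_PROCESSOR_ADDED) {
            /* This activation becomes the execution context of whatever
               user-level thread the scheduler chooses to run. */
            run_next_thread();
        }
        /* Preemption, blocking, and unblocking are handled further below. */
    }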

21
Explicit Vectoring of Events
  • Once an activation is stopped by the kernel, it
    is not resumed by the kernel
  • Instead, the kernel creates a new activation
    through which it notifies the user level that the
    thread has been stopped
  • The user level can then decide what action to
    take next
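
  Continuing the hypothetical sketch, this is roughly
  how the user level might react when a fresh
  activation reports that another activation was
  stopped; enqueue_ready and thread_from_state are
  assumed helpers, not the paper's code.

    struct thread;                                  /* user-level thread descriptor */
    struct thread *thread_from_state(void *state);  /* hypothetical: recover the stopped thread */
    void enqueue_ready(struct thread *t);           /* hypothetical: add to the ready list */

    /* Runs inside the new activation the kernel created to deliver the news. */
    void sa_on_stopped(struct sa_event ev)
    {
        if (ev.kind == SA_PROCESSOR_PREEMPTED ||
            ev.kind == SA_ACTIVATION_UNBLOCKED) {
            /* The stopped (or newly runnable) thread's saved state arrives
               with the event; put that thread back on the ready list. */
            enqueue_ready(thread_from_state(ev.saved_user_state));
        }
        /* The user level, not the kernel, decides what this activation runs next. */
        run_next_thread();
    }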

22
Explicit Vectoring of Events
  • The application always knows exactly which
    threads are running on which processors
  • The application is free to build any concurrency
    model on top of scheduler activations (SAs)
  • The kernel needs no knowledge of the data
    structures used to represent parallelism at the
    user level

23
Notifying the Kernel
  • A user-level thread system need not tell the
    kernel about every thread operation, only those
    that affect processor allocation
  • The thread system notifies the kernel whenever
    the address space transitions to a state where it
    has more runnable threads than processors, or
    more processors than runnable threads (see the
    sketch below)
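
  A sketch, again with hypothetical names, of the only
  notifications the user-level scheduler would send to
  the kernel; sa_request_processors and
  sa_yield_processor are illustrative stand-ins for
  the kernel calls, not the paper's exact interface.

    void sa_request_processors(int n);   /* hypothetical kernel call: ask for n more processors */
    void sa_yield_processor(void);       /* hypothetical kernel call: give this processor back */

    /* Called by the user-level scheduler after operations that change the
       number of runnable threads (create, exit, block, wake). */
    void rebalance(int runnable_threads, int processors_held)
    {
        if (runnable_threads > processors_held) {
            sa_request_processors(runnable_threads - processors_held);
        } else if (runnable_threads < processors_held) {
            sa_yield_processor();   /* this processor has nothing to run */
        }
    }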

24
Critical Sections
  • A thread could be executing in a critical section
    when it is blocked or preempted
  • Possible ill effects
  • Poor performance
  • Deadlock
  • Solution based on recovery
  • Temporarily continue the thread, via a user-level
    context switch, until it exits the critical
    section (sketched below)
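
  A hedged sketch of the recovery idea;
  in_critical_section and continue_until_section_exit
  are assumed helpers, not the paper's code.

    struct thread;
    int  in_critical_section(struct thread *t);          /* hypothetical: was t inside a lock? */
    /* hypothetical: user-level switch to t until it leaves the critical section */
    void continue_until_section_exit(struct thread *t);
    void enqueue_ready(struct thread *t);                /* hypothetical: add to the ready list */

    /* Used when an upcall reports that a thread was stopped mid-execution. */
    void recover_stopped_thread(struct thread *t)
    {
        if (in_critical_section(t)) {
            /* Temporarily resume the thread so it can finish the critical
               section; control returns here once it exits. */
            continue_until_section_exit(t);
        }
        enqueue_ready(t);   /* now safe to schedule normally */
    }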

25
Thread Scheduling Policy
  • The kernel should have no knowledge of an
    application's concurrency model or scheduling
    policy
  • The application is free to choose these as
    appropriate
  • They can be tuned to the application's needs

26
Performance
  • Goal: combine the functionality of kernel threads
    with the performance and flexibility of
    user-level threads
  • Thread performance with SA was similar to that of
    FastThreads on Topaz
  • Upcall performance was much slower than Topaz
    kernel thread operations

27
Application Performance
  • Based on measurements of the same parallel
    application using Topaz kernel threads,
    FastThreads, and FastThreads based on SA
  • On a single processor, all systems performed
    worse than the sequential implementation, due to
    overhead for thread creation and synchronization

28
Application Performance
  • With multiple processors, FastThreads and
    FastThreads/SA performed about equally
  • When more kernel involvement is necessary,
    FastThreads on SA performs best

29
Summary
  • Managing parallelism at the user level is
    essential to high-performance parallel computing.
  • Typical kernel threads provide poor support for
    user-level threads
  • Improved kernel support provided by SA
  • Any concurrency model (not just threads) can be
    built on SA because the kernel has no knowledge
    of the user-level data structures.