Threads, SMP, and Microkernels

1
Threads, SMP, and Microkernels
  • Dr. E.C. Kulasekere

2
More on Threads
  • We now look at more advanced thread concepts
  • A process embodies two characteristics
  • Resource ownership - the process is allocated a
    virtual address space to hold the process image
  • Scheduling/execution - the process follows an
    execution path that may be interleaved with that
    of other processes
  • These two characteristics are treated
    independently by the operating system
  • The unit of dispatching is referred to as a
    thread or lightweight process
  • The unit of resource ownership is referred to as
    a process or task

3
Multithreading
  • The operating system supports multiple threads of
    execution within a single process
  • MS-DOS supports a single process with a single
    thread
  • Traditional UNIX supports multiple user processes
    but only one thread per process
  • Windows 2000, Solaris, Linux, Mach, and OS/2
    support multiple threads per process
  • The following are associated with a process
  • A virtual address space to hold the process image
  • Protected access to processors, other processes,
    files, and other resources
  • Within a process there may be several threads,
    each of which has
  • A thread execution state and a saved context when
    not running
  • An execution stack and per-thread static storage
    for local variables
  • Access to the memory and resources of its
    process, shared with all other threads in the
    process
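As a rough sketch of this split (the structures and field names below are invented for illustration, not taken from any real operating system), the per-process and per-thread state can be pictured as two separate control blocks in C:

    /* Illustrative only -- invented field names, not a real OS's code. */
    struct process {                    /* unit of resource ownership        */
        void           *address_space;  /* virtual address space / image     */
        struct file   **open_files;     /* protected access to files etc.    */
        struct thread  *threads;        /* the threads inside this process   */
    };

    struct thread {                     /* unit of dispatching               */
        int             state;          /* Ready, Running, Blocked, ...      */
        void           *saved_context;  /* registers saved when not running  */
        void           *stack;          /* per-thread execution stack        */
        struct process *owner;          /* memory and resources are shared   */
    };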

4
(No Transcript)
5
(No Transcript)
6
Benefits of Threads
  • It takes less time to create a new thread than a
    new process
  • Less time to terminate a thread than a process
  • Less time to switch between two threads within
    the same process
  • Since threads within the same process share
    memory and files, they can communicate with each
    other without invoking the kernel
  • If an application should be implemented as a set
    of related units of execution, implementing them
    as threads is far more efficient than as a
    collection of separate processes (see the sketch
    below)
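A minimal POSIX-threads sketch of the last two points: the thread is created and joined with ordinary library calls, and the two threads communicate through a shared variable without invoking the kernel for IPC (error handling omitted).

    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;        /* visible to every thread in the process */

    static void *worker(void *arg) {
        shared_counter += 10;             /* communicate via shared memory          */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);    /* far cheaper than fork()          */
        pthread_join(t, NULL);                     /* wait, then read the shared value */
        printf("counter = %d\n", shared_counter);  /* prints 10                        */
        return 0;
    }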

7
Uses of Threads in a Single-User Multiprocessing
System
  • Foreground and background work
  • Example: a spreadsheet (e.g. Excel) environment
    where several jobs, such as redrawing the display
    and fetching data, can be done simultaneously
  • Asynchronous processing
  • Auto-saving in the background is an example (see
    the sketch below)
  • Speed of execution
  • One thread can compute while another reads data,
    or, on a multiprocessor, threads can execute
    simultaneously
  • Modular program structure
  • Since related activities are bound into one
    process as threads, the program can be easier to
    design and maintain
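A small sketch of the asynchronous-processing point above: an autosave thread works in the background while the main thread stays responsive. The document structure and save routine are stand-ins invented for this example, not a real API.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    struct document { const char *text; };           /* stand-in document            */

    static void save_document(struct document *d) {  /* stand-in save routine        */
        printf("autosaved: %s\n", d->text);
    }

    static void *autosave(void *arg) {
        struct document *doc = arg;
        for (;;) {
            sleep(60);                               /* once a minute, in background */
            save_document(doc);
        }
        return NULL;
    }

    int main(void) {
        struct document doc = { "draft" };
        pthread_t t;
        pthread_create(&t, NULL, autosave, &doc);
        pause();     /* stand-in for the foreground loop that serves the user */
        return 0;
    }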

8
Characteristics of Threads in Scheduling
  • Suspending a process involves suspending all
    threads of the process, since all threads share
    the same address space
  • Termination of a process terminates all threads
    within that process

9
Thread States
  • Operations associated with a change in thread
    state
  • Spawn - a thread can spawn another thread within
    the same process when required
  • Block - the thread waits for an event and its
    context is saved
  • Unblock - when the event occurs, the thread is
    moved back to the ready queue
  • Finish - the thread's register context and stacks
    are deallocated
  • Suspend is not a thread operation, since
    suspension is a process-level concept that does
    not make sense for an individual thread
  • Note that one blocked thread in a process does
    not block the process or any other threads within
    the process (see the sketch below)
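The four operations can be read as transitions of a small per-thread state machine. The sketch below uses invented names (it is not a real scheduler) and shows that Block and Unblock touch only the one thread, never the whole process.

    /* Illustrative only: a toy per-thread state machine. */
    enum thread_state { READY, RUNNING, BLOCKED, FINISHED };

    struct thread { enum thread_state state; };

    void block(struct thread *t)   { t->state = BLOCKED;  }  /* only this thread waits  */
    void unblock(struct thread *t) { t->state = READY;    }  /* back on the ready list  */
    void finish(struct thread *t)  { t->state = FINISHED; }  /* stack and context freed */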

10
Remote Procedure Call Using Single Thread
11
Remote Procedure Call Using Multiple Threads
12
Multithreading example on a Uniprocessor
13
Synchronization
  • All threads of a process run within the same
    address space
  • That space, and the process's other resources,
    are shared with every other thread
  • To keep the process's data structures from being
    corrupted when several threads use them, access
    to shared resources must be coordinated
  • This is known as synchronization: a shared
    resource is used by only one thread at a time
    (see the sketch below)
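A minimal POSIX-threads sketch of this rule: two threads increment the same counter, and a mutex ensures the shared variable is touched by only one thread at a time.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *bump(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);       /* only one thread at a time in here */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);  /* always 200000 with the lock held  */
        return 0;
    }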

14
Thread Levels
  • Two broad categories: user-level threads and
    kernel-level threads

15
User-Level Threads
  • All thread management is done by the application
  • The kernel is not aware of the existence of
    threads
  • A thread library is provided for the management
    of the threads.
  • The application starts in a single user thread
    inside a process managed by the kernel; new
    threads are spawned by calling functions in the
    thread library
  • All of this activity takes place in user space
    (see the sketch below)
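One concrete way to see "all in user space": on systems that provide the POSIX ucontext routines, a thread library can create and switch execution contexts with plain function calls, so the kernel never learns that a second thread exists. A minimal sketch:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thread_ctx;
    static char thread_stack[64 * 1024];          /* the new thread's private stack  */

    static void thread_body(void) {
        puts("running in a user-level thread");
    }                                             /* returning resumes uc_link below */

    int main(void) {
        getcontext(&thread_ctx);                  /* start from a valid context      */
        thread_ctx.uc_stack.ss_sp   = thread_stack;
        thread_ctx.uc_stack.ss_size = sizeof thread_stack;
        thread_ctx.uc_link          = &main_ctx;  /* where to go when it finishes    */
        makecontext(&thread_ctx, thread_body, 0);

        swapcontext(&main_ctx, &thread_ctx);      /* "dispatch" without the kernel   */
        puts("back in the original thread");
        return 0;
    }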

16
Advantages of User-Level Threads
  • Thread switching is simpler - no kernel
    intervention is required, i.e. the process does
    not need to switch to kernel mode. This saves the
    overhead of two mode switches
    (user->kernel->user)
  • Scheduling can be application specific - each
    application can choose its own scheduling policy
    in user space without upsetting the underlying OS
    scheduler
  • User-level threads can run on any operating
    system, since they are based on an application
    library rather than on kernel support

17
Disadvantages of User-Level Threads
  • Most system calls are blocking. When a user-level
    thread executes a blocking system call, all the
    other threads within the process, and the process
    itself, are blocked as well
  • Only a single thread within a process can execute
    at a time, which rules out multiprocessing of
    threads. This is because the kernel assigns a
    process to only one processor at a time, so
    within the process only one thread can be
    running. (With kernel-level threads, by contrast,
    as many threads as there are processors can run
    simultaneously.)

18
Kernel-Level Threads
  • W2K, Linux, and OS/2 are examples of this
    approach
  • The kernel maintains context information for the
    process and for its threads
  • Scheduling is done on a thread basis, not on a
    process basis
  • This overcomes the drawbacks of user-level
    threads
  • The kernel can simultaneously schedule multiple
    threads from the same process on multiple
    processors
  • If one thread blocks, the kernel can schedule
    another thread of the same process, so the
    process as a whole is not blocked

19
Drawback of Kernel-Level Threads
  • Switching between threads now requires a mode
    switch to the kernel, which adds overhead
  • Kernel-level threads are faster in general;
    however, if many mode switches are required, the
    two levels have comparable performance

20
Combined Approaches
  • Solaris is an example
  • Thread creation is done in user space
  • The bulk of scheduling and synchronization of
    threads is done in user space
  • Multiple threads within the same application can
    run in parallel on multiple processors, and a
    blocking system call need not block the entire
    process
  • Traditionally the concepts of thread and process
    have been tied together, with each process
    containing a single thread of execution
  • However, other relationships between threads and
    processes are also possible

21
Relationship Between Threads and Processes
22
Multithreading Models
  • Many-to-one model
  • Maps many user-level threads onto one kernel
    thread
  • Thread management is carried out in user space,
    hence it is efficient
  • When one thread makes a blocking call, the entire
    process is blocked
  • Only one thread can use the kernel thread at a
    time, so threads cannot execute in parallel
  • One-to-one model
  • Each user thread is mapped to its own kernel
    thread
  • Provides more concurrency: when one thread makes
    a blocking call, the other threads can still be
    scheduled

23
Multithreading Models
  • This method also allows multiple threads to run
    on a multiprocessor system.
  • A drawback of this system is that each new
    creation of a user thread also should be followed
    by the creation of a kernel thread. This burdens
    the performance of the machine.
  • One way to avoid a big problem is to restrict the
    number of threads that can be created.
  • Many to Many Model
  • Many user level threads are mapped onto a smaller
    or equal number of kernel threads.
  • True concurrency is not gained since only one
    thread can be scheduled by the kernel.

24
Parallel Processing: SMP Architecture
  • Single Instruction Single Data (SISD)
  • single processor executes a single instruction
    stream to operate on data stored in a single
    memory
  • Single Instruction Multiple Data (SIMD)
  • each instruction is executed on a different set
    of data by the different processors
  • Multiple Instruction Single Data (MISD)
  • a sequence of data is transmitted to a set of
    processors, each of which executes a different
    instruction sequence. Never implemented
  • Multiple Instruction Multiple Data (MIMD)
  • a set of processors simultaneously execute
    different instruction sequences on different data
    sets

25
(No Transcript)
26
Symmetric Multiprocessing
  • Kernel can execute on any processor
  • Typically each processor does self-scheduling
    form the pool of available process or threads
  • Within a shared memory system the classification
    depends on how the processes are applied to
    processors
  • Master slave
  • A failure in master brings down the entire system
  • The master can become a performance bottleneck
    since it has to do all the scheduling etc.
  • Symmetric multiprocessor
  • Te kernel can execute in any processor and each
    processor does its own scheduling.
  • SMP processors should not pick up the same
    process for processing. Hence some
    synchronization technique is required.
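A hedged sketch of that synchronization point: each processor pulls work from one shared ready queue, and a spin lock guarantees that no two processors dequeue the same task. The queue and task structures are invented for illustration, not taken from any real kernel.

    #include <pthread.h>
    #include <stddef.h>

    struct task { struct task *next; };           /* toy task descriptor            */

    static struct task *ready_queue;              /* shared by all processors       */
    static pthread_spinlock_t queue_lock;

    void scheduler_init(void) {
        pthread_spin_init(&queue_lock, PTHREAD_PROCESS_PRIVATE);
    }

    /* Each processor calls this to self-schedule from the shared pool. */
    struct task *pick_next_task(void) {
        pthread_spin_lock(&queue_lock);           /* stop two CPUs taking one task  */
        struct task *t = ready_queue;
        if (t)
            ready_queue = t->next;
        pthread_spin_unlock(&queue_lock);
        return t;                                 /* NULL means nothing is runnable */
    }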

27
This architecture must also solve the cache
coherence problem: when one processor's private
cache is updated, the copies of that data held in
other caches and in main memory become stale. A
coherence mechanism is therefore needed to keep
all copies of the information consistent.
28
Multiprocessor Operating System Design
Considerations
  • Simultaneous concurrent processes or threads
  • Kernel routines should be reentrant so that
    several processors can execute the same kernel
    code simultaneously (illustrated in the sketch
    after this list)
  • Kernel tables and resource management structures
    must be designed to avoid deadlock and invalid
    operations
  • Scheduling
  • Since scheduling may be performed by any
    processor, conflicts must be avoided
  • Synchronization
  • Mutual exclusion and event ordering are needed to
    avoid conflicts over shared resources
  • Memory management
  • Must efficiently manage the available memory,
    including paging, across processors
  • Reliability and fault tolerance
  • Graceful performance degradation is desired when
    a processor fails
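The reentrancy requirement in the first bullet can be illustrated with a generic example (not actual kernel code): the first routine keeps its working data in a static buffer, so two processors executing it at once would corrupt each other's results, while the second keeps everything in caller-supplied storage and can safely run on every processor simultaneously.

    #include <string.h>

    /* NOT reentrant: every caller shares the same static buffer. */
    static char scratch[64];
    char *format_name_bad(const char *name) {
        strncpy(scratch, name, sizeof scratch - 1);
        scratch[sizeof scratch - 1] = '\0';
        return scratch;               /* a second CPU calling this overwrites it */
    }

    /* Reentrant: the caller supplies the storage, nothing is shared. */
    char *format_name_ok(const char *name, char *buf, size_t len) {
        strncpy(buf, name, len - 1);
        buf[len - 1] = '\0';
        return buf;                   /* safe to run on all processors at once   */
    }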

29
Microkernels
  • A small operating system core that provides a
    foundation for modular extensions
  • Contains only essential operating-system
    functions
  • Many services traditionally included in the
    operating system are now external subsystems
  • device drivers
  • file systems
  • virtual memory manager
  • windowing system
  • security services

30
Kernel Architectures
31
Benefits of a Microkernel Organization
  • Uniform interface for requests made by a process
  • All services are provided by means of message
    passing (sketched after this list)
  • Hence processes need not distinguish between
    kernel-level and user-level services
  • Extensibility
  • Allows the addition of new services
  • Only selected servers need to be changed when a
    new feature is added
  • Flexibility
  • New features can be added
  • Existing features can be removed to produce a
    smaller, more efficient system
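A hedged sketch of the "uniform interface by message passing" idea: every request, whatever server it is aimed at, follows the same build/send/receive pattern. The message layout and the msg_send/msg_receive primitives are hypothetical, invented for this illustration rather than taken from a real microkernel.

    #include <string.h>

    /* Hypothetical microkernel IPC -- names invented for illustration. */
    struct msg {
        int  service;                 /* which server: file system, pager, driver */
        int  op;                      /* operation code understood by that server */
        char payload[56];             /* request-specific data                    */
    };

    int msg_send(int port, const struct msg *m);      /* assumed kernel primitives */
    int msg_receive(int port, struct msg *reply);

    /* A client asks the file-system server for a read the same way it would
       ask any other service -- it never needs to know where that server runs. */
    int request_read(int fs_port, const char *path) {
        struct msg m = { .service = 1, .op = 2 };     /* FS server, READ operation */
        strncpy(m.payload, path, sizeof m.payload - 1);
        msg_send(fs_port, &m);                        /* the request goes out      */
        return msg_receive(fs_port, &m);              /* the reply comes back      */
    }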

32
Benefits of a Microkernel Organization
  • Portability
  • Changes needed to port the system to a new
    processor are confined to the microkernel - not
    to the other services
  • This is because most of the processor-specific
    code is contained in the microkernel
  • Reliability
  • Modular design
  • A small microkernel can be rigorously tested

33
Benefits of a Microkernel Organization
  • Distributed system support
  • Messages can be sent without knowing which
    machine the target service runs on
  • This is achieved by giving each service a unique
    identifier, so a request does not need to name
    the machine on which the service is running
  • Object-oriented operating system
  • Components are objects with clearly defined
    interfaces that can be interconnected to form
    software
  • One disadvantage cited is that it takes longer to
    build and send a message via the microkernel, and
    to accept and decode the reply, than to make a
    single direct service call

34
W2K Process and its Resources
35
Windows 2000 Process Object
36
Windows 2000 Thread States
37
Solaris
  • A process includes the user's address space,
    stack, and process control block
  • User-level threads
  • Lightweight processes
  • Kernel threads

38
(No Transcript)
39
Solaris Thread Execution
  • Synchronization
  • Suspension
  • Preemption
  • Yielding

40
Linux Process
  • State
  • Scheduling information
  • Identifiers
  • Interprocess communication
  • Links
  • Times and timers
  • File system
  • Virtual memory
  • Processor-specific context
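These fields correspond roughly to what the Linux process descriptor records. The structure below is a heavily simplified sketch with illustrative names; it is not the kernel's actual struct task_struct.

    /* Simplified sketch only -- not the real Linux task_struct. */
    struct task_sketch {
        long                state;        /* Running, Interruptible, Zombie, ...    */
        int                 priority;     /* scheduling information                 */
        int                 pid, uid;     /* identifiers                            */
        void               *signals;      /* interprocess communication             */
        struct task_sketch *parent;       /* links to parent and children           */
        unsigned long       utime, stime; /* times and timers                       */
        void               *files;        /* file system: open files, root, cwd     */
        void               *mm;           /* virtual memory mappings                */
        void               *cpu_context;  /* processor-specific context (registers) */
    };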

41
Linux States of a Process
  • Running
  • Interruptible
  • Uninterruptible
  • Stopped
  • Zombie