Title: Processes


1
Processes
  • A process is a program in execution.
  • Process components: everything associated with
    its current activity, including
  • Program code
  • Program counter
  • Registers
  • Main memory
  • Program stack (temporary variables, parameters, etc.)
  • Data section (global variables, etc.)

2
Processes
  • Difference between a process and a program
  • Example 1) The IKEA furniture instructions for
    making a chair are the program; actually
    following the instructions to build the chair is
    the process.
  • Example 2) When two users run ls, it is the same
    program but two different processes.
3
Processes
  • Multiprogramming is the rapid switching back and
    forth between processes

4
Processes
  • Daemons: processes that run in the background
    (e.g., checking email)
  • The only Unix system call to create a new process
    is fork. Afterwards the new process has the same
    memory image, the same open files, etc. as its
    (single) parent
  • kill is the system call used to terminate a
    process
  • In Unix all the processes in the system belong
    to a single tree, with init at the root
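
A minimal sketch of process creation with fork,
using standard POSIX calls (the printed messages are
only illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();            /* clone the calling process */
        if (pid < 0) {                 /* fork failed */
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {         /* child: same memory image and
                                          open files as the parent */
            printf("child %d of parent %d\n", getpid(), getppid());
        } else {                       /* parent: pid is the child's id */
            waitpid(pid, NULL, 0);     /* wait for the child to exit */
            printf("child %d finished\n", pid);
        }
        return 0;
    }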

5
Process State
  • In the ready state a process is willing and able
    to run, but no CPU is currently available
  • In the blocked state a process is unable to run
    until some external event happens

6
Process scheduling
  • The lowest level of the OS is the scheduler,
    which hides all interrupt handling and the
    details of starting and stopping processes

7
Process Implementation
  • The process table is used to maintain process
    information (one entry for each process)
  • The process table is an array of structures
  • An entry in the process table is also called a
    process control block (PCB)
  • The PCB is the key to multiprogramming
  • The PCB must be saved when a process is switched
    from the running to the ready or blocked state
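
A hypothetical sketch of a process-table entry in C;
the fields shown are typical, but the names are ours
and real kernels keep many more:

    #define MAX_PROCS 64

    struct file;                      /* file-management info (opaque here) */

    struct pcb {
        int   pid;                    /* process id */
        enum { RUNNING, READY, BLOCKED } state;
        void *program_counter;        /* saved program counter */
        long  registers[16];          /* saved general-purpose registers */
        void *stack_pointer;          /* saved stack pointer */
        struct file *open_files[32];  /* file-management information */
        void *page_table;             /* memory-management information */
        int   priority;               /* scheduling information */
    };

    struct pcb process_table[MAX_PROCS];  /* one entry per process */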

8
  • Some of the fields of a typical process-table
    entry

9
  • To support multiprogramming, the scheduler uses
    the interrupt vector when switching to the
    interrupt service procedure.
  • For example, a hardware-level interrupt can be an
    I/O device asserting a signal on its assigned bus
    line when it has finished its work

10
Process Implementation
  • Here is what the lowest level of the OS does when
    an interrupt occurs.
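
(The slide's figure did not survive the transcript;
per the textbook's skeleton, the steps are roughly:
1. Hardware stacks the program counter, etc.
2. Hardware loads a new program counter from the
   interrupt vector.
3. An assembly-language procedure saves the registers.
4. The assembly-language procedure sets up a new stack.
5. The C interrupt service procedure runs, typically
   reading and buffering input.
6. The scheduler decides which process runs next.
7. The C procedure returns to the assembly code.
8. The assembly-language procedure starts up the new
   current process.)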

11
Threads
  • Each process has a single address space that
    contains program text, data, and stack. Multiple
    threads of control within one process all share
    that same address space.
  • Threads are lightweight processes that have their
    own program counter, registers, and state.
  • Threads allow multiple executions to take place
    in the same process environment.
  • Multiple threads running in parallel in one
    process share the program's address space, open
    files, and other resources, just as multiple
    processes running in parallel in one computer
    share physical memory, disks, and printers.
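
A minimal sketch of two threads sharing one address
space (POSIX threads, compiled with -pthread); the
unsynchronized counter update is itself a race,
foreshadowing the race-condition slides:

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;          /* lives in the shared data section */

    void *worker(void *arg)
    {
        shared_counter++;            /* both threads see the same variable
                                        (unsynchronized, hence racy) */
        printf("thread %ld sees counter = %d\n",
               (long)arg, shared_counter);
        return NULL;                 /* each thread has its own stack/PC */
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }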

12
Threads
  • Threads in three processes (a) and in one
    process (b)

13
Threads
  • Items shared by all threads in a process:
    address space, global variables, open files
  • Items private to each thread: program counter,
    registers, stack, state

14
Threads
  • Each thread has its own stack

15
Thread Usage
  • For example, for building a web server that
    caches the popular pages in memory

16
Thread Usage
Rough outline of the code for the previous slide:
(a) dispatcher thread, (b) worker thread
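
The code itself is missing from the transcript; a
reconstruction in the textbook's style, where
get_next_request, handoff_work, and the cache
functions are placeholder names:

    /* (a) dispatcher thread */
    while (TRUE) {
        get_next_request(&buf);          /* accept an incoming request */
        handoff_work(&buf);              /* pass it to an idle worker */
    }

    /* (b) worker thread */
    while (TRUE) {
        wait_for_work(&buf);             /* sleep until work arrives */
        look_for_page_in_cache(&buf, &page);
        if (page_not_in_cache(&page))
            read_page_from_disk(&buf, &page);
        return_page(&page);              /* send the reply */
    }
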
17
Thread Usage
  • Besides web servers, another example of thread
    usage is in a browser
  • With HTTP 1.0 a browser must retrieve the
    multiple images belonging to a web page, and
    setting up a TCP connection for each image
    sequentially takes a long time. Using threads,
    the browser can establish a separate TCP
    connection per image and retrieve the images
    almost in parallel.

18
Threads implementation
  • Threads can be managed in user space or in kernel
    space. User space is less expensive, because
    fewer system calls are required for thread
    management, and switching between threads in user
    space is faster. The problem with implementing
    threads in user space is that the kernel is not
    aware that other threads exist, so it may block
    all of a process's threads when it blocks that
    process. In either approach (user or kernel
    space) the following problems must be solved:
  • Sharing files between threads can cause problems:
    one thread may close a file while another one
    wants to read it.
  • Managing signals (thread-specific or not) and the
    per-thread stack-overflow problem must also be
    considered when implementing threads

19
Interprocess Communication (IPC)
  • Processes cooperate with each other to provide
    information sharing (e.g. users sharing the same
    file), computation speedup (parallel processing),
    and convenience (e.g. editing file A while
    printing file B).
  • There are three issues in cooperation between
    processes/threads:
  • How one process can pass information to another
    (e.g. via a pipe)
  • Making sure processes do not get in each other's
    way (e.g. both grabbing the last 1 MB of memory)
  • Synchronization of processes (e.g. process A
    produces data and B prints it)

20
Race conditions
  • Two processes A and B print their files by
    entering the file names in the spooler directory
  • The spooler directory has two shared variables:
    out (the next file to be printed) and in (the
    next free slot in the directory)

21
Race conditions
  • Assume the following scenario: process A reads
    the next free slot (say 7), but before writing to
    it the CPU switches to process B. Process B reads
    the free slot (again 7), writes its file name
    there, and updates in to 8. When the CPU switches
    back to process A, it continues from where it
    left off and writes its file name into slot 7,
    overwriting B's entry. Thus process B never gets
    its file printed.
  • Situations like this, where two processes are
    using shared data and the result depends on who
    runs precisely when, are called race conditions.

22
Critical Regions
  • To avoid race conditions we need mutual
    exclusion: a way of making sure that if one
    process is using a shared variable or file, the
    other processes are excluded from doing the same
    thing.
  • The part of each program where shared memory is
    accessed is called the critical region or
    critical section.
  • To avoid race conditions, processes should never
    be in their critical regions at the same time.

23
Critical Regions
  • Mutual exclusion using critical regions

24
Critical Regions
  • In particular, a solution to the critical-section
    problem must satisfy four conditions:
  • 1- No two processes may execute in their C.S. at
    the same time (mutual exclusion)
  • 2- No assumptions about speeds or the number of
    CPUs
  • 3- No process outside its C.S. may block another
    process (progress)
  • 4- No process should have to wait forever to
    enter its critical region (bounded waiting)

25
Mutual Exclusions
  • Disabling interrupts (hardware solution)
  • A process disables all interrupts after entering
    its critical region and re-enables them just
    before leaving it. Thus the CPU cannot switch to
    another process during that time.
  • This is not a good solution because:
  • The user may forget to turn interrupts on again
  • On multiprocessor systems, only the CPU that
    executes the disable instruction is affected
  • The kernel itself disables interrupts, so letting
    user processes disable them can cause
    inconsistency

26
Mutual Exclusions
  • Lock variables (software solution)
  • A shared variable that is initially 0. When a
    process wants to enter its C.S., it tests the
    lock; if it is 0, the process sets it to 1 and
    enters the C.S.
  • The problem is exactly the same as in the spooler
    directory example: a race condition between the
    test and the set
  • A solution that continuously tests a variable
    until some value appears is said to use busy
    waiting (see next slide)
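
A sketch of why the naive lock variable fails: the
gap between the test and the set is the race window.

    int lock = 0;                /* shared; 0 = free, 1 = taken */

    void enter_region(void)
    {
        while (lock != 0)        /* busy wait until lock looks free */
            ;
        /* another process can be scheduled right here,
           also see lock == 0, and both enter the C.S. */
        lock = 1;
    }

    void leave_region(void)
    {
        lock = 0;
    }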

27
Mutual Exclusions
  • Strict alternation solution: (a) process 0,
    (b) process 1
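
The code figure is missing; a reconstruction in the
textbook's style (critical_region and
noncritical_region are placeholders):

    int turn = 0;                    /* shared: whose turn is it? */

    /* (a) process 0 */
    while (TRUE) {
        while (turn != 0) ;          /* busy wait for our turn */
        critical_region();
        turn = 1;                    /* hand the turn to process 1 */
        noncritical_region();
    }

    /* (b) process 1 */
    while (TRUE) {
        while (turn != 1) ;          /* busy wait */
        critical_region();
        turn = 0;                    /* hand the turn back to process 0 */
        noncritical_region();
    }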

28
Strict Alternation
  • In this method the variable turn keeps track of
    whose turn it is to enter the C.R. and access the
    shared memory
  • The problem is that taking turns is not a good
    idea when one of the processes is much slower
    than the other. In fact it violates condition 3
    (no process outside its C.R. may block another
    process).

29
Peterson's Solution

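The code figure did not survive the transcript;
Peterson's algorithm as commonly given (e.g., in
Tanenbaum):

    #define FALSE 0
    #define TRUE  1
    #define N     2                  /* number of processes */

    int turn;                        /* whose turn is it? */
    int interested[N];               /* all values initially FALSE */

    void enter_region(int process)   /* process is 0 or 1 */
    {
        int other = 1 - process;     /* number of the other process */
        interested[process] = TRUE;  /* show that you are interested */
        turn = process;              /* set flag */
        while (turn == process && interested[other] == TRUE)
            ;                        /* busy wait */
    }

    void leave_region(int process)   /* process: who is leaving */
    {
        interested[process] = FALSE; /* indicate departure from C.R. */
    }
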
30
Peterson's solution
  • A process that falls through the while loop may
    enter the C.R.
  • If process 0 sets turn to 0, and before it checks
    the while condition process 1 becomes interested,
    then process 1 sets turn to 1 and waits, while
    process 0 falls through and enters the C.R. As
    long as process 0 is in the C.R., process 1
    waits. When process 0 leaves the C.R. it marks
    itself not interested, process 1's wait ends, and
    process 1 can enter the C.R.

31
The TSL Instruction
  • A hardware solution to the mutual exclusion
    problem
  • The hardware provides the instruction TSL RX,LOCK
    which reads the contents of the memory word lock
    into register RX and then stores a nonzero value
    into lock.
  • The TSL (Test and Set Lock) instruction is
    indivisible: no other processor can access the
    memory word until the instruction is finished.

32
  • TSL instruction
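
The assembly figure is missing; a C11 sketch that
uses atomic_exchange as a stand-in for the hardware
TSL instruction:

    #include <stdatomic.h>

    atomic_int lock = 0;             /* 0 = free, nonzero = taken */

    void enter_region(void)
    {
        /* the exchange is indivisible, like TSL: it reads the
           old value and stores 1 in a single atomic step */
        while (atomic_exchange(&lock, 1) != 0)
            ;                        /* lock was set: busy wait */
    }

    void leave_region(void)
    {
        atomic_store(&lock, 0);      /* release the lock */
    }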

33
Problems of Solutions Based on Busy Waiting
  • Since a process that wants to enter its C.R. may
    sit in a tight loop, this approach wastes CPU
    time.
  • It also has an unexpected effect. Assume two
    processes, H (high priority) and L (low
    priority), where H is scheduled to run whenever
    it is ready. If H becomes ready while L is in its
    C.R., then since L cannot run while H is running,
    L never leaves its C.R. and H loops forever. This
    is the priority inversion (or starvation)
    problem. Thus solutions that block the process,
    instead of busy waiting, are used. See next
    slide.

34
The Producer-Consumer Problem
  • Two processes share a common fixed-size buffer of
    N slots (also known as the bounded-buffer
    problem)
  • The producer goes to sleep when the buffer is
    full, and the consumer goes to sleep when the
    buffer is empty
  • The producer is woken up when the consumer
    removes at least one item from the full buffer,
    and the consumer is woken up when the producer
    puts at least one item into the empty buffer

35
(No Transcript)
36
(No Transcript)
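
Slides 35 and 36 held the code; a reconstruction of
the classic sleep/wakeup version (produce_item,
insert_item, sleep, and wakeup are textbook
placeholders, not real library calls):

    #define N 100                     /* number of slots in the buffer */
    int count = 0;                    /* number of items in the buffer */

    void producer(void)
    {
        int item;
        while (TRUE) {
            item = produce_item();    /* generate the next item */
            if (count == N) sleep();  /* buffer full: go to sleep */
            insert_item(item);
            count = count + 1;
            if (count == 1) wakeup(consumer);  /* was buffer empty? */
        }
    }

    void consumer(void)
    {
        int item;
        while (TRUE) {
            if (count == 0) sleep();  /* buffer empty: go to sleep */
            item = remove_item();
            count = count - 1;
            if (count == N - 1) wakeup(producer);  /* was buffer full? */
            consume_item(item);
        }
    }
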
37
The Producer-Consumer Problem
  • The problem with this approach is a race
    condition caused by unconstrained access to count
    (the variable that tracks the number of items in
    the buffer)
  • Assume the buffer is empty and the consumer has
    just read count to see whether it is 0. Suddenly
    the CPU switches to the producer, which inserts
    an item; since count becomes 1, the producer
    sends a wakeup to the consumer. Since the
    consumer is not yet sleeping, the wakeup signal
    is lost. When the consumer runs again it still
    thinks count is 0, so it goes to sleep. Sooner or
    later the producer fills the buffer and goes to
    sleep too. Both processes then sleep forever.

38
Semaphores
  • A semaphore is a kind of variable that can be
    used as a generalized lock. It was introduced by
    Dijkstra in 1965.
  • A semaphore holds a nonnegative value and
    supports the following two operations:
  • down(sem): an atomic operation that checks
    whether sem is positive; if so, it decrements it
    by 1. If sem is 0, the process doing the down
    goes to sleep.
  • up(sem): an atomic operation that increments the
    value of sem by 1 and wakes up a process that was
    sleeping on sem, if any.

39
Binary Semaphore
  • Binary semaphores are initialized to 1 and are
    used by two or more processes for mutual
    exclusion. For example, suppose D is a binary
    semaphore and each process executes the following
    three lines:
  • down(D)
  • critical section
  • up(D)
  • This means each process locks the shared
    resource while in its critical section.

40
Synchronization with Semaphore
  • Since a binary semaphore can only be in one of
    two states, we do not use binary semaphores alone
    for synchronization. In the producer-consumer
    problem we use two additional counting semaphores
    (full and empty) to guarantee that certain event
    sequences do or do not occur, plus a mutex to
    make sure only one process at a time can read or
    write the buffer. See next slide.

41

42

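Slides 41 and 42 held the code figure; a
reconstruction of the semaphore solution in the
textbook's style (down and up are the semaphore
primitives described above, the item helpers are
placeholders):

    #define N 100                   /* number of slots in the buffer */
    typedef int semaphore;          /* semaphores are a special kind of int */
    semaphore mutex = 1;            /* controls access to critical region */
    semaphore empty = N;            /* counts empty buffer slots */
    semaphore full = 0;             /* counts full buffer slots */

    void producer(void)
    {
        int item;
        while (TRUE) {
            item = produce_item();
            down(&empty);           /* decrement empty count */
            down(&mutex);           /* enter critical region */
            insert_item(item);
            up(&mutex);             /* leave critical region */
            up(&full);              /* increment count of full slots */
        }
    }

    void consumer(void)
    {
        int item;
        while (TRUE) {
            down(&full);            /* decrement full count */
            down(&mutex);           /* enter critical region */
            item = remove_item();
            up(&mutex);             /* leave critical region */
            up(&empty);             /* increment count of empty slots */
            consume_item(item);
        }
    }
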
43
Drawbacks of Semaphores
  • The problem is that semaphores are difficult to
    coordinate. For example, suppose we swap the
    order of the two downs in the producer as
    follows:
  • down(mutex)
  • down(empty)
  • When the buffer is full (empty is 0), the
    producer takes mutex (setting it to 0) and then
    blocks on empty. The consumer then blocks at
    down(mutex) because it is 0. Both producer and
    consumer block forever and nothing can be done.
    This situation is called deadlock.

44
Monitors
  • A monitor is a higher-level synchronization
    mechanism: a collection of procedures, variables,
    and data structures
  • Processes can call the procedures in a monitor,
    but only one process can be active in the monitor
    (i.e., executing one of its procedures) at a
    time. In this way the monitor provides a locking
    mechanism for mutual exclusion.
  • A monitor is a programming-language construct, so
    the compiler is responsible for implementing
    mutual exclusion on monitor procedures. By
    turning all critical regions into monitor
    procedures, no two processes will ever execute
    their critical sections at the same time.

45

46
  • Synchronization in a monitor is done by blocking
    processes when they cannot proceed. Blocking is
    achieved with condition variables and two atomic
    operations, as follows:
  • wait(full): makes the calling process (e.g. the
    producer) wait on the condition variable full
    (which indicates the buffer is full in the
    producer-consumer problem). It also releases the
    lock, allowing another process (e.g. the
    consumer) to acquire it.
  • signal(full): wakes up the process (e.g. the
    producer) that is sleeping on the condition
    variable full. The process doing the signal must
    exit the monitor immediately (according to Brinch
    Hansen's proposal).

47

48

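Slides 47 and 48 held the monitor-based
producer-consumer figure. C has no monitor
construct, so here is a sketch of the same idea
using POSIX mutexes and condition variables (all
names are ours):

    #include <pthread.h>

    #define N 100
    int buffer[N], count = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* monitor lock */
    pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    void insert(int item)
    {
        pthread_mutex_lock(&lock);       /* one process active at a time */
        while (count == N)
            pthread_cond_wait(&not_full, &lock); /* releases lock while
                                                    waiting, like wait() */
        buffer[count++] = item;
        pthread_cond_signal(&not_empty); /* wake a sleeping consumer */
        pthread_mutex_unlock(&lock);
    }

    int remove_one(void)
    {
        int item;
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        item = buffer[--count];
        pthread_cond_signal(&not_full);  /* wake a sleeping producer */
        pthread_mutex_unlock(&lock);
        return item;
    }
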
49
Message Passing
  • In this method two system calls are used:
  • send(destination, message)
  • sends a message to a given destination
  • receive(source, message)
  • receives a message from a given source; if no
    message is available, the receiver can block
    until one arrives
  • Messages can be used for both mutual exclusion
    and synchronization

50
Message Passing Design Issues
  • Some design issues for message-passing systems:
  • How do we deal with lost messages?
  • Solution: the receiver sends an acknowledgement
    upon receiving a message. If the acknowledgement
    is lost and the sender retransmits the message,
    the receiver must be able to distinguish a new
    message from the retransmitted one; message
    sequence numbers solve this problem.
  • How do we authenticate a message?
  • By the naming of processes
  • Message passing is slower than semaphores or
    monitors.
  • Solution: a smaller message size that fits in
    the registers

51

52
The Producer-Consumer Problem with Message Passing
  • Can be solved by buffering the messages that have
    been sent but not yet received
  • A total of N messages is used, analogous to the N
    slots in a shared-memory buffer
  • Each process can have an address or a mailbox to
    buffer a certain number of messages that have
    been sent but not yet accepted
  • The alternative to mailboxes is to eliminate
    buffering: the sender blocks until the receive
    happens and the receiver blocks until a send
    happens (a rendezvous). This is easier to
    implement but less flexible.
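
The corresponding code figure is missing from the
transcript; a reconstruction of the message-passing
producer-consumer in the textbook's style (message,
send, receive, and the item helpers are
placeholders, not a real API):

    #define N 100                     /* number of messages (slots) */

    void producer(void)
    {
        int item;
        message m;                    /* message buffer */
        while (TRUE) {
            item = produce_item();
            receive(consumer, &m);    /* wait for an empty to arrive */
            build_message(&m, item);  /* construct the message to send */
            send(consumer, &m);       /* send item to the consumer */
        }
    }

    void consumer(void)
    {
        int item, i;
        message m;
        for (i = 0; i < N; i++)
            send(producer, &m);       /* send N empties to start */
        while (TRUE) {
            receive(producer, &m);    /* get message containing item */
            item = extract_item(&m);
            send(producer, &m);       /* send back an empty reply */
            consume_item(item);
        }
    }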

53
Classical IPC Problems
  • The Dining Philosophers problem, stated in 1965,
    is as follows:
  • Five philosophers are seated around a circular
    table
  • Five plates of food
  • Five chopsticks (forks)
  • Between each pair of plates is one fork
  • Philosophers spend their time eating and thinking
  • A philosopher must hold 2 forks to eat
  • We want to write a program for each philosopher
    that does what it is supposed to do and never
    gets stuck

54

55
Dining Philosophers Problem
  • The first proposal to solve the problem

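The code figure is missing; the first attempt looks
roughly like this in the textbook's style (take_fork
blocks until the fork is available; the other
helpers are placeholders):

    #define N 5                       /* number of philosophers */

    void philosopher(int i)           /* i: philosopher number, 0 to 4 */
    {
        while (TRUE) {
            think();                  /* philosopher is thinking */
            take_fork(i);             /* take left fork */
            take_fork((i + 1) % N);   /* take right fork */
            eat();                    /* eat spaghetti */
            put_fork(i);              /* put left fork back */
            put_fork((i + 1) % N);    /* put right fork back */
        }
    }
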
56
Dining Philosophers Problem
  • In this algorithm take_fork waits until the
    specified fork is available. The solution is
    wrong because:
  • If all five philosophers take their left fork
    simultaneously, there will be deadlock (the
    processes stay blocked forever)
  • Suppose we modify the solution so that after
    taking the left fork, the program checks whether
    the right fork is available; if it is not, the
    philosopher puts down the left fork, waits for
    some time, and repeats the process. Then if all
    philosophers start simultaneously, they may all
    take their left forks, put them down, wait the
    same amount of time, and repeat: the processes
    keep running forever but no progress is made
    (this situation is called starvation).
  • Waiting a random time makes such failure unlikely
    but does not eliminate it. A deadlock-free and
    parallel solution (note that at most 2
    philosophers can eat in parallel) is available in
    the book (see that solution).

57
Readers and Writers Problem
  • This problem models access to a shared database
    (1971)
  • Multiple readers may access and read the shared
    database at the same time
  • A writer must have exclusive access to the
    database and is kept suspended until no reader is
    present. (The problem: a writer may be suspended
    forever because readers keep arriving.)
  • One way to solve this is to give priority to the
    writer process

58
(No Transcript)
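
Slide 58 held the code; a reconstruction of the
readers-first solution in the textbook's style (down
and up are the semaphore primitives from earlier
slides, the data helpers are placeholders):

    typedef int semaphore;
    semaphore mutex = 1;        /* controls access to rc */
    semaphore db = 1;           /* controls access to the database */
    int rc = 0;                 /* # of processes reading or wanting to */

    void reader(void)
    {
        while (TRUE) {
            down(&mutex);       /* get exclusive access to rc */
            rc = rc + 1;        /* one reader more now */
            if (rc == 1) down(&db);  /* first reader locks out writers */
            up(&mutex);         /* release exclusive access to rc */
            read_data_base();   /* access the data */
            down(&mutex);
            rc = rc - 1;        /* one reader fewer now */
            if (rc == 0) up(&db);    /* last reader lets writers in */
            up(&mutex);
            use_data_read();    /* noncritical region */
        }
    }

    void writer(void)
    {
        while (TRUE) {
            think_up_data();    /* noncritical region */
            down(&db);          /* get exclusive access */
            write_data_base();  /* update the data */
            up(&db);            /* release exclusive access */
        }
    }
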
59
Process Scheduling
  • The scheduler uses a scheduling algorithm to
    decide which process to run next (of the
    processes in memory that are ready to execute, it
    allocates the CPU to one)
  • In batch systems the scheduling algorithm was
    simple: just run the next job on the tape.
  • In timesharing systems it became more complex,
    because there are multiple users plus batch jobs

60
Process Scheduling
  • A good scheduling algorithm should provide:
  • Fairness: each process gets its fair share of the
    CPU
  • Efficiency: keep the CPU busy 100% of the time
  • Response time: minimize response time for
    interactive users
  • Turnaround time: minimize the time batch users
    wait for output
  • (Response time and turnaround time can contradict
    each other)
  • Throughput: maximize the number of jobs processed
    per hour

61
Process Scheduling
  • The problem is that process running time is
    unpredictable, so the O.S. uses a timer to decide
    whether the currently running process should be
    allowed to continue or should be suspended to
    give another process the CPU.
  • The two types of scheduling are preemptive (jobs
    can be stopped and others started) and
    nonpreemptive (each job runs to completion)

62
Process Scheduling
  • A nonpreemptive scheduling algorithm is easy,
    because it only has to decide who is next in line
    to run
  • Preemptive scheduling is more complex because,
    for context switching (allowing multiple
    processes to share the CPU), it must:
  • pay the price (the context-switch time)
  • decide on the time slice for each process
  • decide who runs next

63
First-Come First served (FCFS)
  • A non-preemptive algorithm, also called FIFO
  • In early systems it meant a program kept the CPU
    until it finished. Later, FIFO just means keep
    the CPU until the process blocks; after a job
    becomes unblocked, it goes to the end of the
    queue.
  • Advantage: easy to understand and fair (think of
    people waiting in line for a concert ticket)
  • Disadvantage: short jobs get stuck behind long
    jobs. Suppose a compute-bound process runs for 1
    sec and then reads a disk block, while each
    I/O-bound process uses little CPU (under 10 msec)
    before reading a disk block, and both kinds must
    do 1000 disk reads. With FIFO, each I/O-bound
    process gets to issue only one disk read per
    1-sec burst of the compute-bound process, so its
    1000 reads take at least 1000 seconds.

64
Shortest Job First (SJF)
  • If we assume the run times of the jobs are known
    in advance, the non-preemptive batch SJF
    algorithm picks the shortest job first.
  • Note that this algorithm is optimal only when all
    the jobs are available simultaneously.
  • The difficulty with this algorithm is predicting
    the time a job will use.

65
Shortest Job First (SJF)
  • Example: four jobs with run times 8, 4, 4, and 4
  • (a) FIFO (runs them in order 8, 4, 4, 4):
    average turnaround = (8 + 12 + 16 + 20)/4 = 14
  • (b) SJF (runs them in order 4, 4, 4, 8):
    average turnaround = (4 + 8 + 12 + 20)/4 = 11

66
Round Robin Scheduling
  • Each process is assigned a time interval, called
    its quantum.
  • If the process is still running at the end of its
    quantum, the CPU is preempted and given to
    another process.
  • If the process blocks or finishes before its
    quantum has elapsed, the CPU switches to another
    process at that moment.

67
Round Robin Scheduling
  • (a) The list of runnable processes and (b) the
    same list after B uses up its quantum

68
Round Robin Scheduling
  • R.R. scheduling is easy and fair; the only
    problem is how to choose the length of the
    quantum
  • If the quantum is too long (e.g. 500 msec, with 5
    msec needed per process switch), response time
    suffers: if 10 users hit the Enter key
    simultaneously, the last process in the list must
    wait a long time.
  • If the quantum is too short (e.g. 20 msec),
    throughput suffers: with 5 msec spent on each
    process switch, 5/(20 + 5) = 20% of the CPU time
    is wasted on administrative overhead (saving and
    loading registers and the other work of process
    switching).

69
Priority Scheduling
  • Each process is assigned a priority, and the CPU
    is allocated to the process with the highest
    priority
  • The problem is that high-priority processes may
    run indefinitely, starving the others
  • Solutions:
  • Decrease the priority of the currently running
    process at each clock tick; if this drops its
    priority below that of the next highest process,
    switch to that process
  • Assign a maximum quantum to each process, and
    when it is used up, run the next highest priority
    process

70
Priority Scheduling
  • Priorities can be assigned to processes:
  • Statically: external to the O.S., e.g. amount of
    funds paid, political factors, etc.
  • Dynamically: assigned by the system to achieve
    some system goal. For example, an I/O-bound
    process should be given the CPU immediately when
    it wants it. A simple algorithm for doing that is
    to set the priority to 1/f, where f is the
    fraction of the last quantum that the process
    used. Using 2 msec of a 100-msec quantum gives a
    priority of 50; using 50 msec of a 100-msec
    quantum gives a priority of 2.

71
Priority Scheduling
  • Another way is to group processes into priority
    classes and use priority scheduling among the
    classes but R.R. within each class. If the
    highest-priority class becomes empty, the next
    lower class is run (see the next slide)
  • The problem with this method is that if
    priorities are not adjusted occasionally, the
    lower-priority classes may all starve to death

72
Priority Scheduling
  • Priority classes

73
Multiple Queues
  • In the CTSS system on the IBM 7094 (Corbató et
    al., 1962), each switch between processes
    required one disk swap.
  • To increase efficiency, this algorithm reduces
    swapping by defining priority queues as follows:
  • Suppose a process needs 100 quanta. It is
    initially given 1 quantum in the highest-priority
    queue, then moved to the next lower-priority
    queue, where it gets 2 quanta. On succeeding runs
    it gets 4, 8, 16, 32, and 64 quanta, although it
    uses only 37 of the final 64 quanta, because
  • 100 - 63 = 37, where 63 = 1 + 2 + 4 + 8 + 16 + 32
  • Thus in this method only 7 swaps are needed (the
    initial load plus the runs of 1, 2, 4, 8, 16, and
    32 quanta) instead of 100 with a pure round-robin
    algorithm.

74
Shortest Job First for Interactive Processes
  • Another name is Shortest Process Next
  • To minimize the average response time, we try to
    select the shortest process from among the
    currently runnable processes.
  • To estimate the running time of a process we use
    an aging technique, as follows:
  • Estimated running time: Ei = a*E(i-1) + (1-a)*Ti
  • where E(i-1) is the previous estimate and Ti is
    the time of the most recent run of that process.
    The parameter a controls the relative weight of
    recent and past history.
  • For example, selecting a = 1/2:
  • E0 = T0
  • E1 = a*E0 + (1-a)*T1 = T0/2 + T1/2
  • E2 = a*E1 + (1-a)*T2 = T0/4 + T1/4 + T2/2
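
A small sketch of the aging update in C (the names
are ours):

    /* new estimate = a * old estimate + (1 - a) * latest run time;
       with a = 0.5 each older run's weight is halved per update */
    double update_estimate(double prev_estimate, double last_run, double a)
    {
        return a * prev_estimate + (1.0 - a) * last_run;
    }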

75
Guaranteed Scheduling
  • Guaranteed scheduling makes real promises to the
    users: for example, if n users are logged in,
    each receives 1/n of the CPU power
  • To do this, the system must keep track of how
    much CPU time each process has had since its
    creation, and then compute the amount of CPU time
    each process is entitled to (the time since
    creation divided by n).
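
A sketch of the bookkeeping in C (names are ours);
the scheduler would favor the process with the
lowest ratio:

    /* ratio of CPU time actually consumed to CPU time entitled;
       a value below 1 means the process is behind its promise */
    double cpu_ratio(double cpu_had, double time_since_creation, int n)
    {
        double entitled = time_since_creation / n;  /* fair share */
        return cpu_had / entitled;
    }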

76
Lottery scheduling
  • In this algorithm each job receives some number
    of lottery tickets, and on each time slice a
    winning ticket is picked at random.
  • For example, if there are 100 tickets and one
    process holds 20 of them, it has a 20 percent
    chance of winning each lottery, and in the long
    run gets 20 percent of the CPU.
  • Lottery scheduling avoids the starvation problem
    of priority scheduling: a process holding at
    least one ticket is guaranteed a share of the
    service, even when other processes' priorities
    rise.
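
A sketch of one lottery draw in C (names are ours):

    #include <stdlib.h>

    /* tickets[i] holds process i's ticket count; returns the index
       of the process holding the randomly drawn winning ticket */
    int draw_winner(const int tickets[], int nprocs, int total_tickets)
    {
        int winner = rand() % total_tickets;  /* winning ticket number */
        for (int i = 0; i < nprocs; i++) {
            winner -= tickets[i];             /* walk through holdings */
            if (winner < 0) return i;         /* process i holds it */
        }
        return nprocs - 1;  /* unreachable if the counts add up */
    }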