1
Deadlock
Chapter 10
2
Example
[Figure: Process 1 and Process 2 contending for Resource 1 and Resource 2]
3
A Model
  • P = {p1, p2, ..., pn} is a set of processes
  • R = {R1, R2, ..., Rm} is a set of resources
  • cj = number of units of Rj in the system
  • S = {S0, S1, ...} is a set of states representing
    the assignment of the Rj to the pi
  • The state changes when processes take action
  • This allows us to identify a deadlock situation
    in the operating system

4
State Transitions
  • The system changes state because of the action of
    some process, pi
  • There are three pertinent actions:
  • Request (ri): request one or more units of a
    resource
  • Allocation (ai): all outstanding requests from
    a process for a given resource are satisfied
  • Deallocation (di): the process releases units
    of a resource

[Figure: action xi moves the system from state Sj to state Sk]
5
Properties of States
  • Want to define deadlock in terms of patterns of
    transitions
  • Define pi is blocked in Sj if pi cannot cause a
    transition out of Sj

6
Properties of States (cont)
  • If pi is blocked in Sj, and will also be blocked
    in every Sk reachable from Sj, then pi is
    deadlocked
  • Sj is called a deadlock state

7
Example
  • One process, two units of one resource
  • Can request one unit at a time

[Figure: state diagram S0 through S4; request (r) and allocation (a) transitions move right, deallocation (d) transitions move back left]
8
Extension of Example
[Figure: extension to two processes; states Sxy with transitions r0, a0, d0 for process 0 and r1, a1, d1 for process 1 forming a two-dimensional state diagram from S00 onward]
9
Addressing Deadlock
  • Prevention: design the system so that deadlock is
    impossible
  • Avoidance: construct a model of system states,
    then choose a strategy that will not allow the
    system to go to a deadlock state
  • Detection & Recovery: check for deadlock
    (periodically or sporadically), then recover
  • Manual intervention: have the operator reboot the
    machine if it seems too slow

10
Prevention
  • Necessary conditions for deadlock:
  • Mutual exclusion
  • Hold and wait
  • Circular waiting
  • No preemption
  • Ensure that at least one of the necessary
    conditions is false at all times
  • Mutual exclusion must hold at all times, so one of
    the other three conditions must be attacked

11
Hold and Wait
  • Need to be sure a process does not hold one
    resource while requesting another
  • Approach 1: force a process to request all
    resources it needs at one time
  • Approach 2: if a process needs to acquire a new
    resource, it must first release all resources it
    holds, then reacquire all it needs
  • What does this say about state transition
    diagrams?

12
Circular Wait
  • Have a situation in which there are K processes
    holding units of K resources

[Figure: graph notation; an assignment edge Ri → Pi means the process holds the resource, a request edge P → R means the process requests it; the K processes and K resources form a cycle]
13
Circular Wait (cont)
  • There is a cycle in the graph of processes and
    resources
  • Choose a resource request strategy by which no
    cycle will be introduced
  • Impose a total order on all resources; a process
    may only ask for Rj if Rj follows every Ri it
    currently holds in that order
  • This is how we noticed the easy solution for the
    dining philosophers (see the sketch below)
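A minimal sketch of this ordering discipline in Python (the philosopher/fork names and the count of five are illustrative, not from the slides): every thread acquires the lower-numbered lock first, so no cycle of waiting threads can form.

import threading

# Forks are the resources; their list index is the global total order.
forks = [threading.Lock() for _ in range(5)]

def philosopher(i: int) -> None:
    left, right = i, (i + 1) % 5
    # Always take the lower-numbered fork first (total order on resources).
    first, second = min(left, right), max(left, right)
    with forks[first]:
        with forks[second]:
            print(f"philosopher {i} eating with forks {first} and {second}")

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()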

14
Allowing Preemption
  • Allow a process to time out on a blocked request,
    withdrawing the request if the timeout expires

[Figure: state Si moves to Sj on request ru; a withdrawal wu or a deallocation dv then moves the system to Sk, from which ru can be reissued]
15
Avoidance
  • Define a model of system states, then choose a
    strategy that will guarantee that the system will
    not go to a deadlock state
  • Requires extra information, e.g., the maximum
    claim for each process
  • Allows resource manager to see the worst case
    that could happen, then to allow transitions
    based on that knowledge

16
Safe vs Unsafe States
  • Safe state: one in which the system can assure
    that any sequence of subsequent transitions leads
    back to the initial state
  • Even if all processes exercise their maximum claim,
    there is an allocation strategy by which all claims
    can be met
  • Unsafe state: one in which the system cannot
    guarantee that the system will transition back to
    the initial state
  • An unsafe state can lead to a deadlock state if too
    many processes exercise their maximum claim at
    once

17
More on Safe & Unsafe States
[Flowchart: during normal execution the process decides whether to request its maximum claim; if not, it continues normally; if so, it executes with the claim and then releases it]
18
More on Safe & Unsafe States
[Flowchart: during normal execution the process decides whether to request its maximum claim; if not, it continues normally; if so, it executes with the claim and then releases it]
  • Suppose all processes take yes branch
  • Avoidance strategy is to allow this to happen,
    yet still be safe

19
More on Safe & Unsafe States
[Figure: the state space partitioned into safe states (containing the initial state I), unsafe states, and deadlock states; avoidance disallows transitions from safe into unsafe states]
20
Banker's Algorithm
  • Let maxc[i,j] be the maximum claim for Rj by pi
  • Let alloc[i,j] be the number of units of Rj held
    by pi
  • Can always compute avail[j] = cj - Σ0≤i<n alloc[i,j]
  • This is the number of available units of Rj
  • Should be able to determine whether the state is
    safe or not using this info

21
Banker's Algorithm
  1. Copy the alloc[i,j] table to alloc'[i,j]
  2. Given C, maxc, and alloc', compute the avail vector
  3. Find a pi such that maxc[i,j] - alloc'[i,j] ≤ avail[j]
     for 0 ≤ j < m
  4. If no such pi exists, the state is unsafe;
     if alloc'[i,j] is 0 for all i and j, the state is safe
  5. Set alloc'[i,j] to 0 (deallocate all resources
     held by pi); go to Step 2
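A small executable sketch of this safety check in Python (the three-process, two-resource values at the bottom are hypothetical, not the tables on the next slides, since the total-units vector C is not reproduced in this transcript):

def is_safe(maxc, alloc, avail):
    """Return True iff the state described by (maxc, alloc, avail) is safe."""
    n, m = len(maxc), len(avail)
    alloc = [row[:] for row in alloc]     # work on a copy: alloc'
    avail = avail[:]
    finished = [False] * n
    while not all(finished):
        for i in range(n):
            # Can pi's remaining claim be met from what is available?
            if not finished[i] and all(maxc[i][j] - alloc[i][j] <= avail[j]
                                       for j in range(m)):
                for j in range(m):        # pi runs, then releases everything
                    avail[j] += alloc[i][j]
                    alloc[i][j] = 0
                finished[i] = True
                break
        else:
            return False                  # no runnable process left: unsafe
    return True                           # every alloc'[i][j] driven to 0: safe

# Hypothetical 3-process, 2-resource state (not the values from the slides).
maxc  = [[4, 2], [1, 3], [3, 1]]
alloc = [[1, 1], [1, 1], [2, 0]]
avail = [1, 1]
print(is_safe(maxc, alloc, avail))        # True: p2, p0, p1 can finish in order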

22
Example
Maximum Claim
C

Process   R0  R1  R2  R3
p0         3   2   1   4
p1         0   2   5   2
p2         5   1   0   5
p3         1   5   3   0
p4         3   0   3   3

Allocated Resources

Process   R0  R1  R2  R3
p0         2   0   1   1
p1         0   1   2   1
p2         4   0   0   3
p3         0   2   1   0
p4         1   0   3   0
Sum        7   3   7   5
23
Example
Maximum Claim
C

Process   R0  R1  R2  R3
p0         3   2   1   4
p1         0   2   5   2
p2         5   1   0   5
p3         1   5   3   0
p4         3   0   3   3

Allocated Resources

Process   R0  R1  R2  R3
p0         2   0   1   1
p1         0   1   2   1
p2         0   0   0   0
p3         0   2   1   0
p4         1   0   3   0
Sum        3   3   7   2
24
Example
Maximum Claim
C

Process   R0  R1  R2  R3
p0         3   2   1   4
p1         0   2   5   2
p2         5   1   0   5
p3         1   5   3   0
p4         3   0   3   3

  • Can anyone's maxc be met? (Yes, any of them can)

Allocated Resources

Process   R0  R1  R2  R3
p0         2   0   1   1
p1         0   1   2   1
p2         0   0   0   0
p3         0   2   1   0
p4         0   0   0   0
Sum        2   1   4   2
25
Detection & Recovery
  • Check for deadlock (periodically or
    sporadically), then recover
  • Can be far more aggressive with allocation
  • No maximum claim, no safe/unsafe states
  • Differentiate between:
  • Serially reusable resources: a unit must be
    allocated before being released
  • Consumable resources: never release acquired
    resources; the resource count is the number
    currently available

26
Reusable Resource Graphs (RRGs)
  • Micro model to describe a single state
  • Nodes: {p0, p1, ..., pn} ∪ {R1, R2, ..., Rm}
  • Edges connect pi to Rj, or Rj to pi
  • (pi, Rj) is a request edge for one unit of Rj
  • (Rj, pi) is an assignment edge of one unit of Rj
  • For each Rj there is a count, cj, of the units of Rj
  • The number of units of Rj allocated to pi plus the
    number requested by pi cannot exceed cj

27
Example
[Figure: RRG notation; an assignment edge R → p means p holds one unit of R, a request edge p → R means p requests one unit; the example graph contains a cycle and is a deadlock state]
28
Example
[Figure: no cycle in the graph; not a deadlock state]
29
State Transitions due to Request
  • In Sj, pi is allowed to request q ≤ ch units of Rh,
    provided pi has no outstanding requests.
  • Sj → Sk, where the RRG for Sk is derived from Sj
    by adding q request edges from pi to Rh

[Figure: in Sj there are no edges between pi and Rh; in Sk, pi has q request edges to Rh after requesting q units of Rh]
30
State Transition for Acquire
  • In Sj, pi is allowed to acquire units of Rh iff
    there is a (pi, Rh) edge in the graph and all such
    requests can be satisfied.
  • Sj → Sk, where the RRG for Sk is derived from Sj
    by changing each request edge into an assignment
    edge.

[Figure: the request edges from pi to Rh in Sj become assignment edges from Rh to pi in Sk when pi acquires the units of Rh]
31
State Transition for Release
  • In Sj, pi is allowed to release units of Rh iff
    there is an (Rh, pi) edge in the graph and there is
    no request edge from pi.
  • Sj → Sk, where the RRG for Sk is derived from Sj
    by deleting all of the assignment edges (Rh, pi).

[Figure: the assignment edges from Rh to pi in Sj are deleted in Sk when pi releases the units of Rh]
32
Example
[Figure: RRG for state S00 with processes p0 and p1]
33
Example
[Figure: RRGs for states S00 and S01]
34
Example
[Figure: RRGs for states S00, S01, and S11]
35
Example
[Figure: RRGs for states S00, S01, S11, and S21]
36
Example
[Figure: RRGs for states S00, S01, S11, S21, and S22]
37
Example
[Figure: RRGs for states S00, S01, S11, S21, S22, ..., S33]
38
Graph Reduction
  • Deadlock state if there is no sequence of
    transitions unblocking every process
  • An RRG represents a state; we can analyze the RRG
    to determine whether such a sequence exists
  • A graph reduction represents the (optimal) action
    of an unblocked process. Can reduce by pi if
  • pi is not blocked
  • pi has no request edges, and there are assignment
    edges (Rj, pi) in the RRG

39
Graph Reduction (cont)
  • Transforms RRG to another RRG with all assignment
    edges into pi removed
  • Represents pi releasing the resources it holds

[Figure: reducing by pi deletes the assignment edges into pi]
40
Graph Reduction (cont)
  • An RRG is completely reducible if there is a
    sequence of reductions that leads to an RRG with
    no edges
  • A state is a deadlock state if and only if its
    RRG is not completely reducible (see the sketch
    below)
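A runnable sketch of graph reduction for an RRG (Python; the dictionary layout and the two-process example are my own): repeatedly reduce by any unblocked process, and report deadlock if the graph cannot be fully reduced.

def completely_reducible(capacity, holds, requests):
    """Reduce a reusable resource graph; return True iff it is not a deadlock state."""
    avail = {r: capacity[r] - sum(h.get(r, 0) for h in holds.values())
             for r in capacity}
    remaining = set(holds)
    while remaining:
        # Find an unblocked process: all of its requests fit in avail.
        p = next((p for p in remaining
                  if all(q <= avail[r] for r, q in requests[p].items())), None)
        if p is None:
            return False                  # every remaining process is blocked
        for r, q in holds[p].items():     # reduce by p: release what it holds
            avail[r] += q
        remaining.remove(p)
    return True                           # reduced to a graph with no edges

# Hypothetical state: one unit each of R0 and R1; each process holds one
# resource and requests the other (the classic two-process deadlock).
capacity = {"R0": 1, "R1": 1}
holds    = {"p0": {"R0": 1}, "p1": {"R1": 1}}
requests = {"p0": {"R1": 1}, "p1": {"R0": 1}}
print(completely_reducible(capacity, holds, requests))   # False -> deadlock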

41
Example RRG
[Figure: example RRG with processes p0, p1, p2]
42
Example RRG
[Figure: reduction of the example RRG with processes p0, p1, p2]
43
Consumable Resource Graphs (CRGs)
  • Number of units varies; there are producers and
    consumers
  • Nodes: {p0, p1, ..., pn} ∪ {R1, R2, ..., Rm}
  • Edges connect pi to Rj, or Rj to pi
  • (pi, Rj) is a request edge for one unit of Rj
  • (Rj, pi) is a producer edge (there must be at least
    one producer for each Rj)
  • For each Rj there is a count, wj, of the available
    units of Rj

44
State Transitions due to Request
  • In Sj, pi is allowed to request any number of
    units of Rh, provided pi has no outstanding
    requests.
  • Sj → Sk, where the CRG for Sk is derived from Sj
    by adding q request edges from pi to Rh

[Figure: in Sk, pi has q request edges to Rh after requesting q units of Rh]
45
State Transition for Acquire
  • In Sj, pi is allowed to acquire units of Rh iff
    there is a (pi, Rh) edge in the graph and all such
    requests can be satisfied.
  • Sj → Sk, where the CRG for Sk is derived from Sj
    by deleting each request edge and decrementing wh.

[Figure: the request edges from pi to Rh are deleted and wh is decremented when pi acquires the units]
46
State Transition for Release
  • In Sj, pi is allowed to release units of Rh iff
    there is an (Rh, pi) producer edge in the graph and
    there is no request edge from pi.
  • Sj → Sk, where the CRG for Sk is derived from Sj
    by incrementing wh by the number of units released.

[Figure: pi releases 2 units of Rh; wh is incremented accordingly]
47
Example
[Figure: example CRG with processes p0 and p1]
48
Deadlock Detection
  • May have a CRG that is not completely reducible,
    but it is not a deadlock state
  • For each process
  • Find at least one sequence which leaves each
    process unblocked.
  • There may be different sequences for different
    processes -- not necessarily an efficient approach

49
Deadlock Detection
  • May have a CRG that is not completely reducible,
    but it is not a deadlock state
  • Only need to find sequences, which leave each
    process unblocked.

[Figure: example CRG with processes p0 and p1]
50
Deadlock Detection
  • May have a CRG that is not completely reducible,
    but it is not a deadlock state
  • Only need to find a set of sequences, which
    leaves each process unblocked.

51
General Resource Graphs
  • Have consumable and reusable resources
  • Apply consumable reductions to consumables, and
    reusable reductions to reusables

52
GRG Example
[Figure: GRG with processes p0, p1, p2, p3; R0 and R1 are reusable resources, R2 is consumable]
53
GRG Example (Fig 10.29)
[Figure: the GRG of Fig. 10.29; reduce by p3 first (R0 and R1 reusable, R2 consumable)]
54
GRG Example
[Figure: the GRG after reducing by p3; next reduce by p0 (R0 and R1 reusable, R2 consumable)]
55
Recovery
  • No magic here
  • Choose a blocked process
  • Preempt it (releasing its resources)
  • Run the detection algorithm
  • Iterate until the state is not a deadlock state

56
Memory Management
Chapter 11
57
Memory Manager
  • Requirements
  • Minimize executable memory access time
  • Maximize executable memory size
  • Executable memory must be cost-effective
  • Today's memory manager
  • Allocates primary memory to processes
  • Maps process address space to primary memory
  • Minimizes access time using cost-effective memory
    configuration
  • May use static or dynamic techniques

58
Address Space vs Primary Memory
[Figure: the process address space is mapped to hardware primary memory; some parts are mapped to objects other than memory]
59
Building the Address Space
[Figure: source code is translated into relocatable object code]
  • Compile time: translate elements

60
Primary & Secondary Memory
CPU
  • CPU can load/store
  • Control unit executes code from this memory
  • Transient storage

Primary Memory (Executable Memory), e.g., RAM
Secondary Memory, e.g., Disk or Tape
  • Access using I/O operations
  • Persistent storage

Information can be loaded statically or
dynamically
61
Static Memory Allocation
[Figure: primary memory holding the operating system and processes 0-3, with in-use and unused regions]
Issue: need a mechanism/policy for loading pi's
address space into primary memory
62
Fixed-Partition Memory Mechanism
[Figure: primary memory divided into the operating system plus fixed regions 0-3 of sizes N0-N3; pi needs ni units]
63
Fixed-Partition Memory -- Best-Fit
  • Loader must adjust every address in the absolute
    module when it is placed in memory

[Figure: best-fit loads pi into the smallest free region that can hold it, leaving internal fragmentation]
64
Fixed-Partition Memory -- Worst-Fit
[Figure: worst-fit loads pi into the largest free region]
65
Fixed-Partition Memory -- First-Fit
[Figure: first-fit loads pi into the first free region large enough to hold it]
66
Fixed-Partition Memory -- Next-Fit
[Figure: next-fit resumes the search from the previous allocation point and loads pi into the next free region large enough to hold it]
67
Variable Partition Memory Mechanism
[Figure: variable-partition memory layout with the operating system and dynamically sized process regions]
68
Cost of Moving Programs
[Figure: the instruction "load R1, 0x02010", assembled as 3F013010, in a program loaded at 0x01000; moving the program means adjusting such embedded addresses]
Consider dynamic techniques
69
Dynamic Memory Allocation
  • Could use dynamically allocated memory
  • Process wants to change the size of its address
    space
  • Smaller → creates an external fragment
  • Larger → may have to move/relocate the program
  • Allocate holes in memory according to
    best-/worst-/first-/next-fit

70
Special Case Swapping
  • Special case of dynamic memory allocation
  • Suppose there is high demand for executable
    memory
  • Equitable policy might be to time-multiplex
    processes into the memory (also space-mux)
  • Means that a process can have its address space
    unloaded while it still needs memory
  • Usually only happens when it is blocked

71
Dynamic Address Relocation
[Figure: the CPU issues relative address 0x02010 for "load R1, 0x02010"; the relocation register holds 0x10000, so the MAR receives 0x12010]
  • Program loaded at 0x10000 → relocation register =
    0x10000
  • Program loaded at 0x04000 → relocation register =
    0x04000

We never have to change the load module addresses!
72
Runtime Bound Checking
[Figure: the relative address is compared with the limit register; if it is within bounds it is added to the relocation register and placed in the MAR, otherwise an interrupt is raised]
  • Bound checking is inexpensive to add
  • Provides excellent memory protection (see the
    sketch below)
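A tiny Python sketch of the relocation-plus-limit mechanism in the two figures above (the limit value 0x8000 is assumed for illustration; the register values are the ones on the slides):

def relocate(relative_addr, relocation_reg, limit_reg):
    """Check the relative address against the limit, then add the relocation register."""
    if relative_addr >= limit_reg:
        raise MemoryError("address out of bounds -> interrupt")
    return relocation_reg + relative_addr

# The load module always names address 0x02010; only the register changes.
print(hex(relocate(0x02010, relocation_reg=0x10000, limit_reg=0x8000)))  # 0x12010
print(hex(relocate(0x02010, relocation_reg=0x04000, limit_reg=0x8000)))  # 0x6010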
73
Memory Hierarchies & Dynamic Loading
[Figure: the memory hierarchy, from CPU registers through L1 cache, L2 cache, and main memory (primary, executable) down to rotating magnetic, optical, and sequentially accessed memory (secondary); access is faster toward the top, storage larger toward the bottom]
74
Exploiting the Hierarchy
  • Upward moves are (usually) copy operations
  • Require allocation in upper memory
  • Image exists in both higher & lower memories
  • Updates are first applied to upper memory
  • Downward move is (usually) destructive
  • Destroy image in upper memory
  • Update image in lower memory
  • Place frequently-used info high,
    infrequently-used info low in the hierarchy
  • Reconfigure as process changes phases

75
Memory Mgmt Strategies
  • Fixed-partition: used only in batch systems
  • Variable-partition: used everywhere (except in
    virtual memory)
  • Swapping systems
  • Popularized in timesharing
  • Relies on dynamic address relocation
  • Now dated
  • Dynamic loading (virtual memory)
  • Exploits the memory hierarchy
  • Paging -- mainstream in contemporary systems
  • Segmentation -- the future

76
NT Memory-mapped Files
[Figure: an ordinary file in secondary memory is mapped through a section object into executable memory]
  • Open the file
  • Create a section object (that maps the file)
  • Identify the point in the address space at which to
    place the file
77
Virtual Memory
78
Names, Virtual Addresses & Physical Addresses
[Figure: names in the source program are bound to the name space, to pi's virtual address space in the absolute module, and dynamically to physical addresses]
79
Locality
Address Space for pi
  • Address space is logically partitioned
  • Text, data, stack
  • Initialization, main, error handling
  • Different parts have different reference patterns

[Figure: pi's address space holds initialization code (used once), code for several functions, code for several error handlers, and the data & stack]
80
Virtual Memory
[Figure: the virtual address spaces for pi, pj, and pk reside in secondary memory]
  • The complete virtual address space is stored in
    secondary memory

81
Virtual Memory
  • Every process has code and data locality
  • Code tends to execute in a few fragments at one
    time
  • Tend to reference same set of data structures
  • Dynamically load/unload currently-used address
    space fragments as the process executes
  • Uses dynamic address relocation/binding
  • Generalization of base-limit registers
  • Physical address corresponding to a compile-time
    address is not bound until run time

82
Virtual Memory (cont)
  • Since the binding changes with time, use a dynamic
    virtual address map, Yt

[Figure: Yt maps the virtual address space to physical memory]
83
Address Formation
  • The translation system creates an address space, but
    its addresses are virtual instead of physical
  • A virtual address, x
  • Is mapped to physical address y = Yt(x) if x is
    loaded at physical address y
  • Is mapped to Ω if x is not loaded
  • The map, Yt, changes as the process executes --
    it is time varying
  • Yt: Virtual Address → Physical Address ∪ {Ω}

84
Size of Blocks of Memory
  • The virtual memory system transfers blocks of the
    address space to/from primary memory
  • Fixed-size blocks: system-defined pages are moved
    back and forth between primary and secondary
    memory
  • Variable-size blocks: programmer-defined segments
    corresponding to logical fragments are the
    unit of movement
  • Paging is the commercially dominant form of
    virtual memory today

85
Paging
  • A page is a fixed-size block of 2^h virtual
    addresses
  • A page frame is a fixed-size block of 2^h physical
    addresses (the same size as a page)
  • When a virtual address, x, in page i is
    referenced by the CPU
  • If page i is loaded at page frame j, the virtual
    address is relocated to page frame j
  • If page i is not loaded, the OS interrupts the
    process and loads the page into a page frame

86
Addresses
  • Suppose there are G = 2^g × 2^h = 2^(g+h) virtual
    addresses and H = 2^(j+h) physical addresses
    assigned to a process
  • Each page/page frame is 2^h addresses
  • There are 2^g pages in the virtual address space
  • 2^j page frames are allocated to the process
  • Rather than map individual addresses
  • Yt maps the 2^g pages to the 2^j page frames
  • That is, page_frame_j = Yt(page_i)
  • Address k in page_i corresponds to address k in
    page_frame_j

87
Page-Based Address Translation
  • Let N = {d0, d1, ..., dn-1} be the pages
  • Let M = {b0, b1, ..., bm-1} be the page frames
  • A virtual address, i, satisfies 0 ≤ i < G
  • A physical address, k = U·2^h + V (0 ≤ V < 2^h)
  • U is the page frame number
  • V is the line number within the page
  • Yt: [0..G-1] → M ∪ {Ω}
  • Since every page is of size c = 2^h
  • page number U = ⌊i/c⌋
  • line number V = i mod c
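A short Python sketch of this split-and-lookup (the 2^12-byte page size, the page table contents, and the example address are assumptions for illustration): the virtual address is divided into U = ⌊i/c⌋ and V = i mod c, then U is mapped through Yt.

def translate(i, page_table, h):
    """Split virtual address i into (page U, line V) and map U through Yt."""
    c = 1 << h                      # page size, c = 2^h
    U, V = i // c, i % c            # U = floor(i / c), V = i mod c
    frame = page_table.get(U)       # Yt(U) -> frame number, or None if missing
    if frame is None:
        raise LookupError(f"page {U} is not loaded -> page fault")
    return frame * c + V

# Hypothetical 2^12-byte pages; page 2 loaded in frame 7.
print(hex(translate(0x2ABC, {2: 7}, h=12)))   # 0x7abc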

88
Address Translation (cont)
[Figure: the virtual address is split into a g-bit page number and an h-bit line number; the page table (Yt) maps the page number to a j-bit frame number, or signals a missing page; frame and line together form the physical address placed in the memory MAR]
89
Demand Paging Algorithm
  • Page fault occurs
  • Process with missing page is interrupted
  • Memory manager locates the missing page
  • Page frame is unloaded (replacement policy)
  • Page is loaded in the vacated page frame
  • Page table is updated
  • Process is restarted

90
Modeling Page Behavior
  • Let ω = r1, r2, r3, ..., ri, ... be a page reference
    stream
  • ri is the ith page referenced by the process
  • The subscript is the virtual time for the process
  • Given a page frame allocation of m, the memory
    state at time t, St(m), is the set of pages loaded
  • St(m) = St-1(m) ∪ Xt - Yt
  • Xt is the set of pages fetched at time t
  • Yt is the set of pages replaced at time t

91
More on Demand Paging
  • If rt was loaded at time t-1, St(m) = St-1(m)
  • If rt was not loaded at time t-1 and there were
    empty page frames
  • St(m) = St-1(m) ∪ {rt}
  • If rt was not loaded at time t-1 and there were
    no empty page frames
  • St(m) = St-1(m) ∪ {rt} - {y}
  • The alternative is prefetch paging

92
Static Allocation, Demand Paging
  • Number of page frames is static over the life of
    the process
  • Fetch policy is demand
  • Since St(m) = St-1(m) ∪ {rt} - {y}, the
    replacement policy must choose y -- which
    uniquely identifies the paging policy

93
Random Replacement
  • Replaced page, y, is chosen from the m loaded
    page frames with probability 1/m

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after each reference under random replacement]
94
Belady's Optimal Algorithm
  • Replace the page with the maximal forward distance:
    yt = max x∈St-1(m) FWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after the first references]
95
Belady's Optimal Algorithm
  • Replace the page with the maximal forward distance:
    yt = max x∈St-1(m) FWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after the first references]
FWD4(2) = 1, FWD4(0) = 2, FWD4(3) = 3
96
Belady's Optimal Algorithm
  • Replace the page with the maximal forward distance:
    yt = max x∈St-1(m) FWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after the fourth reference replaces page 3]
FWD4(2) = 1, FWD4(0) = 2, FWD4(3) = 3
97
Belady's Optimal Algorithm
  • Replace the page with the maximal forward distance:
    yt = max x∈St-1(m) FWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames through the next (hit) references]
98
Belady's Optimal Algorithm
  • Replace the page with the maximal forward distance:
    yt = max x∈St-1(m) FWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames at the next fault]
FWD7(2) = 2, FWD7(0) = 3, FWD7(1) = 1
99
Belady's Optimal Algorithm
  • Replace the page with the maximal forward distance:
    yt = max x∈St-1(m) FWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames at the next fault]
FWD10(2) = ∞, FWD10(3) = 2, FWD10(1) = 3
100
Belady's Optimal Algorithm
  • Replace the page with the maximal forward distance:
    yt = max x∈St-1(m) FWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames at the next fault]
FWD13(0) = ∞, FWD13(3) = ∞, FWD13(1) = ∞
101
Belady's Optimal Algorithm
  • Replace the page with the maximal forward distance:
    yt = max x∈St-1(m) FWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: complete frame contents under Belady's optimal algorithm]
10 page faults
  • Perfect knowledge of ω ⇒ perfect performance
  • Impossible to implement (but it can be simulated,
    as in the sketch below)
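A simulation sketch of the MIN/OPT policy in Python (variable and function names are my own). Run on the reference stream above with m = 3 frames, it reproduces the 10 faults stated on this slide:

def opt_faults(refs, m):
    """Count page faults for Belady's optimal (MIN) policy with m frames."""
    frames, faults = [], 0
    for t, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < m:
            frames.append(page)
            continue
        # Victim: the loaded page with the maximal forward distance
        # (a page never referenced again has infinite distance).
        def fwd(x):
            rest = refs[t + 1:]
            return rest.index(x) if x in rest else float("inf")
        victim = max(frames, key=fwd)
        frames[frames.index(victim)] = page
    return faults

refs = [2, 0, 3, 1, 2, 0, 3, 1, 2, 0, 3, 1, 6, 4, 5, 7]
print(opt_faults(refs, 3))          # 10, matching the count on this slide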

102
Least Recently Used (LRU)
  • Replace the page with the maximal backward distance:
    yt = max x∈St-1(m) BKWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after the first references]
BKWD4(2) = 3, BKWD4(0) = 2, BKWD4(3) = 1
103
Least Recently Used (LRU)
  • Replace the page with the maximal backward distance:
    yt = max x∈St-1(m) BKWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after the fourth reference replaces page 2]
BKWD4(2) = 3, BKWD4(0) = 2, BKWD4(3) = 1
104
Least Recently Used (LRU)
  • Replace the page with the maximal backward distance:
    yt = max x∈St-1(m) BKWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames at the next fault]
BKWD5(1) = 1, BKWD5(0) = 3, BKWD5(3) = 2
105
Least Recently Used (LRU)
  • Replace the page with the maximal backward distance:
    yt = max x∈St-1(m) BKWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames at the next fault]
BKWD6(1) = 2, BKWD6(2) = 1, BKWD6(3) = 3
106
Least Recently Used (LRU)
  • Replace the page with the maximal backward distance:
    yt = max x∈St-1(m) BKWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames over the rest of the stream]
107
Least Recently Used (LRU)
  • Replace the page with the maximal backward distance:
    yt = max x∈St-1(m) BKWDt(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: complete frame contents under LRU]
  • Backward distance is a good predictor of forward
    distance -- locality (see the sketch below)
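A compact LRU sketch in Python, using an OrderedDict to keep frames in recency order (a convenience of the sketch, not the hardware mechanism discussed later). On the cyclic stream above with m = 3 frames every reference faults, which illustrates how LRU behaves when the locality is larger than the allocation:

from collections import OrderedDict

def lru_faults(refs, m):
    """Count page faults under LRU with m frames; an OrderedDict tracks recency."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # page becomes most recently used
            continue
        faults += 1
        if len(frames) == m:
            frames.popitem(last=False)     # evict the least recently used page
        frames[page] = None
    return faults

refs = [2, 0, 3, 1, 2, 0, 3, 1, 2, 0, 3, 1, 6, 4, 5, 7]
print(lru_faults(refs, 3))                 # 16: every reference faults here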

108
Least Frequently Used (LFU)
  • Replace the page with the minimum use:
    yt = min x∈St-1(m) FREQ(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after the first references]
FREQ4(2) = 1, FREQ4(0) = 1, FREQ4(3) = 1
109
Least Frequently Used (LFU)
  • Replace the page with the minimum use:
    yt = min x∈St-1(m) FREQ(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after the fourth reference]
FREQ4(2) = 1, FREQ4(0) = 1, FREQ4(3) = 1
110
Least Frequently Used (LFU)
  • Replace the page with the minimum use:
    yt = min x∈St-1(m) FREQ(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames at the next fault]
FREQ6(2) = 2, FREQ6(1) = 1, FREQ6(3) = 1
111
Least Frequently Used (LFU)
  • Replace the page with the minimum use:
    yt = min x∈St-1(m) FREQ(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames at the next fault]
FREQ7(2) = ?  FREQ7(1) = ?  FREQ7(0) = ?
112
First In First Out (FIFO)
  • Replace the page that has been in memory the longest:
    yt = max x∈St-1(m) AGE(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after the first references]
113
First In First Out (FIFO)
  • Replace the page that has been in memory the longest:
    yt = max x∈St-1(m) AGE(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after the first references]
AGE4(2) = 3, AGE4(0) = 2, AGE4(3) = 1
114
First In First Out (FIFO)
  • Replace the page that has been in memory the longest:
    yt = max x∈St-1(m) AGE(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames after the fourth reference replaces page 2]
AGE4(2) = 3, AGE4(0) = 2, AGE4(3) = 1
115
First In First Out (FIFO)
  • Replace the page that has been in memory the longest:
    yt = max x∈St-1(m) AGE(x)

Let the page reference stream be ω = 2 0 3 1 2 0 3 1 2 0 3 1 6 4 5 7
[Table: contents of the three page frames at the next fault]
AGE5(1) = ?  AGE5(0) = ?  AGE5(3) = ?
116
Belady's Anomaly
Let the page reference stream be ω = 0 1 2 3 0 1 4 0 1 2 3 4
[Tables: FIFO frame contents with m = 3 frames and with m = 4 frames]
  • FIFO with m = 3 has 9 faults
  • FIFO with m = 4 has 10 faults (see the sketch below)
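A short FIFO simulation in Python (a sketch; the function name is mine) that reproduces the anomaly on this reference stream:

def fifo_faults(refs, m):
    """Count page faults under FIFO replacement with m page frames."""
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        if len(frames) == m:
            frames.pop(0)          # evict the page resident the longest
        frames.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(fifo_faults(refs, 3))        # 9
print(fifo_faults(refs, 4))        # 10: more frames, more faults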

117
Stack Algorithms
  • Some algorithms are well-behaved
  • Inclusion property: every page loaded at time t with
    m frames is also loaded at time t with m+1 frames

Reference stream: 0 1 2 3 0 1 4 0 1 2 3 4
[Tables: LRU frame contents with m = 3 and with m = 4 after the first few references]
118
Stack Algorithms
  • Some algorithms are well-behaved
  • Inclusion property: every page loaded at time t with
    m frames is also loaded at time t with m+1 frames

Reference stream: 0 1 2 3 0 1 4 0 1 2 3 4
[Tables: LRU frame contents with m = 3 and with m = 4 as more references are processed]
119
Stack Algorithms
  • Some algorithms are well-behaved
  • Inclusion property: every page loaded at time t with
    m frames is also loaded at time t with m+1 frames

Reference stream: 0 1 2 3 0 1 4 0 1 2 3 4
[Tables: LRU frame contents with m = 3 and with m = 4 as more references are processed]
120
Stack Algorithms
  • Some algorithms are well-behaved
  • Inclusion property: every page loaded at time t with
    m frames is also loaded at time t with m+1 frames

Reference stream: 0 1 2 3 0 1 4 0 1 2 3 4
[Tables: LRU frame contents with m = 3 and with m = 4 as more references are processed]
121
Stack Algorithms
  • Some algorithms are well-behaved
  • Inclusion property: every page loaded at time t with
    m frames is also loaded at time t with m+1 frames

Reference stream: 0 1 2 3 0 1 4 0 1 2 3 4
[Tables: LRU frame contents with m = 3 and with m = 4 for the full stream; at every time the m = 3 contents are a subset of the m = 4 contents]
122
Stack Algorithms
  • Some algorithms are well-behaved
  • Inclusion property: every page loaded at time t with
    m frames is also loaded at time t with m+1 frames

Reference stream: 0 1 2 3 0 1 4 0 1 2 3 4
[Tables: FIFO frame contents with m = 3 and with m = 4; FIFO does not preserve the inclusion property]
123
Implementation
  • LRU has become the preferred algorithm
  • Difficult to implement
  • Must record when each page was referenced
  • Difficult to do in hardware
  • Approximate LRU with a reference bit
  • Periodically reset
  • Set for a page when it is referenced
  • Dirty bit

124
Dynamic Paging Algorithms
  • The amount of physical memory -- the number of
    page frames -- varies as the process executes
  • How much memory should be allocated?
  • Fault rate must be tolerable
  • Will change according to the phase of the process
  • Need to define a placement & replacement policy
  • Contemporary models based on working set

125
Working Set
  • Intuitively, the working set is the set of pages
    in the process's locality
  • Somewhat imprecise
  • Time varying
  • Given k processes in memory, let mi(t) be the number
    of page frames allocated to pi at time t
  • mi(0) = 0
  • Σi=1..k mi(t) ≤ |primary memory|
  • Also have St(mi(t)) = St(mi(t-1)) ∪ Xt - Yt
  • Or, more simply, S(mi(t)) = S(mi(t-1)) ∪ Xt - Yt

126
Placed/Replaced Pages
  • S(mi(t)) = S(mi(t-1)) ∪ Xt - Yt
  • For the missing page
  • Allocate a new page frame
  • Xt = {rt} in the new page frame
  • How should Yt be defined?
  • Consider a parameter, τ, called the window size
  • Determine BKWDt(y) for every y ∈ S(mi(t-1))
  • if BKWDt(y) ≥ τ, unload y and deallocate its frame
  • if BKWDt(y) < τ, keep y loaded (see the sketch below)
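A minimal working-set sketch in Python (the window symbol τ and the reference stream come from the surrounding slides; the function name is mine): the resident set at time t is exactly the set of pages whose backward distance is less than τ.

def working_set(refs, t, tau):
    """Pages with BKWD_t < tau: the last tau references up to and including time t."""
    return set(refs[max(0, t - tau + 1): t + 1])

# Reference stream from the example slides that follow; window size tau = 3.
refs = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]
for t in range(len(refs)):
    print(t, sorted(working_set(refs, t, tau=3)))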

127
Working Set Principle
  • Process pi should only be loaded and active if it
    can be allocated enough page frames to hold its
    entire working set
  • The size of the working set is estimated using τ
  • Unfortunately, a good value of τ depends on the
    size of the locality
  • Empirically this works with a fixed τ

128
Example (τ = 3)
Reference stream: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
[Table: resident pages and page faults with window size τ = 3]
129
Example (τ = 4)
Reference stream: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
[Table: resident pages and page faults with window size τ = 4]
130
Implementing the Working Set
  • Global LRU will behave similarly to a working set
    algorithm
  • Page fault
  • Add a page frame to one process
  • Take away a page frame from another process
  • Use LRU implementation idea
  • Reference bit for every page frame
  • Cleared periodically, set with each reference
  • Change allocation of some page frame with a clear
    reference bit
  • Clock algorithms use this technique by searching
    for cleared reference bits in a circular fashion
    (see the sketch below)
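A sketch of a clock (second-chance) scan in Python (structure and names are illustrative): the hand sweeps the frames circularly, clearing set reference bits and evicting the first frame whose bit is already clear.

def clock_faults(refs, m):
    """Count page faults using a clock (second-chance) approximation of LRU."""
    frames = [None] * m          # loaded page per frame (None = empty)
    ref_bits = [False] * m       # reference bit per frame
    hand, faults = 0, 0
    for page in refs:
        if page in frames:
            ref_bits[frames.index(page)] = True   # set on every reference
            continue
        faults += 1
        # Sweep circularly: clear set bits, stop at the first clear bit.
        while ref_bits[hand]:
            ref_bits[hand] = False
            hand = (hand + 1) % m
        frames[hand], ref_bits[hand] = page, True
        hand = (hand + 1) % m
    return faults

print(clock_faults([2, 0, 3, 1, 2, 0, 3, 1, 2, 0, 3, 1, 6, 4, 5, 7], 3))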

131
Segmentation
  • The unit of memory movement is
  • Variably sized
  • Defined by the programmer
  • Addresses have two components: <segment, offset>
  • Address translation is more complex than paging
  • Yt: segments × offsets → physical addresses ∪ {Ω}
  • Yt(i, j) = k

132
Segment Address Translation
  • Yt: segments × offsets → physical addresses ∪ {Ω}
  • Yt(i, j) = k
  • s: segment names → segment addresses
  • Yt(s(segName), j) = k
  • l: offset names → offset addresses
  • Yt(s(segName), l(offsetName)) = k
  • Read the implementation in Section 12.5.2 (a sketch
    follows below)
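A small sketch of the segment translation described above (Python; the segment-table layout and the example values are assumptions for illustration): the segment number selects a base/limit pair, the offset is checked against the limit, and base + offset is the physical address.

def translate(segment, offset, segment_table):
    """Map a (segment, offset) pair to a physical address via base/limit entries."""
    entry = segment_table.get(segment)
    if entry is None:
        raise LookupError(f"segment {segment} is missing")
    base, limit = entry
    if offset >= limit:
        raise MemoryError("offset exceeds segment limit -> protection trap")
    return base + offset

# Hypothetical segment table: segment 0 is 0x1000 bytes starting at 0x8000.
print(hex(translate(0, 0x0042, {0: (0x8000, 0x1000)})))   # 0x8042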

133
Address Translation

[Figure: a <segment, offset> address; the segment number selects an entry (limit, base, protection P) from the segment table Yt; a missing segment raises a fault, the offset is checked against the limit, and base + offset is sent to the memory address register]
134
Implementation
  • Segmentation requires special hardware
  • Segment descriptor support
  • Segment base registers (segment, code, stack)
  • Translation hardware
  • Some of the translation can be static
  • No dynamic offset name binding
  • Limited protection

135
Multics
  • Old, but still state-of-the-art segmentation
  • Uses linkage segments to support sharing
  • Uses dynamic offset name binding
  • Requires sophisticated memory management unit