Title: Course Overview - Principles of Operating Systems
1. Course Overview: Principles of Operating Systems
- Introduction
- Computer System Structures
- Operating System Structures
- Processes
- Process Synchronization
- Deadlocks
- CPU Scheduling
- Memory Management
- Virtual Memory
- File Management
- Security
- Networking
- Distributed Systems
- Case Studies
- Conclusions
2. Chapter Overview: Virtual Memory
- Motivation
- Objectives
- Background
- System Requirements
- Virtual Memory
- Page Replacement Algorithms
- FIFO
- Least Recently Used
- Clock Algorithms
- Frame Allocation
- Thrashing
- Working Set Model
- Page Fault Frequency
- Implementation Issues
- Important Concepts and Terms
- Chapter Summary
3. Motivation
- not all parts of a process image are needed
- at all times during the execution
- every time the program is run
- unnecessary parts can be kept on secondary storage (hard disk)
- they need to be brought into main memory when needed
- improves the utilization of main memory
- fewer infrequently used sections are kept in main memory
- more processes can be accommodated
- must be managed carefully to avoid a substantial decrease in performance
4. Objectives
- realize the limitations of memory management without virtual memory
- understand the basic techniques of virtual memory
- fetch methods for requested pages
- page replacement methods
- allocation of frames to processes
- be aware of the tradeoffs and limitations
- locality of reference
- predicting future page references
- evaluate the performance impact of virtual memory on the overall system
- effective access time
5. Background
- limitations of systems without virtual memory
- memory is not fully utilized
- sometimes overlays are used by programmers to time-share parts of main memory
- lower degree of multiprogramming
- there is not enough physical memory to accommodate all processes
- program size
- the size of programs (process images) is limited by the available physical memory
- level of abstraction
- the programmer must be aware of hardware details like the size of physical memory
6. Processes in Main Memory
- it is not really necessary to keep the complete process image in main memory at all times
- currently used code
- currently used (user) data structures
- system data (heap, stack)
- some parts of the process image may be kept on secondary storage
- they must be swapped in when needed
7. Virtual Memory
- principle
- hardware requirements
- software components
- data structures
- advantages and problems
8. Virtual Memory Principle
- a technique that allows the execution of processes that are not completely in main memory
- separates logical memory (as viewed by the process) from physical memory
9. System Diagram
[Diagram: CPU (control unit, registers, ALU) connected via the system bus to main memory and the hard disk]
10. Virtual Memory Diagram
[Diagram: the pages of a process image (program, data, user stack, shared address space) are mapped through the page table, reached via the process control block, to page frames in main memory; each page table entry also holds a reference bit]
11. Hardware Requirements
- virtual memory usually is supported by a memory management unit (MMU)
- minimum requirements
- address conversion support
- as in paging or segmentation
- additional entry in page tables
- present bit (valid/invalid bit)
12. Software Components
- paging or segmentation
- to keep track of the parts of processes and their locations
- swapper
- loads and unloads parts of processes between main memory and hard disk
- fetch algorithm
- determines which parts of the process should be brought into main memory
- demand paging is the most frequently used one
- page replacement algorithm
- determines which parts are swapped out when memory needs to be freed up
13. Data Structures
- pages or segments
- parts of the process image handled by virtual memory
- page frames
- sections of physical memory that hold pages
- page tables
- keep track of the allocation of pages to frames
- free page frame list
- list of all frames currently not in use
- replacement algorithm tables
- contain data about pages and frames
- used to decide which pages to replace
- swap area
- secondary memory area for pages not currently in main memory
- a complete image of every process is kept here
14. Advantages and Problems
- advantages
- large virtual address space
- processes can be larger than physical memory
- better memory utilization
- only the parts of a process actually used are kept in main memory
- less I/O for loading or swapping the process
- only parts of the process need to be transferred
- problems
- complex implementation
- additional hardware, components, data structures
- performance loss
- overhead due to VM management
- delay for transferring pages
15. Page Faults
- an interrupt/trap is generated if a memory reference leads to a page that is currently not in main memory
- may be caused by any memory access (instructions, user data, system data)
- additional information must be kept in the page table to indicate whether a page is currently in main memory
- present bit
- valid/invalid bit may be used
- the page needs to be loaded before execution can continue
16. Page Fault Handling
- a page fault trap is generated if the present bit is not set
- a page frame is allocated
- if there are no free frames left, the page replacement algorithm must be invoked
- the page is read into the page frame
- the page table is updated
- the instruction is restarted (see the sketch below)
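These steps can be summarized in a small, self-contained sketch. Plain Python dictionaries stand in for the real kernel data structures, a simple FIFO queue stands in for the replacement algorithm, and the disk I/O is omitted; none of this is a real kernel API.

```python
# Minimal sketch of the page fault handling steps listed above (illustrative only).
page_table = {}          # page number -> {"frame": int, "present": bool}
free_frames = [0, 1, 2]  # assumed pool of free frame numbers
fifo_queue = []          # load order, used here as a stand-in replacement policy

def handle_page_fault(page):
    if free_frames:                      # 1. try to allocate a free frame
        frame = free_frames.pop(0)
    else:                                # 2. otherwise invoke page replacement
        victim = fifo_queue.pop(0)
        frame = page_table[victim]["frame"]
        page_table[victim]["present"] = False   # a dirty victim would be written out first
    # 3. the missing page is read into the frame (disk I/O omitted in this sketch)
    page_table[page] = {"frame": frame, "present": True}   # 4. update the page table
    fifo_queue.append(page)
    # 5. the faulting instruction is then restarted by the hardware

def access(page):
    if page not in page_table or not page_table[page]["present"]:
        handle_page_fault(page)          # present bit not set -> page fault trap
    return page_table[page]["frame"]
```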
17. Page Fault Times
- assumptions: 100 MHz CPU clock, 20 ms average access and transfer time per page
18. Page Fault Rate
- an instruction that normally takes tens of nanoseconds will take tens of milliseconds if a page fault occurs
- roughly a factor of 100,000 longer
- this is an intolerable slowdown
- the frequency of page faults is very important for the performance of the system
- the page fault rate must be kept very low
19. Page Faults and EAT
- effective access time (p = page fault rate)
- EAT = p × (access time with page fault) + (1 - p) × (access time without page fault)
20. Page Faults and Performance
- what is the performance loss for a page fault rate of 1 in 100,000?
- access time without page fault: 100 ns
- memory access time including page table and TLB effects
- access time with page fault: 0.5 × 20 ms + 0.5 × 40 ms = 30 ms
- page replacement is needed for 50% of page faults
- EAT = 0.00001 × 30 ms + (1 - 0.00001) × 100 ns = 300 ns + 99.999 ns ≈ 400 ns
- this is a 300% performance loss
21. Page Faults and Performance
- what page fault rate is needed for a 10% performance loss?
- same conditions as the previous example
- a 10% performance loss corresponds to a 110 ns EAT
- 110 ns ≥ p × 30 ms + (1 - p) × 100 ns
- 10 ns ≥ p × (30,000,000 ns - 100 ns) ≈ p × 30,000,000 ns
- p ≤ 10/30,000,000 ≈ 1/3,000,000 (1 in 3 million)
- only one out of 3 million memory accesses may cause a page fault (see the quick check below)
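A quick numeric check of the two calculations above, using the values from the slides:

```python
# Effective access time (EAT) check for the two examples above.
mem = 100e-9            # access time without page fault: 100 ns
fault = 30e-3           # average page fault service time: 30 ms

p = 1 / 100_000
eat = p * fault + (1 - p) * mem
print(eat / 1e-9)       # ~400 ns, i.e. roughly a 300% slowdown over 100 ns

target = 110e-9         # EAT allowed for a 10% performance loss
p_max = (target - mem) / (fault - mem)
print(p_max)            # ~3.3e-7, about one fault per 3 million accesses
```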
22. Locality of Reference
- locality of reference indicates that the next memory access will be in the vicinity of the current one
- spatial
- successive memory accesses will be in the same neighborhood
- temporal
- the same memory location(s) will be accessed repeatedly
- locality of reference is the main reason why the number of page faults can be low
23. Reasons for Locality of Reference
- instruction execution
- the execution of a program proceeds largely in sequence
- except for branch and call instructions
- iterative constructs repeat a rather small number of instructions several times
- data access
- many data structures are accessed in sequence
- list, array, tree
- may depend on programming style and the code generated by the compiler
24. Data Structures and Locality
- good locality
- stack, queue, array, record
- medium locality
- linked list, tree
- bad locality
- graph, pointer
25. Fetch Policy
- determines which pages will be brought into main memory from secondary storage
- easy if it is known which pages will be used next
- in practice this is not known
- the most popular approach is demand paging
- pages are fetched when needed
- this is indicated by a page fault
- an alternative is prepaging (anticipative paging)
- an attempt is made to load pages before they are actually needed
- not always feasible
26. Demand Paging
- an attempt to access a page that is currently not in main memory generates a page fault
- in response to the page fault, the respective page is loaded into main memory
- this requires an available frame
27. Prepaging
- anticipative paging
- pages that are likely to be used in the near future are loaded before they are needed
- this saves the waiting time for the page to be brought in
- especially with DMA, it can be performed concurrently with program execution
- can depend on secondary storage policies
- contiguously stored pages are advantageous because of lower seek times
28. Placement Policy
- tries to identify a good location for a program part to be brought into main memory
- not a problem with paging
- each page fits in every frame
- for (pure) segmentation it is essentially a variation of the general memory allocation problem
- find a fitting hole for the segment to be brought in
29. Replacement Policy
- identifies a frame to be freed when a page fault occurs but no free frame is available
- the page occupying that frame must be swapped out to secondary storage
- can be based on a number of algorithms
- FIFO, least recently used, clock-based, etc.
30. Replacement Algorithm Objective
- ideally, the page being replaced should be the page least likely to be referenced in the near future
- in practice, this is not possible because future page references are unknown
- many algorithms try to predict the future by looking into the past
- assuming that the past behavior of a process is similar to its future behavior
- locality of reference enables the connection between past history and future behavior
31. Page Replacement Algorithms
- first-in, first-out (FIFO) algorithm
- optimal algorithm
- least recently used (LRU) algorithm
- LRU approximation algorithms
32. Evaluation of Algorithms
- goal: keep the number of page faults as low as possible
- the performance of page replacement algorithms is often compared on the basis of a reference string
- the reference string indicates the sequence in which pages are used
- it is derived from a trace of the addresses accessed in a system (see the sketch below)
- if several successive addresses point to the same page, only one entry is generated for the reference string
- the number of page faults also depends on the number of available frames
- with more frames, the number of page faults decreases until all pages can be accommodated
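A minimal sketch of how a reference string can be derived from an address trace. The page size of 4096 bytes and the trace itself are made-up illustration values:

```python
# Derive a reference string from an address trace (assumed 4 KiB pages).
trace = [0x1000, 0x1004, 0x2200, 0x2208, 0x1008, 0x5000]
page_size = 4096

reference_string = []
for addr in trace:
    page = addr // page_size
    # successive accesses to the same page collapse into a single entry
    if not reference_string or reference_string[-1] != page:
        reference_string.append(page)

print(reference_string)   # [1, 2, 1, 5]
```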
33. First-In, First-Out (FIFO)
- the page that was brought in first will be replaced first
- must keep track of the time when a page was loaded
- this can also be done with a FIFO queue (see the sketch below)
- very simple, but not very good
- the oldest page may have been used very recently
- there is a good chance that the replaced page will be needed again shortly
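A minimal FIFO simulation, run on the reference string used in the example slides that follow (four frames assumed, which matches the first example):

```python
# FIFO replacement as a small simulation (one possible implementation).
from collections import deque

def fifo_faults(reference_string, frame_count):
    frames = deque()            # oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()        # evict the page that was loaded first
            frames.append(page)
    return faults

refs = [4, 3, 1, 5, 1, 2, 3, 6, 7, 4, 2, 5, 6, 1, 3, 4, 7]
print(fifo_faults(refs, 4))     # 12 page faults with four frames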
34. FIFO Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: contents of the four frames after each reference; page faults marked F]
Page faults: 12
35. FIFO Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: contents of the five frames after each reference; page faults marked F]
Page faults: 10
36. Optimal
- the page that will not be used for the longest period is replaced
- looks forward in the reference string (into the future)
- provably optimal algorithm
- impractical for real systems
- cannot be implemented, since it is not known when a page will be used again
- used as a benchmark (see the sketch below)
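A minimal sketch of the optimal policy on the same reference string (four frames assumed); it can only be evaluated offline, because it needs the complete future reference string:

```python
# Optimal (Belady) replacement: evict the resident page whose next use is farthest away.
def optimal_faults(refs, frame_count):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == frame_count:
            # pick the resident page whose next use is farthest away (or never)
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else len(refs)
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
        else:
            frames.append(page)
    return faults

refs = [4, 3, 1, 5, 1, 2, 3, 6, 7, 4, 2, 5, 6, 1, 3, 4, 7]
print(optimal_faults(refs, 4))   # 10 page faults with four frames
```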
37. Optimal Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: contents of the four frames after each reference; page faults marked F]
Replacement candidates: 4, 3, 1, 5; selected: 1
38. Optimal Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 4, 3, 2, 5; selected: 3
39. Optimal Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 4, 6, 2, 5; selected: 6
40. Optimal Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 4, 7, 2, 5; selected: 2 or 5 (neither page will be used again)
41. Optimal Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 4, 7, 6, 5; selected: 6 or 5 (neither page will be used again)
42. Optimal Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 4, 7, 6, 1; selected: 6 or 1 (neither page will be used again)
43. Optimal Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Page faults: 10
44. Least Recently Used (LRU)
- the page that has not been used for the longest period is replaced
- looks backward in the reference string (into the past)
- similar to the optimal algorithm, but practical
- requires additional information about the pages
- last usage (time stamp, ordering of the pages); see the sketch below
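A minimal LRU simulation on the same reference string (four frames assumed, matching the examples that follow):

```python
# LRU replacement: on a fault, evict the resident page whose last use is oldest.
def lru_faults(refs, frame_count):
    last_used = {}                      # page -> index of its most recent reference
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                victim = min(frames, key=lambda p: last_used[p])
                frames.remove(victim)
            frames.add(page)
        last_used[page] = i
    return faults

refs = [4, 3, 1, 5, 1, 2, 3, 6, 7, 4, 2, 5, 6, 1, 3, 4, 7]
print(lru_faults(refs, 4))   # 15 page faults with four frames
```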
45. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: contents of the four frames after each reference; page faults marked F]
Replacement candidates: 4, 3, 1, 5; selected: 4
46. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 2, 3, 1, 5; selected: 5
47. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 2, 3, 1, 6; selected: 1
48. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 2, 3, 7, 6; selected: 2
49. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 4, 3, 7, 6; selected: 3
50. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 4, 2, 7, 6; selected: 6
51. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 4, 2, 7, 5; selected: 7
52. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 4, 2, 6, 5; selected: 4
53. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 1, 2, 6, 5; selected: 2
54. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 1, 3, 6, 5; selected: 5
55. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Replacement candidates: 1, 3, 6, 4; selected: 6
56. LRU Example
Reference string: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7
[Table: frame contents after each reference; page faults marked F]
Number of page faults: 15
57. Not Recently Used (NRU) Algorithm
- replaces one of the pages that have not been used recently
- simplification of the LRU algorithm
- only records whether a page has been used or not
- the actual time when it was last used is not recorded
- status bits are associated with each page in memory
- the R bit is set when the page is referenced
- the M bit is set when the page is modified
58. NRU Usage
- pages can be divided into 4 categories
- not referenced, not modified
- not referenced, modified
- referenced, not modified
- referenced, modified
- pages that have not been referenced or modified are replaced before referenced or modified pages (see the sketch below)
- modified pages have to be written to secondary storage before their frames can be reused
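A sketch of NRU victim selection based on the four classes above. Pages are plain dictionaries here, and any page of the lowest non-empty class may be chosen:

```python
# NRU: classify resident pages by (R, M) and evict from the lowest non-empty class.
import random

def nru_select(pages):
    # class 0: R=0,M=0   class 1: R=0,M=1   class 2: R=1,M=0   class 3: R=1,M=1
    classes = {0: [], 1: [], 2: [], 3: []}
    for p in pages:
        classes[2 * p["R"] + p["M"]].append(p)
    for c in range(4):
        if classes[c]:
            return random.choice(classes[c])   # any page of the lowest class will do

pages = [{"id": 7, "R": 1, "M": 1}, {"id": 3, "R": 0, "M": 1}, {"id": 5, "R": 1, "M": 0}]
print(nru_select(pages)["id"])   # 3: not referenced, but modified
```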
59. Simulating the LRU Algorithm
- aging
- associate a counter byte with each page in memory
- periodically, shift all the bits to the right and insert the reference bit into the high-order bit (see the sketch below)
- the page with the lowest counter value is treated as the least recently used page
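A sketch of one aging step, with one counter byte per page; the pages and reference bits are illustration values:

```python
# Aging: on every clock tick, shift each counter right and insert the R bit at the top.
def age_tick(counters, reference_bits):
    for page in counters:
        counters[page] = (counters[page] >> 1) | (reference_bits[page] << 7)
        reference_bits[page] = 0        # cleared for the next interval

counters = {"A": 0b00000000, "B": 0b00000000}
refbits  = {"A": 1, "B": 0}
age_tick(counters, refbits)             # A was referenced in the first interval
refbits["B"] = 1
age_tick(counters, refbits)             # B was referenced in the second interval
# A: 0b01000000, B: 0b10000000 -> A has the lower counter and is replaced first
print(bin(counters["A"]), bin(counters["B"]))
```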
60. Clock Replacement Algorithms
- pages are arranged in a circular buffer
- each page table entry has at least a reference bit
- the reference bit is sometimes also called the use bit
- sometimes an additional modified bit is used
- the reference bit is set to 1 every time the page is accessed
- the algorithm looks for a page whose reference bit is 0
- the reference bit of every page checked by the algorithm is set to 0
- approximation of LRU
- tolerable overhead
- variations are used in many real systems
61. Clock Replacement Algorithm
[Diagram: the clock hand inspects pages in the circular buffer; if the reference bit R is 0, the page is evicted; if R is 1, R is cleared and the hand advances]
62. Second Chance Algorithm
- example of a clock replacement algorithm
- after the first visit, the page gets another chance to be referenced before the clock hand comes around again
- modification of a FIFO replacement strategy
- evicts the oldest page only if its reference bit is 0
- avoids evicting a heavily used page that happens to be the oldest in memory (see the sketch below)
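A sketch of the clock/second-chance selection step; the circular buffer is modeled as a list plus a hand index:

```python
# Clock / second chance: skip (and clear) referenced pages, evict the first page with R = 0.
def clock_select(entries, hand):
    """entries: list of [page, ref_bit]; hand: current index. Returns (victim_index, new_hand)."""
    while True:
        page, ref = entries[hand]
        if ref == 0:
            return hand, (hand + 1) % len(entries)
        entries[hand][1] = 0                 # clear R: the page gets a second chance
        hand = (hand + 1) % len(entries)

entries = [[4, 1], [3, 0], [1, 1], [5, 1]]
victim, hand = clock_select(entries, 0)
print(entries[victim][0])   # 3 is evicted; the R bit of the skipped page 4 is now cleared
```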
63. Frame Allocation
- determines how many frames a process may use
- fixed number of frames
- flexible number of frames
- allocation methods
- equal share for each process
- proportional allocation (see the sketch below)
- factors for determining the number of frames
- process image size
- priority
- accumulated CPU time
- time to finish
- CPU-bound vs. I/O-bound process
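A sketch of proportional allocation; the process sizes and the total number of frames are made-up illustration values:

```python
# Proportional allocation: frames are handed out in proportion to process image size.
def proportional_allocation(sizes, total_frames):
    total_size = sum(sizes.values())
    return {pid: max(1, total_frames * size // total_size)
            for pid, size in sizes.items()}

sizes = {"P1": 10, "P2": 40, "P3": 150}        # pages in each process image
print(proportional_allocation(sizes, 60))      # {'P1': 3, 'P2': 12, 'P3': 45}
```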
64. Frame Allocation Strategy
- global allocation
- a replacement frame is selected from the set of all frames
- may reduce or increase the number of frames allocated to a process
- may favor some processes over others
- generally better overall system throughput
- local allocation
- a replacement frame is selected only from the frames allocated to that process
- less influence from other processes
65. Replacement and Allocation
66. Fixed Allocation, Local Scope
- the number of frames per process is decided when the process is loaded and cannot be changed
- replacements must be done within the frames of the process
- too small
- large number of page faults
- many active processes
- too large
- small number of active processes
- low CPU utilization
- few page faults
67. Variable Allocation, Local Scope
- a number of frames is allocated to each new process
- referred to as the resident set
- prepaging or demand paging is used to fill up the allocation
- replacements are selected from the frames of this process
- the resident set size is re-evaluated occasionally
68. Variable Allocation, Global Scope
- similar to the above, but replacements may be selected from the frames of other processes
- relatively easy to implement
- processes with high page fault rates grow
- even if they do not really need more frames
- may be unfair towards some processes
69. Thrashing
- if a process does not have enough pages, it will generate many page faults
- the replaced page may be needed again soon
- as a consequence, CPU utilization may go down
- some older operating systems compensated for this by bringing in more processes
- this leaves each process with even fewer pages and more page faults; CPU utilization drops further, more processes are brought in, and so on
- leads to severe performance problems
70. Thrashing Diagram
[Diagram: CPU utilization plotted against the level of multiprogramming; utilization rises at first, then drops sharply once thrashing sets in]
71. Overcoming Thrashing
- local page replacement limits the effects of thrashing to the affected processes
- it does not eliminate thrashing completely
- thrashing processes still increase the effective access time for other processes
- thrashing can be prevented by providing a process with enough pages
- this number can be determined through locality models
- almost all memory references will fall into one or a few localities of the process
72. Working Set Model
- derived from the locality model
- the working set comprises all pages that have been used during the last n page references
- n defines the size of the working set window (see the sketch below)
- the working set is an approximation of the localities of a process
- if the sum of all working sets of the active processes is greater than the total number of available frames, there is a danger of thrashing
- the working set of a process changes size as the process moves between localities
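A sketch of the working set over a reference string; the window size n = 5 is an arbitrary illustration value:

```python
# Working set at time t: the distinct pages referenced in the last n references.
def working_set(refs, t, window):
    start = max(0, t - window + 1)
    return set(refs[start:t + 1])

refs = [4, 3, 1, 5, 1, 2, 3, 6, 7, 4, 2, 5, 6, 1, 3, 4, 7]
print(working_set(refs, 8, 5))    # {1, 2, 3, 6, 7}: pages used in the last 5 references
```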
73. Working Set Model Usage
- the operating system monitors the working set of each process
- each process is allocated enough frames to accommodate its working set
- if enough frames are free, more processes are brought in
- if the sum of the working set sizes exceeds the total number of available frames, a process must be suspended
- optimizes CPU utilization
- keeps the degree of multiprogramming high
- prevents thrashing
74. Problems with the Working Set
- size and membership of the working set may change
- the past does not always predict the future
- considerable overhead for an exact determination of the working set
- the optimal value for the working set window size is unknown
- the page fault frequency may be used as an approximation
75. Page Fault Frequency Model
- the operating system monitors the page fault frequency (PFF) of each process
- if it gets too high, the process needs more frames
- if it gets too low, the process can do with fewer frames
- if the PFFs of too many processes are too high, some processes have to be suspended (see the sketch below)
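A sketch of the PFF control decision; the upper and lower bounds and the step size are arbitrary illustration values:

```python
# PFF control: adjust a process's frame allocation based on its measured fault rate.
def adjust_frames(frames, fault_rate, lower=2.0, upper=10.0, step=1):
    if fault_rate > upper:
        return frames + step            # too many faults: give the process more frames
    if fault_rate >= lower:
        return frames                   # within bounds: leave the allocation unchanged
    return max(1, frames - step)        # very few faults: it can do with fewer frames

print(adjust_frames(8, fault_rate=15.0))   # 9
print(adjust_frames(8, fault_rate=0.5))    # 7
```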
76. Page Fault Frequency Diagram
[Diagram: page fault rate plotted against allocated page frames; above the upper bound the process needs more frames, below the lower bound it needs fewer frames]
77. Paging Daemons
- a background process that periodically inspects memory, looking for pages to evict based on the page replacement algorithm
- the contents of an evicted page may be kept in a page pool in case the page is needed again before it gets overwritten
78. Locking Pages
- a page is locked in memory to prevent its replacement
- page locks can be used to prevent problems with I/O operations
- allows I/O to complete before the page involved in the I/O operation is replaced
- an alternative is to use system buffers for I/O
- gives the operating system better control over the use of pages
79. Cleaning Policy
- determines when modified pages should be written out to secondary memory
- demand cleaning
- a page is written out only when it has been selected for replacement
- one page fault then involves writing one page and reading in another
- precleaning
- pages are written out before their frames are needed
- may involve unnecessary write operations for pages that change frequently
- compromise: page buffering
80. Page Buffering
- a number of page frames are not allocated to processes but used by the operating system as a cache or buffer
- keeps some recently swapped-out pages in main memory so they do not need to be swapped in again if they are needed soon
- the processes may not be aware of this
81. Load Control
- determines the number of processes in main memory (degree of multiprogramming)
- too few processes
- not enough processes in the ready queue
- may lead to new processes being swapped in
- too many processes
- high page fault frequency
- danger of thrashing
82. Process Suspension
- determines the process(es) that will be swapped out of main memory to secondary storage
- low priority processes
- process causing many page faults
- last process activated
- least likely to have working set resident
- smallest process
- will be easy to swap in again
- largest process
- frees up the largest number of frames
- process with the largest remaining execution time
83. Virtual Memory Implementation
- page replacement and frame allocation are not the only considerations for a virtual memory system
- prepaging
- page size
- program structure
- I/O and virtual memory
- file mapping (memory-mapped files)
84. Direct Memory Access
- a DMA unit is capable of transferring data directly between memory and an I/O device
- the CPU does not have to be involved
- cycle stealing
- the DMA unit steals bus access cycles from the CPU
- the CPU has to wait, because it would be too costly to restart the DMA transfer
- the execution of instructions is suspended, not interrupted
85. Real-Time Systems
- with virtual memory, unpredictable and very long delays may occur
- this is very bad for real-time systems
- real-time processes need small and bounded latencies
- partial solution: important pages may be locked into main memory
- used in Solaris 2.x
86. Important Concepts and Terms
- clock replacement algorithm
- contiguous allocation
- effective access time
- file mapping
- first-in, first-out algorithm
- fragmentation
- global frame allocation
- least recently used algorithm
- local frame allocation
- locality
- logical address
- memory-mapped files
- memory management unit (MMU)
- non-contiguous allocation
- optimal replacement algorithm
- page fault
- page replacement algorithm
- page table
- physical address
- present bit
- reference string
- segmentation
- spatial locality
- swap area
- temporal locality
- thrashing
- virtual memory
- working set
87. Chapter Summary
- virtual memory keeps only parts of processes in main memory
- the rest is stored on secondary storage
- virtual memory separates the logical address space from physical memory
- allows the execution of processes larger than physical memory
- the programmer does not have to know the size of physical memory
- leads to better resource utilization
- performance loss due to page faults