Title: Chapter 4 Memory Management
1. Chapter 4: Memory Management
2. Why Memory Management?
- Parkinson's law: programs expand to fill the memory available to hold them
- Programmers' ideal: an infinitely large, infinitely fast memory that is also nonvolatile
- Reality: a memory hierarchy
Registers
Cache
Main memory
Magnetic disk
Magnetic tape
3. What Is Memory Management?
- Memory manager: the part of the OS managing the memory hierarchy
- Keeps track of which parts of memory are in use and which are not
- Allocates/deallocates memory to processes
- Manages swapping between main memory and disk
- Basic memory management: every program is placed and run in main memory as a whole
- Swapping and paging: move processes back and forth between main memory and disk
4. Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation
5. Monoprogramming
- One program at a time
- Shares memory with the OS
- Three variations:
[Figure: three layouts of memory shared between the OS and one user program - (a) OS in RAM at the bottom, user program above; (b) OS in ROM at the top, user program below; (c) device drivers in ROM at the top, user program in the middle, OS in RAM at the bottom. Addresses run from 0 at the bottom to 0xFFF at the top.]
6. Multiprogramming With Fixed Partitions
- Scenario: multiple programs at a time
- Problem: how to allocate memory?
- Divide memory up into n partitions
- Equal partitions vs. unequal partitions
- Can be done manually when the system starts up
- When a job arrives, put it into the input queue for the smallest partition large enough to hold it
- Any space in a partition not used by its job is lost
7. Example: Multiprogramming With Fixed Partitions
[Figure: memory from 0 to 800K divided into the OS (0-100K), partition 1 (100-200K), partition 2 (200-400K), partition 3 (400-700K), and partition 4 (700-800K), with a separate input queue feeding each partition.]
8. Single Input Queue
- Disadvantage of multiple input queues: small jobs may wait even while the queue of a larger partition is empty
- Solution: a single input queue
[Figure: the same partitions (OS 0-100K, partition 1 to 200K, partition 2 to 400K, partition 3 to 700K, partition 4 to 800K), now fed from one shared input queue.]
9. How to Pick Jobs?
- Pick the first job in the queue that fits an empty partition
- Fast, but may waste a large partition on a small job
- Pick the largest job that fits an empty partition
- Memory efficient
- But the smallest jobs may be interactive ones needing the best service, and they get slow service
- Policies for efficiency and fairness:
- Keep at least one small partition around
- A job may not be skipped more than k times
10. A Naive Model for Multiprogramming
- Multiprogramming improves CPU utilization
- If, on average, a process computes only 20% of the time it sits in memory, then 5 processes can keep the CPU busy all the time
- This assumes the 5 processes never wait for I/O at the same time
- Not realistic!
11. A Probabilistic Model
- A process spends p% of its time waiting for I/O to complete
- There are n processes in memory at once
- Probability that all n processes are waiting for I/O: (p/100)^n
- CPU utilization = 1 - (p/100)^n
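As a quick sanity check, the formula above can be evaluated directly (a minimal sketch; the 80% I/O-wait figure is just an illustrative value):

```python
def cpu_utilization(p: float, n: int) -> float:
    """CPU utilization = 1 - (p/100)^n, where p is the percentage of time
    a process waits for I/O and n is the number of processes in memory."""
    return 1 - (p / 100) ** n

# With 80% I/O wait, one process keeps the CPU busy only 20% of the time,
# but five processes together reach about 67%.
print(round(cpu_utilization(80, 1), 3))  # 0.2
print(round(cpu_utilization(80, 5), 3))  # 0.672
```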
12. CPU Utilization
[Figure: CPU utilization as a function of the number of processes in memory, one curve per I/O-wait percentage; utilization climbs toward 100% as processes are added, more slowly for higher I/O wait.]
13. Memory Management for Multiprogramming
- Relocation
- At what address will the program begin in memory?
- Protection
- A program's accesses should be confined to its proper area
14. Relocation Problem
- Programs are written using absolute addresses
- E.g., a procedure at address 100
- Real address
- When the module is loaded into partition 1 (starting at physical address 100K), the procedure is at 100K+100
- Relocation problem: translating between absolute addresses and real addresses
- One solution: actually modify the instructions as the program is loaded
- The program must then include a list of the program words holding addresses to be relocated
15. Protection
- A malicious program can jump into space belonging to other users
- Generate a new instruction on the fly
- Jump to the new instruction
- PSW (Program Status Word) solution
- PSW protection code: 0x0100
- The program's space is confined to 0x0100 0000 - 0x0100 FFFF
16. Relocation/Protection Using Registers
- Base register: start of the partition
- Every memory address generated has the content of the base register added to it
- Base register = 100K: CALL 100 becomes CALL 100K+100
- Limit register: length of the partition
- Addresses are checked against the limit register
- Disadvantage: an addition and a comparison on every memory reference
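The base/limit check the hardware performs on every reference can be sketched in a few lines (illustrative only; real MMU hardware does this in parallel with the access):

```python
def translate(virtual_addr: int, base: int, limit: int) -> int:
    """Relocate a virtual address with a base register and check it
    against a limit register, as done on every memory reference."""
    if virtual_addr < 0 or virtual_addr >= limit:
        raise MemoryError("address outside partition: protection fault")
    return base + virtual_addr

# Base register = 100K, partition length 64K: CALL 100 -> physical 100K+100
K = 1024
print(translate(100, base=100 * K, limit=64 * K))  # 102500
```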
17. Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation
18. In Time-Sharing/Interactive Systems
- There is not enough main memory to hold all currently active processes
- Intuition: excess processes must be kept on disk and brought in to run dynamically
- Swapping: bring each process in entirely
- Assumption: each process fits in main memory, but cannot finish in a single run
- Virtual memory: allow programs to run even when they are only partially in main memory
- No assumption about program size
19. Swapping
[Figure: memory allocation changing over time as processes A, B, C, and D are swapped in and out above the OS; the gaps between them are holes.]
20. Swapping vs. Fixed Partitions
- With swapping, the number, location, and size of partitions vary dynamically
- Flexibility improves memory utilization
- But it complicates allocating, deallocating, and keeping track of memory
- Memory compaction: combine the holes in memory into one big hole
- Makes allocation more efficient
- Requires a lot of CPU time
- Rarely used in real systems
21. How to Enlarge Memory for a Process?
- Fixed-size process: easy
- Growing process:
- Expand into the adjacent hole, if there is one
- Otherwise swap it out to create a large enough hole
- If the swap area on disk is full, the process must wait or be killed
- Better: allocate extra space whenever a process is swapped in or moved
- More than one growing segment?
- Let them share the extra space for growing
22. Handling Growing Processes
[Figure: (a) allocating room for growth above each of processes A and B; (b) per-process layout with the program and data at the bottom, the stack at the top, and a single room-for-growth region in between, shared by the growing data segment and the downward-growing stack.]
23. Memory Management With Bitmaps
- Two ways to keep track of memory usage: bitmaps and free lists
- Bitmaps
- Memory is divided into allocation units
- One bit per unit: 0 = free, 1 = occupied
Memory contents (one letter per allocation unit):
A A A A A B B B B B B C C C C D D D D D D E E E E
Corresponding bitmap (read row by row):
1 1 1 1 1 0 0 0
1 1 1 1 1 1 1 1
1 1 0 0 1 1 1 1
1 1 1 1 1 0 0 0
24. Size of Allocation Units
- 4 bytes/unit: 1 bit in the map per 32 bits of memory, so the bitmap takes 1/33 of memory
- Trade-off between allocation unit size and memory utilization
- Smaller allocation unit: larger bitmap
- Larger allocation unit: smaller bitmap
- But on average, half of a process's last unit is wasted
- To find a hole of k units, the manager must search the bitmap for a run of k consecutive 0 bits: costly!
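The costly search the last bullet refers to is a linear scan for a run of zero bits; a minimal sketch (the sample bitmap is made up for illustration):

```python
def find_hole(bitmap: list[int], k: int) -> int:
    """Scan a bitmap (0 = free unit, 1 = allocated) for a run of k
    consecutive free units; return the start index, or -1 if none."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return -1

bits = [1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
print(find_hole(bits, 4))  # 10: first run of 4 free units
print(find_hole(bits, 5))  # -1: no hole is big enough
```

In the worst case the whole bitmap is scanned, which is exactly why bitmaps are rarely the allocator of choice.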
25. Memory Management With Linked Lists
- Each entry in the list describes a hole (H) or a process (P) as (type, start, length)
A A A A A B B B B B B C C C C D D D D D D E E E E
P 0 5 | H 5 3 | P 8 6 | P 14 4 | H 18 2 | P 20 6 | P 26 3 | H 29 3
- E.g., the entry "P 20 6" means a process segment that starts at 20 with length 6
26. Updating Linked Lists
- Combine holes whenever possible
- Four neighbor combinations when process X terminates:
- A X B: X simply becomes a hole between A and B
- A X followed by a hole: X merges with the hole after it
- A hole followed by X B: X merges with the hole before it
- A hole, X, and another hole: all three merge into one big hole
27. Allocating Memory for New Processes
- First fit: take the first hole that fits the requirement
- Break the hole into two pieces: the process and a smaller hole
- Next fit: start searching from the place of the last fit
- Slightly worse performance than first fit
- Best fit: take the smallest hole that is adequate
- Slower
- Generates tiny, useless holes
- Worst fit: always take the largest hole
- Not a very good idea either
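First fit over a free list can be sketched in a few lines (the hole list below reuses the H entries from the linked-list slide; splitting behavior as described above):

```python
def first_fit(free_list, size):
    """free_list: list of (start, length) holes sorted by address.
    First fit: take the first hole big enough, splitting it into the
    allocation and a smaller leftover hole."""
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            if length == size:
                del free_list[i]          # hole consumed exactly
            else:
                free_list[i] = (start + size, length - size)
            return start
    return None  # no hole fits

holes = [(5, 3), (18, 2), (29, 3)]   # the H entries from the slide's list
print(first_fit(holes, 2))  # 5; the hole shrinks to (7, 1)
print(holes)                # [(7, 1), (18, 2), (29, 3)]
```

Next fit differs only in remembering where the previous search stopped; best fit scans the whole list for the smallest adequate hole.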
28. Using Distinct Lists
- Separate lists for processes and holes
- The hole list can be sorted by size
- Best fit becomes fast
- Problem: what happens when a process is freed?
- Merging holes becomes very costly
- Quick fit: group holes into lists by size
- Merging holes is still costly
- The information about a hole can be stored in the hole itself
29. Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation
30. Why Virtual Memory?
- What if the program is too big to fit in memory?
- Split the program into pieces: overlays
- Swap overlays in and out
- Problem: splitting programs into small pieces is tedious work for the programmer
- Virtual memory: the OS takes care of large programs
- Keep the parts currently in use in memory
- Put the other parts on disk
31. Virtual and Physical Addresses
- Virtual addresses are used/generated by programs
- Physical addresses are used in execution
- With only one program, VA = PA
- The MMU maps VAs to PAs
[Figure: the CPU and MMU sit together in the CPU package; the MMU translates addresses on their way to the bus, which connects to memory and the disk controller.]
32. Paging
- The virtual address space is divided into pages
- Physical memory is divided into page frames
- Pages and page frames have the same size
- Transfers between RAM and disk are always in units of pages
[Figure: a 64K virtual address space (16 pages of 4K) mapped onto 32K of physical memory (8 page frames of 4K). Mapping shown: 0-4K -> frame 2, 4-8K -> 1, 8-12K -> 6, 12-16K -> 0, 16-20K -> 4, 20-24K -> 3, 36-40K -> 5, 44-48K -> 7; the remaining pages (marked X) are unmapped.]
33. The Magic in the MMU
[Figure: internal operation of the MMU. The incoming virtual address is split into a page number and an offset; the page number indexes the page table, the present/absent bit is checked, and the page frame number is combined with the offset to form the outgoing physical address.]
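The MMU's job can be sketched in software (page-table contents taken from the previous slide's example; a real MMU does this in hardware and raises a page fault trap instead of an exception):

```python
PAGE_SIZE = 4 * 1024

# Page table from the slide's example: virtual page -> page frame.
# Pages not listed are unmapped and fault on access.
page_table = {0: 2, 1: 1, 2: 6, 3: 0, 4: 4, 5: 3, 9: 5, 11: 7}

def mmu(vaddr: int) -> int:
    """Split a virtual address into (page, offset), look the page up,
    and glue the frame number back onto the offset."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault on virtual page {page}")
    return page_table[page] * PAGE_SIZE + offset

print(mmu(0))      # 8192: virtual page 0 lives in frame 2
print(mmu(20500))  # 12308: 20 bytes into page 5, which lives in frame 3
```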
34. Challenges for the Page Table
- The page table can be extremely large
- 32-bit virtual addresses with 4 KB pages: 1M pages
- Each process needs its own page table
- The virtual-to-physical mapping must be fast
- 1+ page table references per instruction
- Hardware solutions are needed
35. Two Simple Designs for the Page Table
- Use fast hardware registers for the page table
- Load the registers at every process switch
- Requires no memory references during mapping
- Expensive if the page table is large
- Put the whole table in main memory
- Only one register points to the start of the table
- Fast switching
- But 1+ extra memory references per instruction
- A pure memory solution is slow, a pure register solution is expensive, so...
36. Multilevel Page Tables
- Avoid keeping all the page tables in memory all the time
- The 32-bit virtual address is split into three fields: PT1 (10 bits) | PT2 (10 bits) | Offset (12 bits)
[Figure: the top-level page table has 1024 entries (0-1023); each entry points to a second-level page table, which in turn holds the page frame numbers.]
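The 10/10/12 split above is just bit manipulation; a minimal sketch (the sample address is arbitrary):

```python
def split_vaddr(vaddr: int):
    """Split a 32-bit virtual address into the 10-bit PT1 index,
    10-bit PT2 index, and 12-bit offset shown on the slide."""
    offset = vaddr & 0xFFF        # low 12 bits
    pt2 = (vaddr >> 12) & 0x3FF   # next 10 bits
    pt1 = vaddr >> 22             # top 10 bits
    return pt1, pt2, offset

# 0x00403004: top-level entry 1, second-level entry 3, offset 4
print(split_vaddr(0x00403004))  # (1, 3, 4)
```

PT1 selects a second-level table; only the second-level tables actually in use need to be resident, which is the whole point of the scheme.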
37. A Typical Page Table Entry
- Page frame number: the goal of the page mapping
- Present/absent bit: is the page in memory?
- Protection: what kinds of access are permitted
- Modified: has the page been written? (If so, it must be written back to disk later)
- Referenced: has anyone used this page recently?
- Caching disabled: should caching be bypassed so reads go to the device (e.g., for memory-mapped I/O)?
[Figure: entry layout - caching disabled, referenced, modified, protection, and present/absent bits alongside the page frame number.]
38. Translation Lookaside Buffers
- Most programs make a large number of references to a small number of pages
- Keep the heavily used fraction of the mappings in fast hardware
- TLB: an associative memory inside the MMU
- Can also be managed in software
[Figure: the virtual address is first checked against the TLB; if found, the physical address comes out directly; if not found, the page table is consulted and the TLB is updated.]
39. Inverted Page Tables
- On 64-bit computers with 4 KB pages, a conventional page table would need 30 million gigabytes!
- Instead, keep one entry per page frame
- 256 MB of memory: only 65,536 entries. Saves space!
- But the virtual-to-physical mapping becomes hard
- Solutions
- Use a TLB
- Hash the virtual address
40. Warm-up
- Memory management
- Provides a consistent virtual memory to users
- Memory allocation/deallocation
- Techniques
- Only one small process at a time: easy
- Multiple small processes: fixed partitions and swapping
- Multiple large processes: virtual memory
- Page tables
41. Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation
42. Page Replacement
- On a page fault
- Choose one page to evict; if it is modified, update the disk copy
- Better to choose an unmodified page
- Better to choose a seldom-used page
- Many similar problems in computer systems
- Replacement in hardware caches
- Web page replacement in web servers
43. The Optimal Algorithm
- Label each page with the number of instructions that will be executed before its next reference
- Remove the page with the highest label
- Unrealizable: the OS cannot know the future!
Page id | First next use
108     | 600
109     | 8026
44. Not Recently Used (NRU)
- Each page has a reference bit R and a modification bit M
- Clear the R bits periodically
- Four classes of pages at page-fault time
- Class 0 (R=0, M=0): not referenced, not modified
- Class 1 (R=0, M=1): not referenced, modified
- Class 2 (R=1, M=0): referenced, not modified
- Class 3 (R=1, M=1): referenced, modified
- NRU removes a page at random from the lowest-numbered nonempty class
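The class number is just 2R + M, so victim selection is a one-liner (a sketch; real NRU picks randomly within the winning class, while `min` here is deterministic for the sake of a checkable example):

```python
def nru_victim(pages):
    """pages: list of (page_id, R, M) tuples.
    Classify each page as class 2*R + M and pick a page from the
    lowest nonempty class (deterministically, via min)."""
    return min(pages, key=lambda p: 2 * p[1] + p[2])[0]

pages = [("a", 1, 1), ("b", 1, 0), ("c", 0, 1), ("d", 1, 1)]
print(nru_victim(pages))  # "c": class 1 beats classes 2 and 3
```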
45. First-In, First-Out
- Remove the oldest page
- But old pages may be the most heavily used ones
- Pure FIFO is rarely used
46. Second Chance Algorithm
- Look for an old page that has not been recently referenced
- Inspect the R bit before removing an old page
- R = 0: throw the page away
- R = 1: clear the R bit and move the page to the tail of the list
[Figure: pages kept in order of load time (times 0 through 18, current time 20); when the oldest page A (loaded at time 0) has R = 1, it is moved to the tail and treated like a newly loaded page.]
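The rule above maps directly onto a FIFO queue (a sketch; page names and R bits are illustrative):

```python
from collections import deque

def second_chance(queue: deque, r_bits: dict) -> str:
    """queue holds pages in FIFO order (oldest at the left).
    Inspect the oldest page's R bit: if 0, evict it; if 1, clear the
    bit and move the page to the tail as if newly loaded."""
    while True:
        page = queue.popleft()
        if r_bits.get(page, 0) == 0:
            return page
        r_bits[page] = 0        # second chance: spare it this round
        queue.append(page)

pages = deque(["A", "B", "C"])          # A was loaded first
r = {"A": 1, "B": 0, "C": 1}
print(second_chance(pages, r))  # B: A is spared (R=1) and moved to the tail
print(list(pages))              # ['C', 'A']
```

If every page has R = 1, the algorithm degenerates into pure FIFO after one full pass, which is why it always terminates.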
47. Clock Page Algorithm
- Moving pages around the list in the second chance algorithm is inefficient
- Instead, keep all page frames on a circular list
- A hand points to the oldest page
- A variation of the second chance algorithm
48. Least Recently Used (LRU)
- Pages heavily used in the last few instructions will probably be heavily used in the next few
- Remove the page unused for the longest time
- Maintain a list of pages?
- Most recently used page at the front
- Least recently used page at the rear
- The list must be updated on every memory reference
- Very costly!
49. Efficient Implementations of LRU
- A global counter is automatically incremented after each instruction
- Each page table entry holds a 64-bit counter field
- The global counter is copied into the entry whenever the page is referenced
- The page with the lowest value is the least recently used
50. LRU Using a Matrix
- n page frames: an n x n bit matrix, initially all zeros
- When frame k is referenced:
- Set all bits of row k to 1
- Set all bits of column k to 0
- The row with the lowest binary value is the least recently used

Before:           After page 2 is referenced:
   0 1 2 3           0 1 2 3
0  0 0 1 1        0  0 0 0 1
1  1 0 1 1        1  1 0 0 1
2  0 0 0 0        2  1 1 0 1
3  0 0 0 0        3  0 0 0 0
51. LRU in Software
- NFU (Not Frequently Used)
- Each page has a counter, initially zero
- At every clock interrupt, the R bit is added to the counter
- The page with the lowest counter value is removed
- NFU never forgets anything, which hurts recently used pages
- Improvement: aging
- The counters are shifted right 1 bit before the R bit is added in
- The R bit is added at the leftmost position
52. Aging Example
- The page with the lowest counter value is removed
- The choice is based on a limited (here, 8-tick) history

Before clock tick:           After clock tick:
Page  R bit  Counter         Page  R bit  Counter
0     0      11110000        0     0      01111000
1     1      01100000        1     0      10110000
2     1      00100000        2     0      10010000
3     0      01000000        3     0      00100000
4     0      10110000        4     0      01011000
5     0      01010000        5     0      00101000
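One tick of aging is a shift plus a conditional OR (a sketch; page 1's values are taken from the table above):

```python
def age(counters, r_bits, bits=8):
    """One clock tick of the aging algorithm: shift each counter right
    one bit and add the page's R bit at the leftmost position."""
    top = 1 << (bits - 1)
    for page in counters:
        counters[page] = (counters[page] >> 1) | (top if r_bits[page] else 0)
        r_bits[page] = 0    # R bits are cleared every tick

# Page 1 from the slide: counter 01100000, R = 1
counters = {1: 0b01100000}
r = {1: 1}
age(counters, r)
print(f"{counters[1]:08b}")  # 10110000, matching the "after tick" column
```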
53. Why Page Faults Happen
- When a process starts, none of its pages are in memory
- The first instruction causes the first page fault
- Touching global variables: more page faults
- Touching the stack: more page faults
- After a while, most of the needed pages are in memory
- The process then runs with relatively few page faults
- Demand paging works because of locality of reference
54. Working Set and Thrashing
- Working set: the set of pages currently used by a process
- Entire working set in main memory: few page faults
- Only part of the working set in main memory: many page faults, slow execution
- Thrashing: a program causing page faults every few instructions
55. Working Set Model: Avoiding Thrashing
- In multiprogramming systems, processes move back and forth between disk and memory
- Many page faults occur when a process is first brought in
- Worst case: before the process's working set is fully loaded, the process is switched out again
- All those page faults waste much CPU time
- Working set model
- Keep track of each process's working set
- Prepaging: before running a process, load its working set
56. The Working Set Function
- Working set at time t: the pages used by the k most recent memory references
- An alternative definition: the k most recently referenced pages
- The content of the working set is not sensitive to the exact k
- Evict pages that are not in the working set
[Figure: w(k, t), the working-set size, plotted against k: it rises steeply for small k and then levels off.]
57. Maintaining the Working Set
[Figure: a shift register holding the last k referenced pages (page 1 ... page k); each newly referenced page enters, duplicates are removed, and the register's contents form the working set. On a page fault, pages not in the working set are removed.]
58. Approximating the Working Set
- Use the last k units of execution time instead of the last k references
- Current virtual time: the amount of CPU time a process has actually used since it started
59. Working Set Page Replacement
[Figure: each page table entry holds the time of last use and the R bit; on a page fault the table is scanned, R = 1 entries get their time of last use updated, and an R = 0 entry whose age exceeds the working-set window is evicted.]
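One scan of that algorithm can be sketched as follows (TAU, the working-set window, and the table values are assumptions for illustration; a full version would also prefer clean pages over dirty ones using the M bit):

```python
TAU = 50    # working-set window in virtual-time units (an assumed value)

def ws_scan(entries, current_vt):
    """One page-fault scan of the page table. entries maps page ->
    {'r': R bit, 'time_of_last_use': virtual time}. Referenced pages
    get their time refreshed; the first unreferenced page older than
    TAU has left the working set and is chosen for eviction."""
    victim = None
    for page, e in entries.items():
        if e["r"]:
            e["time_of_last_use"] = current_vt
            e["r"] = 0
        elif current_vt - e["time_of_last_use"] > TAU and victim is None:
            victim = page
    return victim   # None: every page is still in the working set

table = {
    "a": {"r": 1, "time_of_last_use": 1900},
    "b": {"r": 0, "time_of_last_use": 1913},
    "c": {"r": 0, "time_of_last_use": 2003},
}
print(ws_scan(table, 2014))  # "b": R = 0 and age 101 > TAU
```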
60. WSClock
- Working set page replacement scans the entire page table on every page fault: costly!
- Improvement: combine the clock algorithm with working-set information
- Widely used in practice
61. WSClock Example
[Figure: WSClock in operation - a circular list of page table entries, each holding the time of last use and the R bit (e.g. 2014/1, 1213/0, 1980/0). Entries with R = 1 have the bit cleared and the hand advances; an entry with R = 0 that is old enough is replaced by the new page.]
62. WSClock Algorithm
- Find the first page that can be evicted: a page not in the working set
- Clean (not modified): use it
- Dirty: schedule a write back to disk and keep searching (use it only if there is no other choice)
- If the hand goes all the way around:
- Some write-backs were scheduled: keep going until one completes and yields a clean page (which cannot be in the working set)
- No write-backs were scheduled: just grab any clean page, or the current page
63. Summary of Page Replacement Algorithms
Algorithm     | Comment
Optimal       | Not implementable, but good as a benchmark
NRU           | Very crude
FIFO          | Might throw out important pages
Second chance | Big improvement over FIFO
Clock         | Realistic
LRU           | Excellent, but difficult to implement exactly
NFU           | Fairly crude approximation to LRU
Aging         | Efficient algorithm that approximates LRU well
Working set   | Somewhat expensive to implement
WSClock       | Good, efficient algorithm
64. Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation
65. More Frames, Fewer Page Faults?
- Intuitively correct, but not always true
- Belady's anomaly
- Reference string: 0 1 2 3 0 1 4 0 1 2 3 4
- FIFO replacement

With 3 frames (9 page faults):
Reference  0 1 2 3 0 1 4 0 1 2 3 4
Youngest   0 1 2 3 0 1 4 4 4 2 3 3
           . 0 1 2 3 0 1 1 1 4 2 2
Oldest     . . 0 1 2 3 0 0 0 1 4 4
Fault?     P P P P P P P     P P

With 4 frames (10 page faults):
Reference  0 1 2 3 0 1 4 0 1 2 3 4
Youngest   0 1 2 3 3 3 4 0 1 2 3 4
           . 0 1 2 2 2 3 4 0 1 2 3
           . . 0 1 1 1 2 3 4 0 1 2
Oldest     . . . 0 0 0 1 2 3 4 0 1
Fault?     P P P P     P P P P P P
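The anomaly can be reproduced in a few lines (a sketch; FIFO only):

```python
def fifo_faults(refs, frames):
    """Count page faults for FIFO replacement with a given frame count."""
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)       # evict the oldest page
            memory.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, more faults!
```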
66. A Model of Page Replacement
- When a page is referenced, it is moved to the top entry of the stack
- All pages above the just-referenced page move down one position
- All pages below the referenced page stay where they are

Reference string: 0 2 1 3 5 4 6 3 7 4 7 3 3 5 5 3 1 1 1 7 1 3 4 1
Stack after each reference (top row = most recently used):
0 2 1 3 5 4 6 3 7 4 7 3 3 5 5 3 1 1 1 7 1 3 4 1
  0 2 1 3 5 4 6 3 7 4 7 7 3 3 5 3 3 3 1 7 1 3 4
    0 2 1 3 5 4 6 3 3 4 4 7 7 7 5 5 5 3 3 7 1 3
      0 2 1 3 5 4 6 6 6 6 4 4 4 7 7 7 5 5 5 7 7
        0 2 1 1 5 5 5 5 5 6 6 6 4 4 4 4 4 4 5 5
          0 2 2 1 1 1 1 1 1 1 1 6 6 6 6 6 6 6 6
            0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
                0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
With unlimited memory, a page fault occurs at each first reference: under 0, 2, 1, 3, 5, 4, 6, and 7.
67. An Observation
- LRU is exactly the stack model of slide 66: with m frames, the pages in memory are the top m entries of the stack, so the memory contents with m frames are always a subset of those with m+1 frames
- FIFO (slide 65) has no such property, which is why it can suffer Belady's anomaly: 9 page faults with 3 frames, but 10 with 4
68. Stack Algorithms
- A stack algorithm: if the memory size is increased by one page frame and the process is re-executed, then at every point during the execution, all the pages that were present in memory in the first run are also present in the second run
- Stack algorithms do not suffer from Belady's anomaly
69. Distance String
- Denote each page reference by the distance of the page from the top of the stack (infinity for a first reference)
Reference string: 0 2 1 3 5 4 6 3 7 4 7 3 3 5 5 3 1 1 1 7 1 3 4 1
(using the stack table of slide 66)
Distance string:  ∞ ∞ ∞ ∞ ∞ ∞ ∞ 4 ∞ 4 2 3 1 5 1 2 6 1 1 4 2 3 5 3
70. Properties of the Distance String
- It depends on both the reference string and the paging algorithm
- Its statistical properties matter
[Figure: two probability densities P(d) of the distance d. In one, most of the mass lies below some k, so k page frames achieve a low fault rate; in the other, the mass is spread out toward n, so page faults occur at almost any memory size.]
71. Predicting Page Fault Rates
- The distance string needs to be scanned only once
- Let Ck be the number of times distance k occurs in the distance string (C∞ counts the infinite distances)
- The number of page faults with m frames is Fm = C(m+1) + C(m+2) + ... + C∞

For the distance string of slide 69:
Ck:  C1=4  C2=2  C3=1  C4=4  C5=2  C6=2  C7=1  C∞=8
Fm:  F1=19 F2=17 F3=16 F4=12 F5=10 F6=10 F7=8  F∞=8
E.g., the number of page faults with 5 frames is F5 = C6 + C7 + C∞.
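Both steps - building the distance string from the LRU stack and summing the tail of the Ck counts - can be sketched directly (the short reference string is illustrative):

```python
def distance_string(refs):
    """Compute the distance string under the LRU stack model: the
    1-based position of each referenced page from the top of the stack,
    or None (infinite distance) for a first reference."""
    stack, dists = [], []
    for page in refs:
        if page in stack:
            d = stack.index(page) + 1
            stack.remove(page)
        else:
            d = None                    # first reference: distance infinity
        stack.insert(0, page)           # referenced page goes on top
        dists.append(d)
    return dists

def fault_counts(dists, m):
    """F_m = number of page faults with m frames = sum of C_k for k > m,
    including the infinite-distance (first) references."""
    return sum(1 for d in dists if d is None or d > m)

dists = distance_string([0, 1, 2, 0, 3, 1])
print(dists)                   # [None, None, None, 3, None, 4]
print(fault_counts(dists, 3))  # 5 faults with 3 frames
print(fault_counts(dists, 4))  # 4 faults with 4 frames
```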
72. Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation
73. Allocating Memory Among Processes
- Multiple processes share main memory: when process A faults, whose page is evicted?
[Figure: processes A, B, and C with the age of each page (A0=10, A1=7, A2=5, A3=4, A4=6, A5=3; B0=9, B1=4, B2=6, B3=2, B4=5, B5=6, B6=12; C1=3, C2=5, C3=6). On a fault in A bringing in page A6, local page replacement evicts A's own page with the lowest age, while global page replacement evicts the page with the lowest age in the whole memory (B3, age 2).]
74. Local vs. Global Page Replacement
- Local page replacement: every process gets a fixed fraction of memory
- A process can thrash even if plenty of free page frames exist
- Memory is wasted if the working set shrinks
- Global page replacement: page frames are allocated to processes dynamically
- Works better
75. Assigning Page Frames to Processes
- Monitor working set size using the aging bits
- Working sets may change size quickly; aging bits are a crude measure spread over multiple clock ticks
- Page frame allocation algorithms
- Periodically allocate each process an equal share
- Variations: adjust the share based on program size, subject to some minimum
- Allocate based on page fault frequency (PFF)
76. Load Control
- The system thrashes when
- The combined working sets exceed the memory size
- Some processes need more memory, and no process needs less
- Remedy: swap some processes to disk
- Two-level scheduling: a CPU scheduler plus a memory scheduler
- They can be combined
- Consider the character of each process: CPU-bound or I/O-bound
77. Page Size
- The hardware page size vs. the OS page size
- Arguments for a small page size
- Internal fragmentation: on average, half of the final page of each program is empty and wasted
- A large page keeps more unused program in memory
- Arguments for a large page size
- Small pages mean a large page table and more overhead
- overhead = s*e/p + p/2, minimized at p = sqrt(2*s*e)
  (s = average process size, e = bytes per page table entry, p = page size)
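Plugging in sample numbers shows the trade-off (a sketch; the 1 MB process size and 8-byte entries are illustrative values):

```python
import math

def overhead(p, s, e):
    """overhead = s*e/p (page table space) + p/2 (internal fragmentation),
    with s = average process size, e = bytes per page table entry."""
    return s * e / p + p / 2

s, e = 1 << 20, 8                  # 1 MB process, 8-byte entries
p_opt = math.sqrt(2 * s * e)
print(p_opt)                       # 4096.0: the optimum page size
print(overhead(4096, s, e))        # 4096.0 bytes of total overhead
print(overhead(1024, s, e), overhead(16384, s, e))  # both strictly larger
```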
78. Separate Instruction and Data Spaces
- The instruction space and the data space are paged independently
- Each has its own page table
[Figure: instead of one address space from 0 to 2^32-1 holding both program and data, there are two: an instruction space for the program and a data space for the data, each running from 0 to 2^32-1.]
79. Shared Pages
- Processes running the same program should share its memory pages
- But page tables and process tables are separate per process
- Two processes may use two different working sets
- Searching all page tables for a shared page is too costly
- Use a special data structure to keep track of shared pages
- Sharing data
- Copy on write: share clean data; a process gets a private copy only when it writes
80. Cleaning Policy
- Writing back dirty pages only at the moment free frames are needed is costly
- Processes are blocked and have to wait for the write-back
- Instead, keep enough clean pages available for paging
- Background paging daemon
- Page contents can be kept and reused if the page is referenced again
- Two-handed clock
- Front hand: driven by the paging daemon (cleaning)
- Back hand: page replacement
81. Virtual Memory Interface
- Let processes share parts of their memory explicitly
- High-bandwidth sharing
- Enables high-performance message-passing systems
- Distributed shared memory: share memory over a network
82. Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation
83. Swap Area
- A chunk of disk reserved for a process, the same size as the process image in memory
- Each process's swap area is recorded in its process table entry
- The system manages swap space as a list of free chunks
- Initialize the swap area before a process runs
- Copy the entire process image there, or
- Load the process in memory and let it be paged out as needed
84. Swap Areas for Growing Processes
- The data area and the stack may grow
- Either reserve separate swap areas for the text (program), data, and stack
- Each area may consist of more than one chunk
- Or reserve nothing in advance
- Allocate disk space when a page is swapped out
- Deallocate it when the page is swapped in again
- Must then keep track of pages on disk
- A table per process: costly!
85. Comparison of the Two Methods
[Figure: (a) paging to a static swap area - every page has a fixed disk location; (b) backing up pages dynamically - disk space is allocated per page as it is paged out, and a per-process disk map tracks where each page lives.]
86. When Is the OS Involved in Paging?
- Process creation
- Process execution (context switch)
- Page faults
- Process termination
87. Paging at Process Creation
- Determine the (initial) size of the program and data
- Create the page table
- While the process is running, its page table must be in memory
- Create the swap area
- Record information about the page table and swap area in the process table
88. Paging at Context Switch Time
- Reset the MMU
- Flush the TLB (Translation Lookaside Buffer)
- Make the new process's page table current, by copying it or pointing to it
- Bring some or all of the new process's pages into memory
89. Paging at Page Fault Time
- Read hardware registers to identify the virtual address causing the fault
- Compute the page's address on disk
- Find an available page frame
- Read in the page
- Re-execute the faulting instruction
90. Paging at Process Exit
- Release the page table, the pages, and the disk space
91. What Happens on a Page Fault
- The hardware traps to the kernel, saving the PC on the stack
- An assembly language routine saves the registers and calls the OS
- The OS identifies which virtual page is needed
- The OS checks that the access is valid and locates a free page frame
- If the chosen frame is dirty, its page is written back, the frame is marked busy, and another process may run meanwhile
- A disk read of the needed page into the frame is scheduled
- The page table entry is marked as normal (present)
- The faulting instruction is backed up to its start
- The OS returns to the assembly language routine
- The routine reloads the registers and other state, and returns to user space
92. Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation
93. A Motivating Example
- A compiler maintains many large tables
- Symbol table: names and attributes of variables
- Constant table: integer and floating-point constants
- Parse tree: the result of syntactic analysis
- Stack for procedure calls within the compiler
- The tables grow and shrink as compilation proceeds
- How should space be managed for the compiler?
94. Space Management for the Tables
- Moving space from large tables to small ones by hand is tedious work
- Goals
- Free programmers from managing expanding and contracting tables
- Eliminate organizing the program into overlays
- Segments: many completely independent address spaces
[Figure: in a single virtual address space, the call stack, parse tree, constant table, source text, and symbol table share one linear range, and a growing table can bump into its neighbor.]
95. Segments
- Segmentation provides a two-dimensional memory
- Each segment is a linear sequence of addresses (0 up to some maximum)
- Different segments may have different lengths
- Segment lengths may change during execution
- Different segments can grow and shrink independently
- An address is a pair: (segment number, address within the segment)
96. Multiple Segments in a Process
- Segments are logical entities
- They simplify linking separately compiled components
- They facilitate sharing among multiple processes
[Figure: the compiler's symbol table, source text, constants, parse tree, and call stack each placed in its own segment, free to grow independently.]
97. Comparing Paging and Segmentation
Consideration                                         | Paging | Segmentation
Need the programmer be aware of it?                   | No     | Yes
Number of linear address spaces                       | 1      | Many
Can total address space exceed physical memory?       | Yes    | Yes
Can procedures and data be separately protected?      | No     | Yes
Can fluctuating tables be accommodated?               | No     | Yes
Is sharing of procedures facilitated?                 | No     | Yes
Why was the technique invented? Paging: to get a linear address space larger than physical memory. Segmentation: 1. to break programs and data into logically independent address spaces; 2. to aid sharing and protection.
98. Implementing Pure Segmentation
- Checkerboarding / external fragmentation: as segments come and go, holes develop between them
[Figure: memory snapshots over time showing segments being replaced and holes accumulating, eventually requiring compaction.]
99. Segmentation With Paging
- For large segments, only the working set should be kept in memory: page the segments
- Each program has a segment table
- The segment table is itself a segment
- If (part of) a segment is in memory, its page table must be in memory
- Address = (segment, virtual page, offset)
- The segment number selects the page table
- The page table maps the virtual page to a page frame
- Frame address + offset = word address
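The three-step lookup above can be sketched as follows (the segment contents and sizes are hypothetical; a real system would also page the segment and page tables themselves, and fault on missing entries):

```python
PAGE_SIZE = 4096

# Hypothetical process state: each segment has its own page table
# mapping virtual page -> page frame.
segments = {
    0: {0: 7, 1: 2},   # e.g. a symbol table segment
    1: {0: 5},         # e.g. a source text segment
}

def seg_translate(seg: int, vpage: int, offset: int) -> int:
    """Address = (segment, virtual page, offset): the segment selects a
    page table, the page table yields a frame, frame + offset = word."""
    page_table = segments[seg]      # itself paged in a real system
    frame = page_table[vpage]       # a missing entry would page-fault
    return frame * PAGE_SIZE + offset

print(seg_translate(0, 1, 12))  # frame 2 -> 2*4096 + 12 = 8204
```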
100. The Intel Pentium Case
- Supports very large segments
- LDT (Local Descriptor Table)
- Each program has its own LDT
- Describes segments local to that program
- GDT (Global Descriptor Table)
- One GDT in the whole system
- Describes system segments, including those of the OS
101. Converting a (Selector, Offset) Pair to an Address
[Figure: the selector picks a descriptor (base address, limit, other fields) from the LDT or GDT; after a limit check, the base address is added to the offset to form the 32-bit linear address.]
102. Summary
- Simplest case: no swapping or paging
- Swapping
- Virtual memory and page replacement
- Aging and WSClock
- Modeling paging systems
- Implementation issues
- Segmentation