Title: Virtual Memory
1. Virtual Memory
2. Outline
3. Background
- Virtual memory: separation of user logical memory from physical memory.
- Only part of the program needs to be in memory for execution. The rest is kept in secondary storage (disk).
- When the program needs to access a page that is not in memory, the OS swaps out a page that is no longer needed and swaps the needed page into memory.
- The success of virtual memory is due to the locality of reference exhibited by most programs.
4. Principle of Locality of Reference
- Memory references within a process tend to be clustered.
- Temporal locality of reference: recently accessed items are likely to be accessed again in the near future.
- Spatial locality of reference: items accessed in the near future are likely to be adjacent to items accessed recently.
5. Locality and Virtual Memory
- Only a small part of the address space of a process (code and data) will be needed in a short period of time.
- It is possible to make intelligent guesses about which pieces will be needed in the near future.
- This suggests that virtual memory can work efficiently.
- Note that even if a guess is wrong, we pay a performance penalty but the computation is still correct.
6. Paged Virtual Memory
7. Paged Virtual Memory
- In the page table, use a valid-invalid bit:
  - valid bit = 1 => page is in memory
  - valid bit = 0 => page is not in memory
- Each time a memory reference is made:
  - look up the page table
  - if valid bit = 0 => page fault
  - page fault => trap to the OS to service the page fault
8. Paged Virtual Memory
- Page fault handling:
  - OS checks the reference
  - if the referenced page is not in memory:
    - find an empty frame
    - bring the referenced page from secondary storage into the frame
    - update the page table: record the new frame and set the valid bit = 1
    - return to the interrupted process
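The steps above can be sketched in a few lines of Python. This is an illustrative model, not from the slides: the class and method names are invented, and "reading from disk" is only a comment.

```python
# Minimal sketch of demand paging: a valid bit per page, and a page-fault
# handler that finds a free frame and updates the page table.

class DemandPager:
    def __init__(self, num_pages, num_frames):
        self.valid = [False] * num_pages      # valid-invalid bit per page
        self.frame_of = [None] * num_pages    # page table: page -> frame
        self.free_frames = list(range(num_frames))
        self.faults = 0

    def access(self, page):
        """Reference a page; service a page fault if the valid bit is 0."""
        if not self.valid[page]:              # valid bit = 0 => page fault
            self.faults += 1
            frame = self.free_frames.pop()    # find an empty frame
            # (here the OS would read the page from disk into `frame`)
            self.frame_of[page] = frame       # update the page table
            self.valid[page] = True           # set valid bit = 1
        return self.frame_of[page]

pager = DemandPager(num_pages=8, num_frames=4)
for p in [0, 1, 0, 2, 1]:
    pager.access(p)
print(pager.faults)  # 3: only the first touch of pages 0, 1 and 2 faults
```

What happens when `free_frames` is empty is exactly the page-replacement question the later slides address.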
9. Page Size Issue
- A large page size is good because it means a smaller page table. Note that when the page table is very large, the system may have to store part of it in virtual memory.
- A large page size is good because it means fewer pages per process, which increases the TLB hit ratio.
- A large page size is good because disks are designed to transfer large blocks of data.
- A small page size is good because it reduces internal fragmentation.
10. Page Size Issue (cont.)
- With a small page size, each page matches the code that is actually used by the process => a better match to the process's locality of reference => a lower fault rate.
- An increased page size causes each page to contain more code and data that are not referenced => page faults rise.
- Page size is generally 1K to 4K.
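The internal-fragmentation trade-off is easy to quantify. A small illustrative calculation (the process size below is an arbitrary assumed number, not from the slides): on average, roughly half of a process's last page is wasted.

```python
# Pages needed and internal fragmentation (unused space in the last
# page) for one process under two of the slide's page sizes.

import math

def paging_cost(process_bytes, page_bytes):
    pages = math.ceil(process_bytes / page_bytes)
    waste = pages * page_bytes - process_bytes   # internal fragmentation
    return pages, waste

for page_size in (1024, 4096):                   # 1K and 4K pages
    pages, waste = paging_cost(72_700, page_size)
    print(page_size, pages, waste)
# 1024 -> 71 pages, 4 bytes wasted; 4096 -> 18 pages, 1028 bytes wasted
```

Fewer, larger pages mean a smaller page table but more wasted space in the last page.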
11. What happens if there is no free frame?
12. Page Replacement
13. Performance of Demand Paging
- Page-fault rate p (0 <= p <= 1):
  - p = 0 => no page faults
  - p = 1 => every memory reference results in a page fault
- Effective access time = (1 - p) × memory access time + p × time to service a page fault
- Example: Tmem = 100 nsec, Tpg_fault = 25 msec
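Plugging the slide's numbers into the formula shows how dominant the fault-service term is. A quick check (the fault rate 0.001 is an assumed illustration):

```python
# Effective access time with the slide's numbers:
# memory access = 100 ns, page-fault service = 25 ms = 25,000,000 ns.

def effective_access_time(p, t_mem_ns=100, t_fault_ns=25_000_000):
    """(1 - p) * memory access time + p * page-fault service time."""
    return (1 - p) * t_mem_ns + p * t_fault_ns

print(effective_access_time(0.0))    # 100.0 ns: no faults
print(effective_access_time(0.001))  # 25099.9 ns: one fault per 1000 refs
```

Even one fault per thousand references slows memory access by a factor of about 250, which is why keeping the fault rate low matters so much.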
14. Page Replacement Algorithms
15. First-In-First-Out (FIFO) Algorithm
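The slide's worked figure was not preserved; as a stand-in, here is a minimal FIFO simulation on the classic reference string, which also exhibits Belady's anomaly (more frames, more faults).

```python
# FIFO page replacement: evict the page that has been resident longest.

from collections import deque

def fifo_faults(references, num_frames):
    frames = deque()                   # oldest resident page at the left
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()       # evict the oldest resident page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: Belady's anomaly
```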
16. Optimal Algorithm
17. Least Recently Used (LRU) Algorithm
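Again the slide's figure is missing; a minimal LRU sketch using an ordered dict as the recency stack (an implementation choice on my part, not from the slides):

```python
# LRU page replacement: evict the page whose most recent use is
# furthest in the past.

from collections import OrderedDict

def lru_faults(references, num_frames):
    frames = OrderedDict()             # least recently used page at the front
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)       # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False) # evict the least recently used
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 faults
```

True LRU needs exactly this kind of bookkeeping on every reference, which is too expensive in hardware; hence the approximation algorithms on the next slides.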
18. LRU Approximation Algorithms
- Reference bit:
  - with each page associate a bit, initially 0
  - when a page is referenced, set its bit to 1
  - replace a page whose reference bit is 0; if there is more than one, pick one randomly
  - at regular intervals, clear all reference bits and start over
19. LRU Approximation Algorithms
- Enhanced reference bit:
  - instead of clearing the reference bits periodically, save them to gain more information
  - keep a 1-byte reference byte with each page
  - the most significant bit of the byte is the reference bit
  - at regular intervals, shift all bits in the reference byte one position to the right; a 0 goes into the MSB
  - reference byte 0000 0000 => the page has not been referenced in the last 8 time periods
  - reference byte 1111 1111 => the page has been referenced in each of the past 8 periods
  - given two pages with bytes 0101 1111 and 0110 0011, the page with the smaller byte was accessed less recently
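The shift-register behavior above can be sketched directly (class and method names are invented for illustration):

```python
# Reference-byte (aging) scheme: a reference sets the MSB; every timer
# interval the byte shifts right, with 0 entering the MSB.

class AgingPage:
    def __init__(self):
        self.ref_byte = 0b0000_0000

    def reference(self):
        self.ref_byte |= 0b1000_0000   # set the reference (MSB) bit

    def tick(self):
        self.ref_byte >>= 1            # shift right; 0 enters the MSB

a, b = AgingPage(), AgingPage()
for _ in range(3):
    a.reference()                      # a is referenced every period
    a.tick()
    b.tick()                           # b is idle for those periods
b.reference()                          # b is referenced only just now

print(f"{a.ref_byte:08b}")  # 01110000: referenced in each recent period
print(f"{b.ref_byte:08b}")  # 10000000: referenced in this period only
```

Interpreted as unsigned integers, `a`'s byte is smaller, so under the slide's rule `a` was accessed less recently overall and would be the replacement victim, even though `b` sat idle longer before its single reference.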
20. LRU Approximation Algorithms
- Second chance:
  - based on the reference bit and FIFO
  - use the FIFO algorithm
  - select the page at the head of the FIFO queue
  - if its reference bit is 1, give it a second chance:
    - clear its reference bit
    - send it to the back of the queue
  - can be implemented using a circular queue (the clock algorithm)
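The victim-selection loop above can be sketched as (queue contents and names are an assumed example):

```python
# Second-chance victim selection: FIFO order, but a page whose
# reference bit is 1 gets the bit cleared and moves to the back of the
# queue instead of being evicted.

from collections import deque

def second_chance_victim(queue):
    """queue holds (page, ref_bit) pairs, head at the left; mutates it."""
    while True:
        page, ref_bit = queue.popleft()
        if ref_bit:
            queue.append((page, 0))    # second chance: clear bit, requeue
        else:
            return page                # evict this page

q = deque([("A", 1), ("B", 0), ("C", 1)])
print(second_chance_victim(q))  # "B": A is requeued with its bit cleared
```

If every resident page has its bit set, the loop clears them all and comes back to the original head, degenerating to plain FIFO.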
21. Counting Algorithms
22. Local vs. Global Replacement
- Local replacement: each process selects a victim only from its own set of allocated frames.
- Global replacement: a process selects a replacement frame from the set of all frames, so a process can take a frame from another process.
23. Dynamic Page Replacement
- The amount of physical memory (number of memory frames) needed by a process varies as the process executes.
- How much memory should be allocated?
  - The fault rate should be kept "tolerable".
  - It will change according to the phase of the process.
  - Ideally, the allocation should be set based on the locality of reference of the process.
24. Thrashing
- If a process does not have "enough" pages, its page-fault rate is very high. This leads to:
  - low CPU utilization
  - the operating system thinking it needs to increase the degree of multiprogramming
  - another process being brought into the system
  - which leads to even more page faults
- Thrashing: a process spends most of its time swapping pages in and out.
25. Dynamic Replacement
- Contemporary models use the working-set replacement strategy:
  - dynamic and global
- Intuitively, the working set is the set of pages in the process's locality:
  - somewhat imprecise
  - varies with time as the process migrates from one locality to another
26. Working Set Replacement
- Δ = working-set window: a fixed number of page references.
- Working set: the set of distinct pages in the most recent Δ page references.
- If a page is in active use, it will be in the working set; if not, it will drop out of the working set.
- => the working set is an approximation of the program's locality.
27. Working Set Replacement -- Example
- Page references: 2,6,1,5,7,7,7,7,5,1,6,2,3,4,2,2,3,4,4,4
  (t1 falls after the first 10 references, t2 after all 20)
- With Δ = 10:
  - at time t1, working set = {1,2,5,6,7}
  - at time t2, working set = {2,3,4,6}
- Selection of Δ is important:
  - too small: will not encompass an entire locality
  - too large: will include multiple localities
- Performance depends on locality and the choice of Δ.
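The example above can be checked directly: the working set at time t is just the set of distinct pages among the last Δ references.

```python
# Working set at (1-based) time t with window delta: the distinct
# pages among references t - delta + 1 .. t.

def working_set(references, t, delta):
    return set(references[max(0, t - delta):t])

refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 2, 2, 3, 4, 4, 4]
print(working_set(refs, t=10, delta=10))  # {1, 2, 5, 6, 7}
print(working_set(refs, t=20, delta=10))  # {2, 3, 4, 6}
```

Between t1 and t2 the process has migrated to a new locality: pages 5, 7 and 1 have dropped out and 3 and 4 have entered.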
28. Working Set Algorithm -- Example
29. Working Set Algorithm -- Example
30. Working Set Algorithm -- Example
31. Working Set Algorithm -- Example
32. Other Considerations
- Page size selection:
  - fragmentation
  - page table size
  - I/O overhead
  - locality
- Write policy:
  - associate a "dirty" bit with each page; on replacement, if the page is not dirty, do not write it back to disk
33. Other Considerations (cont.)
- Page buffering:
  - keep a pool of free frames
  - on a page fault, select a victim frame to replace
  - before writing the victim to disk, read the new page from disk into one of the frames in the free pool
  - later, write the replaced page to disk and add its frame to the free pool
34. Other Considerations (cont.)
- Pre-fetching strategies:
  - attempt to predict the pages that will be needed in the near future and bring them into memory before they are actually requested
- Gain:
  - can reduce the page-fault rate; especially useful when a process is swapped in and out
- Drawback:
  - a wrong prediction may increase page faults
35. Other Considerations (cont.)