Chapter 9: Virtual Memory - PowerPoint PPT Presentation

About This Presentation
Title:

Chapter 9: Virtual Memory

Description:

Based on originals provided by Silberschatz and Galvin for the Addison-Wesley text 'Operating System Concepts'


Transcript and Presenter's Notes

Title: Chapter 9: Virtual Memory


1
Chapter 9: Virtual Memory
  • As each process requests memory, the memory
    manager must satisfy or deny the request
  • If free memory is not available, the request is
    denied
  • If the physical memory of the system is not
    sufficient, overlays must be used so that a
    process can reuse its allocated memory
  • Physical memory thus limits the number and size of
    ready processes
  • How can we more efficiently use our limited
    physical memory?
  • Virtual address: separation of a user's logical
    memory address from the physical memory address
  • Virtual memory: separation of user logical
    memory from physical memory
  • We can extend the concept of swapping -
    instead of swapping the entire memory space of a
    process in or out, we swap part of the memory
    space in or out on demand
  • Logical address space can therefore be much
    larger than physical address space

2
Demand Paging
  • Only part of the program needs to be in memory
    for execution.
  • Need to allow pages to be swapped in and out
  • Parts of a program (rare features, error handling
    routines, large static data structures, etc.) may
    never be needed in a particular run
  • Virtual memory can be implemented via
  • Demand paging (swapper replaced with pager)
  • Demand segmentation
  • Bring a page into memory (via pager) only when it
    is needed
  • Less I/O needed (initially)
  • Less memory needed
  • Faster response
  • More users/processes
  • When memory is accessed, one of the following
    occurs (sketched in C below)
  • Page is memory resident → satisfy the reference
  • Page is not memory resident → page fault; bring the
    page into memory (I/O)
  • Page is invalid → abort

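A minimal sketch of this three-way check, in C; the names (classify_reference, the outcome enum) are made up for illustration and are not from the slides:

  /* Illustrative only: the three possible outcomes of a memory reference
     under demand paging. */
  enum outcome { RESIDENT, PAGE_FAULT, INVALID };

  enum outcome classify_reference(int valid, int in_memory) {
      if (!valid)     return INVALID;      /* invalid reference -> abort        */
      if (!in_memory) return PAGE_FAULT;   /* bring the page in from disk (I/O) */
      return RESIDENT;                     /* satisfy the reference directly    */
  }
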
3
Valid-Invalid Bit
  • With each page table entry a valid-invalid bit is
    associated (1 → in memory, 0 → not in memory)
  • Initially the valid-invalid bit is set to 0 on all
    entries
  • Example of a page table snapshot
  • During address translation, if the valid-invalid bit
    in the page table entry is 0 → page fault (see the
    sketch below)
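A minimal sketch, in C, of a page-table entry carrying the valid-invalid bit and the check performed during translation; the field and function names (pte_t, translate, raise_page_fault) are assumptions for this example:

  #include <stdint.h>

  typedef struct {
      uint32_t frame : 20;   /* physical frame number              */
      uint32_t valid : 1;    /* 1 -> in memory, 0 -> not in memory */
  } pte_t;

  void raise_page_fault(uint32_t page);   /* hypothetical trap to the OS */

  /* Return the frame number; a cleared valid bit triggers a page fault. */
  uint32_t translate(const pte_t *page_table, uint32_t page) {
      if (!page_table[page].valid)
          raise_page_fault(page);
      return page_table[page].frame;
  }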

4
Page Fault
  • The first reference to a non-resident page will
    trap to the OS → page fault
  • OS looks at another table to decide whether the
    reference is invalid or the page is just not
    currently in memory
  • Processing a page fault
  • If invalid, abort the process
  • Find (or make) an empty frame
  • Swap the page into the frame (I/O)
  • Reset tables; set the valid bit to 1
  • Restart the instruction
  • Systems that start executing a process with no
    frames initially in memory use the pure demand
    paging scheme
  • The hardware to support demand paging is the same
    as the hardware for paged memory (Page table) and
    swapping (backing store)

5
What happens if there is no free frame?
  • When a page fault occurs, the OS must find a frame
    for the new page
  • If no frames are free, a victim must be selected
    for eviction
  • Page replacement: find some page in memory that is
    not currently in use and page (swap) it out
  • This requires a decision, thus we need an
    algorithm
  • Performance goal: minimize the number of page faults
    (effectively the same as increasing the hit
    ratio)
  • If the victim page has been modified (dirty), it
    must first be written back to disk (Is RAM a cache?)
  • Each page must be brought into memory at least
    once
  • try to avoid paging the same page several times.
  • Use the modify (dirty) bit to reduce the overhead of
    page transfers: only modified pages are written back
    to disk (see the sketch below)
  • Page replacement completes the separation between
    logical memory and physical memory: a large
    virtual memory can be provided on a smaller
    physical memory
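A minimal sketch of how the dirty bit cuts write-back traffic during eviction; the helper names (is_dirty, write_to_backing_store, mark_frame_free) are hypothetical:

  int  is_dirty(int frame);                /* hypothetical helpers */
  void write_to_backing_store(int frame);
  void mark_frame_free(int frame);

  void evict(int frame) {
      if (is_dirty(frame))
          write_to_backing_store(frame);   /* modified page: must be saved first */
      mark_frame_free(frame);              /* clean page: can simply be reused   */
  }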

6
Performance of Demand Paging
  • Page fault rate p: 0 ≤ p ≤ 1.0
  • if p = 0, no page faults
  • if p = 1, every reference is a fault
  • Effective Access Time (EAT)
  • EAT = (1 - p) x memory access time
          + p x ( page fault overhead
                  + swap page out (if dirty)
                  + swap page in
                  + restart overhead )
  • Goal: reduce the page fault rate to decrease EAT
    (a worked example follows below)
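A worked example of the EAT formula, sketched in C; the 200 ns memory access time, 8 ms fault-service time, and fault rate p = 1/1000 are assumed illustration values, not figures from the slides:

  #include <stdio.h>

  int main(void) {
      double mem_access_ns    = 200.0;      /* assumed memory access time        */
      double fault_service_ns = 8.0e6;      /* assumed 8 ms total fault overhead */
      double p                = 0.001;      /* assumed page fault rate           */

      double eat = (1.0 - p) * mem_access_ns + p * fault_service_ns;
      printf("EAT = %.1f ns (about a %.0fx slowdown)\n",
             eat, eat / mem_access_ns);     /* ~8199.8 ns, roughly 41x slower    */
      return 0;
  }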

7
Optimal Page-Replacement Algorithm
  • Page-replacement algorithms are evaluated by
    running them on a particular string of memory
    references (a reference string) and computing the
    number of page faults on that string
  • Optimal Algorithm
  • remove the page which will be referenced farthest
    in the future
  • not used for longest period of time
  • requires an oracle
  • use as a metric to judge how well other
    algorithms perform
  • Example: consider the following reference string on
    a system with 4 frames (victim selection is
    sketched below)
  • 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
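A minimal sketch in C of the oracle step: among the resident pages, pick the one whose next reference lies farthest in the future. Function and parameter names are made up for this example:

  /* frames: resident page numbers; refs: full reference string; now: index of
     the faulting reference. Returns the index of the frame to evict. */
  static int optimal_victim(const int *frames, int nframes,
                            const int *refs, int nrefs, int now) {
      int victim = 0, farthest = -1;
      for (int f = 0; f < nframes; f++) {
          int next = nrefs;                       /* never used again: ideal victim */
          for (int t = now + 1; t < nrefs; t++)
              if (refs[t] == frames[f]) { next = t; break; }
          if (next > farthest) { farthest = next; victim = f; }
      }
      return victim;
  }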

8
First-In-First-Out (FIFO) Algorithm
  • Keep a list, remove oldest page.
  • Example reference string: 1, 2,
    3, 4, 1, 2, 5, 1, 2, 3, 4, 5
  • Note: more frames does not imply fewer page
    faults!
  • FIFO replacement suffers from Belady's anomaly
    (see the simulation sketch below)

(Figure: FIFO page-replacement traces for the reference string with 4 frames and with 3 frames)
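A minimal FIFO simulation in C (illustrative, not from the slides); running it on the reference string above exhibits Belady's anomaly, with the 4-frame configuration faulting more often than the 3-frame one:

  #include <stdio.h>

  static int fifo_faults(const int *refs, int n, int nframes) {
      int frames[16], head = 0, used = 0, faults = 0;
      for (int i = 0; i < n; i++) {
          int hit = 0;
          for (int f = 0; f < used; f++)
              if (frames[f] == refs[i]) { hit = 1; break; }
          if (hit) continue;
          faults++;
          if (used < nframes)
              frames[used++] = refs[i];                      /* fill a free frame */
          else {
              frames[head] = refs[i];                        /* evict the oldest  */
              head = (head + 1) % nframes;
          }
      }
      return faults;
  }

  int main(void) {
      int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
      int n = (int)(sizeof refs / sizeof refs[0]);
      printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));   /* 9 faults  */
      printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));   /* 10 faults */
      return 0;
  }
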
9
Least Recently Used (LRU) Algorithm
  • The victim is the page not used for the longest
    time
  • Example reference string: 1, 2, 3, 4, 1, 2,
    5, 1, 2, 3, 4, 5
  • Problem How do we implement this?
  • Updating a list on every memory reference is
    expensive
  • Solutions
  • Counter implementation
  • Add a hardware counter; when the page is
    referenced, copy the counter value into the page
    table entry
  • When a victim is needed, look at the counters and
    evict the page with the smallest (oldest) value
    (see the sketch below)
  • Software/stack implementation: keep a stack of
    page numbers in doubly linked form, updated on
    every page table lookup
  • Page referenced: move it to the top (requires
    changing 6 pointers)
  • No search needed for replacement
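A minimal sketch of the counter implementation in C; the logical clock stands in for a real hardware counter, and the names (lru_entry, last_used, lru_victim) are assumptions for this example:

  typedef struct { int page; unsigned long last_used; int valid; } lru_entry;

  static unsigned long lru_clock = 0;          /* stands in for a hardware counter */

  void on_reference(lru_entry *e) {
      e->last_used = ++lru_clock;              /* copy counter into the entry      */
  }

  int lru_victim(const lru_entry *table, int n) {
      int victim = -1;
      unsigned long oldest = (unsigned long)-1;
      for (int i = 0; i < n; i++)
          if (table[i].valid && table[i].last_used < oldest) {
              oldest = table[i].last_used;     /* smallest timestamp = LRU page    */
              victim = i;
          }
      return victim;
  }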

10
LRU Approximation Algorithms
  • Reference bit
  • With each page associate a bit, initially cleared
    to 0
  • When the page is referenced, the bit is set to 1
  • Select as a victim a page whose reference bit is 0
  • Clear the bits periodically
  • FIFO - second chance algorithm
  • If the candidate victim has been referenced lately
    (R bit set), clear the bit and put the page on the
    tail of the FIFO; clear reference bits periodically
  • Slightly better, but still suffers from Belady's
    anomaly
  • LRU w/aging (see the sketch below)
  • Requires an n-bit register for each entry
  • The MSB is the reference bit
  • Right shift periodically (100 ms or so)
  • Frames with the largest value are (approximately)
    the most recently referenced; evict the frame with
    the smallest value
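A minimal sketch of the aging scheme in C, assuming 8-bit registers and 64 frames (both arbitrary choices for this example):

  #include <stdint.h>

  #define NFRAMES 64

  uint8_t age[NFRAMES];       /* n-bit aging register per frame (n = 8 here) */
  uint8_t ref_bit[NFRAMES];   /* reference bits, cleared every period        */

  void aging_tick(void) {     /* run periodically, e.g. every 100 ms         */
      for (int i = 0; i < NFRAMES; i++) {
          age[i] = (uint8_t)((age[i] >> 1) | (ref_bit[i] << 7));  /* MSB = R  */
          ref_bit[i] = 0;
      }
  }

  int aging_victim(void) {
      int victim = 0;
      for (int i = 1; i < NFRAMES; i++)
          if (age[i] < age[victim])            /* smallest age ~ least recent */
              victim = i;
      return victim;
  }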

11
Not-Recently-Used (NRU) Algorithm
  • AKA the enhanced second chance algorithm, an LRU
    approximation
  • R bit is set when page is referenced (memory
    read)
  • M bit is set when page is modified (memory write)
  • Clear all R bits on each clock interrupt
  • Each page is a member of one of four classes
  • Class 0: R = 0, M = 0
  • Class 1: R = 0, M = 1
  • Class 2: R = 1, M = 0
  • Class 3: R = 1, M = 1
  • Select (randomly) a page in the lowest nonempty
    class (see the sketch below)
  • Easy to implement
  • Ok performance (used in Macintosh OS)
  • Paging daemon: many OSes invoke a paging daemon
    when memory is full to select victims and write
    back their data in preparation for normal eviction
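A minimal sketch in C of the class computation and victim search (names are illustrative, not from the slides):

  /* Class number from the R and M bits: 0 = !R,!M  1 = !R,M  2 = R,!M  3 = R,M */
  static int nru_class(int r_bit, int m_bit) {
      return (r_bit << 1) | m_bit;
  }

  /* Return a frame index from the lowest nonempty class; a real implementation
     would pick randomly within that class rather than taking the first match. */
  int nru_victim(const int *r, const int *m, int nframes) {
      for (int cls = 0; cls < 4; cls++)
          for (int i = 0; i < nframes; i++)
              if (nru_class(r[i], m[i]) == cls)
                  return i;
      return -1;                               /* no frames at all */
  }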

12
Processing a Page Fault
  • Hardware trap to kernel (save PC on stack or
    special register).
  • Enter monitor mode, call assembly routine to save
    registers and call OS.
  • OS discovers page fault and tries to find which
    page is needed (from hardware register or by
    examining the instruction at the saved PC).
  • Check to see if page is valid and allowed access.
    If not, abort process.
  • Otherwise, find a free frame. If necessary,
    select a victim. If victim is dirty, schedule
    I/O, mark frame as busy, and switch context until
    done.
  • Look up disk address for page and request
    transfer to clean frame. Switch context and wait
    until disk interrupt for finished I/O arrives.
  • Update tables and mark frame as normal (not
    busy).
  • Back up faulting instruction to original state
    and reset PC.
  • Schedule process for execution (take off wait
    queue) and return to the assembly routine that
    invoked the OS.
  • Restore registers and return to user mode as if
    no fault had occurred.

13
Allocation of Frames
  • How do we decide how many frames each process
    gets?
  • Working set: the subset of its pages that a
    process requires to be memory resident at a
    given point in its execution
  • A system that uses memory segmentation with
    paging may require a minimum of 3 pages per
    process (text, data, stack)
  • Some processes require far more
  • All processes want their working set to be
    fully resident
  • Major allocation schemes
  • Equal fixed allocation: frames divided evenly
    among processes
  • Proportional fixed allocation: partition weighted
    by process size (see the sketch below)
  • Priority allocation: proportion is based on a
    priority value rather than size
  • Major replacement policies
  • Global replacement: a process selects a replacement
    frame from the set of all frames; one process can
    take a frame from another
  • Local replacement: each process selects from only
    its own set of allocated frames
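A minimal sketch of proportional fixed allocation in C, using the usual rule a_i = (s_i / S) x m, where s_i is the size of process i, S the total size, and m the number of available frames (the formula is standard but not spelled out on the slide):

  /* size[i]: size of process i (e.g. in pages); m: frames available.
     alloc[i] receives the number of frames granted to process i.     */
  void proportional_allocation(const long *size, int nproc, long m, long *alloc) {
      long S = 0;
      for (int i = 0; i < nproc; i++) S += size[i];
      for (int i = 0; i < nproc; i++)
          alloc[i] = (size[i] * m) / S;        /* truncation may leave a few
                                                  frames to hand out separately */
  }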

14
Thrashing
  • If a process does not have enough frames to hold
    its current working set, the page-fault rate is
    very high
  • This leads to
  • low CPU utilization
  • operating system thinks that it needs to increase
    the degree of multiprogramming
  • another process added to the system (Argh!)
  • Thrashing
  • a process is thrashing when it spends more time
    paging than executing
  • w/ local replacement algorithms, a process may
    thrash even though memory is available
  • w/ global replacement algorithms, the entire
    system may thrash
  • Less thrashing in general, but is it fair?

15
Thrashing Diagram
  • Why does paging work? Locality model
  • Process migrates from one locality to another.
  • Localities may overlap.
  • Why does thrashing occur? Σ (size of localities) >
    total memory size
  • What should we do?
  • suspend one or more processes!

16
Page-Fault Frequency Scheme
  • Establish acceptable page-fault frequency (PFF)
  • If a process's PFF falls below the lower bound,
    remove a frame from that process
  • If a process's PFF rises above the upper bound,
    allocate another frame to that process (see the
    sketch below)
  • If no free frames are available, suspend process
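A minimal sketch of the PFF control loop in C; the bounds and every helper name (measure_pff, grant_free_frame, release_frame, suspend_process) are assumptions for this example, not an API from the slides:

  struct process;                              /* details left abstract          */
  double measure_pff(struct process *p);       /* recent faults per second       */
  int    grant_free_frame(struct process *p);  /* returns 0 if no frame is free  */
  void   release_frame(struct process *p);
  void   suspend_process(struct process *p);

  #define PFF_LOW   2.0    /* below this rate, the process has frames to spare  */
  #define PFF_HIGH 10.0    /* above this rate, the process needs another frame  */

  void pff_adjust(struct process *p) {
      double pff = measure_pff(p);
      if (pff > PFF_HIGH) {
          if (!grant_free_frame(p))
              suspend_process(p);              /* no free frames: swap it out    */
      } else if (pff < PFF_LOW) {
          release_frame(p);                    /* return one frame to the pool   */
      }
  }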

17
Program Structure
  • How should we arrange memory references to large
    arrays?
  • Is the array stored in row-major or column-major
    order?
  • Example
  • Array A[1024, 1024] of type integer
  • Page size = 1K
  • Each row is stored in one page
  • System has one frame
  • Program 1: for i := 1 to 1024 do
                 for j := 1 to 1024 do
                   A[i, j] := 0
    → 1024 page faults (row-order access)
  • Program 2: for j := 1 to 1024 do
                 for i := 1 to 1024 do
                   A[i, j] := 0
    → 1024 x 1024 page faults (column-order access)

18
Demand Segmentation
  • Used when insufficient hardware to implement
    demand paging
  • OS/2 allocates memory in segments, which it keeps
    track of through segment descriptors
  • Segment descriptor contains a valid bit to
    indicate whether the segment is currently in
    memory.
  • If the segment is in main memory, access continues
  • If not in memory, a segment fault occurs
  • Hybrid scheme: segmentation with demand paging