Transcript and Presenter's Notes

Title: Background


1
Background
  • Virtual memory: separation of user logical
    memory from physical memory.
  • Only part of the program needs to be in memory
    for execution.
  • Logical address space can therefore be much
    larger than physical address space.
  • Allows address spaces to be shared by several
    processes.
  • Allows for more efficient process creation.
  • Virtual memory can be implemented via:
    • Demand paging
    • Demand segmentation

2
Virtual Memory That is Larger Than Physical Memory
3
Demand Paging
  • Bring a page into memory only when it is needed.
  • Less I/O needed
  • Less memory needed
  • Faster response
  • More users
  • Page is needed ⇒ reference to it
    • invalid reference ⇒ abort
    • not-in-memory ⇒ bring to memory
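The validity check above can be sketched in Python. This is an illustrative model, not a real OS code path; `PageTableEntry` and the return values are hypothetical names for this example.

```python
class PageTableEntry:
    """Hypothetical page-table entry for this sketch."""
    def __init__(self, frame=None, valid=False):
        self.frame = frame    # physical frame number, if resident
        self.valid = valid    # valid/invalid bit: is the page in memory?

def reference(page_table, page):
    """Check a reference: abort if invalid, fault if not in memory."""
    if page not in page_table:
        raise MemoryError("invalid reference -> abort")
    entry = page_table[page]
    if not entry.valid:
        return "PAGE FAULT"   # OS would bring the page in, then restart
    return entry.frame

pt = {0: PageTableEntry(frame=3, valid=True),
      1: PageTableEntry()}                 # page 1 is still on disk
```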

4
Transfer of a Paged Memory to Contiguous Disk
Space
5
Page Table When Some Pages Are Not in Main Memory
6
Page Fault
  • If there is ever a reference to a page, the
    first reference will trap to the OS ⇒ page
    fault.
  • OS decides:
    • Invalid reference ⇒ abort.
    • Just not in memory.
  • Get an empty frame.

7
Page Fault
  • Get an empty frame.
  • Swap the page into the frame.
  • Reset tables; set validation bit = 1.
  • Restart the instruction.

8
Steps in Handling a Page Fault
9
What happens if there is no free frame?
  • Page replacement: find some page in memory that
    is not really in use, and swap it out.
    • Need an algorithm to choose the victim.
    • Performance: want an algorithm that results
      in the minimum number of page faults.
  • The same page may be brought into memory
    several times.

10
Page Replacement
  • Use the modify (dirty) bit to reduce the
    overhead of page transfers: only modified pages
    are written to disk.
  • Page replacement completes the separation
    between logical memory and physical memory: a
    large virtual memory can be provided on a
    smaller physical memory.

11
Basic Page Replacement
  1. Find the location of the desired page on disk.
  2. Find a free frame:
     - If there is a free frame, use it.
     - If there is no free frame, use a page
       replacement algorithm to select a victim
       frame.
  3. Read the desired page into the (newly) free
    frame. Update the page and frame tables.
  4. Restart the process.
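The four steps above can be sketched as a fault handler. The hooks `pick_victim` and `disk_read` are hypothetical stand-ins for the replacement algorithm and the disk driver:

```python
def handle_page_fault(page, free_frames, frame_table, pick_victim, disk_read):
    # Step 2: find a free frame, or evict a victim chosen by the
    # replacement algorithm.
    if free_frames:
        frame = free_frames.pop()
    else:
        victim = pick_victim(frame_table)
        frame = frame_table.pop(victim)   # victim written back first if dirty
    # Steps 1 and 3: locate the desired page on disk and read it into
    # the (newly) free frame, then update the page and frame tables.
    disk_read(page, frame)
    frame_table[page] = frame
    # Step 4: the process is restarted at the faulting instruction.
    return frame

table = {"A": 0, "B": 1}
f = handle_page_fault("C", [], table,
                      pick_victim=lambda t: next(iter(t)),  # evict first entry
                      disk_read=lambda p, fr: None)         # no-op stand-in
```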

12
Page Replacement Algorithms
  • Want lowest page-fault rate.
  • Evaluate algorithm by running it on a particular
    string of memory references (reference string)
    and computing the number of page faults on that
    string.
  • In examples, the reference string is
    7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2,
    0, 1, 7, 0, 1.

13
Optimal Algorithm
  • Replace page that will not be used for longest
    period of time.
  • Cannot be implemented in practice (it requires
    future knowledge of the reference string); used
    as a benchmark for measuring how well your
    algorithm performs.
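Because we can look ahead in a recorded reference string, OPT is easy to simulate. A minimal sketch, run on the deck's reference string with 3 frames:

```python
def optimal_faults(refs, num_frames):
    """Count page faults for OPT: evict the page whose next use is farthest."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            future = refs[i + 1:]
            # Next-use distance of each resident page (infinite if never used again).
            def next_use(p):
                return future.index(p) if p in future else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

ref_string = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(ref_string, 3))   # 9 faults with 3 frames
```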

14
Optimal Page Replacement
19
First-In-First-Out
  • Throw out the page that has been in memory the
    longest.
  • Good for a set of pages used only once, e.g.,
    during initialization.
  • Bad for a heavily used variable: the page may
    still be in active use when evicted.
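A minimal FIFO simulation, again using the deck's reference string with 3 frames:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults for FIFO: evict the page resident the longest."""
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(resident) >= num_frames:
            resident.discard(order.popleft())   # oldest page out
        resident.add(page)
        order.append(page)
    return faults

ref_string = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref_string, 3))   # 15 faults with 3 frames
```

Compare with OPT's 9 faults on the same string: FIFO pays for ignoring usage.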

20
FIFO Page Replacement
21
Problem with FIFO
  • FIFO considers only the time when a page was
    loaded into the system, not whether it has
    been used.
  • A set of second-chance algorithms use the
    reference bit in the page table entry to
    determine if a page has been recently used.
  • Example: the Clock page replacement algorithm.

22
Second-Chance (clock) Page-Replacement Algorithm
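The clock sweep on this slide can be sketched as follows; names are illustrative, and a real implementation keeps the hand position between faults:

```python
def clock_select(frames, ref_bits, hand=0):
    """Second-chance victim selection.

    `frames` is the circular list of resident pages; `ref_bits` maps
    page -> reference (R) bit. Returns (victim_index, new_hand).
    """
    while True:
        page = frames[hand]
        if ref_bits.get(page, 0):
            ref_bits[page] = 0                 # clear R: second chance
            hand = (hand + 1) % len(frames)
        else:
            return hand, (hand + 1) % len(frames)

frames = [10, 20, 30]
bits = {10: 1, 20: 0, 30: 1}
victim, hand = clock_select(frames, bits)      # 10 is spared, 20 is evicted
```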
23
Least Recently Used (LRU) Algorithm
  • Based on the principle of Locality of
    Reference: a page that has been used in the
    near past is likely to be used in the near
    future.
  • LRU: determine the least recently used page in
    memory and evict it.
  • Can be done, but exact LRU is very expensive.
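In a simulation (unlike in hardware), exact LRU is cheap to model with an ordered map. A sketch on the deck's reference string:

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Count page faults for LRU: evict the least recently used page."""
    resident, faults = OrderedDict(), 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)       # touched: now most recently used
            continue
        faults += 1
        if len(resident) >= num_frames:
            resident.popitem(last=False)     # least recently used out
        resident[page] = True
    return faults

ref_string = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref_string, 3))   # 12 faults with 3 frames
```

12 faults sits between OPT (9) and FIFO (15) on this string, as expected.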

24
LRU Page Replacement
25
Software Approximations to LRU
  • Not Frequently Used (NFU):
    • Associate a software counter with each page.
    • On a timer interrupt, the OS scans all pages
      in memory.
    • For each page, the R (Referenced) bit is
      added to the counter.
    • The page with the lowest count is evicted.

26
Problem with NFU
  • It never forgets.
  • A page referenced often in an early phase of
    the program may escape eviction long after it
    was last used.
  • Would like an algorithm that ages the count,
    so that the most recent references are the
    most important.

27
Aging Algorithm for Simulating LRU
  • On each timer interrupt, scan the pages to get
    the R bit.
  • Shift each counter right one bit.
  • Place the R bit in the leftmost bit of the
    counter.
  • Evict the page that has the lowest count.
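The shift-and-insert step above can be sketched directly; the 8-bit counter width is the usual textbook choice:

```python
COUNTER_BITS = 8   # width of each page's aging counter

def aging_tick(counters, referenced):
    """One timer interrupt: shift each counter right one bit, then place
    the page's R bit in the leftmost bit. The page with the lowest count
    is the best eviction candidate."""
    for page in counters:
        counters[page] >>= 1
        if page in referenced:
            counters[page] |= 1 << (COUNTER_BITS - 1)

counters = {0: 0, 1: 0, 2: 0}
aging_tick(counters, referenced={0, 2})    # pages 0 and 2 used this window
aging_tick(counters, referenced={0})       # only page 0 used in the next
victim = min(counters, key=counters.get)   # page 1: never referenced
```

Because the counter is finite, references older than `COUNTER_BITS` windows are forgotten, which is exactly the aging behavior NFU lacks.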

28
Example
Assume all counters are currently 0. Consider the
case when pages 0, 2, 4, and 5 are referenced
since the last interrupt.
29
Simulating LRU in Software
  • The aging algorithm simulates LRU in software

  • Successive slides step through further
    windows: assume references 0, 1, and 4 in the
    next window, then 0, 1, 3, and 5 after that.
34
Allocation of Frames
  • Each process needs a minimum number of pages.
  • Two major allocation schemes:
    • Fixed allocation.
    • Variable allocation.
  • Replacement scope can be:
    • Local.
    • Global.

35
Fixed Allocation, Local Scope
  • Number of pages per process is fixed based on
    some criteria.
  • Can use equal allocation or proportional
    allocation.
  • Equal allocation: e.g., if 100 frames and 5
    processes, give each process 20 frames.
  • What are the drawbacks of equal allocation?

36
Fixed Allocation, Local Scope
  • Proportional allocation:
  • Allocate frames in proportion to the size of
    the process.
  • Problem?

37
Fixed Allocation, Local Scope
  • Equal allocation: e.g., if 100 frames and 5
    processes, give each process 20 frames.
  • What are the drawbacks of this approach?
  • The allocation may be too small, causing
    significant paging.
  • The allocation may be too large, reducing the
    number of processes in memory and wasting
    memory that could be used by other processes.

38
Fixed Allocation, Local Scope
  • Proportional allocation:
  • Allocate frames in proportion to the size of
    the process.
  • Problem?
  • A process's needs will vary over its
    execution, leading to the same problems as
    equal allocation.
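Proportional allocation computes a_i = (s_i / S) × m for each process. A sketch using the common textbook figures (processes of 10 and 127 pages sharing 62 frames):

```python
def proportional_allocation(sizes, total_frames):
    """a_i = (s_i / S) * m: frames in proportion to process size."""
    total_size = sum(sizes)
    # Floor division may leave a few frames over; a real allocator would
    # hand the remainder out one by one or keep it in a free pool.
    return [s * total_frames // total_size for s in sizes]

print(proportional_allocation([10, 127], 62))   # [4, 57]
```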

39
Problem with Fixed Allocation Schemes
  • All processes treated the same. No priorities.
  • Can use process priority rather than size to
    allocate frames.

40
Variable Allocation, Global Replacement
  • When a page fault occurs, a new page frame is
    allocated to the faulting process.
  • Page replacement is based on the previous
    approaches, e.g., LRU, FIFO, etc.
  • No consideration of which process should (or
    can best afford to) lose a page.
  • Can lead to high page-fault rates.

41
Thrashing
  • If a process does not have enough pages, the
    page-fault rate is very high. This leads to:
    • low CPU utilization.
    • the operating system thinking that it needs
      to increase the degree of multiprogramming.
    • another process being added to the system.
  • Thrashing ⇒ a process is busy swapping pages
    in and out.

42
Thrashing
43
Locality In A Memory-Reference Pattern
44
Working-Set Model (Local Scope, Variable
Allocation)
  • Δ ≡ working-set window ≡ a fixed number of
    page references. Example: 10,000 instructions.
  • WSSi (working set of process Pi) = total
    number of pages referenced in the most recent
    Δ (varies in time).
    • if Δ too small, will not encompass the
      entire locality.
    • if Δ too large, will encompass several
      localities.
    • if Δ = ∞, will encompass the entire program.
  • D = Σ WSSi ≡ total demand for frames.
  • if D > m ⇒ thrashing.
  • Policy: if D > m, then suspend one of the
    processes.
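The definition of WSS over a window Δ can be sketched in a few lines (the reference string here is made up for illustration):

```python
def working_set(refs, t, delta):
    """WSS at time t: the set of distinct pages referenced in the most
    recent `delta` references (the working-set window)."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 2, 4, 4, 4]
ws = working_set(refs, t=7, delta=4)   # window is 2, 4, 4, 4
# D = sum of WSS sizes over all processes; if D > m, suspend a process.
```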

45
Working-set model
46
Page-Fault Frequency Scheme
  • Establish an acceptable page-fault rate.
  • If the actual rate is too low, the process
    loses a frame.
  • If the actual rate is too high, the process
    gains a frame.
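The control rule above amounts to a simple feedback loop; the threshold values here are illustrative, not from the slides:

```python
def pff_adjust(frames, fault_rate, low=0.02, high=0.10):
    """Page-fault-frequency control: keep the rate between two bounds."""
    if fault_rate > high:
        return frames + 1              # faulting too often: grant a frame
    if fault_rate < low and frames > 1:
        return frames - 1              # well below target: reclaim a frame
    return frames
```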

47
Other Considerations
  • Prepaging: predicting future page requests and
    bringing pages in before they are referenced.
  • Page size selection affects:
    • fragmentation
    • table size
    • I/O overhead

48
Other Considerations
  • TLB reach: the amount of memory accessible
    from the TLB.
  • TLB reach = (TLB size) × (page size).
  • Ideally, the working set of each process is
    stored in the TLB.
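Worked numbers for the formula, with an illustrative 64-entry TLB and 4 KB pages:

```python
# TLB reach = (number of TLB entries) x (page size).
tlb_entries = 64            # illustrative TLB size
page_size = 4 * 1024        # 4 KB pages
reach = tlb_entries * page_size
print(reach // 1024, "KB")  # 256 KB covered without a TLB miss
```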

49
Increasing the Size of the TLB
  • Increase the page size.
    • This may lead to an increase in
      fragmentation, as not all applications
      require a large page size.
  • Provide multiple page sizes.
    • This allows applications that require larger
      page sizes to use them without an increase
      in fragmentation.

50
Other Considerations
  • I/O interlock: pages must sometimes be locked
    into memory.
  • Consider I/O: pages being used to copy a file
    from a device must be locked against selection
    for eviction by a page replacement algorithm.

51
(No Transcript)
52
Windows NT
  • Uses demand paging with clustering. Clustering
    brings in pages surrounding the faulting page.
  • Processes are assigned working set minimum and
    working set maximum.
  • Working set minimum is the minimum number of
    pages the process is guaranteed to have in memory.

53
Windows NT
  • A process may be assigned pages up to its
    working set maximum.
  • When the amount of free memory in the system
    falls below a threshold, automatic working set
    trimming is performed to restore the amount of
    free memory.
  • Working set trimming removes pages from processes
    that have pages in excess of their working set
    minimum.