Operating Systems course 2006/2007 Virtual memory - PowerPoint PPT Presentation

1
Operating Systems course 2006/2007: Virtual memory
  • Igor Radovanovic

2
Quiz 1: Memory manager
  • Memory placement
  • Single or multiple processes (multiprogramming)?
  • If multiple processes are in memory
  • Divide memory into equal sizes or different ones?
  • Why do we need memory division?
  • After memory is divided
  • Can we change the sizes of partitions or not?
  • Dynamic adaptation to process size
  • Do processes run in specific partitions or in any?
  • Should the process be in contiguous blocks or not?
  • Is the running process entirely in main memory or not?
  • Which process (or part of it) to replace once memory is full?

3
Quiz 2: Memory management
  • Why do we need primary memory?
  • Why don't we use secondary memory only?
  • Do you think that the next breakthrough in computer systems design would be to make processors that can directly fetch instructions and data from secondary memory?
  • If YES, think again…
  • Since main memory is becoming huge and low-cost, will there be a need for memory management in the future?
  • Cache

4
(OS) MM requirements
  • Multi-programming
  • Non-contiguous storage
  • (parts of) programs stored in main memory ONLY
    when used by CPU
  • No memory waste
  • No waiting for MM by CPU
  • No added complexity
  • No additional hardware required
  • Unlimited memory size

5
Reminder: Memory manager strategies
  • Fetch strategies
  • Demand fetch strategy
  • Anticipatory fetch strategy
  • Static placement strategies
  • FIFO
  • Best-fit
  • First-fit
  • Worst-fit
  • Replacement strategies
  • LRU, LFU, other

6
Memory manager implementation
  • How is the Memory manager implemented?
  • Software and special-purpose hardware

7
Improved Memory management
  • Q
  • What needs to be done such that
  • programs do not have to be stored contiguously,
    and
  • not the entire program, but only its parts need
    to be stored in the memory?
  • A
  • Split memory into segments and try to fit parts
    of the program into those segments
  • If segments are of
  • different size → segmentation
  • the same size → paging
  • Physical memory blocks → frames
  • Logical memory blocks → pages

8
Memory management - recent systems -
  • Paged Memory Allocation
  • Demand Paging Memory Allocation
  • Segmented Memory Allocation
  • Segmented/Demand Paged Allocation

9
Paging memory allocation
  • Processes are divided into pages of equal size
  • Works well if those pages are equal to the memory block size (page frame) and the disk's section size (sectors)
  • Advantages of non-contiguous memory allocation
  • An empty page frame may be used by any process
  • A compaction scheme is not required
  • No external and almost no internal fragmentation
  • Problems
  • A mechanism is needed to keep track of page locations
  • Consequence: enlarges the size and complexity of the OS software

10
Non-contiguous allocation - an example -
Proc1 size = 350 lines
Internal fragmentation
  • The number of free pages left = 6
  • Any process of more than 600 lines has to wait until Proc1 ends its execution
  • Any process of more than 1000 lines cannot fit into memory at all!
  • Disadvantage
  • Entire process must be stored in memory during
    its execution

11
Memory manager using paging requires 3 tables to track a process's pages
  • 1. Process table (PT) - 2 entries for each active process
  • Size of the process and the memory location of its page map table
  • Dynamic: grows/shrinks as processes are loaded/completed
  • 1 table for the whole system
  • 2. Page map table (PMT) - 1 entry per page
  • Page number and the corresponding page frame memory address
  • Page numbers are sequential (Page 0, Page 1, …)
  • 1 table per process
  • 3. Memory map table (MMT) - 1 entry per page frame
  • Location and free/busy status
  • 1 table for all the page frames
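The three tables above can be sketched as C structs. This is only an illustrative sketch; the field names are hypothetical, not taken from any particular OS.

```c
#include <stddef.h>

/* Hypothetical sketch of the three paging tables described above. */

typedef struct {            /* Page Map Table entry: 1 per page */
    int frame;              /* page frame number, or -1 if not in memory */
} PmtEntry;

typedef struct {            /* Process table entry: 2 items per process */
    size_t process_size;    /* size of the process */
    PmtEntry *pmt;          /* memory location of its page map table */
} ProcessEntry;

typedef struct {            /* Memory Map Table entry: 1 per page frame */
    int busy;               /* 1 = allocated, 0 = free */
    int owner_page;         /* which page occupies the frame (if busy) */
} MmtEntry;
```

The PT has one entry per active process (one table system-wide), each PMT belongs to one process, and the single MMT covers all page frames.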

12
Process table
13
Page Map Table
  • Contains the page number and its corresponding page frame memory address
  • ? - the referenced page is not in main memory
  • Reading the PMT doubles the number of memory accesses
  • Ideally, the PMT should not be accessed

14
PMT implementation: Associative memory (AM)
  • Used to speed up process execution
  • Associative Memory Page Table
  • No empty entries (saves memory space)
  • Takes advantage of temporal locality and spatial
    locality
  • Uses a key-field to address data
  • Expensive to implement

15
PMT implementation: translation-lookaside buffer (TLB)
  • Special registers where the page → page frame mapping is done
  • TLB resides in cache
  • The full-page table is in main memory
  • When a page is first translated to a page frame,
    the map is read from main memory to TLB
  • On subsequent references to the page
  • Search for a certain page done in parallel
    through PMTs on the one hand and TLB on the other
  • No waste of time if page is not in TLB
  • Disadvantage
  • High cost of complex hardware required for
    parallel searches

16
TLB requirements
  • TLB is the most performance-critical component of
    a modern microprocessor
  • Designed in hardware
  • Must be
  • quickly accessible
  • to shorten a clock cycle
  • large enough
  • to avoid accessing the PMT in main memory
  • Contradictory requirements

17
TLB implementation
  • Separate TLBs for instruction fetches and data
    loads
  • 2-level TLB (similar to cache)
  • Example: AMD Opteron
  • 40-entry L1 instruction and data TLBs; 512-entry L2 instruction and data TLBs
  • Take scheduler time slices into account
  • What if the TLBs are of the same size?
  • When should the TLB be updated?
  • Does this require software support in the OS?

18
Displacement
  • Displacement (offset) of a line - how far away a line is from the beginning of its page
  • Used to locate that line within its page frame
  • A relative factor
  • For example, lines 0, 100, 200, and 300 are the first lines of pages 0, 1, 2, and 3 respectively, so each has a displacement of zero
  • Address of a given program line: page number is the quotient, displacement the remainder, of dividing the address by the page size
  • Division is done in hardware
  • Memory Management Unit (MMU)
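The page-number/displacement split described above is plain integer division, which the MMU performs in hardware. A minimal sketch:

```c
/* Split a logical address into page number (quotient) and
   displacement within the page (remainder). */
int page_number(int address, int page_size)  { return address / page_size; }
int displacement(int address, int page_size) { return address % page_size; }
```

With a page size of 100 lines, as in the slide's example, line 214 lies on page 2 at displacement 14, and lines 0, 100, 200, 300 all have displacement zero.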

19
Address resolution
  • Translate a process's address-space address into a physical address

(Diagram: Process1 with page size 512, its PMT, page frame numbers, and main memory.)
20
Memory management - recent systems -
  • Paged Memory Allocation
  • Demand Paging Memory Allocation
  • Segmented Memory Allocation
  • Segmented/Demand Paged Allocation

21
Demand paging
  • The first allocation scheme that removed the
    restriction of having the entire process stored
    in memory
  • Bring a page into memory only when it is needed, so less I/O and less memory are needed
  • Faster response
  • Made virtual memory widely available
  • Gives the appearance of an almost infinite physical memory
  • Takes advantage of the fact that programs execute sequentially, so not all pages are needed at once. For example:
  • User-written error handling modules
  • Mutually exclusive modules
  • Certain program options are either mutually
    exclusive or not always accessible
  • Many tables assigned a fixed amount of address
    space even though only a fraction of a table is
    actually used

22
Demand paging (cont'd)
  • How and when pages are passed (or swapped)
    depends on predefined policies that determine
    when to make room for needed pages and how to do
    so
  • Policy that selects page to be removed is crucial
    to system efficiency
  • Selection of algorithm is critical
  • The one with the lowest page-fault rate is the
    most attractive
  • First-in first-out (FIFO) policy
  • best page to remove is one that has been in
    memory the longest
  • Least-recently-used (LRU) policy
  • chooses pages least recently accessed to be
    swapped out
  • Most recently used (MRU) policy
  • Least frequently used (LFU) policy

23
Page Map Table
Extra fields with respect to PMT in static paging
  • Status bit indicates if page is currently in
    memory or not
  • Referenced bit indicates if page has been
    referenced recently
  • Used by LRU to determine which pages should be
    swapped out
  • Modified bit indicates if page contents have been
    altered
  • Used to determine if page must be rewritten to
    secondary storage when it is swapped out
  • Note
  • Swapping in = copying the page from secondary to main memory

24
Instruction Processing Algorithm
  • Hardware components (MMU) do
  • generate the (logical) address of the required
    page
  • find the page number
  • determine if the page is already in memory
  • 1. Start processing instruction
  • 2. Generate data address
  • 3. Compute page number
  • 4. if (page is in memory) then
  •       get data and finish instruction
  •       advance to next instruction
  •       return to 1
  •    else
  •       generate page interrupt
  •       call page fault handler
  •    fi

Page in the secondary storage: the OS takes over
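The steps above can be sketched in C. The resident-page table and the fault handler below are hypothetical stand-ins for the MMU and OS machinery; here the handler simply loads the missing page.

```c
#include <stdbool.h>

#define PAGE_SIZE 512
#define NUM_PAGES 8

static bool in_memory[NUM_PAGES];   /* status bits: is the page resident? */
static int  page_faults = 0;

/* Hypothetical page fault handler: just marks the page as loaded. */
static void page_fault_handler(int page) {
    page_faults++;
    in_memory[page] = true;          /* swap the page in */
}

/* One data reference, following the slide's algorithm. */
static void reference(int address) {
    int page = address / PAGE_SIZE;  /* compute page number */
    if (in_memory[page]) {
        /* get data and finish instruction, advance to next one */
    } else {
        /* generate page interrupt, call page fault handler */
        page_fault_handler(page);
    }
}
```

Referencing addresses 0 and 10 touches page 0 (one fault, then a hit), while address 600 falls on page 1 and faults again.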
25
Page fault handler
  • Determines whether there are empty page frames in
    memory
  • If not, it must decide which page to swap out
  • This depends on predefined policy for page
    removal
  • When page is swapped out, 3 tables to be updated
  • PMT tables of two tasks (1 in,1 out)
  • Memory Map Table
  • Problem with swapping: thrashing

26
Thrashing
  • When a process is spending more time paging than
    executing
  • When does it happen?
  • When the number of frames for the low priority
    process falls below the min number required by
    computer architecture
  • Pages in active use are replaced by other pages
    in active use
  • Happens with increased degree of multiprogramming
  • Solution
  • Local replacement algorithms (not completely)
  • Provide the process with as many frames as it
    needs
  • Use locality model (working set)
  • Note this model also used in caching

27
Thrashing - an example -
  • for (j = 1; j < 100; j++) {
  •     k = j * j;
  •     m = a * j;
  •     printf("\n%d %d %d", j, k, m);
  • }
  • printf("\n");

Page 0
Swapping between these two pages
Page 1
28
Dynamic allocation paging - Working-set model -
  • A set of active pages large enough to avoid thrashing
  • Use a parameter Δ to determine a working-set window
  • Page reference window
  • … 2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 3 4 4 4 1 3 2 3 4 4 4 …
  • Δ = 10 for both windows
  • WS(t1) = {1, 2, 5, 6, 7}; WS(t2) = {3, 4}
  • D = Σ WSSi - the total demand for frames
  • WSSi - the working set size of process i
  • Thrashing will occur if D > m, where m is the total number of available frames
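The working set at time t is just the set of distinct pages referenced in the last Δ references, which a short sketch can compute (this models the definition above; it is not how a kernel would track it):

```c
#include <stdbool.h>

/* Working set size at time t: distinct pages among the last `delta`
   references refs[t-delta+1 .. t]. Assumes page numbers < 64. */
static int working_set_size(const int *refs, int t, int delta) {
    bool seen[64] = { false };
    int size = 0;
    for (int i = t - delta + 1; i <= t; i++) {
        if (i >= 0 && !seen[refs[i]]) {
            seen[refs[i]] = true;
            size++;
        }
    }
    return size;
}
```

On the slide's reference string with Δ = 10, the first window {2 6 1 5 7 7 7 7 5 1} gives WS(t1) = {1, 2, 5, 6, 7} (size 5), and a later window of 3s and 4s gives WS(t2) = {3, 4} (size 2).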

29
A reminder: Page fault handler
  • Determines whether there are empty page frames in
    memory
  • If not, it must decide which page to swap out
  • This depends on predefined policy for page
    removal
  • Good policy → more memory hits → improved efficiency
  • When a page is swapped out, 3 tables are to be updated
  • PMT tables of two tasks (1 in, 1 out)
  • Memory Map Table
  • Problem with swapping: thrashing

30
FIFO policy
  • How each requested page is swapped into the 2 available page frames using FIFO. When the program is ready to be processed, all 4 pages are on secondary storage. Throughout the program, 11 page requests are issued. When the program calls a page that isn't already in memory, a page interrupt is issued (marked in the figure). 9 page interrupts result.

31
FIFO policy - an analogy -
  • Dresser drawer filled with sweaters
  • Situation
  • The drawer is full and you have just bought a new
    sweater
  • Must put one sweater away to make space for the
    new one
  • Q
  • Which sweater to remove, oldest or the least
    recently used one?
  • Case 1: Remove the oldest one
  • What if it turns out that it is your most treasured possession?
  • Case 2: Remove the least recently used one
  • What if a once-a-year occasion demands its appearance soon?
  • Case 3: Buy a new wardrobe?!

32
LRU policy
  • Throughout the program 11 page requests are
    issued, but they cause only 8 page interrupts
  • Efficiency slightly better than FIFO
  • The most widely used static replacement algorithm
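The two slides' scenario (4 pages, 11 requests, 2 frames, 9 faults under FIFO vs 8 under LRU) can be reproduced with a small simulator. The slides do not show the exact reference string, so the one in the usage note below is an assumed string that yields those counts.

```c
#define FRAMES 2

/* Count page faults for a reference string with FIFO (use_lru = 0)
   or LRU (use_lru = 1) replacement in FRAMES page frames. */
static int count_faults(const int *refs, int n, int use_lru) {
    int frames[FRAMES], age[FRAMES];  /* age: load time (FIFO) or last-use time (LRU) */
    int used = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < used; i++)
            if (frames[i] == refs[t]) { hit = i; break; }
        if (hit >= 0) {
            if (use_lru) age[hit] = t;  /* LRU refreshes age on every access */
            continue;
        }
        faults++;
        int victim = 0;
        if (used < FRAMES) {
            victim = used++;            /* still an empty frame */
        } else {
            for (int i = 1; i < FRAMES; i++)  /* evict smallest age */
                if (age[i] < age[victim]) victim = i;
        }
        frames[victim] = refs[t];
        age[victim] = t;
    }
    return faults;
}
```

For example, the 11-request string 1 2 1 3 1 2 4 2 1 3 4 over pages 1..4 gives 9 faults under FIFO and 8 under LRU, matching the interrupt counts on the slides.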

33
LRU implementation in hardware
  • Problem
  • Keeping track of the backward distance of every loaded page for every process is difficult
  • The page table requires an entry for the virtual time of the last reference
  • Costly
  • since another page table memory write must be introduced
  • Virtual time must be maintained
  • Solution
  • Approximate LRU behavior
  • Check reference bits
  • Periodically set them to 0
  • Introduce a shift register whose most significant bit takes the reference bit, and periodically shift the contents to the right
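The shift-register approximation above is the classic "aging" scheme. A minimal sketch of one periodic tick (the 8-bit register width is an assumption):

```c
#include <stdint.h>

/* Aging approximation of LRU: each page keeps an 8-bit shift register.
   On every tick, shift right and place the reference bit in the most
   significant position; the page with the smallest counter is the
   LRU candidate. */
static void age_tick(uint8_t counters[], const int ref_bits[], int n) {
    for (int i = 0; i < n; i++)
        counters[i] = (uint8_t)((counters[i] >> 1) | (ref_bits[i] << 7));
}
```

Recently referenced pages thereby accumulate high-order 1 bits, so their counter values stay large, while unreferenced pages decay toward 0.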

34
Pros Cons of Demand Paging
  • First scheme in which a process was no longer
    constrained by the size of physical memory
    (virtual memory)
  • Uses memory more efficiently than previous schemes because sections of a process used seldom or not at all aren't loaded into memory unless specifically requested
  • Increased overhead caused by tables and page
    interrupts

35
Evaluation of Paging
36
Memory management - recent systems -
  • Paged Memory Allocation
  • Demand Paging Memory Allocation
  • Segmented Memory Allocation
  • Segmented/Demand Paged Allocation

37
REMINDER Thrashing - an example -
  • for (j = 1; j < 100; j++) {
  •     k = j * j;
  •     m = a * j;
  •     printf("\n%d %d %d", j, k, m);
  • }
  • printf("\n");

Page 0
Swapping between these two pages
Page 1
38
Segmented memory allocation
  • Based on common practice by programmers of
    structuring their programs in modules (logical
    groupings of code)
  • A segment is a logical unit such as main
    program, subroutine, procedure, function, local
    variables, global variables, common block, stack,
    symbol table, or array
  • Main memory is not divided into page frames
    because size of each segment is different
  • Memory is allocated dynamically
  • Addresses specify a segment name and an offset
  • This is in contrast to paging, where the user specifies only a single address

39
Segment Map Table (SMT)
  • Memory Manager needs to track segments in memory
  • Process Table (PT) lists every process (one for
    the whole system)
  • Segment Map Table (SMT) lists details about each
    segment (one for each process)
  • Memory Map Table (MMT) monitors allocation of
    main memory (one for the whole system)
  • Each segment is numbered and a Segment Map Table
    (SMT) is generated for each process
  • Contains segment numbers, their lengths, access
    rights, status, and (when each is loaded into
    memory) its location in memory

40
SMT - an example -
  • Two dimensional addressing scheme
  • Address (segment number, displacement)

(Diagram: SMT for a job with three segments - Main Program (lines 0-349), Subroutine A (0-199), Subroutine B (0-99) - loaded into main memory among the Operating System and other programs, e.g. Main Program at address 4000 and Subroutine A at 7000. Legend: Y = in memory, N = not in memory, E = execute only.)
41
Segments vs. Pages
  • Pages
  • Physical units, invisible to the user's program
  • Fixed size
  • Disadvantage
  • MM operates without any a priori knowledge of the relationship among pages
  • Segments
  • Logical units, visible to the user's program
  • Variable size
  • Advantage
  • Less swapping
  • Disadvantages
  • External fragmentation
  • Memory compaction
  • Secondary storage handling

42
Memory management - recent systems -
  • Paged Memory Allocation
  • Demand Paging Memory Allocation
  • Segmented Memory Allocation
  • Segmented/Demand Paged Allocation

43
Segmented/Demand Paged Memory Allocation
  • Evolved from combination of segmentation and
    demand paging
  • Logical benefits of segmentation
  • Physical benefits of paging
  • Subdivides each segment into pages of equal size,
    smaller than most segments, and more easily
    manipulated than whole segments
  • Eliminates many problems of segmentation because
    it uses fixed length pages
  • Disadvantage
  • Overhead required for extra tables
  • Time required to reference both segment table and
    page table

44
Segmented/Demand Paged Memory Allocation -an
example-
(Diagram: Active Process Table, Segment Map Tables, and Page Map Tables for Process0's segments 0-2 and Process1's segment 0, mapping into main memory.)
  • Interaction among Process Table, Segment Map
    Table, Page Map Table and Main Memory

45
Managing insufficient memory
  • Memory compaction - consolidating several dispersed holes into one larger hole
  • Swapping
  • Overlay

46
Swapping
  • Evicting one of the processes (with blocked
    thread) resident in the main memory to the
    secondary storage
  • Process manager - Memory manager cooperation
  • Makes use of the relocation hardware
  • Issue: where to keep swapped processes on the disk?
  • Involve Device manager
  • In time-sharing systems combined with scheduling
  • Scheduling performed at the end of each time
    quantum
  • Processes swapped during the execution of other
    processes

47
Swapping (cont'd)
  • Overhead R·S/D - the time to reacquire primary memory
  • R - the number of disk blocks
  • S - the number of units of primary memory
  • D - the unit size of a disk block
  • Resource waste: S×T
  • T - the amount of time during which the process is blocked
  • Not always easy to estimate
  • Note
  • If T < R, the process begins requesting memory before it is completely swapped out
  • Swapping strategy
  • Calculate S×T (the space-time product),
  • T - the amount of time the process has held S units of memory
  • replace the process with the largest product
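The swapping strategy above (evict the process with the largest space-time product) can be sketched directly; the struct is a hypothetical stand-in for per-process accounting data.

```c
/* Swap-victim selection by space-time product: pick the process with
   the largest S*T, i.e. memory held times time held. */
typedef struct {
    int s;   /* S: units of primary memory held */
    int t;   /* T: time the process has held them */
} Proc;

static int pick_swap_victim(const Proc *p, int n) {
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (p[i].s * p[i].t > p[victim].s * p[victim].t)
            victim = i;
    return victim;
}
```

For instance, among processes with (S, T) of (10, 5), (4, 20), and (30, 1), the products are 50, 80, and 30, so the second process is evicted.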

48
Swapping vs. Memory Compaction
  • Advantages
  • More efficient
  • Guarantees creation of a hole large enough to
    accommodate a request
  • Evict as many processes as necessary
  • Affects only a small number of processes (not
    always)
  • Fewer memory accesses
  • Can be used with both fixed and variable
    partitions
  • Disadvantage
  • Requires access to secondary memory (time
    consuming)
  • Overlapped with execution of other processes
    though

49
Virtual Memory (VM)
  • Gives appearance that programs are being
    completely loaded in the main memory during their
    entire processing time
  • Shared programs and subroutines are loaded on
    demand, reducing storage requirements of main
    memory
  • Spatial locality
  • The ideal operation of VM manager
  • Knowledge of the exact set of addresses that need
    to be loaded into the main memory before they are
    referenced
  • Keeps exactly those addresses in the main memory
    that are currently referenced
  • IMPLEMENTATION
  • VM is implemented through demand paging and
    segmentation schemes

50
VM (cont'd)
51
Advantages of VM
  • Works well in a multiprogramming environment
    because most programs spend a lot of time waiting
  • A process's size is no longer restricted to the size of main memory (or the free space within main memory)
  • Memory is used more efficiently
  • Allows an unlimited amount of multiprogramming
  • Eliminates external fragmentation when used with
    paging and eliminates internal fragmentation when
    used with segmentation
  • Allows a program to be loaded multiple times
    occupying a different memory location each time
  • Allows sharing of code and data
  • Facilitates dynamic linking of program segments

52
Disadvantages of VM
  • Increased processor hardware costs
  • Increased overhead for handling paging interrupts
  • Increased software complexity to prevent thrashing

53
Cache memory
  • Based on the principle of locality
  • A high-speed memory that speeds up the CPU's access to information
  • Increases the performance
  • No need to contend for the bus

54
Cache memory - improved efficiency -
  • Cache hit ratio h
  • Average main memory access time
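The slide's formula is not preserved in this transcript; the standard form of the average access time with hit ratio h is assumed below.

```c
/* Effective memory access time given cache hit ratio h (assumed
   standard form): t_avg = h * t_cache + (1 - h) * t_main. */
static double avg_access_time(double h, double t_cache, double t_main) {
    return h * t_cache + (1.0 - h) * t_main;
}
```

For example, with h = 0.75, a cache access of 8 ns, and a main memory access of 40 ns, the average is 0.75·8 + 0.25·40 = 16 ns; the higher the hit ratio, the closer the average gets to the cache speed.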

55
Address translation in Pentium
  • Virtual address is 48 bits long (2 bits for
    protection)
  • Segment table has two parts of 4 K entries each
    (Local/Global Descriptor Table)

56
The buddy system (in Linux)
Trade-off between fixed and variable partitions
A list with 5 entries (in Linux, 10)
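In a buddy system each free list holds blocks of one power-of-two size, and every block of size 2^k pairs with a "buddy" whose offset differs only in bit k; freeing a block checks its buddy and coalesces the pair into the next list. A sketch of the address arithmetic only (the full allocator is omitted):

```c
#include <stddef.h>

/* Offset of the buddy of a block at `offset` with power-of-two
   `block_size`: flip the bit corresponding to the block size. */
static size_t buddy_of(size_t offset, size_t block_size) {
    return offset ^ block_size;
}
```

For example, a 4 KiB (0x1000-byte) block at offset 0x3000 has its buddy at 0x2000, and vice versa; if both are free they merge into one 8 KiB block at 0x2000.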