CMPT 300 Operating System I

Transcript and Presenter's Notes

Title: CMPT 300 Operating System I


1
CMPT 300 Operating System I
  • Chapter 4: Memory Management

2
Why Memory Management?
  • Why money management? Because there is never
    enough money. The same holds for memory
  • Parkinson's law: programs expand to fill the
    memory available to hold them
  • "640KB of memory is enough for everyone"
    (attributed to Bill Gates)
  • Programmers' ideal: an infinitely large,
    infinitely fast, nonvolatile memory
  • Reality: a memory hierarchy

(Figure: the memory hierarchy, from fastest/smallest
to slowest/largest: registers, cache, main memory,
magnetic disk, magnetic tape.)
3
What Is Memory Management?
  • Memory manager: the part of the OS managing the
    memory hierarchy
  • Keeps track of which parts of memory are in use
    and which are not
  • Allocates/de-allocates memory to processes
  • Manages swapping between main memory and disk
  • Basic memory management: every program is loaded
    into main memory and run as a whole
  • Swapping/paging: move processes back and forth
    between main memory and disk

4
Outline
  • Basic memory management
  • Swapping
  • Virtual memory
  • Page replacement algorithms
  • Modeling page replacement algorithms
  • Design issues for paging systems
  • Implementation issues
  • Segmentation

5
Monoprogramming
  • One program runs at a time
  • It shares memory with the OS
  • The OS loads the program from disk into memory
  • Three variations (shown below)

(Figure: three memory layouts for one program plus
the OS: OS in RAM at the bottom of memory; OS in ROM
at the top, near address 0xFFF...; device drivers in
ROM at the top with the OS in RAM at the bottom.)
6
Multiprogramming With Fixed Partitions
  • What are the advantages of multiprogramming?
  • Scenario: multiple programs run at a time
  • Problem: how to allocate memory?
  • Divide memory up into n partitions; each partition
    can hold at most one program (process)
  • Equal partitions vs. unequal partitions
  • Each partition has its own job queue
  • Partitioning can be done manually when the system
    is brought up
  • When a job arrives, put it into the input queue
    for the smallest partition large enough to hold it
  • Any space in a partition not used by a job is lost

7
Example: Multiprogramming With Fixed Partitions
(Figure: multiple input queues, one per partition;
memory from 0 to 800K is divided into fixed
partitions with boundaries at 100K, 200K, 400K, and
700K, and jobs A and B occupy two of the partitions.)
8
Single Input Queue
  • Disadvantage of multiple input queues:
  • Small jobs may wait even while the queue for a
    larger partition is empty
  • Solution: a single input queue

(Figure: the same 0-800K fixed partitions, now fed
by a single input queue.)
9
How to Pick Jobs?
  • Pick the first job in the queue that fits an
    empty partition
  • Fast, but may waste a large partition on a small
    job
  • Pick the largest job that fits an empty partition
  • Memory efficient
  • But the smallest jobs are often interactive ones
    that need the best service; they would be served
    slowly
  • Policies for efficiency and fairness
  • Keep at least one small partition around
  • A job may not be skipped more than k times

10
A Naïve Model for Multiprogramming
  • Goal: determine the number of processes in main
    memory needed to keep the CPU busy
  • Multiprogramming improves CPU utilization
  • If, on average, a process computes 20% of the
    time it sits in memory → 5 processes can keep the
    CPU busy all the time
  • This assumes the processes never all wait for I/O
    at the same time
  • Too optimistic!

11
A Probabilistic Model
  • A process spends a fraction p of its time waiting
    for I/O to complete (0 < p < 1)
  • With n processes in memory at once:
  • CPU utilization = 1 - p^n (tabulated in the
    sketch below)
  • Probability that all n processes are waiting for
    I/O at once = p^n
  • Assumes the processes are independent of each
    other
  • Not true in reality: a process may have to wait
    for another process to give up the CPU
  • A more accurate treatment uses queueing theory
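
A minimal sketch of this model (assuming independence, as the slide notes): for an I/O-wait fraction p, CPU utilization with n processes in memory is 1 - p^n. The program below simply tabulates the formula; the chosen p values are illustrative.

#include <math.h>
#include <stdio.h>

/* CPU utilization under the probabilistic model: each
 * process waits for I/O a fraction p of the time, so all
 * n processes are blocked at once with probability p^n. */
static double utilization(double p, int n) {
    return 1.0 - pow(p, n);
}

int main(void) {
    const double p_values[] = {0.2, 0.5, 0.8};  /* illustrative */
    for (int i = 0; i < 3; i++) {
        printf("p = %.1f:", p_values[i]);
        for (int n = 1; n <= 10; n++)
            printf(" %5.2f", utilization(p_values[i], n));
        printf("\n");
    }
    return 0;
}

With p = 0.8, for example, about 10 processes are needed to push utilization past 89%, which is why heavily I/O-bound workloads benefit from a high degree of multiprogramming.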

12
CPU Utilization
  • CPU utilization = 1 - p^n
(Figure: CPU utilization as a function of the degree
of multiprogramming n, plotted for several values of
the I/O-wait fraction p.)

13
Memory Management for Multiprogramming
  • Relocation
  • When a program is compiled, it assumes its
    starting address is 0 (logical address)
  • When it is loaded into memory, it could start at
    any address (physical address)
  • How to map logical addresses to physical
    addresses?
  • Protection
  • A program's accesses should be confined to its
    proper area

14
Relocation and Protection
  • Logical addresses are used for programming
  • E.g., call a procedure at logical address 100
  • Physical addresses
  • When the procedure is in partition 1 (starting at
    physical address 100K), the procedure is actually
    at 100K + 100
  • Relocation problem: translating between logical
    addresses and physical addresses
  • Protection: a malicious program can jump into
    space belonging to other users
  • E.g., by generating an instruction on the fly
    that reads or writes any word in memory

15
Relocation/Protection Using Registers
  • Base register: holds the start of the partition
  • Every memory address generated has the content of
    the base register added to it
  • Base register = 100K: CALL 100 → CALL 100K + 100
  • Limit register: holds the length of the partition
  • Addresses are checked against the limit register
  • Disadvantage: an addition and a comparison on
    every memory reference (see the sketch below)
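
A minimal sketch of base/limit translation, assuming one contiguous partition per process; the struct, function names, and fault handling are illustrative, not from the slides.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Per-process relocation/protection registers. */
struct mmu_regs {
    uint32_t base;   /* physical start of the partition */
    uint32_t limit;  /* length of the partition */
};

/* Translate a logical address, faulting if it is out of
 * bounds. The addition and the comparison happen on every
 * single memory reference, which is the cost noted above. */
static uint32_t translate(const struct mmu_regs *r, uint32_t logical) {
    if (logical >= r->limit) {
        fprintf(stderr, "protection fault at %u\n", logical);
        exit(EXIT_FAILURE);
    }
    return r->base + logical;
}

int main(void) {
    struct mmu_regs r = { .base = 100 * 1024, .limit = 64 * 1024 };
    printf("CALL 100 -> physical %u\n", translate(&r, 100));
    return 0;
}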

16
Outline
  • Basic memory management
  • Swapping
  • Virtual memory
  • Page replacement algorithms
  • Modeling page replacement algorithms
  • Design issues for paging systems
  • Implementation issues
  • Segmentation

17
In Time-sharing/Interactive Systems
  • There is not enough main memory to hold all
    currently active processes
  • Intuition: excess processes must be kept on disk
    and brought in to run dynamically
  • Swapping: bring each process in in its entirety
  • Assumption: each process fits in main memory, but
    cannot finish in one run
  • Virtual memory: allow programs to run even when
    they are only partially in main memory
  • No assumption about program size

18
Swapping
(Figure: memory allocation changing over time as
processes are swapped in and out; swapping A and
then B out leaves holes that vary in number, size,
and location.)
19
Swapping vs. Fixed Partitions
  • With swapping, the number, location, and size of
    partitions vary dynamically
  • Flexibility and improved memory utilization
  • But allocating, de-allocating, and keeping track
    of memory become more complicated
  • Memory compaction: combine the holes in memory
    into one big hole
  • Makes allocation more efficient
  • But requires a lot of CPU time
  • Rarely used in real systems

20
Enlarge Memory for a Process
  • Fixed-size processes: easy
  • Growing processes:
  • Expand into the adjacent hole, if there is one
  • Otherwise, wait or swap some processes out to
    create a large enough hole
  • If the swap area on the disk is full, wait or be
    killed
  • Alternatively, allocate extra space whenever a
    process is swapped in or moved

21
Handling Growing Processes
(Figure: allocating room for growth: processes with
a single growing data segment, and processes whose
data and stack segments grow toward each other.)
22
Memory Management With Bitmaps
  • Two ways to keep track of memory usage:
  • Bitmaps and free lists
  • Bitmaps:
  • Memory is divided into allocation units
  • One bit per unit: 0 = free, 1 = occupied

23
Size of Allocation Units
  • 4 bytes/unit → 1 bit in the map per 32 bits of
    memory → the bitmap takes 1/33 of memory
  • Trade-off between allocation unit size and memory
    utilization
  • Smaller allocation unit → larger bitmap
  • Larger allocation unit → smaller bitmap
  • On average, half of a process's last unit is
    wasted
  • To bring a k-unit process into memory:
  • Need to find a hole of k units
  • Search for k consecutive 0 bits in the entire map
    (slow; see the sketch below)
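
A minimal sketch of that search; for clarity the map here stores one unit per byte, whereas a real bitmap would pack 8 units per byte (the names are illustrative).

#include <stddef.h>
#include <stdio.h>

/* Find k consecutive free units (0s) in the bitmap.
 * Returns the index of the first unit of the hole,
 * or -1 if no hole of k units exists. */
static ptrdiff_t find_hole(const unsigned char *map, size_t nunits, size_t k) {
    size_t run = 0;
    for (size_t i = 0; i < nunits; i++) {
        run = (map[i] == 0) ? run + 1 : 0;
        if (run == k)
            return (ptrdiff_t)(i - k + 1);
    }
    return -1;  /* caller must swap something out or wait */
}

int main(void) {
    unsigned char map[] = {1,1,0,0,1,0,0,0,1};  /* 1 = occupied */
    printf("hole of 3 units at index %td\n",
           find_hole(map, sizeof map, 3));      /* prints 5 */
    return 0;
}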

24
Memory Management With Linked Lists
  • Two types of entries: hole (H) / process (P)
  • Each entry records whether it is a hole or a
    process, the start address, the length, and a
    pointer to the next entry
  • The list is kept sorted by address

(Figure: an example list entry for a process segment
that starts at address 20 and has length 6.)
25
Updating Linked Lists
  • Combine adjacent holes whenever possible
  • Not necessary with a bitmap

(Figure: the four neighbor combinations before and
after process X terminates; in each case X's space
merges with any adjacent holes.)
26
Allocate Memory for New Processes
  • First fit: find the first hole that fits the
    request (see the sketch after this list)
  • Break the hole into two pieces: one for the
    process and a smaller hole
  • Next fit: start searching from the place of the
    last fit
  • Empirical evidence: slightly worse performance
    than first fit
  • Best fit: take the smallest hole that is adequate
  • Slower
  • Tends to generate tiny, useless holes
  • Worst fit: always take the largest hole
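
A minimal first-fit sketch over the address-sorted linked list from the previous slides; the struct layout and the caller-supplied spare node are illustrative assumptions.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* One segment in the address-sorted list. */
struct seg {
    bool is_hole;            /* H (true) or P (false) */
    size_t start, len;
    struct seg *next;
};

/* First fit: carve the request out of the first hole large
 * enough. Returns the allocated start address, or (size_t)-1. */
static size_t first_fit(struct seg *list, size_t want, struct seg *spare) {
    for (struct seg *s = list; s != NULL; s = s->next) {
        if (!s->is_hole || s->len < want)
            continue;
        size_t addr = s->start;
        if (s->len == want) {   /* exact fit: hole becomes process */
            s->is_hole = false;
        } else {                /* split into process + smaller hole */
            *spare = (struct seg){ true, s->start + want,
                                   s->len - want, s->next };
            *s = (struct seg){ false, s->start, want, spare };
        }
        return addr;
    }
    return (size_t)-1;
}

int main(void) {
    struct seg spare;
    struct seg hole = { true, 100, 50, NULL };  /* 50-unit hole at 100 */
    size_t addr = first_fit(&hole, 20, &spare);
    printf("allocated at %zu; hole left at %zu, len %zu\n",
           addr, spare.start, spare.len);       /* 100; 120, 30 */
    return 0;
}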

27
Using Distinct Lists
  • Keep distinct lists for processes and holes
  • The list of holes can then be sorted on size
  • Best fit becomes faster
  • Problem: what happens when a process is freed?
  • Merging holes becomes very costly
  • Quick fit: group holes by size
  • Different lists for different sizes
  • E.g., list 1 for 4KB holes, list 2 for 8KB holes
  • Where does a 5KB hole go?
  • Speeds up searching
  • But merging holes is still costly

28
Outline
  • Basic memory management
  • Swapping
  • Virtual memory
  • Page replacement algorithms
  • Modeling page replacement algorithms
  • Design issues for paging systems
  • Implementation issues
  • Segmentation

29
Why Virtual Memory?
  • What if a program is too big to fit in memory?
  • Split the program into pieces: overlays
  • Swap overlays in and out
  • Problem: the programmer does the work of
    splitting the program into pieces
  • Virtual memory: the OS takes care of everything
  • A program may be larger than the physical memory
    available
  • Keep the parts currently in use in memory
  • Keep the other parts on disk

30
Virtual and Physical Addresses
  • Virtual addresses (VA) are used/generated by
    programs
  • Each process has its own virtual address space
  • E.g., in MOV REG, 1000, the 1000 is a VA
  • Physical addresses (PA) are used in execution
  • The MMU maps VAs to PAs

31
Paging
  • The virtual address space is divided into pages
  • Memory is allocated in units of pages
  • Page frames: the corresponding units in physical
    memory
  • Pages and page frames are always the same size
  • Usually from 512B to 64KB
  • Number of pages > number of page frames
  • On a 32-bit PC, the VA space can be as large as
    4GB, while physical memory may be under 1GB
  • In hardware, a present/absent bit keeps track of
    which pages are physically present in memory
  • Page fault: an unmapped page is requested
  • The OS picks a little-used page frame and writes
    its content back to the disk
  • It then fetches the wanted page into the page
    frame just freed

32
Paging: An Example
  • Assume 4KB pages; page 0 covers VA 0-4095
  • VA 0 → page 0 → page frame 2 → PA 8192
  • VAs 0-4095 map to PAs 8192-12287
  • VA 8192 → page 2 → page frame 6 → PA 24576
  • VA 8199 → page 2, offset 7 → page frame 6, offset
    7 → PA 24576 + 7 = 24583
  • VA 32789 → page 8 → unmapped → page fault

(Figure: the example page table, mapping virtual
pages to page frames.)
33
The Magic in the MMU
(Figure: the internal operation of the MMU: the
virtual page number indexes the page table to find
the page frame, and the offset is copied through
unchanged.)
34
Page Table
  • Maps virtual pages onto page frames
  • A VA is split into a page number and an offset
    (see the sketch below)
  • Each page number has one entry in the page table
  • The page table can be extremely large
  • 32-bit virtual addresses with 4KB pages → 1M
    pages. How about 64-bit VAs?
  • Each process needs its own page table
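
A minimal sketch of the page-number/offset split for 4KB pages; the page-table array and its frame values are illustrative and match the earlier example (page 2 → frame 6).

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                     /* 4KB pages */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

/* Illustrative page table: index = virtual page number,
 * value = page frame number. */
static const uint32_t page_table[] = {2, 1, 6, 0, 4, 3, 9, 5, 8, 7};

static uint32_t va_to_pa(uint32_t va) {
    uint32_t page   = va >> PAGE_SHIFT;   /* high bits: table index */
    uint32_t offset = va & OFFSET_MASK;   /* low bits: unchanged */
    return (page_table[page] << PAGE_SHIFT) | offset;
}

int main(void) {
    printf("VA 8199 -> PA %u\n", va_to_pa(8199));  /* 24583 */
    return 0;
}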

35
Typical Page Table Entry
  • Entry size: usually 32 bits
  • Page frame number: the goal of the page mapping
  • Present/absent bit: is the page in memory?
  • Protection: what kinds of access are permitted?
  • Modified: has the page been written? (If so, it
    must be written back to disk later.) The dirty bit
  • Referenced: has the page been referenced?
  • Caching disabled: always read directly from the
    disk/device rather than a cached copy?
(Figure: a typical page table entry: the page frame
number plus caching-disabled, referenced, modified,
protection, and present/absent bits; sketched as a C
struct below.)
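
A minimal sketch of such an entry as a C bit-field; the field widths are illustrative assumptions, not a real MMU layout.

#include <stdint.h>

/* One 32-bit page table entry with the fields above.
 * Widths are illustrative; real hardware layouts differ. */
struct pte {
    uint32_t frame      : 20;  /* page frame number */
    uint32_t present    : 1;   /* page is in memory */
    uint32_t protection : 3;   /* e.g. read/write/execute */
    uint32_t modified   : 1;   /* dirty: must be written back */
    uint32_t referenced : 1;   /* used by replacement algorithms */
    uint32_t cache_off  : 1;   /* bypass caching for this page */
    uint32_t unused     : 5;
};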
36
Fast Mapping
  • The virtual-to-physical mapping must be fast
  • There are several page table references per
    instruction
  • Keeping the entire page table in main memory
    alone is unacceptably slow
  • We have to look for hardware solutions

37
Two Simple Designs for Page Table
  • Design 1: use fast hardware registers for the
    page table
  • A single physical page table in the MMU: an array
    of fast registers with one entry per virtual page
  • Requires no memory references during mapping
  • But the registers must be reloaded at every
    process switch
  • Expensive if the page table is large
  • Cost of hardware and overhead of context
    switching
  • Design 2: put the whole table in main memory
  • Only one register, pointing to the start of the
    table
  • Fast switching
  • But several memory references per instruction
  • A pure memory solution is slow and a pure
    register solution is expensive, so...

38
Translation Lookaside Buffers (TLBs)
  • Observation: most programs tend to make a large
    number of references to a small number of pages
  • Put the heavily referenced fraction in registers
  • The TLB, also called associative memory (see the
    sketch below)

(Figure: the MMU checks the TLB first; on a hit the
physical address comes straight from the TLB, and on
a miss the page table is consulted.)
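
A minimal sketch of that lookup path, assuming a tiny fully associative TLB; the linear scan stands in for the hardware's parallel compare, and page_table_lookup is an illustrative stub for the slow page table walk.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_SIZE 8

struct tlb_entry {
    bool     valid;
    uint32_t page;    /* virtual page number */
    uint32_t frame;   /* cached page frame number */
};

static struct tlb_entry tlb[TLB_SIZE];

/* Illustrative stand-in for a real page table walk
 * through main memory. */
static uint32_t page_table_lookup(uint32_t page) {
    return page + 1;  /* dummy mapping for the sketch */
}

static uint32_t lookup_frame(uint32_t page) {
    /* Fast path: check the TLB (hardware does this in
     * parallel; a scan stands in for it here). */
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;              /* TLB hit */
    /* Miss: walk the page table, then cache the result
     * (evicting slot 0 for simplicity). */
    uint32_t frame = page_table_lookup(page);
    tlb[0] = (struct tlb_entry){ true, page, frame };
    return frame;
}

int main(void) {
    printf("miss: frame %u\n", lookup_frame(3));
    printf("hit:  frame %u\n", lookup_frame(3));
    return 0;
}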
39
Outline
  • Basic memory management
  • Swapping
  • Virtual memory
  • Page replacement algorithms
  • Modeling page replacement algorithms
  • Design issues for paging systems
  • Implementation issues
  • Segmentation

40
Page Replacement
  • What happens when a page fault occurs and all
    page frames are full?
  • Choose one page to remove; if it has been
    modified (a dirty page), update its disk copy
  • Better to choose an unmodified page
  • Better to choose a rarely used page
  • Many similar problems arise in computer systems
  • Block replacement in a memory cache
  • Web page cache replacement in a web server
  • First, revisit the page table entry

41
Typical Page Table Entry
  • Entry size: usually 32 bits
  • Page frame number: the goal of the page mapping
  • Present/absent bit: is the page in memory?
  • Protection: what kinds of access are permitted?
  • Modified: has the page been written? (If so, it
    must be written back to disk later.) The dirty bit
  • Referenced: has the page been referenced?
  • Caching disabled: always read directly from the
    disk/device rather than a cached copy?

(Figure: the same page table entry diagram as
before; the referenced and modified bits are what
the replacement algorithms below rely on.)
42
Optimal Algorithm
  • Label each page in main memory with the number of
    instructions that will be executed before that
    page is next referenced
  • E.g., a page labeled 1 will be referenced by the
    very next instruction
  • Remove the page with the highest label
  • This puts off page faults as long as possible
  • Unrealizable: it requires knowledge of the
    future! (Compare SJF process scheduling and the
    Banker's Algorithm for deadlock avoidance.)
  • Still useful as a benchmark for other algorithms
    (see the sketch below)
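
Although unrealizable online, the optimal choice is easy to compute offline from a recorded reference string, which is how it serves as a benchmark. A minimal sketch; the arrays and values are illustrative.

#include <stdio.h>

/* Given the pages currently in frames and the future
 * reference string, evict the page referenced farthest
 * in the future (or never again). */
static int optimal_victim(const int *frames, int nframes,
                          const int *future, int nfuture) {
    int victim = 0, farthest = -1;
    for (int f = 0; f < nframes; f++) {
        int next = nfuture;                 /* never used again */
        for (int t = 0; t < nfuture; t++)
            if (future[t] == frames[f]) { next = t; break; }
        if (next > farthest) { farthest = next; victim = f; }
    }
    return victim;
}

int main(void) {
    int frames[] = {1, 2, 3};
    int future[] = {2, 3, 2, 1, 2};
    /* next use: page 1 at t=3, page 2 at t=0, page 3 at t=1,
     * so frame 0 (holding page 1) is evicted */
    printf("evict frame %d\n", optimal_victim(frames, 3, future, 5));
    return 0;
}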

43
Remove Not Recently Used Pages
  • The R and M bits are initially 0
  • R is set when a page is referenced
  • M is set when a page is modified
  • Both are done by hardware
  • The R bits are cleared periodically by software
    (the OS)
  • Four classes of pages at a page fault:
  • Class 0 (R=0, M=0): not referenced, not modified
  • Class 1 (R=0, M=1): not referenced, modified
  • Class 2 (R=1, M=0): referenced, not modified
  • Class 3 (R=1, M=1): referenced, modified
  • NRU removes a page at random from the
    lowest-numbered nonempty class (see the sketch
    below)
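
A minimal sketch of the NRU victim choice; the page array and the rand()-based tie-breaking are illustrative assumptions.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct page {
    bool referenced;  /* R: set by hardware on any access */
    bool modified;    /* M: set by hardware on any write  */
};

/* NRU class = 2*R + M, so class 0 is the best victim. */
static int nru_class(const struct page *p) {
    return (p->referenced ? 2 : 0) + (p->modified ? 1 : 0);
}

/* Pick a page uniformly at random from the lowest-numbered
 * nonempty class (reservoir sampling within that class). */
static int nru_victim(const struct page *pages, int n) {
    int best = 4, count = 0, victim = -1;
    for (int i = 0; i < n; i++) {
        int c = nru_class(&pages[i]);
        if (c < best) { best = c; count = 0; }
        if (c == best && rand() % ++count == 0)
            victim = i;
    }
    return victim;
}

int main(void) {
    struct page pages[] = { {true, true}, {false, true}, {true, false} };
    printf("victim: page %d\n", nru_victim(pages, 3));  /* page 1 */
    return 0;
}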