Transcript and Presenter's Notes

Title: Chapter 8 Memory Management


1
Chapter 8 Memory Management
  • Dr. Yingwu Zhu

2
Outline
  • Big picture
  • Background
  • Basic Concepts
  • Memory Allocation

3
Big Picture
  • Main memory is a resource
  • OS needs to manage this resource
  • When a process/thread is executing, its
    instructions and data must be in memory
  • Assumption: main memory is large enough to hold
    a program
  • Allocation of memory to processes
  • Address translation

4
Background
  • Program must be brought into memory and placed
    within a process for it to be run
  • User programs go through several steps before
    being run
  • An assumption for this discussion
  • Physical memory is large enough to hold any
    sized process (VM is discussed in Ch. 9)

5
Multi-step processing of a user program
6
Outline
  • Background
  • Basic Concepts
  • Logical address vs. physical address
  • MMU
  • Memory Allocation

7
Logical vs. Physical Address Space
  • Logical address (virtual address)
  • Generated by the CPU, always starting from 0
  • Physical address
  • Address seen/required by the memory unit
  • Logical address space is bound to a physical
    address space
  • Central to proper memory management

8
Binding logical address space to physical address
space
  • Binding instructions and data to memory can
    happen at 3 different stages
  • Compile time: if the memory location is known a
    priori, absolute code can be generated; must
    recompile code if the starting location changes
  • Load time: must generate relocatable code if the
    memory location is not known at compile time
  • Execution time: binding delayed until run time
    if the process can be moved during its execution
    from one memory segment to another. Needs
    hardware support for address maps (e.g.,
    relocation and limit registers)
  • Logical address = physical address for compile
    time and load time; logical address ≠ physical
    address for execution time

9
Memory Management Unit (MMU)
  • Hardware device
  • Maps logical address → physical address
  • Simplest MMU scheme: relocation register

The user program deals with logical addresses; it
never sees the real physical addresses
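
A minimal C sketch of this relocation-register
mapping (the register value and the logical address
below are assumed example numbers):

  #include <stdio.h>
  #include <stdint.h>

  /* Simplest MMU scheme: one relocation (base) register.
   * Physical address = logical address + relocation register. */
  static uint32_t relocation_register = 14000;  /* assumed base */

  uint32_t mmu_translate(uint32_t logical) {
      return logical + relocation_register;
  }

  int main(void) {
      uint32_t logical = 346;   /* address generated by the CPU */
      printf("logical %u -> physical %u\n",
             logical, mmu_translate(logical));
      return 0;
  }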
10
Outline
  • Background
  • Basic Concepts
  • Memory Allocation

11
Memory Allocation, How?
  • In the context of
  • Multiprocessing, competing for resources
  • Memory is a scarce resource shared by processes
  • Questions
  • 1 How to allocate memory to processes?
  • 2 What considerations need to be taken in
    memory allocation?
  • 3 How to manage free space?

12
Memory Allocation
  • Contiguous allocation
  • Non-contiguous allocation: paging

13
Contiguous Allocation
  • Fact: memory is usually split into 2 partitions
  • Low end for the OS (e.g., interrupt vector)
  • High end for user processes → where allocation
    happens

14
Contiguous Allocation
  • Definition: each process is placed in a single
    contiguous section of memory
  • Single partition allocation
  • Multiple-partition allocation

15
Contiguous Allocation
  • Single-partition allocation needs hardware
    support
  • Relocation register: the base physical address
  • Limit register: the range of logical addresses

16
Contiguous Allocation
Base and limit registers define a logical address
space in memory!
17
Contiguous Allocation
  • Single-partition allocation (high-end partition)
    needs hardware support
  • Relocation register: the base physical address
  • Limit register: the range of logical addresses
  • Protects user processes from each other, and
    from changing OS code and data

18
Protection by Base and Limit Registers
(Figure: the CPU issues a logical address; if it is
less than the limit register, the relocation
register is added to form the physical address sent
to memory; otherwise the hardware traps to the OS
with an addressing error.)
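
A minimal C sketch of the check in the figure above
(the register values are assumed for illustration):

  #include <stdio.h>
  #include <stdint.h>
  #include <stdlib.h>

  /* A logical address is legal only if it is below the limit;
   * otherwise the hardware traps to the OS (addressing error). */
  static uint32_t limit_register      = 16384;  /* assumed size */
  static uint32_t relocation_register = 30004;  /* assumed base */

  uint32_t translate_or_trap(uint32_t logical) {
      if (logical >= limit_register) {
          fprintf(stderr, "trap: addressing error (%u)\n", logical);
          exit(EXIT_FAILURE);       /* stands in for the OS trap */
      }
      return logical + relocation_register;  /* in range: relocate */
  }

  int main(void) {
      printf("physical = %u\n", translate_or_trap(1000));
      translate_or_trap(20000);     /* out of range: traps */
      return 0;
  }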
19
Contiguous Allocation
  • Multiple-partition allocation
  • Divide memory into multiple fixed-sized
    partitions
  • Hole: a block of free/available memory
  • Holes of various sizes are scattered through
    memory
  • When a process arrives, it is allocated memory
    from a hole large enough to hold it
  • This may create a new hole
  • The OS manages
  • Allocated blocks and holes

20
Multiple-Partition Allocation
(Figure: memory layout with the OS at the low end
and processes 5, 8, and 2 in separate partitions,
with holes left between allocations.)
21
Multiple-Partition Allocation
  • Dynamic storage allocation problem
  • How to satisfy a request of size n from a list of
    holes?
  • Three strategies
  • First fit: allocate the first hole that is
    large enough
  • Best fit: allocate the smallest hole that is
    big enough
  • Worst fit: allocate the largest hole
  • First fit and best fit outperform worst fit (in
    storage utilization); see the sketch below
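
A minimal C sketch of the first-fit strategy (hole
sizes are made-up values; best fit and worst fit
differ only in which candidate hole is chosen):

  #include <stdio.h>
  #include <stddef.h>

  /* A free list of holes: start address and size of each. */
  struct hole { size_t start, size; };

  /* First fit: return the index of the first hole that can
   * satisfy a request of n bytes, or -1 if none can. */
  int first_fit(struct hole holes[], int count, size_t n) {
      for (int i = 0; i < count; i++)
          if (holes[i].size >= n)
              return i;
      return -1;
  }

  int main(void) {
      struct hole holes[] = { {100, 50}, {400, 300}, {900, 120} };
      int i = first_fit(holes, 3, 200);
      if (i >= 0)
          printf("allocate 200 bytes from hole at %zu (size %zu)\n",
                 holes[i].start, holes[i].size);
      return 0;
  }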

22
Fragmentation
  • Storage allocation produces fragmentation!
  • External fragmentation
  • Total available memory space exists to satisfy a
    request, but it is not contiguous
  • Internal fragmentation
  • Allocated memory may be slightly larger than
    the requested memory; this size difference is
    memory internal to a partition that is not
    being used
  • Why?

Memory is allocated in fixed-size blocks rather
than in units of bytes
23
Thinking
  • What fragmentation is produced by
    multiple-partition allocation?
  • What fragmentation is produced by
    single-partition allocation?
  • Any solution to eliminate fragmentation?

24
Memory Allocation
  • Contiguous allocation
  • Non-contiguous allocation: paging

25
Paging
  • Memory allocated to a process is not necessarily
    contiguous
  • Divide physical memory into fixed-sized blocks,
    called frames (size is a power of 2, e.g., 512 B
    to 16 MB)
  • Divide the logical address space into blocks of
    the same size, called pages
  • Memory is allocated to a process as one or more
    frames
  • The page table maps pages to frames; it is a
    per-process structure
  • Keep track of all free frames

26
Paging Example
27
Address Translation Scheme
  • Logical address → physical address
  • The address generated by the CPU is divided into
  • Page number (p): used as an index into a page
    table, which contains the base address of each
    page in physical memory
  • Page offset (d): combined with the base address
    to define the physical memory address that is
    sent to the memory unit
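
A minimal C sketch of this translation, assuming
4 KB pages (12 offset bits) and a toy page table
(the values are assumptions, not from the slides):

  #include <stdio.h>
  #include <stdint.h>

  #define OFFSET_BITS 12                 /* assumed: 4 KB pages */
  #define PAGE_SIZE   (1u << OFFSET_BITS)

  /* Toy page table: entry p holds the frame number of page p. */
  static uint32_t page_table[] = { 5, 6, 1, 2 };  /* assumed */

  uint32_t translate(uint32_t logical) {
      uint32_t p = logical >> OFFSET_BITS;     /* page number */
      uint32_t d = logical & (PAGE_SIZE - 1);  /* page offset */
      return page_table[p] * PAGE_SIZE + d;    /* frame base + d */
  }

  int main(void) {
      uint32_t logical = 2 * PAGE_SIZE + 7;    /* page 2, offset 7 */
      printf("logical %u -> physical %u\n",
             logical, translate(logical));
      return 0;
  }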

28
Logical Address
For a logical address space of size 2^m with page
size 2^n, the page number p takes the high-order
m − n bits of the logical address and the page
offset d takes the low-order n bits.
29
Address Translation
30
Exercise
  • Consider a process of size 72,776 bytes and page
    size of 2048 bytes
  • How many entries are in the page table?
  • What is the internal fragmentation size?
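
One way to check the numbers (the answers are not
printed on the slide):

  Pages needed = ceil(72,776 / 2,048) = 36, so the
  page table has 36 entries.
  Internal fragmentation = 36 × 2,048 − 72,776
  = 73,728 − 72,776 = 952 bytes.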

31
Discussion
  • How to implement page tables? Where to store page
    tables?

32
Implementation of Page Tables (1)
  • Option 1: hardware support, using a set of
    dedicated registers
  • Case study
  • 16-bit addresses, 8 KB page size: how many
    registers are needed for the page table?
  • Using dedicated registers
  • Pros
  • Cons
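
For the register-count question (a sketch of the
arithmetic; the pros and cons are left for
discussion, as on the slide):

  With 16-bit addresses and 8 KB (2^13-byte) pages,
  there are 2^16 / 2^13 = 2^3 = 8 pages, so the page
  table needs only 8 entries, i.e., 8 dedicated
  registers.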

33
Implementation of Page Tables (2)
  • Option 2: page table kept in main memory
  • Page-table base register (PTBR) points to the
    page table
  • Page-table length register (PTLR) indicates size
    of the page table
  • Problem?

34
Implementation of Page Tables (2)
  • Option 2: page table kept in main memory
  • Page-table base register (PTBR) points to the
    page table
  • Page-table length register (PTLR) indicates size
    of the page table
  • Problem?
  • Every data/instruction access requires 2 memory
    accesses
  • One for the page table and one for the
    data/instruction.

35
Option 2: Using memory to keep page tables
  • How to handle the 2-memory-accesses problem?

36
Option 2: Using memory to keep page tables
  • How to handle the 2-memory-accesses problem?
  • Caching, with hardware support
  • Use a special fast-lookup hardware cache
    called associative memory or translation
    look-aside buffers (TLBs)
  • Caches page-table entries (LRU, etc.)
  • Expensive but faster
  • Small: 64 to 1024 entries

37
Associative Memory
  • Associative memory: parallel search
  • Address translation for page number p
  • If p is in an associative register, get the
    frame number out
  • Otherwise get the frame number from the page
    table in memory

(Table: TLB entries pairing page numbers with
frame numbers.)
38
Paging with TLB
39
Effective Memory-Access Time
  • Associative lookup = b time units
  • Assume one memory access takes x time units
  • Hit ratio: percentage of times that a page
    number is found in the associative registers;
    related to the number of associative registers
  • Hit ratio = α
  • Effective Access Time (EAT)
  • EAT = (x + b) α + (2x + b)(1 − α)
  • Example: memory access takes 100 ns, TLB lookup
    20 ns, hit ratio 80%; EAT?
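
Working the example with the formula above (the
answer is not printed on the slide):

  EAT = 0.80 × (100 + 20) + 0.20 × (2 × 100 + 20)
      = 96 + 44 = 140 ns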

40
Memory Protection
  • Memory protection is implemented by associating
    a protection bit with each frame
  • A valid-invalid bit is attached to each entry in
    the page table
  • valid indicates that the associated page is in
    the process's logical address space, and is thus
    a legal page
  • invalid indicates that the page is not in the
    process's logical address space

41
Memory Protection
42
Page Table Structure
  • Hierarchical Paging
  • Hashed page tables
  • Inverted page tables

43
Why Hierarchical Paging?
  • Most modern computer systems support a large
    logical address space, 2^32 to 2^64
  • Large page tables
  • Example: a 32-bit logical address space and 4 KB
    page size give 2^20 page table entries. If each
    entry takes 4 bytes, the page table costs 4 MB
  • Contiguous memory allocation for large page
    tables may be a problem!
  • Physical memory may not hold a single large page
    table!

44
Hierarchical Paging
  • Break up the logical address space into multiple
    page tables
  • Page table is also paged!
  • A simple technique is a two-level page table

45
Two-Level Paging Example
  • A logical address (on a 32-bit machine with 4 KB
    page size) is divided into
  • A page number consisting of 20 bits → what's the
    page table size in bytes?
  • A page offset consisting of 12 bits
  • Since the page table is paged, the page number
    is further divided into
  • A 10-bit page number
  • A 10-bit page offset
  • Thus, a logical address is as follows, where
    p1 is an index into the outer page table, and p2
    is the displacement within the page of the outer
    page table
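
A minimal C sketch of the two-level split and
lookup; the outer and inner tables here are toy
stand-ins with assumed contents, not real kernel
structures:

  #include <stdio.h>
  #include <stdint.h>

  /* 32-bit logical address, 4 KB pages:
   * 10-bit p1 | 10-bit p2 | 12-bit offset d. */
  #define D_BITS    12
  #define P2_BITS   10
  #define PAGE_SIZE (1u << D_BITS)

  static uint32_t inner_table[1u << P2_BITS];  /* one page of the page table */
  static uint32_t *outer_table[1u << 10] = { inner_table };

  uint32_t translate(uint32_t logical) {
      uint32_t p1 = logical >> (P2_BITS + D_BITS);               /* outer index */
      uint32_t p2 = (logical >> D_BITS) & ((1u << P2_BITS) - 1); /* inner index */
      uint32_t d  = logical & (PAGE_SIZE - 1);                   /* page offset */
      return outer_table[p1][p2] * PAGE_SIZE + d;
  }

  int main(void) {
      inner_table[3] = 42;                       /* assume page 3 is in frame 42 */
      uint32_t logical = (3u << D_BITS) | 0x10;  /* p1 = 0, p2 = 3, d = 0x10 */
      printf("logical 0x%x -> physical 0x%x\n",
             logical, translate(logical));
      return 0;
  }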

46
Address Translation
  • 2-level 32-bit paging architecture

47
Address Translation Example
48
Page Table Structure
  • Hierarchical Paging
  • Hashed page tables
  • Inverted page tables

49
Hashed Page Tables
  • A common approach for handling address spaces
    > 32 bits
  • The virtual page number is hashed into a page
    table. This page table contains a chain of
    elements hashing to the same location.
  • Virtual page numbers are compared in this chain
    searching for a match. If a match is found, the
    corresponding physical frame is extracted.
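
A rough C sketch of the chained lookup described
above (the table size, hash function, and entry
layout are assumptions):

  #include <stdio.h>
  #include <stdint.h>
  #include <stdlib.h>

  #define BUCKETS 1024                /* assumed hash table size */

  /* Each chain element records a virtual page number and the
   * physical frame that holds it. */
  struct hpt_entry {
      uint64_t vpn;
      uint32_t frame;
      struct hpt_entry *next;
  };

  static struct hpt_entry *buckets[BUCKETS];

  static unsigned hash_vpn(uint64_t vpn) { return vpn % BUCKETS; }

  /* Walk the chain at hash(vpn); return the frame on a match,
   * or -1 on a miss. */
  long lookup(uint64_t vpn) {
      for (struct hpt_entry *e = buckets[hash_vpn(vpn)]; e; e = e->next)
          if (e->vpn == vpn)
              return e->frame;
      return -1;
  }

  void insert(uint64_t vpn, uint32_t frame) {
      struct hpt_entry *e = malloc(sizeof *e);
      e->vpn = vpn;
      e->frame = frame;
      e->next = buckets[hash_vpn(vpn)];
      buckets[hash_vpn(vpn)] = e;
  }

  int main(void) {
      insert(0x12345, 7);
      printf("vpn 0x12345 -> frame %ld\n", lookup(0x12345));
      return 0;
  }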

50
Hashed Page Tables
51
Page Table Structure
  • Hierarchical Paging
  • Hashed page tables
  • Inverted page tables

52
Inverted Page Tables
  • Why need it?
  • How?
  • One entry for each memory frame
  • Each entry consists of the virtual address of
    the page stored in that memory frame, with info
    about the process that owns the page: <pid,
    page #>
  • One page table system-wide
  • Pros & Cons
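
A rough C sketch of the system-wide inverted table
and its linear search (the frame count and the
<pid, page #> layout are assumptions):

  #include <stdio.h>
  #include <stdint.h>

  #define NUM_FRAMES 1024           /* assumed number of frames */

  /* One entry per physical frame: which process and which of
   * its pages is stored there. */
  struct ipt_entry { int pid; uint32_t page; };

  static struct ipt_entry ipt[NUM_FRAMES];

  /* Linear search: the matching index *is* the frame number.
   * Returns -1 on a miss (a page fault in a real system). */
  int lookup(int pid, uint32_t page) {
      for (int frame = 0; frame < NUM_FRAMES; frame++)
          if (ipt[frame].pid == pid && ipt[frame].page == page)
              return frame;
      return -1;
  }

  int main(void) {
      ipt[17] = (struct ipt_entry){ .pid = 42, .page = 3 };
      printf("pid 42, page 3 -> frame %d\n", lookup(42, 3));
      return 0;
  }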

53
Inverted Page Tables
  • Pros: reduced memory consumption for page tables
  • Cons: linear search performance!

54
Exercise
  • Consider a system with 32GB virtual memory, page
    size is 2KB. It uses 2-level paging. The physical
    memory is 512MB.
  • Show how the virtual memory address is split in
    page directory, page table and offset.
  • How many (2nd level) page tables are there in
    this system (per process)?
  • How many entries are there in the (2nd level)
    page table?
  • What is the size of the frame number (in bits)
    needed for implementing this?
  • How large should the outer page table be?
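
One possible working, assuming the 24 page-number
bits are split evenly (12 + 12) between the two
levels; the split itself is a design choice and is
not given on the slide:

  32 GB = 2^35 bytes of virtual space; 2 KB = 2^11
  bytes per page → 11 offset bits and 24 page-number
  bits, split as 12 (page directory) | 12 (page
  table) | 11 (offset).
  Each 2nd-level page table then has 2^12 = 4,096
  entries, and a process can have up to 2^12 = 4,096
  of them.
  Physical memory 512 MB = 2^29 bytes → 2^29 / 2^11
  = 2^18 frames, so frame numbers need 18 bits.
  The outer page table also has 2^12 = 4,096
  entries.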

55
Summary
  • Basic concepts
  • MMU: logical addr. → physical addr.
  • Memory Allocation
  • Contiguous
  • Non-contiguous: paging
  • Implementation of page tables
  • Hierarchical paging
  • Hashed page tables
  • Inverted page tables
  • TLB and effective memory-access time

56
Notes: Shared Memory
  • ipcs -m
  • ipcrm