Memory Management - PowerPoint PPT Presentation

Slides: 51
Provided by: csg8
Learn more at: https://cs.gmu.edu

Transcript and Presenter's Notes

Title: Memory Management

1
(No Transcript)
2
Memory Management
  • Logical and Physical Address Spaces
  • Contiguous Allocation
  • Paging
  • Segmentation
  • Virtual Memory Techniques (Next Lecture)

3
Memory Management
  • Memory density available for constant dollars
    tends to double every 18 months.
  • Why bother about memory management?
  • Parkinson's Law: Data expands to fill the space
    available for storage
  • Emerging memory-intensive applications
  • Memory usage of evolving systems tends to double
    every 18 months.

4
Background
  • Program must be brought (from disk) into memory
    and placed within a process for it to be run
  • Main memory and registers are the only storage
    the CPU can access directly
  • Register access in one CPU clock (or less)
  • Main memory access can take many cycles
  • Cache sits between main memory and CPU registers
  • Protection of memory required to ensure correct
    operation

5
Basic Concepts
  • Multiple processes typically take space in main
    memory at the same time
  • How to adjust the degree of multiprogramming to
    maximize CPU utilization?
  • Suppose the average process computes during only
    20% of the time it sits in memory and does I/O
    80% of the time.
  • Probabilistic view: CPU Utilization = 1 - x^n,
    where x is the average ratio of I/O time to
    process lifetime in memory, and n is the degree
    of multiprogramming.

6
Multiprogramming
Degree of multiprogramming
  • CPU utilization as a function of the number of
    processes in memory and their degree of I/O.
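The utilization curve on this slide can be reproduced numerically. A minimal sketch in Python, assuming the formula from the previous slide with x = 0.8 (the process does I/O 80% of the time):

```python
# Illustration: CPU utilization = 1 - x^n, where x is the average
# ratio of I/O time to in-memory lifetime and n is the degree of
# multiprogramming. The value x = 0.8 is from the earlier example.
def cpu_utilization(x, n):
    return 1 - x ** n

for n in (1, 2, 4, 8, 16):
    print(n, round(cpu_utilization(0.8, n), 3))
```

Utilization climbs toward 1 as n grows, which is why increasing the degree of multiprogramming keeps the CPU busy despite I/O-bound processes.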

7
Basic Concepts
  • Multiple processes typically take space in main
    memory at the same time
  • Address binding
  • Allocation of space in main memory: contiguous
    vs. non-contiguous

8
Binding of Instructions and Data to Memory
  • Address binding of instructions and data to
    memory addresses can happen at three
    different stages.
  • Compile time: If the memory location is known a
    priori, absolute code can be generated. Must
    recompile code if the starting location changes.
  • Load time: Must generate relocatable code if the
    memory location is not known at compile time.
  • Execution time: Binding is delayed until run time
    if the process can be moved during its execution
    from one memory segment to another. Needs
    hardware support for address maps (e.g., base and
    limit registers).

9
Address Binding (Cont.)
[Figure: a program image in memory starting at address 0; a JUMP 800
instruction transfers control to address 800, and a LOAD 1010
instruction references the DATA area at address 1010, illustrating
how instruction operands become bound to memory addresses.]
10
Logical vs. Physical Address Space
  • The concept of a logical address space that is
    bound to a separate physical address space is
    central to proper memory management.
  • Logical address: generated by the CPU; also
    referred to as a virtual address.
  • Physical address: the address seen by the memory
    unit.
  • Logical and physical addresses are the same in
    compile-time and load-time address-binding
    schemes; they differ in the execution-time
    address-binding scheme.

11
Memory-Management Unit (MMU)
  • Hardware device that maps logical address to
    physical address for execution time binding.
  • In a simple MMU scheme, the value in the
    relocation (or base) register is added to every
    address generated by a user process at the time
    it is sent to memory.
  • The user program deals with logical addresses;
    it never sees the real physical addresses.

12
Dynamic relocation using a relocation register
13
Use of Relocation and Limit Registers
Hardware Support for Relocation and Limit
Registers
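The relocation/limit check shown on these slides can be written out in a few lines; the register values below are hypothetical, chosen only for illustration:

```python
# Execution-time binding with a relocation (base) and a limit register.
# Every logical address is checked against LIMIT, then relocated.
LIMIT = 3000        # hypothetical size of the logical address space
RELOCATION = 14000  # hypothetical base: where the process was loaded

def translate(logical_addr):
    if not 0 <= logical_addr < LIMIT:
        raise MemoryError("trap: addressing error")  # protection fault
    return logical_addr + RELOCATION  # physical address sent to memory

print(translate(346))  # -> 14346
```

The same check gives protection for free: a process cannot name any address outside its own partition.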
14
Contiguous Allocation
  • Processes (that are eligible for the CPU) are
    placed in main memory in a contiguous fashion
  • Can all of a program's data and code be loaded
    into memory?
  • Dynamic loading
  • Dynamic linking
  • How to partition the memory across multiple
    processes?
  • Fixed partitioning
  • Dynamic partitioning

15
Contiguous Allocation (Cont.)
  • Fixed Partitioning
  • Divide the memory into fixed-size partitions
  • One process/partition
  • The degree of multiprogramming is bounded by the
    number of partitions.
  • When a partition is free, a process is selected
    from the input queue and is loaded into memory
  • OS keeps track of which partitions are free and
    which are in use.

16
Contiguous Allocation with Fixed-Size Partitions
Process 4
17
Contiguous Allocation (Cont.)
  • Dynamic Partitioning
  • Partitions are created dynamically, so that each
    process is loaded into a partition of exactly the
    size that it will need.
  • Hole: a block of available memory; holes of
    various sizes are scattered throughout memory.
  • When a process arrives, it is allocated memory
    from a hole large enough to accommodate it.
  • Operating system maintains information about
    a) allocated partitions, b) free partitions (holes)

18
Contiguous Allocation with Variable-Size
Partitions
Main Memory
Process 4
19
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of
free holes?
  • First-fit: Allocate the first hole that is big
    enough.
  • Best-fit: Allocate the smallest hole that is big
    enough.
  • Must search the entire list, unless ordered by
    size. Produces the smallest leftover hole.
  • Worst-fit: Allocate the largest hole.
  • Produces the largest leftover hole.
  • Next-fit: Start searching from the last
    allocation made; choose the first hole that is
    big enough.
  • First-fit and best-fit are better than worst-fit
    in terms of storage utilization.
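A minimal sketch of the placement policies; representing the free list as (start, size) pairs is an illustrative assumption, not something from the slides:

```python
# Placement policies over a free-hole list; each returns the start
# address of the chosen hole, or None if no hole is large enough.
def first_fit(holes, n):
    for start, size in holes:
        if size >= n:          # first adequate hole wins
            return start
    return None

def best_fit(holes, n):
    fits = [(size, start) for start, size in holes if size >= n]
    return min(fits)[1] if fits else None   # smallest adequate hole

def worst_fit(holes, n):
    fits = [(size, start) for start, size in holes if size >= n]
    return max(fits)[1] if fits else None   # largest hole

holes = [(0, 100), (300, 500), (900, 200)]
print(first_fit(holes, 150), best_fit(holes, 150), worst_fit(holes, 150))
# -> 300 900 300
```

Note that best-fit and worst-fit scan the whole list, matching the "must search the entire list" caveat above.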

20
Fragmentation Problem
  • Internal Fragmentation: allocated memory may be
    slightly larger than requested memory; the unused
    memory is internal to a partition.
  • Contiguous allocation with fixed-size partitions
    has the internal fragmentation problem
  • External Fragmentation: total memory space
    exists to satisfy a request, but it is not
    contiguous.
  • Contiguous allocation with variable-size
    partitions has the external fragmentation problem
  • 50-percent rule: Even with an improved version of
    First Fit, given N allocated blocks, another N/2
    blocks will be lost to external fragmentation!
    (on the average)
  • Memory Compaction (for execution time address
    binding)

21
Paging
  • A memory management scheme that allows the
    physical address space of a process to be
    non-contiguous.
  • Divide physical memory into fixed-size blocks
    called frames.
  • Divide logical memory into blocks of same size
    called pages.
  • Any page can go to any free frame
  • To run a program of size n pages, need to find n
    free frames and load program.
  • Set up a page table (one per process) to
    translate logical addresses (generated by
    process) to physical addresses.
  • External Fragmentation is eliminated.
  • Internal fragmentation is a problem.

22
Paging Example
[Figure: the logical address space of Process A (Pages 0-3, at
logical addresses 0, 1024, 2048, 3072) is mapped into a 16 KB main
memory (Frames 0-15, at physical addresses 0, 1024, ..., 16383);
A's pages occupy non-contiguous frames.]
23
Address Translation Scheme
  • Observe: the simple limit/relocation register
    pair mechanism is no longer sufficient.
  • m-bit logical address generated by CPU is divided
    into
  • Page number (p): used as an index into a page
    table which contains the base address of each
    page in physical memory.
  • Page offset (d): combined with the base address
    to define the physical memory address that is
    sent to the memory unit.
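The p/d split can be sketched in a few lines; the page size and the page-table contents below are assumptions chosen for illustration:

```python
# Split a logical address into page number p and offset d, then
# translate it via a page table (page -> frame mapping).
OFFSET_BITS = 10                # assume 1024-byte pages
PAGE_SIZE = 1 << OFFSET_BITS

def split(addr):
    return addr >> OFFSET_BITS, addr & (PAGE_SIZE - 1)   # (p, d)

def to_physical(addr, page_table):
    p, d = split(addr)
    return (page_table[p] << OFFSET_BITS) | d  # frame base + offset

page_table = {0: 5, 1: 2}       # hypothetical page -> frame mapping
print(split(1030), to_physical(1030, page_table))  # -> (1, 6) 2054
```

Because the page size is a power of two, the split is a shift and a mask rather than a division, which is what makes the scheme cheap in hardware.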

24
Address Translation Architecture
25
Address Translation Architecture
The length of the page table entry can be bigger
than the required number of bits for the physical
address For example, the physical address 24
bits, d 5, then the page table entry is gt 24-5
d 5 gt page size 25 Physical address 24 bits
gt of frames (224)/25 219 Most
memories are byte (8-bit) addressable There are
16 or 32-bit word addressable
GMU CS 571
26
Paging Example
Consider some logical addresses: 6 = 0110 (page 1,
offset 2) and 13 = 1101 (page 3, offset 1)
32-byte memory and 4-byte pages
27
Free Frames
  • OS needs to manage physical memory and keep
    track of which frames are available (typically
    using a frame table)

Before allocation
After allocation
28
Hardware Support for Page Table
  • Option: keep the page table in registers
  • Fast
  • Not realistic if the page table is large
  • Must reload these registers to run the process
  • Option: keep the page table in memory
  • Page-table base register (PTBR) points to the
    page table location.
  • Page-table length register (PTLR), if it exists,
    indicates the size of the page table.
  • In this scheme every data/instruction access
    requires two memory accesses: one for the page
    table and one for the data/instruction.

29
Associative Memory
  • The two-memory-access problem can be solved by
    the use of a special fast-lookup hardware cache
    called a translation look-aside buffer (TLB).
  • The TLB is an associative, high-speed memory
    (expensive).
  • Address translation (A, A')
  • If page A is in an associative register, get its
    frame number A' out.
  • Otherwise get the frame from the page table in
    memory
  • Typically, a TLB contains 64 to 1024 entries, so
    not all pages are in the TLB.

[Figure: TLB entries as (Page, Frame) pairs]
30
Paging Hardware With TLB
31
Translation Look-Aside Buffer
  • Works like a cache: when we have a miss, we want
    to add that info to the TLB
  • When we want to add a new entry to a full TLB,
    one entry must be replaced. Many different
    policies: LRU (Least Recently Used), random, ...
  • Some entries can be wired down (non-removable)
  • Some TLBs store address-space identifiers
    (ASIDs) to provide address-space protection
  • What if the TLB does not support separate ASIDs?
    The TLB must be flushed when a new process is
    selected.

32
Effective Access Time
  • How does the paging affect the memory access time
    with or without TLB?
  • TLB Hit ratio percentage of times that a page
    number is found in TLB.
  • Example Assume that in the absence of paging,
    effective memory access time is 100 nanoseconds
    (computed through the cache hit ratio and cache
    memory/main memory cycle times).
  • Assume that Associative Lookup is 20 nanoseconds
  • Assume TLB Hit ratio is 80
  • Effective Access Time (EAT)
  • EAT 0.8 120 0.2 220
  • 140 nanoseconds
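The EAT arithmetic above can be checked with a short sketch; the 20 ns lookup, 100 ns memory time, and 80% hit ratio come from the example on this slide:

```python
# Effective access time with a TLB: a hit costs one memory access
# after the TLB lookup; a miss costs two (page table + data).
def eat(hit_ratio, tlb_ns, mem_ns):
    hit  = tlb_ns + mem_ns
    miss = tlb_ns + 2 * mem_ns
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(eat(0.80, 20, 100), 1))  # -> 140.0
```

Raising the hit ratio to 98% drops EAT to 122 ns, which is why TLB reach matters so much in practice.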

33
Hierarchical Page Tables
  • Having page tables with more than one million
    entries is not uncommon in modern architectures.
  • Break up the logical address space into multiple
    page tables.
  • A simple technique is a two-level page table
    (the forward-mapped page table, as in the
    Pentium II).

34
Two-Level Page-Table Scheme
35
Two-Level Paging Example
  • A logical address (on 32-bit machine with 4KByte
    page size) is divided into
  • a page number consisting of 20 bits
  • a page offset consisting of 12 bits
  • Since the page table is paged, the page number is
    further divided into
  • a 10-bit page number
  • a 10-bit page offset
  • Thus, a logical address is divided as
    p1 (10 bits) | p2 (10 bits) | d (12 bits), where
    p1 is an index into the outer page table, and p2
    is the displacement within the page of the outer
    page table.
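The 10/10/12 split can be sketched directly with bit operations (the sample address is arbitrary):

```python
# Two-level split of a 32-bit logical address:
# p1 (10 bits) | p2 (10 bits) | d (12 bits).
def split2(addr):
    d  = addr & 0xFFF           # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: inner page-table index
    p1 = (addr >> 22) & 0x3FF   # top 10 bits: outer page-table index
    return p1, p2, d

print(split2(0x00403004))  # -> (1, 3, 4)
```

Translation then indexes the outer table with p1, the selected inner table with p2, and adds d to the frame base.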

36
Address-Translation Scheme
  • Address-translation scheme for a two-level 32-bit
    paging architecture

37
Multiple-Level Paging
  • For a system with a 64-bit logical address
    space, a two-level paging scheme may no longer
    be appropriate.
  • The SPARC architecture (32-bit addressing)
    supports a three-level paging scheme, whereas the
    32-bit Motorola 68030 architecture supports a
    four-level paging scheme.

38
Three-level Paging Scheme
39
Other Paging Strategies
  • For large logical address spaces, hashed page
    tables and inverted page tables can be used.
  • Hashed Page Tables
  • Common in address spaces > 32 bits
  • The virtual page number is hashed into a page
    table. This page table contains a chain of
    elements hashing to the same location.
  • Virtual page numbers are compared in this chain
    searching for a match. If a match is found, the
    corresponding physical frame is extracted.
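The chained lookup described above can be sketched with a plain list of buckets; the bucket count and the sample virtual page numbers are arbitrary assumptions:

```python
# Hashed page table: each bucket chains (virtual_page, frame) pairs
# whose virtual page numbers hash to the same slot.
NBUCKETS = 16
table = [[] for _ in range(NBUCKETS)]

def insert(vpn, frame):
    table[hash(vpn) % NBUCKETS].append((vpn, frame))

def lookup(vpn):
    for v, frame in table[hash(vpn) % NBUCKETS]:  # walk the chain
        if v == vpn:
            return frame
    return None  # no match: would be a page fault in a real system

insert(0x12345, 7)
insert(0x12345 + NBUCKETS, 8)  # collides into the same bucket
print(lookup(0x12345), lookup(0x99999))  # -> 7 None
```

The table size is proportional to the number of mapped pages rather than to the size of the virtual address space, which is the point of the scheme.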

40
Hashed Page Table
41
Other Paging Strategies
  • Inverted Page Table
  • One entry for each real page of memory
  • Entry consists of the virtual address of the page
    stored in that real memory location, with
    information about the process that owns that page
  • Decreases memory needed to store each page table,
    but increases time needed to search the table
    when a page reference occurs
  • Use hash table to limit the search to one or at
    most a few page-table entries

42
Inverted Page Table Architecture
43
Shared Pages
  • Shared code
  • One copy of read-only (reentrant) code shared
    among processes (e.g., text editors, compilers,
    window systems).
  • Particularly important for time-sharing
    environments.
  • Private code and data
  • Each process keeps a separate copy of the code
    and data.

44
Shared Pages (Example)
45
Segmentation
  • Memory-management scheme that supports user view
    of memory.
  • A program is a collection of segments. A segment
    is a logical unit such as
  • main program,
  • procedure,
  • function,
  • object,
  • local variables,
  • global variables,
  • common block,
  • stack,
  • symbol table,
  • arrays

46
Logical View of Segmentation
[Figure: segments 1-4 of the virtual address space of a process are
placed into physical memory in a different, non-contiguous order.]
47
Segmentation Architecture
  • Logical address consists of a pair
  • <segment-number, offset>
  • Segment table maps these two-dimensional
    addresses to physical addresses. Each table
    entry has
  • base: contains the starting physical address
    where the segment resides in memory.
  • limit: specifies the length of the segment.
  • Segment-table base register (STBR) points to the
    segment table's location in memory.
  • Segment-table length register (STLR) indicates
    the number of segments used by a process
  • segment number s is legal if s < STLR.
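The lookup described above can be sketched as follows; the base/limit values are illustrative, in the style of textbook examples, not taken from these slides:

```python
# Segment-table lookup: <s, d> -> physical address, with both the
# STLR-style bounds check and the per-segment limit check.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # (base, limit)

def translate(s, d):
    if s >= len(segment_table):   # s < STLR check
        raise MemoryError("trap: invalid segment number")
    base, limit = segment_table[s]
    if d >= limit:                # offset must lie within the segment
        raise MemoryError("trap: addressing error")
    return base + d

print(translate(2, 53))  # -> 4353
```

Unlike paging, the valid offsets vary per segment, so the limit field must be checked on every access.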

48
Segmentation Hardware
49
Example of Segmentation
50
Segmentation Protection and Sharing
  • Each segment represents a semantically defined
    portion of the program.
  • All entries in a segment are likely to be used in
    the same way.
  • Segment-table entry will contain the protection
    info.
  • Sharing
  • Segments are shared when entries in the segment
    tables of two different processes point to the
    same physical location.
  • Memory Allocation
  • Similar to the paging case except that the
    segments are of variable length.
  • external fragmentation
  • first fit/best fit