1
Memory Management
  • CIS 250 Operating Systems
  • Lecture 4

2
Today
  • We'll try to finish most of memory management
    today
  • EXAM two weeks from today
  • Material covered only up through today
  • Lab before exam will be a study session for the
    exam

3
Early memory management schemes
  • Originally, the whole computer was devoted to a single user

(Diagram: the user's program occupies all of memory, from address 0 to 65535)
4
Limitations of single-user contiguous scheme
  • Only one person using the machine--lots of
    computer time going to waste (why?)
  • Largest job limited by the size of machine memory

5
Next: fixed partitions
  • Created chunks of memory for each job

6
Limitations of fixed partitions
  • Operator had to correctly guess size of programs
  • Programs limited to partitions they were given
  • Memory fragmentation resulted
  • The kind illustrated here (unused space inside a
    partition) is called internal memory fragmentation

7
Dynamic Partitions
(Diagram: jobs allocated into dynamically sized partitions)
8
Internal versus external memory fragmentation
(Diagram: Job 8 now occupies the space previously allocated to Job 1, illustrating internal versus external fragmentation)
9
Dynamic Partitions
  • Contiguous memory is still required for processes
  • How do we decide size of the partitions?
  • Once the machine is going, how do old jobs get
    replaced by new ones?

10
Dynamic Partitions: First Fit
  • In this scheme, we search forward in the free
    list for a partition large enough to accommodate
    the next job (sketched below)
  • Fast, but the gaps left can be large
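
A minimal sketch of the first-fit search, assuming a singly linked free list kept in address order; the hole_t type and field names are illustrative, not from the lecture:

    #include <stddef.h>

    /* Hypothetical free-list node: one entry per hole in memory. */
    typedef struct hole {
        size_t start;        /* starting address of the hole */
        size_t size;         /* size of the hole in bytes    */
        struct hole *next;   /* next hole, in address order  */
    } hole_t;

    /* First fit: take the first hole big enough for the request. */
    hole_t *first_fit(hole_t *free_list, size_t request)
    {
        for (hole_t *h = free_list; h != NULL; h = h->next)
            if (h->size >= request)
                return h;        /* stop at the first match */
        return NULL;             /* no hole large enough    */
    }

Stopping at the first match is what makes this fast, at the cost of the gaps the slide mentions.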

11
Dynamic Partitions: Best Fit
  • In this scheme, we try to find the smallest
    partition large enough to hold the next job
    (sketched below)
  • This tends to minimize the size of the gaps
  • But it also requires that we keep a list of free
    spaces
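
Best fit can be sketched against the same illustrative hole_t free list used in the first-fit example above:

    /* Best fit: scan the whole free list and keep the smallest hole
       that still holds the request, minimizing the leftover gap. */
    hole_t *best_fit(hole_t *free_list, size_t request)
    {
        hole_t *best = NULL;
        for (hole_t *h = free_list; h != NULL; h = h->next)
            if (h->size >= request &&
                (best == NULL || h->size < best->size))
                best = h;
        return best;    /* NULL if nothing fits */
    }

Unlike first fit, every hole must be examined (or the list must be kept ordered by size), which is the bookkeeping cost the slide refers to.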

12
Deallocating memory
  • If the block we are deallocating is adjacent to
    one or two free blocks, then it needs to be
    merged with them (see the sketch below)
  • So deallocation either returns a pointer to a new
    free block, changes the size of an existing free
    block, or both
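
A minimal sketch of that merge step, assuming the free list is kept sorted by address and that prev and next are the free blocks immediately below and above the block being returned (either may be NULL); all names are illustrative:

    #include <stddef.h>

    typedef struct fblock {
        size_t start, size;
        struct fblock *next;
    } fblock_t;

    /* Returns the block that represents the freed space afterwards. */
    fblock_t *release(fblock_t *prev, fblock_t *freed, fblock_t *next)
    {
        /* merge with the free block just below, if adjacent */
        if (prev && prev->start + prev->size == freed->start) {
            prev->size += freed->size;   /* grow prev rather than link freed */
            freed = prev;
        } else if (prev) {
            prev->next = freed;          /* just link the new free block in  */
        }
        /* merge with the free block just above, if adjacent */
        if (next && freed->start + freed->size == next->start) {
            freed->size += next->size;   /* absorb next */
            freed->next = next->next;
        } else {
            freed->next = next;
        }
        return freed;
    }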

13
Relocatable Dynamic Partitions
  • We can see that in some cases, a job can fit
    into the combined spaces within or between
    partitions of the early schemes
  • So how do we take advantage of that space?
  • One way is to move programs while they are in the
    machine--compacting them down into the lower end
    of memory above the operating system

14
Several names for this
  • Garbage collection
  • Defragmentation
  • Compaction
  • All share a problem: relative addressing!

15
Special registers
  • Early machine architectures went through a phase
    of "more complexity is better"
  • Few registers, but large sets of commands
  • RISC computers have changed all of that (for
    architectural reasons we won't go into in detail)

16
But...
  • In order to manage dynamic relocatable
    partitions, two registers were assigned to each
    partition (a sketch of how they are used follows)
  • Note: jobs were not broken into parts, but were
    relocated whole hog
  • Note that the whole job is stored in
    memory--still no heavy use of external storage
    devices
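
The two registers are commonly a relocation (base) register and a bounds register; a minimal sketch of how they would be consulted on every memory reference, with illustrative names and types:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-partition register pair. */
    typedef struct {
        uint32_t relocation;   /* how far the job has been moved */
        uint32_t bounds;       /* size of the job's partition    */
    } partition_regs_t;

    /* Translate a job-relative address to a physical one, refusing
       (returning false) any address outside the partition. */
    bool translate(partition_regs_t r, uint32_t logical, uint32_t *physical)
    {
        if (logical >= r.bounds)
            return false;                 /* addressing error */
        *physical = logical + r.relocation;
        return true;
    }

Compaction then only has to update each job's relocation register after the job is moved; the addresses inside the job never change.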

17
Summary of early schemes
  • These were adequate for batch jobs
  • But speed of components continued to advance
  • Storage devices appear
  • And then the biggie: remote terminals
  • Note that processor speeds double about every 1.5
    years--much faster than memory!

18
To accommodate changes
  • We really need to be able to extend memory by
    hiding unused parts of programs on storage
    devices--called virtual memory
  • We can do this if we can break a program into
    pieces--called pages--that can be independently
    loaded and removed from memory without affecting
    the running program

19
Paged Memory
  • Allows jobs to reside in noncontiguous memory
  • More jobs can be squeezed into memory
  • But some internal fragmentation, particularly if
    the number of jobs is large

20
Disk partitions to page frames
  • Disk partitions varied in size before
  • Now we want to fix the size to match the chunk
    size of the programs coming in (plus a tiny bit
    of bookkeeping space)
  • We call these chunks of memory page frames (an
    address-translation sketch follows)
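
A minimal sketch of how a logical address is split once memory is divided into page frames. The 4 KB frame size and the flat page_table array are assumptions for illustration only:

    #include <stdint.h>

    #define PAGE_SIZE 4096u   /* illustrative frame size */

    /* Split a logical address into page number and offset, then look
       the page up in a (hypothetical) table of frame numbers. */
    uint32_t to_physical(uint32_t logical, const uint32_t *page_table)
    {
        uint32_t page   = logical / PAGE_SIZE;   /* which page of the job  */
        uint32_t offset = logical % PAGE_SIZE;   /* position inside it     */
        uint32_t frame  = page_table[page];      /* frame holding the page */
        return frame * PAGE_SIZE + offset;
    }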

21
Is there any fragmentation?
  • We can see that if page frames are chosen right,
    we shouldn't have external fragmentation
  • Seldom will the size of code exactly fit the
    frames, so there will be a little bit of internal
    fragmentation (a worked example follows)
  • Tradeoff between internal fragmentation and
    processor time spent managing frames
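
As a small worked example (the sizes are made up): only a job's last page can waste space, so the waste is whatever is left of its final frame.

    #include <stddef.h>

    /* Bytes wasted to internal fragmentation for one job. */
    size_t wasted(size_t job_size, size_t frame_size)
    {
        size_t tail = job_size % frame_size;   /* bytes used in the last frame */
        return tail ? frame_size - tail : 0;   /* zero only for an exact fit   */
    }

    /* e.g. a 7,150-byte job in 4,096-byte frames needs two frames and
       wastes 4096 - (7150 % 4096) = 1,042 bytes of the second one. */

Smaller frames shrink this waste but multiply the number of frames (and table entries) the operating system must manage, which is the tradeoff in the last bullet.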

22
Other problems
  • Earliest schemes didn't allow virtual memory
  • Tables for managing memory are a significant part
    of the operating system--it's growing!

23
Demand Paging
  • Demand paging became popular because it loads a
    job's pages only as they are demanded, so only
    parts of a job need to be in memory at once
  • New problems: thrashing and page faults

24
Page Replacement Algorithms
  • Optimal page replacement is simply not possible
    (it would require knowing future references)
  • Keep Referenced (R) and Modified (M) bits to allow
    us to keep track of past usage instead (sketched
    below)
  • A page is referenced by any read or write to it
  • A page is modified by any change (write) made to it
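
A minimal sketch of a page-table entry carrying the two usage bits, with illustrative names; in real hardware the bits are set automatically on each reference:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t frame;       /* frame currently holding this page       */
        bool     referenced;  /* R: set by any read or write             */
        bool     modified;    /* M: set only by a write, so the OS knows
                                 the page must be written back to disk   */
    } pte_t;

    void on_access(pte_t *p, bool is_write)
    {
        p->referenced = true;       /* any access sets R */
        if (is_write)
            p->modified = true;     /* only writes set M */
    }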

25
Page Replacement Algorithms, Continued
  • FIFO: first in, first out
  • LRU: least recently used
  • LFU: least frequently used
  • Both of the latter rely on a page request call to
    the operating system
  • A failure to find a page in memory is a page
    interrupt (page fault)
  • We might measure quality by the failure rate:
    page interrupts / page requests (see the FIFO
    sketch below)
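
A minimal FIFO simulation over a made-up reference string, counting page interrupts and reporting the failure rate defined above; everything here is illustrative:

    #include <stdio.h>

    #define NFRAMES 3

    int main(void)
    {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int nrefs = sizeof refs / sizeof refs[0];
        int frames[NFRAMES];
        int used = 0, next_victim = 0, faults = 0;

        for (int i = 0; i < nrefs; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (!hit) {
                faults++;
                if (used < NFRAMES)
                    frames[used++] = refs[i];          /* free frame left   */
                else {
                    frames[next_victim] = refs[i];     /* evict oldest page */
                    next_victim = (next_victim + 1) % NFRAMES;
                }
            }
        }
        printf("failure rate = %d/%d\n", faults, nrefs);
        return 0;
    }

With 3 frames this string causes 9 page interrupts in 12 requests, a failure rate of 0.75.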

26
Page Replacement Algorithms, Continued
  • Clock page replacement
  • The hand of the clock points to the oldest page
  • If a page fault occurs, check R bits in
    clockwise order (sketched below)
  • A variant called the "two-handed clock" is used
    in some UNIX systems
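
A minimal sketch of the single-handed clock sweep, with illustrative names: the hand clears R bits as it passes and evicts the first page it finds whose R bit is already clear.

    #include <stdbool.h>

    #define NFRAMES 8

    typedef struct {
        int  page;        /* page currently in this frame */
        bool referenced;  /* R bit                        */
    } frame_t;

    static frame_t frames[NFRAMES];
    static int hand = 0;   /* points at the oldest candidate */

    int choose_victim(void)
    {
        for (;;) {
            if (!frames[hand].referenced) {
                int victim = hand;               /* R == 0: evict this one  */
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            frames[hand].referenced = false;     /* give it a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }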

27
FIFO: the solution is not more memory
  • This is called Belady's anomaly
  • The page request order is an important factor,
    not just the size of memory
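
A classic worked example: for the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (the same string used in the FIFO sketch above), FIFO causes 9 page interrupts with 3 frames but 10 with 4 frames; adding memory actually made things worse.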

28
LRU
  • Doesn't suffer from Belady's anomaly
  • Presumes locality of reference
  • But while it works well, it is a little more
    complex to implement in software
  • Consequently, aging and various clock algorithms
    are the most common in practice
  • Aging can yield a good approximation (sketched
    below)
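
A minimal sketch of the aging approximation, assuming an 8-bit counter per page (sizes and names are illustrative): on each clock tick the counters shift right and the current R bit enters at the top, so recently used pages keep large counters.

    #include <stdbool.h>
    #include <stdint.h>

    #define NPAGES 4

    static uint8_t age[NPAGES];
    static bool    referenced[NPAGES];   /* R bits, set on each access */

    void tick(void)
    {
        for (int i = 0; i < NPAGES; i++) {
            age[i] = (age[i] >> 1) | (referenced[i] ? 0x80 : 0x00);
            referenced[i] = false;       /* clear R for the next interval */
        }
    }

    int lru_victim(void)
    {
        int victim = 0;                  /* smallest counter = least recent */
        for (int i = 1; i < NPAGES; i++)
            if (age[i] < age[victim])
                victim = i;
        return victim;
    }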

29
Segmented Memory Allocation
  • Instead of equal divisions, try to break code
    into its natural modules
  • The compiler is now asked to help the operating
    system
  • No page frames--different sizes are required
    (meaning we get external fragmentation again); a
    translation sketch follows
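
A minimal sketch of the lookup with segments of different sizes, using illustrative names; unlike a page table, each entry needs its own size for the bounds check:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t base;   /* where the segment starts in memory */
        uint32_t size;   /* length of the segment              */
    } segment_t;

    bool seg_translate(const segment_t *table, uint32_t seg,
                       uint32_t offset, uint32_t *physical)
    {
        if (offset >= table[seg].size)
            return false;                     /* outside the segment */
        *physical = table[seg].base + offset;
        return true;
    }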

30
Segmented/Demand Paging
  • Subdivide the natural program segments into
    equal-sized parts to load into page frames
  • This eliminates external fragmentation
  • It allows for a large virtual memory, so it is
    often used in more modern OSs

31
Tradeoffs
  • Note that there is a tradeoff between external
    fragmentation and page faults in paging systems
  • Note also that we probably want slightly smaller
    page frames in a Segmented-Demand Paging framework

32
And Onward!
  • Class after next we'll do our first exam
  • Study guide to be posted
  • Don't miss the labs!