1
Midterm 3 Revision
Lecture 18
  • Prof. Sin-Min Lee
  • Department of Computer Science

2
Memory Allocation
  • Compile for overlays
  • Compile for fixed Partitions
  • Separate queue per partition
  • Single queue
  • Relocation and variable partitions
  • Dynamic contiguous allocation (bit maps versus
    linked lists)
  • Fragmentation issues
  • Swapping
  • Paging

3
Overlays
[Figure: main memory from 0K to 12K holds the overlay manager and the main program, plus a shared overlay area; Overlays 1, 2, and 3 reside on secondary storage and are loaded into the overlay area one at a time.]
4
Multiprogramming with Fixed Partitions
  • Divide memory into n (possibly unequal)
    partitions.
  • Problem: fragmentation

[Figure: 128K of memory divided into fixed partitions with boundaries at 0K, 4K, 16K, and 64K; remaining free space is marked.]
5
Fixed Partitions
[Figure: the same 128K memory with fixed partitions at 0K, 4K, 16K, and 64K; the legend distinguishes free space from internal fragmentation, which cannot be reallocated.]
6
Fixed Partition Allocation Implementation Issues
  • Separate input queue for each partition
  • Requires sorting the incoming jobs and putting
    them into separate queues
  • Inefficient utilization of memory
  • when the queue for a large partition is empty but
    the queue for a small partition is full. Small
    jobs have to wait to get into memory even though
    plenty of memory is free.
  • A single input queue for all partitions
  • Allocate the job to a partition where it fits
    (see the sketch below)
  • Best fit
  • Worst fit
  • First fit

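A minimal sketch (not from the slides) of the single-queue scheme: jobs wait in one queue and each is placed in a free partition chosen by first, best, or worst fit. The partition sizes, job names, and the helper pick_partition are illustrative assumptions.

    # Illustrative sketch: one input queue feeding fixed partitions.
    from collections import deque

    def pick_partition(partitions, in_use, job_size, strategy="first"):
        """Return the index of a free partition that can hold job_size, or None."""
        candidates = [i for i, size in enumerate(partitions)
                      if not in_use[i] and size >= job_size]
        if not candidates:
            return None
        if strategy == "best":              # smallest free partition that still fits
            return min(candidates, key=lambda i: partitions[i])
        if strategy == "worst":             # largest free partition
            return max(candidates, key=lambda i: partitions[i])
        return candidates[0]                # first fit

    partitions = [4, 12, 48, 64]            # partition sizes in KB (assumed)
    in_use = [False] * len(partitions)
    jobs = deque([("A", 10), ("B", 3), ("C", 40)])
    while jobs:
        name, size = jobs.popleft()
        i = pick_partition(partitions, in_use, size, strategy="best")
        if i is None:
            jobs.appendleft((name, size))   # no free partition fits; job must wait
            break
        in_use[i] = True
        print(f"job {name} ({size}K) -> partition {i} ({partitions[i]}K)")

Whatever space is left inside the chosen partition is internal fragmentation.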
7
Relocation
  • Correct starting address when a program starts in
    memory
  • Different jobs will run at different addresses
  • When a program is linked, the linker must know at
    what address the program will begin in memory.
  • Logical (virtual) addresses
  • Logical address space, range 0 to max
  • Physical addresses, physical address space
  • range R+0 to R+max for base value R
  • The user program never sees the real physical
    addresses
  • Memory-management unit (MMU)
  • maps virtual addresses to physical addresses
  • Relocation register
  • Mapping requires hardware (MMU) with the base
    register

8
Relocation Register
[Figure: the CPU issues a logical address MA; the MMU adds the base register value BA, and the resulting physical address MA + BA is used to access memory.]
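A minimal sketch of the mapping in the figure, with an assumed limit check added for protection (the limit is not shown on the slide).

    # Illustrative base-register translation: physical address = MA + BA.
    def translate(logical_addr, base, limit):
        if not (0 <= logical_addr < limit):     # assumed protection check
            raise MemoryError("address outside the process's logical address space")
        return base + logical_addr

    print(translate(0x0040, base=0x14000, limit=0x2000))   # -> 81984 (0x14040)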
9
Storage Placement Strategies
  • Best fit
  • Use the hole whose size is equal to the need, or
    if none is equal, the hole that is larger but
    closest in size.
  • Rationale?
  • First fit
  • Use the first available hole whose size is
    sufficient to meet the need
  • Rationale?
  • Worst fit
  • Use the largest available hole
  • Rationale?

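The three strategies differ only in which hole they take from the free list. A minimal sketch, assuming holes are (start, size) pairs; the sizes below are illustrative.

    # Illustrative hole-selection routines for the three placement strategies.
    def first_fit(holes, need):
        return next((h for h in holes if h[1] >= need), None)

    def best_fit(holes, need):
        fits = [h for h in holes if h[1] >= need]
        return min(fits, key=lambda h: h[1], default=None)   # closest size above need

    def worst_fit(holes, need):
        fits = [h for h in holes if h[1] >= need]
        return max(fits, key=lambda h: h[1], default=None)   # largest hole

    holes = [(0, 6), (10, 14), (30, 9)]     # (start address, size) in KB
    print(first_fit(holes, 8), best_fit(holes, 8), worst_fit(holes, 8))
    # -> (10, 14) (30, 9) (10, 14)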
10
Storage Placement Strategies
  • Every placement strategy has its own problem
  • Best fit
  • Creates small holes that can't be used
  • Worst Fit
  • Gets rid of large holes making it difficult to
    run large programs
  • First Fit
  • Creates average size holes

11
Locality of Reference
  • Most memory references confined to small region
  • Well-written program in small loop, procedure or
    function
  • Data likely in array and variables stored
    together
  • Working set
  • Number of pages sufficient to run program
    normally, i.e., satisfy locality of a particular
    program

12
Page Replacement Algorithms
  • Page fault - page is not in memory and must be
    loaded from disk
  • Algorithms to manage swapping
  • First-In, First-Out (FIFO), Belady's Anomaly
  • Least Recently Used (LRU)
  • Least Frequently Used (LFU)
  • Not Used Recently (NUR)
  • Referenced bit, Modified (dirty) bit
  • Second Chance Replacement algorithms
  • Thrashing
  • too many page faults affect system performance

13
Virtual Memory Tradeoffs
  • Disadvantages
  • SWAP file takes up space on disk
  • Paging takes up resources of the CPU
  • Advantages
  • Programs share memory space
  • More programs run at the same time
  • Programs run even if they cannot fit into memory
    all at once
  • Process separation

14
Virtual Memory vs. Caching
  • Cache speeds up memory access
  • Virtual memory increases amount of perceived
    storage
  • Independence from the configuration and capacity
    of the memory system
  • Low cost per bit compared to main memory

15
How Bad Is Fragmentation?
  • Statistical arguments - random sizes
  • First-fit
  • Given N allocated blocks
  • 0.5·N blocks will be lost because of
    fragmentation
  • Known as the 50% rule (see the example below)

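As a rough worked example of this estimate: with N = 1,000 allocated blocks under first fit, about 0.5 × 1,000 = 500 blocks' worth of memory would be tied up in fragmentation holes.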
16
Solve Fragmentation w. Compaction
[Figure: five snapshots (times 5 through 9) of memory holding the monitor and Jobs 3, 5, 6, 7, and 8; at each step compaction slides the jobs together so the free space merges into a single contiguous block.]
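A minimal sketch of the idea behind the figure, assuming jobs are (name, size) pairs that are slid down so they sit contiguously above the monitor; the sizes are illustrative.

    # Illustrative compaction: relocate jobs so free space becomes one block.
    def compact(jobs, monitor_size, memory_size):
        """jobs: list of (name, size) in KB; returns the new base address per job."""
        bases, next_free = {}, monitor_size
        for name, size in jobs:
            bases[name] = next_free             # job is moved down to next_free
            next_free += size
        free_block = memory_size - next_free    # one contiguous free region remains
        return bases, free_block

    bases, free = compact([("Job 3", 8), ("Job 5", 16), ("Job 6", 4)],
                          monitor_size=10, memory_size=64)
    print(bases, "free:", free, "K")            # free space is now contiguous

Every moved job's base (relocation) register must be updated afterward, which is part of the overhead the next slide points out.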
17
Storage Management Problems
  • Fixed partitions suffer from
  • internal fragmentation
  • Variable partitions suffer from
  • external fragmentation
  • Compaction suffers from
  • overhead

18
Placement Policy
  • Determines where in real memory a process piece
    is to reside
  • Important in a segmentation system
  • Paging or combined paging with segmentation
    hardware performs address translation

19
Replacement Policy
  • Which page is replaced?
  • Page removed should be the page least likely to
    be referenced in the near future
  • Most policies predict the future behavior on the
    basis of past behavior

20
Replacement Policy
  • Frame Locking
  • If frame is locked, it may not be replaced
  • Kernel of the operating system
  • Control structures
  • I/O buffers
  • Associate a lock bit with each frame

21
Basic Replacement Algorithms
  • Optimal policy
  • Selects for replacement that page for which the
    time to the next reference is the longest
  • Impossible to have perfect knowledge of future
    events

22
Basic Replacement Algorithms
  • Least Recently Used (LRU)
  • Replaces the page that has not been referenced
    for the longest time
  • By the principle of locality, this should be the
    page least likely to be referenced in the near
    future
  • Each page could be tagged with the time of last
    reference. This would require a great deal of
    overhead.

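A minimal sketch of LRU, using recency order (an OrderedDict) in place of per-page timestamps; the reference string and frame count are illustrative.

    # Illustrative LRU page replacement: evict the least recently referenced page.
    from collections import OrderedDict

    def lru_faults(refs, n_frames):
        frames, faults = OrderedDict(), 0
        for page in refs:
            if page in frames:
                frames.move_to_end(page)        # page becomes most recently used
            else:
                faults += 1
                if len(frames) == n_frames:
                    frames.popitem(last=False)  # evict the least recently used page
                frames[page] = True
        return faults

    print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # -> 10 faults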
23
Basic Replacement Algorithms
  • First-in, first-out (FIFO)
  • Treats page frames allocated to a process as a
    circular buffer
  • Pages are removed in round-robin style
  • Simplest replacement policy to implement
  • Page that has been in memory the longest is
    replaced
  • These pages may be needed again very soon

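A minimal sketch of FIFO as a circular buffer over the process's frames; the reference string and frame count are illustrative.

    # Illustrative FIFO page replacement: frames are reused in round-robin order.
    def fifo_faults(refs, n_frames):
        frames, hand, faults = [None] * n_frames, 0, 0
        for page in refs:
            if page not in frames:
                faults += 1
                frames[hand] = page             # overwrite the oldest resident page
                hand = (hand + 1) % n_frames    # advance the round-robin pointer
        return faults

    print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # -> 9 faults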
24
Basic Replacement Algorithms
  • Clock Policy
  • Additional bit called a use bit
  • When a page is first loaded in memory, the use
    bit is set to 1
  • When the page is referenced, the use bit is set
    to 1
  • When it is time to replace a page, the first
    frame encountered with the use bit set to 0 is
    replaced.
  • During the search for replacement, each use bit
    set to 1 is changed to 0

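A minimal sketch of the clock policy just described, with one use bit per frame; the reference string and frame count are illustrative.

    # Illustrative clock (second-chance) replacement using a use bit per frame.
    def clock_faults(refs, n_frames):
        frames = [None] * n_frames      # resident page in each frame
        use = [0] * n_frames            # use bit for each frame
        hand, faults = 0, 0
        for page in refs:
            if page in frames:
                use[frames.index(page)] = 1     # referenced: set the use bit
                continue
            faults += 1
            while use[hand] == 1:               # give a second chance, clearing bits
                use[hand] = 0
                hand = (hand + 1) % n_frames
            frames[hand], use[hand] = page, 1   # replace the frame whose use bit is 0
            hand = (hand + 1) % n_frames
        return faults

    print(clock_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # -> 9 faults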
26
Early memory management schemes
  • Originally used to devote computer to single user

[Figure: the single user's job owns all of memory, addresses 0 through 65535.]
27
Limitations of single-user contiguous scheme
  • Only one person using the machine--lots of
    computer time going to waste (why?)
  • Largest job based on size of machine memory

28
Next: fixed partitions
  • Created chunks of memory for each job

29
Limitations of fixed partitions
  • Operator had to correctly guess size of programs
  • Programs limited to partitions they were given
  • Memory fragmentation resulted
  • The kind illustrated here is called internal
    memory fragmentation

30
Dynamic Partitions
[Figure: successive memory snapshots as Jobs 1 through 7 are allocated and released, producing variable-sized dynamic partitions.]
31
Internal versus external memory fragmentation
[Figure: Job 8 now occupies space previously allocated to Job 1; the part of that space Job 8 does not use illustrates the fragmentation being compared.]
32
Dynamic Partitions
  • Contiguous memory is still required for processes
  • How do we decide size of the partitions?
  • Once the machine is going, how do old jobs get
    replaced by new ones?

33
Dynamic Partitions: First Fit
  • In this scheme, we search forward in the free
    list for a partition large enough to accommodate
    the next job
  • Fast, but the gaps left can be large

34
Dynamic Partitions: Best Fit
  • In this scheme, we try to find the smallest
    partition large enough to hold the next job
  • This tends to minimize the size of the gaps
  • But it also requires that we keep list of free
    spaces

35
Deallocating memory
  • If the block we are deallocating is adjacent to
    one or two free blocks, then it needs to be
    merged with them.
  • So either we are returning a pointer to the free
    block, or we are changing the size of a block, or
    both

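A minimal sketch of deallocation with merging, assuming the free list is kept as a sorted list of (start, size) blocks; the representation and numbers are illustrative.

    # Illustrative deallocation: merge a freed block with adjacent free blocks.
    def free_block(free_list, start, size):
        """free_list: sorted list of (start, size); returns the new free list."""
        blocks = sorted(free_list + [(start, size)])
        merged = [blocks[0]]
        for s, sz in blocks[1:]:
            last_s, last_sz = merged[-1]
            if last_s + last_sz == s:           # adjacent: grow the previous block
                merged[-1] = (last_s, last_sz + sz)
            else:
                merged.append((s, sz))
        return merged

    print(free_block([(0, 4), (10, 6)], start=4, size=6))   # -> [(0, 16)]

Depending on whether the freed block touches a predecessor, a successor, both, or neither, we end up changing a block's size, adding a new free-list entry, or both.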
36
Relocatable Dynamic Partitions
  • We can see that in some cases, a job can fit
    into the combined spaces within or between
    partitions of the early schemes
  • So how do we take advantage of that space?
  • One way is to move programs while they are in the
    machine--compacting them down into the lower end
    of memory above the operating system

37
Several names for this
  • Garbage collection
  • Defragmentation
  • Compaction
  • All share a problem: relative addressing!

38
Special registers
  • Early machine architectures went through a phase
    of "more complexity is better"
  • Few registers, but large sets of commands
  • RISC computers have changed all of that (for
    architectural reasons we won't go into in detail)

39
But...
  • In order to manage dynamic relocatable
    partitions, two registers were assigned to each
    partition
  • Note: jobs were not broken into parts, but were
    relocated as a whole
  • Note that the whole job is stored in
    memory--still no heavy use of external storage
    devices

40
Summary of early schemes
  • These were adequate for batch jobs
  • But speed of components continued to advance
  • Storage devices appear
  • And then the biggie: remote terminals
  • Note that processor speeds double about every 1.5
    years--much faster than memory!

41
To accommodate changes
  • We really need to be able to extend memory by
    hiding unused parts of programs on storage
    devices--called virtual memory
  • We can do this if we can break a program into
    pieces--called pages--that can be independently
    loaded and removed from memory without affecting
    the running program

42
Paged Memory
  • Allows jobs to reside in noncontiguous memory
  • More jobs can be squeezed into memory
  • But some internal fragmentation, particularly if
    the number of jobs is large

43
Disk partitions to page frames
  • Disk partitions varied in size before
  • Now we want to fix the size to match the chunk
    size of the programs coming in (plus a tiny bit
    of bookkeeping space)
  • We call these chunks of memory page frames

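A tiny illustrative example of the bookkeeping this sets up: with a fixed frame size, a logical address splits into a page number and an offset. The 4 KB page size is an assumption.

    # Illustrative address split for paged memory (4 KB pages assumed).
    PAGE_SIZE = 4096

    def split_address(logical_addr):
        return logical_addr // PAGE_SIZE, logical_addr % PAGE_SIZE   # (page, offset)

    print(split_address(10000))   # -> (2, 1808)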
44
Is there any fragmentation?
  • We can see that if page frames are chosen right,
    we shouldn't have external fragmentation
  • Seldom will the size of code exactly fit the
    frames, so there will be a little bit of internal
    fragmentation
  • Tradeoff between internal fragmentation and
    processor time spent managing frames

45
Other problems
  • Earliest schemes didn't allow virtual memory
  • Tables for managing memory are a significant part
    of the operating system--it's growing!

46
Demand Paging
  • Demand paging broke jobs into pieces and became
    popular because it allowed only parts of jobs to
    be active in memory
  • New problems: thrashing and page faults

47
Page Replacement Algorithms
  • Optimal page replacement simply not possible
  • Keep referenced (R) and Modify (M) bits to allow
    us to keep track of past usage instead
  • Page is referenced by any read or write in it
  • Page is modified by any change (write) made to it

48
Page Replacement Algorithms, Continued
  • FIFO: first in, first out
  • LRU: least recently used
  • LFU: least frequently used
  • both of the latter rely on a page request call to
    the operating system
  • a failure to find a page is a page interrupt
  • we might measure quality by failure rate =
    page interrupts / page requests (example below)

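For example (illustrative numbers): a run that issues 200 page requests and suffers 12 page interrupts has a failure rate of 12 / 200 = 0.06, or 6%.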
49
Page Replacement Algorithms, Continued
  • Clock page replacement
  • Hand of the clock points to the oldest page
  • If a page fault occurs, check R bits in
    clockwise order
  • A variant called the two-handed clock is used
    in some UNIX systems

50
FIFO: the solution is not more memory
  • Called Belady's anomaly
  • the page request order is an important factor,
    not just the size of memory (see the example below)

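For instance, running the FIFO sketch from slide 23 on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 gives 9 faults with 3 frames but 10 faults with 4 frames: adding memory made things worse.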
51
LRU
  • Doesn't suffer from Belady's anomaly
  • Presumes locality of reference
  • But while it works well, it is a little more
    complex to implement in software
  • Consequently, aging and various clock algorithms
    are the most common in practice
  • Aging can yield a good approximation

52
Segmented Memory Allocation
  • Instead of equal divisions, try to break code
    into its natural modules
  • Compiler now asked to help operating system
  • No page frames--different sizes required (meaning
    we get external fragmentation again)

53
Segmented/Demand Paging
  • Subdivide the natural program segments into equal
    sized parts to load into page frames
  • eliminates external fragmentation
  • allows for large virtual memory, so it is often
    used in more modern OSs

54
Tradeoffs
  • Note that there is a tradeoff between external
    fragmentation and page faults in paging systems
  • Note also that we probably want slightly smaller
    page frames in a Segmented-Demand Paging framework

55
And Onward!
  • Next Thursday we'll do our Mid3 exam
  • Study guide already posted
  • Don't miss the exam!