
Transcript and Presenter's Notes

Title: Uppsala University Operating Systems


1
Uppsala University Operating Systems
Memory Management
Brahim Hnich
http://www.csd.uu.se/brahim/os1.html
2
Overview I
  • Memory management without swapping
  • monoprogramming without swapping or paging
  • multiprogramming and memory usage
  • multiprogramming with fixed partitions
  • Swapping
  • multiprogramming with variable partitions
  • memory management with bit maps
  • memory management with linked lists
  • memory management with the Buddy System
  • Virtual memory
  • paging
  • page tables
  • associative memory
  • inverted page tables

3
Overview II
  • Page Replacement Algorithms
  • optimal page replacement
  • NRU (not recently used)
  • FIFO (first in first out)
  • second chance
  • clock
  • LRU (least recently used)
  • Modeling page algorithms
  • Design issues for paging systems
  • Segmentation
  • Summary

4
Memory Management I
  • Memory is an important resource that must be
    carefully managed.
  • Memory manager (MM)
  • keeps track of which parts of memory are (not)
    in use
  • (de-) allocates memory to processes
  • manages swapping between main memory and disk
    when main memory is not big enough to hold all
    processes
  • Swapping and paging are caused by the lack of
    sufficient main memory to hold all the programs
    and data at once

5
Memory Management II
  • Two classes of memory management systems
  • without paging and swapping (simpler)
  • with paging and swapping (more complex)
  • Monoprogramming without swapping or paging
  • Multiprogramming and memory usage
  • Multiprogramming with fixed partitions

6
Monoprogramming without Swapping or Paging
  • Simplest possible memory management scheme
  • Just one process in memory at a time; this
    process can use all the memory
  • Memory is divided between OS and a single user
    process (3 ways of organizing memory)

[Figure: three ways of organizing memory for the OS and one user
process: (a) OS in RAM at the bottom, user program above; (b) OS in
ROM at the top, user program below; (c) device drivers in ROM at the
top, user program in the middle, OS in RAM at the bottom]
7
Multiprogramming and Memory Usage
  • It is often easier to program an application by
    splitting it into two or more processes
  • Large computers often provide interactive
    service to several people simultaneously, thus
  • more than one process is required in memory at
    once to get reasonable performance
  • Most processes spend a lot of time waiting for
    disk I/O to complete

8
Modeling Multiprogramming
  • When multiprogramming is used, the CPU
    utilization can be improved
  • Probabilistic viewpoint
  • a process spends a fraction p of its time in the
    I/O wait state
  • if we have n processes in memory concurrently,
    the probability that all n processes are waiting
    for I/O is p^n
  • CPU utilization = 1 - p^n
  • It is clear that as n increases, the CPU
    utilization increases
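
The model is easy to play with. A minimal C sketch (the value p = 0.8
is an illustrative assumption, not from the slides):

#include <stdio.h>
#include <math.h>

/* CPU utilization = 1 - p^n, where p is the fraction of time a process
   spends waiting for I/O and n is the degree of multiprogramming. */
int main(void) {
    double p = 0.8;                  /* 80% I/O wait time (assumed) */
    for (int n = 1; n <= 10; n++)
        printf("n = %2d  utilization = %.2f\n", n, 1.0 - pow(p, n));
    return 0;
}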

9
Analysis of Multiprogramming System Performance
[Figure: analysis of multiprogramming system performance. Four jobs
arrive at times 0, 10, 15, and 20; the diagram tracks each job's share
of the CPU (0.9, 0.8, 0.3, ...) as the degree of multiprogramming
changes, with job 2 starting at time 10, job 1 finishing at 22, and
the remaining jobs finishing at 27.6, 28.2, and 31.7]
10
Multiprogramming With Fixed Partitions I
  • How can we support more than one process in
    memory?
  • Divide memory into n (possibly unequal) partitions

[Figure: memory divided into an OS area and four fixed partitions
(partitions 1-4). Left: multiple input queues, one per partition, with
jobs queued for specific partitions. Right: a single input queue
feeding all partitions]
11
Multiprogramming With Fixed Partitions II
  • (-) multiple input queues
  • the queue for a large partition may be empty
    while the queue for a small partition is full
  • since the partitions are fixed, any space in a
    partition not used by a job is lost
  • single input queue: whenever a partition becomes
    free, the job closest to the front of the queue
    that fits in it could be loaded into the empty
    partition and run
  • different strategy: since it is undesirable to
    waste a large partition on a small job, search
    the whole input queue whenever a partition
    becomes free and pick the largest job that fits
  • (-) this discriminates against small jobs
    (normally the interactive jobs that should get
    the best service)
  • solution: have at least one small partition
  • other solution: keep a counter of how many times
    a queued job has been skipped over

12
Relocation and Protection
  • Problems with multiprogramming
  • relocation: different jobs will run at different
    addresses. When a program is linked (i.e., the
    main program, user-written procedures, and
    library procedures are combined into a single
    address space), the linker must know at what
    address the program will begin in main memory
  • protection: a process must be kept from writing
    into other processes' memory
  • Solution (to both problems)
  • equip the machine with two special hardware
    registers: base and limit
  • base: the starting address of the partition
  • limit: the size of the partition
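
A minimal sketch of the base/limit mechanism in C (names and types are
illustrative, not from the slides):

#include <stdbool.h>
#include <stdint.h>

typedef struct { uint32_t base, limit; } partition_t;

/* Every address the process issues is checked against the partition
   size (limit) and relocated by adding the partition start (base). */
bool translate(partition_t p, uint32_t addr, uint32_t *phys) {
    if (addr >= p.limit)
        return false;         /* protection fault: outside the partition */
    *phys = p.base + addr;    /* relocation */
    return true;
}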

13
Swapping
  • Batch systems: organizing memory into fixed
    partitions is simple and effective.
  • Fine, as long as enough jobs can be kept in
    memory to keep the CPU busy all the time
  • Timesharing: there are normally more users than
    there is memory to hold all their processes =>
    need to keep excess processes on disk.
  • Swapping: moving processes from main memory to
    disk and from disk back to memory
  • (-) swapping on fixed partitions wastes too much
    memory, because programs are normally smaller
    than their partitions
  • (+) use instead swapping on variable partitions

14
Multiprogramming with Variable Partitions I
[Figure: memory allocation changing over time under variable
partitions: processes A, B, C, D, and E are loaded above the OS,
swapped out, and swapped back in, leaving holes of varying size]
  • Number, location, and size of processes and
    partitions vary dynamically (memory compaction
    possible)
  • Improves memory utilization but complicates (de-)
    allocation and management of memory

15
Multiprogramming with Variable Partitions II
[Figure: growing processes. Left: allocating space for a growing data
segment; processes A and B each get room for growth above the part
actually in use. Right: allocating space for a growing stack and a
growing data segment; process A is split into A-program, A-data, and
A-stack, with the data and stack segments growing toward each other in
the room for growth]
  • Three ways to keep track of memory usage
  • bit maps
  • lists
  • buddy systems

16
Memory Management with Bit Maps
[Figure: a part of memory holding five processes (A-E) and three
holes, with tick marks every 8 allocation units, shown both as the
corresponding bit map

  11111000  11111111  11001111  11111000

and as a linked list of (P)rocess/(H)ole records, each with start unit
and length:

  (P,0,5) (H,5,3) (P,8,6) (P,14,4) (H,18,2) (P,20,6) (P,26,3) (H,29,3)]
  • Memory is divided up into allocation units (from
    a few words to several kilobytes); the size of
    the allocation unit is an important design issue
  • Per allocation unit, one bit in the bit map
  • 0 = unit is free
  • 1 = unit is occupied
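
A sketch of the search this scheme requires, in C (the bit-map size
and helper names are illustrative):

#include <stdint.h>

#define UNITS 256
static uint8_t bitmap[UNITS / 8];   /* one bit per unit: 0 = free, 1 = occupied */

static int bit(int i) { return (bitmap[i / 8] >> (i % 8)) & 1; }

/* Find `need` consecutive free allocation units; return the first unit
   or -1. This linear search is the main cost of the bit-map scheme. */
int find_run(int need) {
    for (int start = 0; start + need <= UNITS; start++) {
        int len = 0;
        while (len < need && !bit(start + len))
            len++;
        if (len == need)
            return start;
        start += len;                /* skip past the occupied unit we hit */
    }
    return -1;
}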

17
Memory Management with Linked Lists I
  • Maintain a linked list of allocated and free
    memory segments, where a segment is either a
    process or a hole between two processes
  • When the processes and holes are kept on a list
    sorted by address, several algorithms can be used
    to allocate memory for a newly created or swapped
    process
  • Assumption: the memory manager (MM) knows how
    much memory to allocate

[Figure: the four neighbor combinations for a terminating process X,
before and after X terminates: between two processes, X's memory
becomes a new hole; next to one hole, it is merged with that hole;
between two holes, all three merge into a single hole]
18
Memory Management with Linked Lists II
  • First fit: the MM scans along the list of
    segments until it finds a hole that is big
    enough. The hole is then broken up into two
    pieces, one for the process and one for the
    unused memory (exception: exact fit)
  • Next fit: a minor variation of first fit, but it
    continues the search from the place where it
    found the previous hole
  • Best fit: searches the entire list and takes the
    smallest hole that is adequate
  • Worst fit: takes the largest available hole
  • Quick fit: maintains separate lists for some of
    the more commonly requested sizes
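
A first-fit sketch in C over such a list (structure and field names
are illustrative):

#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct seg {
    size_t start, size;
    bool   hole;                     /* true = hole, false = process */
    struct seg *next;
} seg_t;

/* Scan the address-ordered segment list for the first hole that is big
   enough; split it unless it fits exactly. Returns NULL if none fits. */
seg_t *first_fit(seg_t *list, size_t request) {
    for (seg_t *s = list; s != NULL; s = s->next) {
        if (!s->hole || s->size < request)
            continue;
        if (s->size > request) {     /* split: a new hole for the rest */
            seg_t *rest = malloc(sizeof *rest);
            rest->start = s->start + request;
            rest->size  = s->size - request;
            rest->hole  = true;
            rest->next  = s->next;
            s->next = rest;
        }
        s->size = request;
        s->hole = false;             /* this piece now holds the process */
        return s;
    }
    return NULL;
}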

19
Memory Management with the Buddy System I
The buddy system at work on 1 MB of memory (sizes in KB; letters are
allocated blocks, plain numbers are holes):

  Action       Memory layout                      Holes
  Initially    1024                               1
  Request 70   A(128)  128  256  512              3
  Request 35   A  B(64)  64  256  512             3
  Request 80   A  B  64  C(128)  128  512         3
  Return A     128  B  64  C  128  512            4
  Request 60   128  B  D(64)  C  128  512         3
  Return B     128  64  D  C  128  512            4
  Return D     256  C  128  512                   3
  Return C     1024                               1
  • The MM maintains lists of free blocks of sizes 1,
    2, 4, 8, ..., up to the size of memory (2^n)

20
Memory Management with the Buddy System II
  • Blocks are sorted by size and address => fast
    allocation and deallocation
  • Inefficient in terms of memory utilization
  • Internal fragmentation: wasted memory internal to
    the allocated segments
  • External fragmentation (checkerboarding): holes
    between the segments, but no wasted space within
    the segments
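
Two of the calculations that make the buddy system fast, sketched in C
(assuming power-of-two block sizes and addresses relative to the start
of the managed area):

#include <stdint.h>

/* Round a request up to the next power of two: the block size actually
   allocated. The difference is internal fragmentation. */
uint32_t buddy_block_size(uint32_t request) {
    uint32_t size = 1;
    while (size < request)
        size <<= 1;
    return size;
}

/* A block's buddy differs from it only in the address bit equal to its
   size, so on free the MM can check in O(1) whether the buddy is free
   and merge the two blocks. */
uint32_t buddy_of(uint32_t addr, uint32_t size) {
    return addr ^ size;
}

For instance, buddy_block_size(70) is 128, matching the 128K block
allocated for the 70K request in the table above.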

21
Virtual Memory
  • Virtual memory (basic idea): the combined size of
    a program, its data, and its stack may exceed the
    amount of physical memory available for it. The
    OS keeps those parts of the program currently in
    use in main memory, and the rest on disk

22
Paging I
  • Most virtual memory systems use paging
  • Virtual addresses: program-generated addresses
  • Virtual address space: formed by the virtual
    addresses
  • Memory management unit (MMU): a chip or
    collection of chips that maps the virtual
    addresses onto the physical memory addresses

23
Paging II
  • The virtual address space is divided up into
    units called pages
  • The corresponding units in the physical memory
    are called page frames
  • Pages and page frames are always the same size
  • Transfers between memory and disk are always in
    units of pages

[Figure: a CPU card. The CPU sends virtual addresses to the MMU, which
sends physical addresses over the bus to the memory and the disk
controller]
24
Paging III
  • A present/absent bit in each entry keeps track of
    whether the page is mapped or not
  • Page fault: a program tries to use an unmapped
    page => trap to the OS

[Figure: a 64-KB virtual address space (pages 0-4K up to 60-64K)
mapped onto a 32-KB physical memory (page frames 0-4K up to 28-32K);
some virtual pages map onto page frames, the rest are marked X (not in
memory)]
25
Paging IV
[Figure: the internal operation of the MMU with 16 4-KB pages. The
incoming 16-bit virtual address 8196 (0010000000000100) is split into
a 4-bit virtual page number (2) and a 12-bit offset, which is copied
directly from input to output. The page table (with a present/absent
bit per entry) maps virtual page 2 onto page frame 6 (110), giving the
outgoing physical address 24580 (110000000000100)]
26
Page Tables
  • The purpose of the page table is to map virtual
    pages onto page frames
  • Two major problems
  • the page table can be extremely large
  • the mapping must be fast
  • Simplest design: a single page table consisting
    of an array of fast hardware registers, with one
    entry for each virtual page, indexed by virtual
    page number (fast mapping, but expensive)
  • Alternative: the page table entirely in main
    memory, with one hardware register that points to
    the start of the page table
  • Variations of the two approaches
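
What the hardware does for a single-level table can be sketched in C
(the entry layout, with the present/absent bit on top of the frame
number, is an assumption for illustration):

#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE   4096u           /* 4-KB pages */
#define PRESENT_BIT 0x80000000u     /* assumed layout: top bit = present/absent */

/* Stand-in for the trap to the OS on an unmapped page. */
static uint32_t page_fault(uint32_t vaddr) { (void)vaddr; abort(); }

uint32_t translate(const uint32_t page_table[], uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;   /* virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;   /* copied straight through */
    uint32_t entry  = page_table[page];
    if (!(entry & PRESENT_BIT))
        return page_fault(vaddr);          /* page fault: trap to the OS */
    return (entry & ~PRESENT_BIT) * PAGE_SIZE + offset;
}

With the numbers from the figure on the previous slide (virtual
address 8196, virtual page 2 mapped onto page frame 6), this returns
24580.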

27
Multilevel Page Tables I
[Figure: a two-level page table. The 32-bit address contains two
10-bit page table fields (PT1, PT2) and a 12-bit offset; PT1 indexes
the top-level page table, whose entries point to second-level page
tables, whose entries in turn point to pages]
28
Multilevel Page Tables II
  • Idea: avoid keeping all the page tables in memory
    all the time; tables that are not needed should
    not be held in main memory
  • Only the top-level table and some second-level
    (maybe third- and fourth-level) tables are needed
  • Additional levels give more flexibility but
    increase complexity
  • The layout and size (32 bits is common) of page
    table entries is highly machine dependent, but
    the information present is roughly the same.
  • The most important field is the page frame number
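
Extracting the three fields of the 10 + 10 + 12 split shown on the
previous slide, as a C sketch:

#include <stdint.h>

/* PT1 indexes the top-level page table, PT2 the chosen second-level
   table, and the offset selects the byte within the 4-KB page. */
void split(uint32_t vaddr, uint32_t *pt1, uint32_t *pt2, uint32_t *off) {
    *pt1 = (vaddr >> 22) & 0x3FF;   /* top 10 bits  */
    *pt2 = (vaddr >> 12) & 0x3FF;   /* next 10 bits */
    *off =  vaddr        & 0xFFF;   /* low 12 bits  */
}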

29
Multilevel Page Tables III
[Figure: a typical page table entry, with fields: caching disabled,
referenced, modified, protection, present/absent, and the page frame
number]
  • Page frame number: the goal of the page mapping
    is to locate this number
  • Present/absent bit
  • 1 = entry is valid and can be used
  • 0 = the virtual page to which the entry belongs
    is currently not in memory => page fault!
  • Protection bits: what kind of access is permitted
    (RWX in UNIX)
  • Modified bit
  • 1 = page has been written
  • 0 = page not modified
  • Referenced bit: set whenever a page is
    referenced, either for reading or writing
  • Caching disabled bit: allows caching to be
    disabled for a page; important for pages that map
    onto device registers rather than memory

30
Associative Memory I
  • All paging schemes (except the PDP-11's) keep the
    page tables in memory => performance problems!
  • Most programs tend to make a large number of
    references to a small number of pages, and not
    the other way around
  • Solution: equip computers with a small hardware
    device for mapping virtual addresses to physical
    addresses without going through the page table
  • This device is called associative memory (AM) or
    translation lookaside buffer (TLB). It is usually
    inside the MMU and consists of a small number of
    entries (normally 32)
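
A software sketch of the lookup in C; real associative memory compares
all entries in parallel in hardware, so the loop only stands in for
that comparison (field names are illustrative):

#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 32              /* "a small number of entries" */

typedef struct {
    bool     valid;
    uint32_t virt_page;
    bool     modified;
    uint8_t  protection;
    uint32_t frame;
} tlb_entry_t;

bool tlb_lookup(const tlb_entry_t tlb[TLB_ENTRIES],
                uint32_t virt_page, uint32_t *frame) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].virt_page == virt_page) {
            *frame = tlb[i].frame;
            return true;            /* hit: no page table walk needed */
        }
    }
    return false;                   /* miss: fall back to the page table */
}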

31
Associative Memory II
An example of associative memory:

  Valid  Virtual page  Modified  Protection  Page frame
  1      140           1         RW          31
  1      20            0         RX          38
  1      130           1         RW          29
  1      129           1         RW          62
  1      19            0         RX          50
  1      21            0         RX          45
  1      860           1         RW          14
  1      861           1         RW          75
  • Hit ratio: the fraction of memory references that
    can be satisfied from the associative memory.
  • The higher the hit ratio, the better the
    performance

32
Associative Memory III
[Figure: average access time (nsec) as a function of the hit ratio,
for a 100 nsec page table access and a 20 nsec associative memory
access; at hit ratio 0 all mapping uses the page table, at hit ratio 1
all mapping uses the associative memory]
  • Three general factors that influence performance
  • page table access time
  • associative memory access time
  • hit ratio (depends on pages referenced and AM
    size)
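
Plugging the slide's numbers into the obvious model, a C sketch
(assuming a hit costs only the 20 nsec AM access and a miss costs the
full 100 nsec page table access):

#include <stdio.h>

int main(void) {
    const double pt = 100.0, am = 20.0;   /* nsec, from the figure */
    for (double h = 0.0; h <= 1.0001; h += 0.25)
        printf("hit ratio %.2f -> %.1f nsec\n", h, h * am + (1.0 - h) * pt);
    return 0;
}

This reproduces the two endpoints of the graph: 100 nsec when all
mapping uses the page table (hit ratio 0) and 20 nsec when all mapping
uses the associative memory (hit ratio 1).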

33
Inverted Page Tables I
  • Today: 32-bit virtual address space and physical
    memory, 4-KB page size => each process needs 2^20
    entries in its page table (PT); with 4 bytes per
    entry that is 4 MB per process, and the PT is
    large but manageable (multilevel paging schemes)
  • RISC chips with a 64-bit virtual address space?
  • 64-bit virtual address space >>>> physical memory
  • 64-bit address space = about 20 million terabytes
  • 4-KB page size => 2^52 (about 4 quadrillion) PT
    entries => requires rethinking!
  • Idea: the virtual address space is immense, but
    the number of physical page frames is still
    manageable => inverted page table
  • e.g. IBM System/38 or HP Spectrum

34
Inverted Page Tables II
  • An inverted PT is always used with an AM
  • The inverted PT is only needed on an AM miss; it
    is then searched for an entry whose virtual page
    number matches the virtual page in the current
    memory address
  • In case of a page fault, the conventional PT is
    used (it may be stored on disk)

[Figure: a conventional page table for one process (one entry per
virtual page 0, 1, 2, ...: flag bits and page frame) beside an
inverted page table (one entry per page frame 0, 1, 2, ...: flag bits,
pid, and virtual page)]
35
Summary
  • Memory Management
  • swapping
  • paging
  • Next?
  • Page replacement algorithms

36
Overview
  • Page Replacement Algorithms
  • optimal page replacement
  • NRU (not recently used)
  • FIFO (first in first out)
  • second chance
  • clock
  • LRU (least recently used)
  • Modeling page algorithms
  • Design issues for paging systems
  • Segmentation
  • Summary

37
Page Replacement Algorithms
  • Page fault => the OS has to select a page for
    replacement
  • Modified page => write it back to disk first
  • Unmodified page => just overwrite it with the new
    page
  • How to decide which page should be replaced?
  • at random
  • many algorithms take into account
  • usage
  • age
  • ...

38
Optimal Page Replacement Algorithm
  • Easy to describe, but impossible to implement
    because the OS cannot look into the future
  • Useful as a yardstick for evaluating page
    replacement algorithms
  • Best (optimal) page replacement algorithm
  • a page fault occurs; a certain set of pages is in
    memory
  • label each page with the number of instructions
    that will be executed before that page is next
    used
  • replace the page with the highest number

39
NRU Page Replacement Algorithm
  • Status bits associated with each page
  • R: page referenced (read or written)
  • M: page modified (written)
  • Four classes
  • class 0: not referenced, not modified
  • class 1: not referenced, modified
  • class 2: referenced, not modified
  • class 3: referenced, modified
  • NRU removes a page at random from the
    lowest-numbered nonempty class
  • Low overhead
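
In code, the class is just the two status bits combined:

/* NRU class = 2*R + M, giving the four classes listed above. */
int nru_class(int referenced, int modified) {
    return 2 * referenced + modified;
}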

40
FIFO Page Replacement Algorithm
  • The OS maintains a list of all pages currently in
    memory
  • Pages are stored in the list by age
  • FIFO replaces the oldest page in case of a page
    fault
  • Low overhead
  • FIFO is rarely used in its pure form

[Figure: the FIFO list: pages A through H, loaded at times 0, 3, 7, 8,
12, 14, 15, and 18; A (loaded first) is at the head, H is the most
recently loaded page]
41
Second Chance Page Replacement Algorithm
  • Modification of FIFO
  • R bit: if the oldest page has its R bit set, the
    bit is cleared and the page is treated like a
    newly loaded page (it gets a second chance)
  • Second chance is a reasonable algorithm
  • But it is inefficient, because it is constantly
    moving pages around on its list

[Figure: pages A through H, loaded at times 0, 3, 7, 8, 12, 14, 15,
and 18; at a fault at time 20, A's R bit is set, so A is treated like
a newly loaded page and moved to the tail, giving B C D E F G H A]
42
Clock Page Replacement Algorithm
[Figure: the pages (A, B, C, D, E, ...) kept in a circular list in the
form of a clock, with a hand pointing to the oldest page. When a page
fault occurs, the page the hand points to is inspected; the action
taken depends on its R bit: R = 0: evict the page; R = 1: clear R and
advance the hand]
  • The hand points to the oldest page
  • R bit = 0: page not referenced in the last round
    => replace it
  • R bit = 1: page referenced in the last round
  • set the R bit to 0
  • advance until the first page with R = 0 is found
  • the hand advances to the next entry in both cases
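
A C sketch of the eviction step (array-based for brevity; real
implementations keep the pages in a circular list):

#include <stddef.h>

/* `hand` points at the oldest page. Returns the index of the frame to
   evict and leaves the hand just past it. Terminates because every
   pass clears R bits. */
int clock_evict(int r_bit[], size_t nframes, size_t *hand) {
    for (;;) {
        if (r_bit[*hand] == 0) {            /* not referenced: evict */
            int victim = (int)*hand;
            *hand = (*hand + 1) % nframes;
            return victim;
        }
        r_bit[*hand] = 0;                   /* referenced: second chance */
        *hand = (*hand + 1) % nframes;
    }
}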

43
LRU Page Replacement Algorithm
  • Pages that were heavily used in the last few
    instructions will probably be heavily used again
    in the next few instructions, and vice versa
  • Exact LRU is expensive (an ordered linked list of
    all pages in memory, updated on every memory
    reference)

[Figure: LRU with a hardware matrix of n x n bits, here n = 4: on a
reference to page k, row k is first set to all 1s, then column k is
cleared to all 0s; at any instant the row with the lowest binary value
belongs to the least recently used page. The figure shows the matrix
after each reference, for pages referenced in the order
0 1 2 3 2 1 0 3 2 3]
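
A C sketch of the matrix method for the 4-frame example above:

#include <stdio.h>

#define N 4                          /* number of page frames */
static int m[N][N];                  /* the n x n bit matrix */

/* On a reference to page k: set row k to all 1s, clear column k. */
void reference(int k) {
    for (int j = 0; j < N; j++) m[k][j] = 1;
    for (int i = 0; i < N; i++) m[i][k] = 0;
}

/* The row with the lowest binary value is the least recently used. */
int lru_page(void) {
    int best = 0, best_val = -1;
    for (int i = 0; i < N; i++) {
        int val = 0;
        for (int j = 0; j < N; j++) val = (val << 1) | m[i][j];
        if (best_val < 0 || val < best_val) { best_val = val; best = i; }
    }
    return best;
}

int main(void) {
    int refs[] = {0, 1, 2, 3, 2, 1, 0, 3, 2, 3};   /* order from the figure */
    for (int i = 0; i < 10; i++)
        reference(refs[i]);
    printf("LRU page: %d\n", lru_page());          /* prints 1 */
    return 0;
}
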
44
Modeling Paging Algorithms
  • Belady's anomaly
  • Stack algorithms
  • Distance string
  • Predicting page fault rates

45
Belady's Anomaly
  • The more page frames, the fewer page faults?
  • Example with 3 and 4 page frames

All page frames are initially empty (P marks a page fault):

Reference string   0 1 2 3 0 1 4 0 1 2 3 4

FIFO with 3 page frames:
Youngest page      0 1 2 3 0 1 4 4 4 2 3 3
                     0 1 2 3 0 1 1 1 4 2 2
Oldest page            0 1 2 3 0 0 0 1 4 4
Page faults        P P P P P P P     P P     (9 page faults)

FIFO with 4 page frames:
Youngest page      0 1 2 3 3 3 4 0 1 2 3 4
                     0 1 2 2 2 3 4 0 1 2 3
                       0 1 1 1 2 3 4 0 1 2
Oldest page              0 0 0 1 2 3 4 0 1
Page faults        P P P P     P P P P P P   (10 page faults)
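
The anomaly is easy to reproduce; a small FIFO simulation in C over
the same reference string:

#include <stdio.h>

int fifo_faults(const int *refs, int n, int nframes) {
    int frames[8], used = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) hit = 1;
        if (hit) continue;
        faults++;
        if (used < nframes)
            frames[used++] = refs[i];          /* fill an empty frame */
        else {
            frames[next] = refs[i];            /* evict the oldest page */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4};
    printf("3 frames: %d faults\n", fifo_faults(refs, 12, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, 12, 4));  /* 10 */
    return 0;
}
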
46
Stack Algorithms I
  • Every process generates a sequence of memory
    references as it runs
  • Each memory reference corresponds to a virtual
    page
  • Reference string: the ordered list of page
    numbers (= the process's memory accesses)
  • A paging system can be characterized by
  • the reference string of the executing process
  • the page replacement algorithm
  • the number of page frames available in memory

47
Stack Algorithms II
[Figure: the LRU stack after each item of the reference string below
is processed, with the most recently used page on top; the figure
marks 11 of the references as page faults (P)]

Reference string  0 2 1 3 5 4 6 3 7 4 7 3 3 5 5 3 1 1 1 7 2 3 4 1
Distance string   ∞ ∞ ∞ ∞ ∞ ∞ ∞ 4 ∞ 4 2 3 1 5 1 2 6 1 1 4 2 3 5 3

The distance string records, for each reference, the depth at which
the page was found in the stack (∞ if it had not been referenced
before).
48
Predicting Page Fault Rates
  • F_m = C_∞ + Σ_{k=m+1..n} C_k
  • F_m = the number of page faults occurring with
    the given distance string and m page frames;
    C_k = the number of times distance k occurs in
    the distance string (C_∞ for distance ∞)

Computation of the page fault rate from the distance string:

  C_1 = 4   (times distance 1 occurs)     F_1 = 20
  C_2 = 3                                 F_2 = 17
  C_3 = 3                                 F_3 = 14
  C_4 = 3                                 F_4 = 11
  C_5 = 2                                 F_5 = 9   (page faults with 5 frames)
  C_6 = 1   (times distance 6 occurs)     F_6 = 8
  C_7 = 0                                 F_7 = 8
  C_∞ = 8                                 F_∞ = 8

Each F_m is the sum of the C_k below C_m plus C_∞ (e.g., F_1 =
C_2 + C_3 + ... + C_7 + C_∞ = 20).
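
F_m can be computed directly from the counts above; a small C sketch:

#include <stdio.h>

int main(void) {
    int C[] = {4, 3, 3, 3, 2, 1, 0};   /* C_1 .. C_7 */
    int C_inf = 8;                     /* number of ∞ entries */
    for (int m = 1; m <= 7; m++) {
        int F = C_inf;
        for (int k = m + 1; k <= 7; k++)
            F += C[k - 1];             /* C[k-1] holds C_k */
        printf("F_%d = %d\n", m, F);   /* F_1=20, F_2=17, ..., F_7=8 */
    }
    return 0;
}
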
49
Local Versus Global Allocation Policies I
  • How should memory be allocated among the
    competing runnable processes?
  • Local page replacement: consider only the pages
    of the faulting process itself (can cause
    thrashing)
  • Global page replacement: consider all pages in
    memory
  • in general, better performance
  • monitoring of working set size and aging bits
  • Equal allocation: all processes get the same
    number of pages
  • Proportional allocation: the number of pages
    depends on the process size
  • Page fault frequency (PFF) allocation algorithm

50
Local Versus Global Allocation Policies II
[Figure: page fault rate as a function of the number of page frames
assigned. Above an upper threshold the fault rate is unacceptably high
=> the process needs more memory; below a lower threshold it is
unnecessarily low => the process has too much memory]
  • Even with paging, swapping is still needed:
    swapping is used to reduce the potential demand
    for memory, rather than to reclaim blocks of it
    for immediate use

51
Page Size
  • Can often be chosen by the OS designer, even if
    the hardware is designed with a fixed size
  • Determining the optimum page size requires
    balancing several competing factors
  • segment size is rarely an exact multiple of the
    page size (internal fragmentation)
  • keep only in memory what is used
  • page table size: access/loading time and space
    requirements
  • Usual page sizes: 512 bytes - 8 KB

52
Implementation Issues I
  • Design issues
  • second chance versus aging
  • local versus global page allocation
  • demand paging versus pre-paging
  • Locking pages in memory
  • interaction between virtual memory and I/O
  • pages engaged in I/O are locked in memory
  • Shared pages: users running the same program at
    the same time, e.g. an editor or compiler
  • not all pages are sharable
  • program text is sharable
  • data pages are not sharable

53
Implementation Issues II
  • Backing store: disk management
  • where on disk shall the pages be put when paged
    out?
  • specially allocated swap space: copy or no copy
    on disk?
  • or nothing allocated in advance
  • Paging daemons
  • a background process that makes sure enough free
    page frames are available
  • writes modified pages back to disk before they
    are replaced

54
Page Fault Handling
  • Sequence of events
  • the hardware traps to the OS kernel, saving the
    program counter on the stack
  • an assembly code routine is started to save the
    general registers and other volatile information
  • the OS discovers that a page fault has occurred
    and tries to determine which virtual page is
    needed
  • the OS checks whether the virtual address is
    valid and the protection consistent with the
    access
  • check whether the selected page frame is dirty
    (if so, it is written back first)
  • when the page frame is clean, the OS looks up the
    disk address where the needed page is, and
    schedules a disk operation to bring it in
  • a disk interrupt indicates that the page has
    arrived; the page tables are updated to reflect
    its position, and the frame is marked as being in
    normal state
  • the faulting instruction is backed up to its
    beginning and the program counter is reset
  • the faulting process is scheduled, and the OS
    returns to the assembly language routine that
    called it
  • this routine restores the registers and other
    volatile information, and returns to user space
    to continue execution, as if no fault had
    occurred!

55
Segmentation I
  • Paging gives a one-dimensional virtual memory;
    segmentation gives many
  • Segments: linear sequences of addresses (of
    variable length)
  • two or more separate/independent virtual address
    spaces; processes and their segments can grow and
    shrink independently
  • different kinds of protection are possible
  • Two-part address
  • segment number
  • address within the segment
  • Segmentation facilitates sharing procedures or
    data between several processes
  • e.g. a shared library

56
Segmentation II
  • One-dimensional address space with growing tables
  • e.g. segmented memory used for compiler tables

[Figure: a compiler's tables (symbol table, source text, constant
table, parse tree, call stack) in a one-dimensional address space. The
symbol table has bumped into the source text table, while the constant
table is not yet using all of the address space allocated to it]
57
Segmentation III
  • Segmented memory allows each table to grow or
    shrink independently of other tables

[Figure: the same tables in a segmented memory, each in its own
segment: segment 0 = symbol table, segment 1 = source text, segment
2 = constant table, segment 3 = parse tree, segment 4 = call stack]
58
Comparison of Paging and Segmentation
  • Need the programmer be aware that this technique
    is used?  Paging: no.  Segmentation: yes.
  • How many linear address spaces are there?
    Paging: 1.  Segmentation: many.
  • Can the total address space exceed the size of
    physical memory?  Paging: yes.  Segmentation: yes.
  • Can procedures and data be distinguished and
    separately protected?  Paging: no.
    Segmentation: yes.
  • Can tables whose size fluctuates be accommodated
    easily?  Paging: no.  Segmentation: yes.
  • Is sharing of procedures between users
    facilitated?  Paging: no.  Segmentation: yes.
  • Why was this technique invented?  Paging: to get
    a large linear address space without having to
    buy more physical memory.  Segmentation: to allow
    programs and data to be broken up into logically
    independent address spaces and to aid sharing and
    protection.
59
Implementation of Pure Segmentation
Pages have fixed size, segments have variable size
[Figure (a)-(e): a pure segmentation system in operation. Segments of
varying size (segment 0 (4k), segment 1 (8k), segment 2 (5k), segment
3 (8k), segment 4 (7k)) are loaded and replaced by others (segment 5
(4k), segment 6 (4k), segment 7 (5k)), leaving holes of 3k-10k between
them: (a)-(d) checkerboarding; (e) memory compaction removes the
holes]
60
Segmentation with Paging
  • Many large segments > main memory size => paging
  • MULTICS
  • Honeywell 6000 machines and descendants
  • per-program virtual memory of up to 2^18 = 256 K
    segments (each at most 64 K 36-bit words long)
  • 1 segment = 1 virtual memory
  • a segment table (itself paged) plus one page
    table per segment
  • 16-word high-speed associative memory

61
Segmentation with Paging MULTICS I
[Figure: the virtual memory in MULTICS. The descriptor segment holds
one 36-bit segment descriptor per segment (segment 0, 1, 2, ...); each
descriptor points to the page table for its segment, and each page
table entry (page 0, 1, 2, ...) points to a page. A segment descriptor
holds the main memory address of the page table (18 bits), the segment
length (9 bits), and 1- and 3-bit fields for the page size, a paged
flag, protection, and miscellaneous use]
62
Segmentation with Paging MULTICS II
The 34-bit virtual address in MULTICS:

  Segment number (18 bits) | Page number (6 bits) | Offset within page (10 bits)

Page number and offset together form the address within the segment.
  • the segment number is used to find the segment
    descriptor
  • a check is made to see if the segment's page
    table is in memory
  • the page table entry for the requested virtual
    page is examined
  • the offset is added to the page origin to give
    the main memory address where the word is located
  • the read or store takes place

63
Segmentation with Paging MULTICS III
[Figure: conversion of the two-part MULTICS address into a main memory
address: the segment number selects a descriptor from the descriptor
segment, the descriptor locates the segment's page table, the page
number selects the page frame, and the offset selects the word within
the page]
64
Segmentation with Paging MULTICS IV
A simplified version of the MULTICS associative memory; the comparison
of the incoming segment number and virtual page against all entries is
done in parallel:

  Segment number  Virtual page  Page frame  Protection  Age  Entry used?
  4               1             7           RW          13   1
  6               0             2           R           10   1
  12              3             1           RW          2    1
  2               1             0           X           7    1
  2               2             12          X           9    1
65
Summary -Memory Management
  • Simple systems - no swapping
  • monoprogramming (one process at a time)
  • multiprogramming (multiple processes at a time)
  • Systems with swapping
  • processes which do not fit into physical memory
    are swapped to disk
  • free space management with bit maps, hole lists,
    or the buddy system
  • Virtual memory
  • paging (single-level for small page tables,
    multi-level for large page tables)
  • Associative memory
  • Page replacement algorithms
  • Segmentation
  • Next time?
  • File System