CS 241 Section Week 10 (10/30/08)
Outline
- MP5 tips
- Memory Management
- Fragmentation
- Storage Placement Algorithms
- malloc revisited
- Paging
- Virtual Memory
- Why Virtual Memory
- Virtual Memory Addressing
- TLB (Translation Lookaside Buffer)
- Multilevel Page Table
- Inverted Page Table
MP5 tips
- What system call allows you to create a child process and make it execute a command?
- Do some reading on the exec() family of library functions (man 3 exec).
- Include a counter that increments for each command executed, regardless of whether the command succeeds or fails. (Be careful with blank lines.)
- Create a built-in 'cd' command which CHanges DIRectory in the shell.
- Create a built-in command '!N' to re-execute the Nth previous command.
- When the user presses Ctrl-C in the shell, instead of exiting, print the last 9 commands.
- Carefully read Readme.txt for details!
- Good luck!
Memory Management
Fragmentation
- External fragmentation
- Free space becomes divided into many small pieces
- Caused over time by allocating and freeing storage blocks of different sizes
- Internal fragmentation
- Results from reserving space that is never fully used
- Caused by allocating storage in fixed-size blocks
Storage Placement Algorithms
- Best Fit
- Produces the smallest leftover hole
- But creates small holes that cannot be reused
- First Fit
- Creates average-size holes
- Worst Fit
- Produces the largest leftover hole
- Makes it difficult to run large programs later
- First-Fit and Best-Fit are better than Worst-Fit in terms of SPEED and STORAGE UTILIZATION
Exercise
- Consider a swapping system in which memory consists of the following hole sizes, in memory order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB, 12KB, and 15KB. Which hole is taken for successive segment requests of (a) 12KB, (b) 10KB, (c) 9KB under each policy?
- First Fit? 20KB, 10KB, and 18KB
- Best Fit? 12KB, 10KB, and 9KB
- Worst Fit? 20KB, 18KB, and 15KB
malloc Revisited
- Free storage is kept as a list of free blocks
- Each block contains a size, a pointer to the next block, and the space itself
- When a request for space is made, the free list is scanned until a big-enough block is found
- Which storage placement algorithm is this?
- If such a block is found, it is returned and the free list is adjusted. Otherwise, another large chunk is obtained from the OS and linked into the free list
malloc Revisited (continued)

    typedef long Align;             /* for alignment to long boundary */

    union header {                  /* block header */
        struct {
            union header *ptr;      /* next block if on free list */
            unsigned size;          /* size of this block */
        } s;
        Align x;                    /* force alignment of blocks */
    };

    typedef union header Header;
Paging
- Divide memory into pages, all of equal size
- We don't need to assign contiguous chunks
- Internal fragmentation can only occur on the last page assigned to a process
- External fragmentation cannot occur at all
Virtual Memory
Why Virtual Memory?
- Use main memory as a cache for the disk
- Address space of a process can exceed physical memory size
- Sum of address spaces of multiple processes can exceed physical memory
- Simplify memory management
- Multiple processes resident in main memory
- Each process with its own address space
- Only active code and data is actually in memory
- Provide protection
- One process can't interfere with another, because they operate in different address spaces
- User processes cannot access privileged information; different sections of the address space have different permissions
Principle of Locality
- Program and data references within a process tend to cluster
- Only a few pieces of a process will be needed over a short period of time (active data or code)
- Possible to make intelligent guesses about which pieces will be needed in the future
- This suggests that virtual memory can work efficiently
VM Address Translation
- Parameters
- P = 2^p: page size (bytes)
- N = 2^n: virtual address limit
- M = 2^m: physical address limit
Page Table
- Keeps track of which pages are in memory
- Provides a mapping from virtual address to physical address
Handling a Page Fault
- On a page fault:
- Look for an empty frame in RAM
- May need to write a page out to disk to free one
- Load the faulting page into that frame
- Update the page table
Addressing
- 64MB RAM (2^26 bytes)
- 2^31 bytes (2GB) of total virtual memory
- 4KB page size (2^12 bytes)
- So we need 12 bits for the page offset; we can use the remaining bits for the page number
- With 19 bits, we have 2^19 pages (524,288 pages)

Virtual Address (31 bits) = Virtual Page Number (19 bits) | Page Offset (12 bits)
Address Conversion
- That 19-bit page address can be optimized in a variety of ways
- Translation Look-aside Buffer
Translation Lookaside Buffer (TLB)
- Each virtual memory reference can cause two physical memory accesses
- One to fetch the page table entry
- One to fetch the data
- To overcome this problem, a high-speed cache is set up for page table entries
- It contains the page table entries that have been used most recently (a cache for the page table)
Multilevel Page Tables
- Given
- 4KB (2^12) page size
- 32-bit address space
- 4-byte PTE
- Problem
- Would need a 4MB page table!
- 2^20 entries × 4 bytes
- Common solution: multi-level page tables
- e.g., a 2-level table (P6)
- Level 1 table: 1024 entries, each of which points to a Level 2 page table
- Level 2 table: 1024 entries, each of which points to a page
Summary: Multi-level Page Tables
- Instead of one large table, keep a tree of tables
- The top-level table stores pointers to lower-level page tables
- The first n bits of the page number index the top-level page table
- The second n bits index the 2nd-level page table
- Etc.
Example: Two-level Page Table
- 32-bit address space (4GB)
- 12-bit page offset (4KB pages)
- 20-bit page address
- First 10 bits index the top-level page table
- Second 10 bits index the 2nd-level page table
- 10 bits → 1024 entries × 4 bytes = 4KB = 1 page
- Need three memory accesses to read a memory location
Why use multi-level page tables?
- Split one large page table into many page-sized chunks
- The full table is typically 4 or 8 MB for a 32-bit address space
- Advantage: less memory must be reserved for page tables
- Unused or not recently used tables can be swapped out
- Disadvantage: increased access time on a TLB miss
- n+1 memory accesses for an n-level page table
Inverted Page Table
- Normal page table
- Virtual page number: index
- Physical page number: value
- Inverted page table
- Virtual page number: value
- Physical page number: index
- Need to scan the table for the right value to find the index
- More efficient way: use a hash table
Example
- Virtual address: 1010110 (virtual page number 1010, page offset 110)
- Scan the page table for a present entry whose virtual page number is 1010
- It is found at index 4 (100)
- Physical address: 100110 (frame index 100 concatenated with offset 110)
Why use inverted page tables?
- One entry for each page of physical memory
- vs. one per page of logical address space
- Advantage: less memory needed to store the page table
- Especially if address space >> physical memory
- Disadvantage: increased access time on a TLB miss
- Use a hash table to limit the search to one, or at most a few, extra memory accesses
Summary: Address Conversion
- That 19-bit page address can be optimized in a variety of ways
- Translation Look-aside Buffer
- m: memory cycle time, α: hit ratio, ε: TLB lookup time
- Effective access time: EAT = (m + ε)α + (2m + ε)(1 − α) = 2m + ε − mα
- Multilevel Page Table
- Similar to indirect pointers in i-nodes
- Split the 19 bits into multiple sections
- Inverted Page Table
- Much smaller, but slower and more difficult to look up
Summary: Page Tables
- 64KB logical address space
- 8 frames × 4KB = 32KB RAM
- 16-bit virtual address consists of:
- Page number (4 bits)
- Page offset (12 bits)
- Virtual page number: table index
- Physical frame number: value
- Present bit: is the page in memory?
Summary: Paging
- Divide memory into pages of equal size
- We don't need to assign contiguous chunks
- Internal fragmentation can only occur on the last page assigned to a process
- External fragmentation cannot occur at all
- Need to map contiguous logical memory addresses to disjoint pages
Summary: Virtual Memory
- RAM is expensive (but fast); disk is cheap (but slow)
- Need to find a way to use the cheaper memory
- Store memory that isn't frequently used on disk
- Swap pages between disk and memory as needed
- Treat main memory as a cache for pages on disk