CS 213 Introduction to Computer Systems: Thinking Inside the Box



1
CS 213 Introduction to Computer Systems: Thinking Inside the Box
  • Instructor: Brian M. Dennis (bmd@cs.northwestern.edu)
  • Teaching Assistant: Bin Lin (binlin@cs.northwestern.edu)

2
Today's Topics
  • More virtual memory
  • Dealing with huge page tables
  • Kernel data structures

3
Virtual Memory Key Idea
[Figure: virtual addresses 0 … P-1 translated through a page table to pages in physical memory, with unmapped pages held on disk]
  • Examples
  • Workstations, servers, modern PCs, etc.
  • Address translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (the page table)
4
Review of Abbreviations
  • Symbols
  • Components of the virtual address (VA)
  • TLBI: TLB index
  • TLBT: TLB tag
  • VPO: virtual page offset
  • VPN: virtual page number
  • Components of the physical address (PA)
  • PPO: physical page offset (same as VPO)
  • PPN: physical page number
  • CO: byte offset within cache line
  • CI: cache index
  • CT: cache tag

5
Simple Memory System Example
  • Addressing
  • 14-bit virtual addresses
  • 12-bit physical addresses
  • Page size: 64 bytes
  • 1-byte word size

[Figure: address fields. VA = VPN (8 bits) + VPO (6 bits); PA = PPN (6 bits) + PPO (6 bits)]
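As a concrete illustration (not from the slides), here is a minimal C sketch that splits addresses into the fields above for this example system; the field widths follow from the 64-byte page size, and the example VA and PPN are made up:

    #include <stdio.h>
    #include <stdint.h>

    /* Simple memory system: 14-bit VA, 12-bit PA, 64-byte pages (6 offset bits). */
    #define VPO_BITS 6
    #define VPO_MASK ((1u << VPO_BITS) - 1)   /* 0x3F */

    int main(void) {
        uint16_t va = 0x03D4;                     /* hypothetical virtual address */
        unsigned vpn = va >> VPO_BITS;            /* upper 8 bits: virtual page number */
        unsigned vpo = va & VPO_MASK;             /* lower 6 bits: virtual page offset */

        /* Translation copies VPO to PPO unchanged; the PPN comes from the page
           table.  Here we just show the field split, assuming a made-up PPN.   */
        unsigned ppn = 0x0D;
        unsigned pa  = (ppn << VPO_BITS) | vpo;   /* 12-bit physical address */

        printf("VA 0x%03x -> VPN 0x%02x, VPO 0x%02x\n", va, vpn, vpo);
        printf("PA 0x%03x (PPN 0x%02x, PPO 0x%02x)\n", pa, ppn, vpo);
        return 0;
    }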
6
Simple Memory System Page Table
  • Only showing the first 16 entries of the page table
  • Total: 2^8 = 256 entries
  • PTE: 6 bits
  • Total size: 256 bytes

7
Simple Memory System TLB
  • TLB
  • 16 entries
  • 4-way set associative (4 sets)

8
Simple Memory System Cache
  • Cache
  • 16 lines
  • 4-byte line size
  • Direct mapped

9
Multi-Level Page Tables
  • Given
  • 4 KB (2^12) page size
  • 32-bit address space
  • 4-byte PTE
  • Problem
  • A single-level table would need 4 MB: 2^20 PTEs × 4 bytes
  • Common solution
  • Multi-level page tables
  • e.g., 2-level table (P6)
  • Level 1 table: 1024 entries, each of which points to a Level 2 page table
  • Level 2 table: 1024 entries, each of which points to a page

[Figure: a Level 1 table whose entries point to Level 2 tables, each of whose entries points to a page]
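As a sanity check on the arithmetic above, a tiny C sketch (illustrative only, not from the slides); the key saving of the 2-level scheme is that level-2 tables are only allocated for the parts of the address space actually in use:

    #include <stdio.h>

    int main(void) {
        unsigned long pages = 1ul << 20;              /* 2^32 bytes / 2^12 bytes per page */
        unsigned long flat  = pages * 4;              /* one 4-byte PTE per page          */
        printf("flat table: %lu bytes (4 MB)\n", flat);

        /* 2-level: one 4 KB directory plus 4 KB per level-2 table that is
           actually needed; a sparse address space allocates only a few.      */
        unsigned long worst = 4096 + 1024ul * 4096;   /* if every table existed: ~4 MB */
        printf("2-level worst case: %lu bytes\n", worst);
        return 0;
    }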
10
Representation of Virtual Address Space
  • Simplified Example
  • 16-page virtual address space
  • May only have 8 physical pages
  • Flags
  • P: Is the entry in physical memory?
  • M: Has this part of the VA space been mapped?

11
Intel P6
  • Internal Designation for Successor to Pentium
  • Which had internal designation P5
  • Fundamentally Different from Pentium
  • Out-of-order, superscalar operation
  • Designed to handle server applications
  • Requires high performance memory system
  • Resulting Processors
  • PentiumPro (1996)
  • Pentium II (1997)
  • Incorporated MMX instructions
  • special instructions for parallel processing
  • L2 cache on same chip
  • Pentium III (1999)
  • Incorporated Streaming SIMD Extensions
  • More instructions for parallel processing

12
P6 Memory System
  • 32-bit address space
  • 4 KB page size
  • L1, L2, and TLBs
  • 4-way set associative
  • inst TLB
  • 32 entries
  • 8 sets
  • data TLB
  • 64 entries
  • 16 sets
  • L1 i-cache and d-cache
  • 16 KB
  • 32 B line size
  • 128 sets
  • L2 cache
  • unified
  • 128 KB -- 2 MB

[Figure: P6 processor package containing the instruction fetch unit, inst TLB, data TLB, L1 i-cache, L1 d-cache, and bus interface unit; the bus interface connects over a cache bus to the L2 cache and over the external system bus (e.g., PCI) to DRAM]
13
Overview of P6 Address Translation
[Figure: overview of P6 address translation. The CPU issues a 32-bit virtual address (VA), split into VPN (20 bits) and VPO (12 bits). The VPN is split into TLBT (16 bits) and TLBI (4 bits) to probe the TLB (16 sets, 4 entries/set); on a TLB miss the VPN is split into VPN1 and VPN2 (10 bits each) to walk the page tables starting at the PDBR (PDE, then PTE). The resulting PPN plus PPO forms the physical address (PA), split into CT (20 bits), CI (7 bits), and CO (5 bits) to access the L1 cache (128 sets, 4 lines/set); on an L1 miss the access goes to L2 and DRAM.]
14
P6 2-level Page Table Structure
Up to 1024 page tables
  • Page directory
  • 1024 4-byte page directory entries (PDEs) that
    point to page tables
  • one page directory per process.
  • page directory must be in memory when its process
    is running
  • always pointed to by PDBR
  • Page tables
  • 1024 4-byte page table entries (PTEs) that point
    to pages.
  • page tables can be paged in and out.

[Figure: one page directory of 1024 PDEs, each pointing to one of up to 1024 page tables of 1024 PTEs each]
15
P6 Page Directory Entry (PDE)
Format when P = 1:
  [31:12] page table physical base addr | [11:9] Avail | [8] G | [7] PS | [6] - | [5] A | [4] CD | [3] WT | [2] U/S | [1] R/W | [0] P=1

Format when P = 0:
  [31:1] available for OS (page table location in secondary storage) | [0] P=0

  • Page table physical base address: 20 most significant bits of the physical page table address (forces page tables to be 4 KB aligned)
  • Avail: these bits available for system programmers
  • G: global page (don't evict from TLB on task switch)
  • PS: page size, 4 KB (0) or 4 MB (1)
  • A: accessed (set by MMU on reads and writes, cleared by software)
  • CD: cache disabled (1) or enabled (0)
  • WT: write-through or write-back cache policy for this page table
  • U/S: user or supervisor mode access
  • R/W: read-only or read-write access
  • P: page table is present in memory (1) or not (0)
16
P6 Page Table Entry (PTE)
Format when P = 1:
  [31:12] page physical base address | [11:9] Avail | [8] G | [7] 0 | [6] D | [5] A | [4] CD | [3] WT | [2] U/S | [1] R/W | [0] P=1

Format when P = 0:
  [31:1] available for OS (page location in secondary storage) | [0] P=0

  • Page base address: 20 most significant bits of the physical page address (forces pages to be 4 KB aligned)
  • Avail: available for system programmers
  • G: global page (don't evict from TLB on task switch)
  • D: dirty (set by MMU on writes)
  • A: accessed (set by MMU on reads and writes)
  • CD: cache disabled or enabled
  • WT: write-through or write-back cache policy for this page
  • U/S: user/supervisor
  • R/W: read/write
  • P: page is present in physical memory (1) or not (0)
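A small C sketch (illustrative, not the kernel's actual definitions) of how the fields above could be unpacked from a 32-bit PTE; the macro names are invented for this example:

    #include <stdint.h>

    /* Hypothetical field extractors for a 32-bit P6-style PTE. */
    #define PTE_P(e)    ((e) & 0x1)            /* bit 0: present                   */
    #define PTE_RW(e)   (((e) >> 1) & 0x1)     /* bit 1: read/write                */
    #define PTE_US(e)   (((e) >> 2) & 0x1)     /* bit 2: user/supervisor           */
    #define PTE_A(e)    (((e) >> 5) & 0x1)     /* bit 5: accessed                  */
    #define PTE_D(e)    (((e) >> 6) & 0x1)     /* bit 6: dirty                     */
    #define PTE_PPN(e)  ((e) >> 12)            /* bits 31-12: physical page number */

    /* Physical address = PPN from the PTE concatenated with the 12-bit page offset. */
    static inline uint32_t make_pa(uint32_t pte, uint32_t vpo) {
        return (PTE_PPN(pte) << 12) | (vpo & 0xFFF);
    }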
17
P6 TLB Translation
[Figure: same P6 translation diagram as slide 13, here highlighting the TLB step: the VPN is split into TLBT (16 bits) and TLBI (4 bits) to probe the TLB (16 sets, 4 entries/set), falling back to the page-table walk (PDBR, PDE, PTE) on a miss]
18
P6 TLB
  • TLB entry (not all documented, so this is
    speculative)
  • V indicates a valid (1) or invalid (0) TLB entry
  • PD is this entry a PDE (1) or a PTE (0)?
  • tag disambiguates entries cached in the same set
  • PDE/PTE page directory or page table entry
  • Structure of the data TLB
  • 16 sets, 4 entries/set

19
Translating with the P6 TLB
  • 1. Partition VPN into TLBT and TLBI.
  • 2. Is the PTE for VPN cached in set TLBI?
  • 3. Yes: build the physical address.
  • 4. No: read the PTE (and PDE if not cached) from memory and build the physical address.
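A minimal C sketch of the set-associative TLB probe described above; the data structures and the miss/fill path are invented for illustration, and real hardware does all four comparisons in parallel:

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_SETS 16      /* P6 data TLB: 16 sets       */
    #define TLB_WAYS 4       /*              4 entries/set */

    struct tlb_entry { bool valid; uint32_t tag; uint32_t ppn; };
    static struct tlb_entry tlb[TLB_SETS][TLB_WAYS];

    /* Look up a 20-bit VPN; on a hit return true and the cached PPN. */
    static bool tlb_lookup(uint32_t vpn, uint32_t *ppn) {
        uint32_t tlbi = vpn & (TLB_SETS - 1);   /* low 4 bits index the set  */
        uint32_t tlbt = vpn >> 4;               /* remaining 16 bits are tag */
        for (int way = 0; way < TLB_WAYS; way++) {
            struct tlb_entry *e = &tlb[tlbi][way];
            if (e->valid && e->tag == tlbt) { *ppn = e->ppn; return true; }
        }
        return false;   /* miss: walk the page tables and refill the set */
    }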

[Figure: the numbered steps on the translation diagram: (1) split the VPN into TLBT and TLBI, (2) probe the TLB set, (3) on a hit form PPN | PPO, (4) on a miss fall back to the page table translation]
20
P6 page table translation
[Figure: same P6 translation diagram as slide 13, here highlighting the page-table walk: on a TLB miss the VPN is split into VPN1 and VPN2 (10 bits each), the PDBR locates the page directory, the PDE locates the page table, and the PTE supplies the PPN]
21
P6 Page Tables: VA → PA
[Figure: virtual address = VPN1 (10 bits) | VPN2 (10 bits) | VPO (12 bits). PDBR holds the physical address of the page directory; VPN1 is the word offset into the page directory, whose PDE (if P=1) gives the physical address of the page table base; VPN2 is the word offset into the page table, whose PTE (if P=1) gives the physical address of the page base; VPO is the word offset into both the virtual and physical page. Physical address = PPN (20 bits) | PPO (12 bits).]
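A simplified software model of the walk just described; this is an illustrative sketch (the MMU does it in hardware), and the physical-memory read helper is invented:

    #include <stdint.h>

    #define P_BIT 0x1u

    /* Hypothetical helper: read a 32-bit word at a physical address. */
    extern uint32_t phys_read32(uint32_t pa);

    /* Walk the 2-level table rooted at pdbr; returns 0 on a missing PDE/PTE
       (where the real MMU would raise a page fault instead). */
    static uint32_t translate(uint32_t pdbr, uint32_t va) {
        uint32_t vpn1 = (va >> 22) & 0x3FF;       /* index into page directory */
        uint32_t vpn2 = (va >> 12) & 0x3FF;       /* index into page table     */
        uint32_t vpo  = va & 0xFFF;               /* offset within the page    */

        uint32_t pde = phys_read32(pdbr + vpn1 * 4);
        if (!(pde & P_BIT)) return 0;             /* page table not present    */

        uint32_t pte = phys_read32((pde & ~0xFFFu) + vpn2 * 4);
        if (!(pte & P_BIT)) return 0;             /* page not present          */

        return (pte & ~0xFFFu) | vpo;             /* PPN | PPO                 */
    }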
22
Translating with the P6 Page Tables (case 1/1)
  • Case 1/1: page table and page both present.
  • MMU action
  • MMU builds the physical address and fetches the data word.
  • OS action
  • None

[Figure: PDE (p=1) and PTE (p=1) are both in memory, so the walk from PDBR through the page directory and page table reaches the data page in memory]
23
Translating with the P6 Page Tables (case 1/0)
  • Case 1/0: page table present but page missing.
  • MMU action
  • Page fault exception
  • Handler receives the following args:
  • The VA that caused the fault
  • Whether the fault was caused by a non-present page or a page-level protection violation
  • Read/write
  • User/supervisor

[Figure: PDE (p=1) is in memory but the PTE (p=0) marks the data page as not present; the data page resides on disk]
24
Translating with the P6 Page Tables (case 1/0, cont.)
  • OS action
  • Check for a legal virtual address.
  • Read the PTE through the PDE.
  • Find a free physical page (swapping out the current page if necessary).
  • Read the virtual page from disk into that physical page.
  • Restart the faulting instruction by returning from the exception handler.

[Figure: after the handler runs, PDE (p=1) and PTE (p=1) are both present and the data page is in memory]
25
Translating with the P6 Page Tables (case 0/1)
  • Case 0/1: page table missing but page present.
  • Introduces a consistency issue:
  • Potentially every page-out requires an update of the page table on disk.
  • Linux disallows this:
  • If a page table is swapped out, its data pages are swapped out too.

[Figure: PDE (p=0) marks the page table as not present (it resides on disk) even though the data page itself is in memory]
26
Translating with the P6 Page Tables (case 0/0)
  • Case 0/0: page table and page both missing.
  • MMU action
  • Page fault exception

[Figure: PDE (p=0); both the page table and the data page reside on disk]
27
Translating with the P6 Page Tables (case 0/0, cont.)
  • OS action
  • Swap in the page table.
  • Restart the faulting instruction by returning from the handler.
  • Like case 1/0 from here on.

[Figure: after the handler runs, PDE (p=1) and the page table are in memory, but PTE (p=0) still marks the data page as on disk]
28
P6 L1 Cache Access
[Figure: same P6 translation diagram as slide 13, here highlighting L1 cache access: the physical address is split into CT (20 bits), CI (7 bits), and CO (5 bits) to probe the L1 cache (128 sets, 4 lines/set), going to L2 and DRAM on a miss]
29
L1 Cache Access
  • Partition the physical address into CO, CI, and CT.
  • Use CT to determine if the line containing the word at address PA is cached in set CI.
  • If no: check L2.
  • If yes: extract the word at byte offset CO and return it to the processor.

[Figure: physical address = CT (20 bits) | CI (7 bits) | CO (5 bits) indexing the L1 cache (128 sets, 4 lines/set); misses go to L2 and DRAM]
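An illustrative C sketch of the steps above for a physically addressed, 4-way set-associative L1 like the P6's; the structures are invented, and real hardware compares all four tags in parallel:

    #include <stdint.h>
    #include <stdbool.h>

    #define L1_SETS   128   /* 7 CI bits */
    #define L1_WAYS   4
    #define LINE_SIZE 32    /* 5 CO bits */

    struct cache_line { bool valid; uint32_t tag; uint8_t data[LINE_SIZE]; };
    static struct cache_line l1[L1_SETS][L1_WAYS];

    /* Return true on a hit and copy out the addressed byte. */
    static bool l1_lookup(uint32_t pa, uint8_t *byte) {
        uint32_t co = pa & (LINE_SIZE - 1);          /* byte offset within line */
        uint32_t ci = (pa >> 5) & (L1_SETS - 1);     /* set index               */
        uint32_t ct = pa >> 12;                      /* tag                     */
        for (int way = 0; way < L1_WAYS; way++) {
            struct cache_line *line = &l1[ci][way];
            if (line->valid && line->tag == ct) { *byte = line->data[co]; return true; }
        }
        return false;   /* miss: fetch the line from L2/DRAM and fill the set */
    }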
30
Speeding Up L1 Access
[Figure: the 7 CI bits and 5 CO bits of the physical address lie entirely within the 12-bit page offset, which passes through address translation unchanged; only the CT bits depend on the translated PPN]
  • Observation
  • The bits that determine CI are identical in the virtual and physical address.
  • Can index into the cache while address translation is taking place.
  • Then check with the CT from the physical address.
  • "Virtually indexed, physically tagged."
  • The cache is carefully sized to make this possible.
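A quick check of the sizing claim, derived from the cache and page parameters given on slide 12 (not stated explicitly here):

    per-way L1 capacity = 128 sets × 32 B/line = 4 KB = page size
    CI + CO bits        = 7 + 5 = 12 bits      = VPO bits

So every bit used for the set index and byte offset comes from the page offset, which translation leaves unchanged; only the tag comparison has to wait for the PPN.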

31
Transition to Kernel Data Structures
32
Linux VM as Collection of Areas
[Figure: the task_struct's mm field points to an mm_struct holding pgd and mmap; mmap heads a vm_next-linked list of vm_area_struct nodes (vm_start, vm_end, vm_prot, vm_flags), one per area of process virtual memory: shared libraries (0x40000000), data (0x0804a020), and text (0x08048000)]
  • pgd
  • Page directory address
  • vm_prot
  • Read/write permissions for this area
  • vm_flags
  • Shared with other processes or private to this process
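A simplified C sketch of the relationships in the figure; the field names follow the slide, but this is not the actual kernel definition, which has many more fields:

    /* One contiguous area of virtual memory (text, data, a mapped file, ...). */
    struct vm_area_struct {
        unsigned long vm_start;             /* lowest address in the area   */
        unsigned long vm_end;               /* first address past the area  */
        unsigned long vm_prot;              /* read/write permissions       */
        unsigned long vm_flags;             /* shared or private            */
        struct vm_area_struct *vm_next;     /* next area in the list        */
    };

    /* Per-process view of the address space. */
    struct mm_struct {
        void *pgd;                          /* page directory address       */
        struct vm_area_struct *mmap;        /* head of the area list        */
    };

    struct task_struct {
        struct mm_struct *mm;               /* this process's address space */
        /* ... scheduling state, open files, etc. ... */
    };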
33
Linux Page Fault Handling
  • Is the VA legal?
  • i.e., is it in an area defined by some vm_area_struct?
  • If not: signal a segmentation violation (e.g., case (1) in the figure)
  • Is the operation legal?
  • i.e., can the process read/write this area?
  • If not: signal a protection violation (e.g., case (2))
  • If OK, handle the fault
  • e.g., case (3)

[Figure: (1) a read outside any area (segmentation violation), (2) a write to the read-only text area (protection violation), (3) a legal read in the data area that simply needs its page brought in]
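An illustrative sketch of that decision sequence in C; the area-list walk mirrors the slide, but the helper functions and names are simplified inventions, not the real kernel routines:

    #include <stdbool.h>

    struct mm_struct;            /* see the sketch after slide 32 */
    struct vm_area_struct;

    /* Hypothetical helpers standing in for the real kernel routines. */
    extern struct vm_area_struct *find_area(struct mm_struct *mm, unsigned long addr);
    extern bool access_allowed(struct vm_area_struct *vma, bool is_write);
    extern void signal_segv(void);
    extern void signal_prot_violation(void);
    extern void demand_page_in(struct vm_area_struct *vma, unsigned long addr);

    void handle_page_fault(struct mm_struct *mm, unsigned long addr, bool is_write) {
        struct vm_area_struct *vma = find_area(mm, addr);  /* vm_start <= addr < vm_end */
        if (!vma) {                                /* case (1): not in any area      */
            signal_segv();
            return;
        }
        if (!access_allowed(vma, is_write)) {      /* case (2): illegal operation    */
            signal_prot_violation();
            return;
        }
        demand_page_in(vma, addr);                 /* case (3): legal, bring page in */
    }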
34
Memory Mapping
  • Creation of a new VM area is done via memory mapping
  • Create a new vm_area_struct and page tables for the area
  • The area can be backed by (i.e., get its initial values from):
  • A regular file on disk (e.g., an executable object file)
  • Initial page bytes come from a section of the file
  • Nothing (e.g., bss)
  • Initial page bytes are zeros
  • Dirty pages are swapped back and forth to a special swap file
  • Key point: no virtual page is copied into physical memory until it is referenced!
  • Known as demand paging
  • Crucial for time and space efficiency

35
User-Level Memory Mapping
  • void *mmap(void *start, int len, int prot, int flags, int fd, int offset)
  • Map len bytes starting at offset offset of the file specified by file descriptor fd, preferably at address start (usually 0 for "don't care").
  • prot: PROT_READ, PROT_WRITE
  • flags: MAP_PRIVATE, MAP_SHARED
  • Returns a pointer to the mapped area.
  • Example: fast file copy
  • Useful for applications like Web servers that need to quickly copy files.
  • mmap allows file transfers without copying into user space.

36
mmap() Example: Fast File Copy

    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>

    /*
     * mmap.c - a program that uses mmap
     * to copy itself to stdout
     */
    int main()
    {
        struct stat stat;
        int fd, size;
        char *bufp;

        /* open the file & get its size */
        fd = open("./mmap.c", O_RDONLY);
        fstat(fd, &stat);
        size = stat.st_size;

        /* map the file to a new VM area */
        bufp = mmap(0, size, PROT_READ, MAP_PRIVATE, fd, 0);

        /* write the VM area to stdout */
        write(1, bufp, size);
        /* like the following, but the OS does the work:
           for (int i = 0; i < size; i++)
               putc(bufp[i], stdout);                      */
        return 0;
    }

37
Exec() Revisited
  • To run a new program p in the current process using exec():
  • Free the vm_area_structs and page tables for the old areas.
  • Create new vm_area_structs and page tables for the new areas:
  • Stack, bss, data, text, shared libs.
  • Text and data backed by the ELF executable object file.
  • bss and stack initialized to zero.
  • Set the PC to the entry point in .text.
  • Linux will swap in code and data pages as needed.

[Figure: the resulting process image. Process-specific data structures (page tables, task and mm structs) describe a virtual address space with, from high to low addresses: kernel code/data/stack (kernel VM, same for each process), the user stack (esp), the memory-mapped region for shared libraries (libc.so .text and .data), the runtime heap grown via malloc/brk, demand-zero uninitialized data (.bss), initialized data (.data), program text (.text), and a forbidden region at address 0]
38
Fork() Revisited
  • To create a new process using fork():
  • Make copies of the old process's mm_struct, vm_area_structs, and page tables.
  • At this point the two processes are sharing all of their pages.
  • How to get separate spaces without copying all the virtual pages from one space to another?
  • Copy-on-write technique (see the sketch after this list).
  • Copy-on-write:
  • Make pages of writeable areas read-only.
  • Flag the vm_area_structs for these areas as private "copy-on-write".
  • Writes by either process to these pages will cause page faults.
  • The fault handler recognizes copy-on-write, makes a copy of the page, and restores write permissions.
  • Net result:
  • Copies are deferred until absolutely necessary (i.e., when one of the processes tries to modify a shared page).
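An illustrative sketch of what the copy-on-write fault path does; this is simplified, the names and helpers are invented, and the real kernel also updates reference counts, the TLB, and so on:

    #include <string.h>

    #define PAGE_SIZE 4096

    struct vm_area_struct;   /* flagged private copy-on-write, as above */

    /* Hypothetical helpers for this sketch. */
    extern void *page_of(unsigned long addr);                 /* current (shared) physical page */
    extern void *alloc_page(void);                            /* grab a free physical page      */
    extern void  remap(unsigned long addr, void *new_page);   /* repoint this process's PTE     */
    extern void  set_writable(unsigned long addr);            /* restore write permission       */

    /* Called when a write faults on a page in a copy-on-write area. */
    void handle_cow_fault(struct vm_area_struct *vma, unsigned long addr) {
        (void)vma;
        void *shared = page_of(addr);
        void *copy   = alloc_page();
        memcpy(copy, shared, PAGE_SIZE);    /* private copy for the writing process */
        remap(addr, copy);                  /* this process now maps its own copy   */
        set_writable(addr);                 /* writes no longer fault               */
    }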

39
Memory System Summary
  • Cache memory
  • Purely a speed-up technique
  • Behavior invisible to the application programmer and OS
  • Implemented totally in hardware
  • Virtual memory
  • Supports many OS-related functions
  • Process creation
  • Initialization, shared libraries
  • Forking → launching a new thread of control
  • Exec → launching a new program
  • Task switching
  • Protection
  • Combination of hardware and software implementation
  • Software: management of tables, allocations
  • Hardware: access of tables
  • Hardware: caching of table entries (TLB)

40
  • If time: intro to dynamic memory management

41
Malloc Package
  • #include <stdlib.h>
  • void *malloc(size_t size)
  • If successful:
  • Returns a pointer to a memory block of at least size bytes, (typically) aligned to an 8-byte boundary.
  • If size == 0, returns NULL.
  • If unsuccessful: returns NULL (0) and sets errno.
  • void free(void *p)
  • Returns the block pointed at by p to the pool of available memory.
  • p must come from a previous call to malloc or realloc.
  • void *realloc(void *p, size_t size)
  • Changes the size of block p and returns a pointer to the new block.
  • Contents of the new block are unchanged up to the min of the old and new size.

42
Malloc Example
    void foo(int n, int m) {
        int i, *p;

        /* allocate a block of n ints */
        if ((p = (int *) malloc(n * sizeof(int))) == NULL) {
            perror("malloc"); exit(0);
        }
        for (i = 0; i < n; i++) p[i] = i;

        /* add m more ints to the end of the p block */
        if ((p = (int *) realloc(p, (n + m) * sizeof(int))) == NULL) {
            perror("realloc"); exit(0);
        }
        for (i = n; i < n + m; i++) p[i] = i;

        /* print the new array */
        for (i = 0; i < n + m; i++) printf("%d\n", p[i]);

        free(p);   /* return p to the available memory pool */
    }
43
Assumptions
  • Assumptions
  • Memory is word addressed (each word can hold a pointer, probably 4 bytes)

[Figure legend: in the heap diagrams that follow, squares are words; shading distinguishes allocated words and blocks (e.g., a 4-word allocated block) from free words and blocks (e.g., a 3-word free block)]
44
Allocation Examples
p1 = malloc(4)
p2 = malloc(5)
p3 = malloc(6)
free(p2)
p4 = malloc(2)
45
Constraints
  • Applications
  • Can issue an arbitrary sequence of allocation and free requests
  • Free requests must correspond to an allocated block
  • Allocators
  • Can't control the number or size of allocated blocks
  • Must respond immediately to all allocation requests
  • i.e., can't reorder or buffer requests
  • Must allocate blocks from free memory
  • i.e., can only place allocated blocks in free memory
  • Must align blocks so they satisfy all alignment requirements
  • 8-byte alignment for GNU malloc (libc malloc) on Linux boxes
  • Can only manipulate and modify free memory
  • Can't move allocated blocks once they are allocated
  • i.e., compaction is not allowed

46
Goals of Good malloc/free
  • Primary goals
  • Good time performance for malloc and free
  • Ideally should take constant time (not always
    possible)
  • Should certainly not take linear time in the
    number of blocks
  • Good space utilization
  • User-allocated structures should be a large fraction of the heap.
  • Want to minimize fragmentation.
  • Some other goals
  • Good locality properties
  • Structures allocated close in time should be
    close in space
  • Similar objects should be allocated close in
    space
  • Robust
  • Obeys constraints!!
  • Can check that free(p1) is on a valid allocated
    object p1
  • Can check that memory references are to allocated
    space

47
Wrapup
  • Next lecture
  • Dynamic memory management techniques
  • Reading: 10.7 - 10.8
  • 10.9 to get ahead on memory allocation
  • HW 3 due Monday
  • No class next Wednesday