Virtual Memory Operation and Structure - PowerPoint PPT Presentation
1
  • Virtual Memory Operation and Structure

2
The Basics of Virtual Memory
  • The main idea is to design a scheme that:
  • supports a larger address space than the physical one,
  • achieves this by swapping data in and out of disk,
  • gives each program the illusion of owning all of memory,
  • is transparent to the programmer,
  • and, on top of it all, delivers good performance!

3
The Basics of Virtual Memory
Remember the costs of disk usage, in terms of both access time and money: disk space is cheap, but disk access is not!
Side comment: disk access cost has three components: the time to move the disk head to the right cylinder (seek time), the time to reach the sector within the cylinder (rotational latency), and the transfer time.
4
The Basics of Virtual Memory
Divide physical memory into equally sized portions called pages.
Pages may reside in physical memory while their contents are being actively used, or on disk when their contents have not been used for a while.
[Diagram: page i moving between physical memory and disk]
5
A program may use a large number N of pages, even if (N × page size) is greater than the physical memory.
The pages that a program uses need not be contiguous, nor in order, in physical memory.
The programmer doesn't need to explicitly move pages to and from disk.
The programmer sees the address space as if it were in a memory much larger than the physical memory.
[Diagram: pages xemacs 0 through xemacs 3 scattered out of order across physical memory]
Question: What do we need to create this illusion for the programmer?
6
We need a mapping mechanism: Address Translation, from virtual address to physical address.
[Diagram: pages xemacs 0 through xemacs 3, contiguous in virtual memory, mapped to scattered frames in physical memory]
7
Virtual memory
  • Points to notice:
  • Virtual memory defines a virtual address space.
  • The virtual address space is made of pages of the same size as those of the physical address space.
  • The pages in the virtual address space are nicely contiguous and ordered.

[Diagram: virtual address space with pages xemacs 0 through xemacs 3 in order]
8
Address Translation for Virtual Memory
Virtual address
Example: page size = 4 KB = 2^12 bytes; physical memory size = 2^18 pages = 2^30 bytes = 1 GB; virtual memory size = 2^32 bytes = 4 GB.
Question: What about this mapping, from a larger space to a smaller one?
9
Address (showing bit positions)
Remember cache mapping?
10
Important Considerations
Cache miss vs. page fault: how do the penalties compare?
Pages should be large enough to amortize access time, but not too large. Can you explain why?
What about writing? Should we think write-through or write-back?
Should we use a fully associative or a direct-mapped scheme? Why?
Should we leave page fault handling to software, or should we make it a hardware task?
11
Mapping Mechanism for Virtual Memory
Page table
(each program has its own)
We use fully associative mapping.
12
Mapping Mechanism for Virtual Memory
Page table
(each program has its own)
Question How many entries in the page table?
13
Interlude Program, Process and Context
Each program uses the CPU registers as it wants. Each program has its own page table, and its own value for the page table register. If there's more than one program executing on the same CPU, they share the CPU. Sharing means that everything that represents one process is swapped in and out of the CPU, depending on whether the process is active or inactive. The operating system handles CPU sharing and, thus, also process context management.
14
The Virtual Memory System
[Figure: the virtual memory system, indexed by virtual page number]
Page faults are exceptions by definition.
15
Page Size, Page Table and Disk Space
  • Large pages help to amortize disk access time.
  • Large pages may lead to waste of memory.
  • Small pages make the page table bigger.
  • A large Virtual Address Space requires more disk space to be allocated to implement Virtual Memory.

16
Handling Page Faults
  • Note: a cache miss is handled by the Control Unit (hardware), while a page fault is handled by an exception handler (software).
  • Algorithm:
  • Exception is raised. Instruction address is
    loaded into EPC.
  • Locate page on disk.
  • Is there space for loading a new page into
    physical memory? If not, choose a page to
    replace.
  • Write replaced page to disk if necessary.
  • Read new page.
  • Restart instruction.

17
Implementing Page Replacement
  • Schemes
  • FIFO
  • Least Recently Used
  • Most Recently Used
  • Random
  • ...

LRU is hard to implement exactly, but it can be approximated in a number of ways. Second-chance is a commonly cited approximation.
18
Address Translation for Virtual Memory
Virtual address
Example: page size = 4 KB = 2^12 bytes; physical memory size = 2^18 pages = 2^30 bytes = 1 GB; virtual memory size = 2^32 bytes = 4 GB.
19
Mapping Mechanism for Virtual Memory
Page table
(each program has its own)
20
Translation Lookaside Buffer
  • Some important points to notice:
  • Page tables reside somewhere in physical memory.
  • Physical memory accesses are slow; that's why we have a cache.
  • To translate every virtual address into a physical address, we need to access the page table.
  • To improve performance, we create another cache: the TLB.
  • The TLB is a cache containing frequently accessed entries of the page table. To translate a virtual address, we look for the page-table info first in the TLB; if it's not there, something like a cache miss happens.

If the TLB is a cache, how does one determine its degree of associativity?
21
TLB
22
Virtual address
23
Processing a Memory Reference
24
25
Cache, VM, Processes and Context Switch
  • Facts
  • There is only one TLB, shared by all running
    processes.
  • There is only one cache memory, shared by all
    running processes.
  • All running processes share the CPU (and FPU)
    registers.
  • When the CPU switches from one process to
    another, we must
  • Save register contents,
  • Update page table register,
  • Flush the TLB (unless TLB entries are tagged with
    process id),
  • Flush the cache (remember the cache stores
    physical addresses).

26
The Memory Hierarchy
[Diagram: processor with split L1 cache (separate Instruction Cache and Data Cache, the Harvard architecture), a Level 2 Cache, Physical Memory on the bus, and Disk]
27
Fine Tuning a Program for Memory Performance
Row-order traversal (the inner loop walks consecutive elements):

for (i = 0; i < rows; i++)
  for (j = 0; j < columns; j++)
    A[i][j] = A[i][j] + i;

Column-order traversal (the inner loop jumps a whole row per access):

for (j = 0; j < columns; j++)
  for (i = 0; i < rows; i++)
    A[i][j] = A[i][j] + i;

[Diagram: A[0][0], A[0][1], ..., A[2][1] laid out from low to high memory addresses]

If you know how data is organized in memory, you can write your code in a way that minimizes cache misses and page faults.