Ping-Liang Lai - PowerPoint PPT Presentation

Transcript and Presenter's Notes



1
Lecture 5: Review of Memory Hierarchy (Appendix
C in textbook)
Computer Architecture
  • Ping-Liang Lai

2
Outline
  • C.0 Review From Last Lecture
  • C.1 Introduction
  • C.2 Cache Performance
  • C.3 Six Basic Cache Optimizations
  • C.4 Virtual Memory
  • C.5 Protection and Examples of Virtual Memory
  • C.6 Fallacies and Pitfalls
  • C.7 Concluding Remarks
  • C.8 Historical Perspective and References

3
Review From Last Lecture
  • Quantify and summarize performance
  • Ratios, geometric mean, multiplicative standard
    deviation
  • Fallacies & Pitfalls: benchmarks age, disks fail,
    a single point of failure is a danger
  • Control via state machines and microprogramming
  • Just overlap tasks; easy if tasks are independent
  • Speedup ≤ Pipeline depth; if ideal CPI is 1, then
    Speedup = Pipeline depth / (1 + Pipeline stall CPI)
  • Hazards limit performance on computers
  • Structural: need more HW resources
  • Data (RAW, WAR, WAW): need forwarding, compiler
    scheduling
  • Control: delayed branch, prediction
  • Exceptions, interrupts add complexity

4
Outline
  • C.0 Review From Last Lecture
  • C.1 Introduction
  • C.2 Cache Performance
  • C.3 Six Basic Cache Optimizations
  • C.4 Virtual Memory
  • C.5 Protection and Examples of Virtual Memory
  • C.6 Fallacies and Pitfalls
  • C.7 Concluding Remarks
  • C.8 Historical Perspective and References

5
C.1 Introduction
  • The Principle of Locality
  • Programs access a relatively small portion of
    the address space at any instant of time.
  • Two different types of locality:
  • Temporal Locality (Locality in Time): if an item
    is referenced, it will tend to be referenced
    again soon (e.g., loops, reuse).
  • Spatial Locality (Locality in Space): if an item
    is referenced, items whose addresses are close by
    tend to be referenced soon (e.g., straight-line
    code, array access).
  • For the last 15 years, HW has relied on locality
    for speed.
  • Locality is a property of programs that is
    exploited in machine design; a small illustration
    follows.
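A minimal sketch (not from the slides; the array shape and
the NumPy dependency are assumed) contrasting the two kinds
of locality: both loops compute the same sum, but the
row-major walk touches consecutive addresses (spatial
locality) while reusing `total` each iteration (temporal
locality).

    import numpy as np

    a = np.arange(1_000_000, dtype=np.int64).reshape(1000, 1000)

    # Row-major traversal: consecutive elements share cache
    # blocks, so most accesses hit (good spatial locality).
    total = 0
    for i in range(1000):
        for j in range(1000):
            total += a[i, j]

    # Column-major traversal: each access jumps 8000 bytes
    # ahead, defeating spatial locality and missing far more.
    total = 0
    for j in range(1000):
        for i in range(1000):
            total += a[i, j]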

6
Levels of the Memory Hierarchy
Level        Capacity    Access Time            Cost                   Managed by        Transfer Unit
Registers    100s bytes  <10s ns                --                     prog./compiler    1-8 bytes (instr. operands)
Cache        KBytes      10-100 ns              1-0.1 cents/bit        cache controller  8-128 bytes (blocks)
Main Memory  MBytes      200-500 ns             0.0001-0.00001 c/bit   OS                512-4K bytes (pages)
Disk         GBytes      10 ms (10,000,000 ns)  10^-5-10^-6 c/bit      user/operator     MBytes (files)
Tape         infinite    sec-min                10^-8 c/bit            --                --
(Upper levels are smaller and faster; lower levels are
larger and cheaper per bit.)
8
Since 1980, CPU has outpaced DRAM ...
  • Q. How do architects address this gap?

A. Put smaller, faster cache memories between
CPU and DRAM. Create a memory hierarchy.
[Figure: processor vs. memory performance, 1980-2000, log
scale. CPU performance grew ~60% per year (2x in 1.5
years); DRAM improved ~9% per year (2x in 10 years),
opening the gap.]
9
Memory Hierarchy Terminology
  • Hit: data appears in some block in the upper
    level (example: Block X)
  • Hit Rate: the fraction of memory accesses found
    in the upper level.
  • Hit Time: time to access the upper level, which
    consists of RAM access time + time to determine
    hit/miss.
  • Miss: data must be retrieved from a block in
    the lower level (Block Y)
  • Miss Rate = 1 - (Hit Rate).
  • Miss Penalty: time to replace a block in the
    upper level + time to deliver the block to the
    processor.
  • Hit Time << Miss Penalty (500 instructions on
    the Alpha 21264!)

10
Cache Measures Review
  • Hit rate: fraction found in that level
  • So high that we usually talk about the miss
    rate instead
  • Miss rate is as fallacious a metric as MIPS is
    for CPU performance; the real measure is average
    memory access time (see the worked example below)
  • Average memory access time = Hit time + Miss rate
    x Miss penalty (ns or clocks)
  • Miss penalty: time to replace a block from the
    lower level, including time to replace in the CPU
  • Access time: time to reach the lower level =
    f(latency to lower level)
  • Transfer time: time to transfer the block =
    f(BW between upper and lower levels)
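A hedged worked example (the numbers are assumed, not from
the slide): with a 1 ns hit time, a 2% miss rate, and a
50 ns miss penalty,

    \[
    \text{AMAT} = \text{Hit time} + \text{Miss rate} \times \text{Miss penalty}
                = 1\,\text{ns} + 0.02 \times 50\,\text{ns} = 2\,\text{ns}.
    \]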

11
Cache Performance Review (1/3)
  • Memory stall cycles: the number of cycles during
    which the processor is stalled waiting for a
    memory access.
  • Rewriting the CPU performance time (see the
    equations below).
  • The number of memory stall cycles depends on both
    the number of misses and the cost per miss, which
    is called the miss penalty.
  • The advantage of the last form is that its
    components can be easily measured.
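The standard Appendix C forms of the equations this slide
refers to (they appeared as images in the original deck):

    \[
    \text{CPU execution time} = (\text{CPU clock cycles} + \text{Memory stall cycles}) \times \text{Clock cycle time}
    \]
    \[
    \text{Memory stall cycles} = \text{Number of misses} \times \text{Miss penalty}
      = \text{IC} \times \frac{\text{Misses}}{\text{Instruction}} \times \text{Miss penalty}
    \]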

12
Cache Performance Review (2/3)
  • Miss penalty depends on:
  • Prior memory requests or memory refresh
  • Different clocks of the processor, bus, and
    memory
  • Thus, treating the miss penalty as a constant is
    a simplification.
  • Miss rate: the fraction of cache accesses that
    result in a miss (i.e., number of accesses that
    miss divided by number of accesses).
  • The formula can be written separately for reads
    and writes, then simplified by combining the two
    (see below).
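Reconstructed from Appendix C, the read/write split and its
combined simplification:

    \[
    \text{Memory stall cycles} = \text{IC} \times \frac{\text{Reads}}{\text{Instruction}} \times \text{Read miss rate} \times \text{Read miss penalty}
      + \text{IC} \times \frac{\text{Writes}}{\text{Instruction}} \times \text{Write miss rate} \times \text{Write miss penalty}
    \]
    \[
    \text{Memory stall cycles} = \text{IC} \times \frac{\text{Memory accesses}}{\text{Instruction}} \times \text{Miss rate} \times \text{Miss penalty}
    \]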

13
Example (C-5)
  • Assume we have a computer where the clock cycles
    per instruction (CPI) is 1.0 when all memory
    accesses hit in the cache. The only data accesses
    are loads and stores, and these total 50% of the
    instructions. If the miss penalty is 25 clock
    cycles and the miss rate is 2%, how much faster
    would the computer be if all instructions were
    cache hits?
  • Answer (worked below):
  • First compute the performance for the computer
    that always hits.
  • Now for the computer with the real cache, compute
    the memory stall cycles.
  • The total performance follows, and the
    performance ratio is the inverse of the
    execution times.
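The worked steps, following the Appendix C solution (the
equations were images in the original; memory accesses per
instruction = 1 fetch + 0.5 data = 1.5):

    \[
    \text{CPU time}_{\text{always hit}} = \text{IC} \times (1.0 + 0) \times \text{Clock cycle} = \text{IC} \times \text{Clock cycle}
    \]
    \[
    \text{Memory stall cycles} = \text{IC} \times (1 + 0.5) \times 0.02 \times 25 = \text{IC} \times 0.75
    \]
    \[
    \text{CPU time}_{\text{cache}} = \text{IC} \times (1.0 + 0.75) \times \text{Clock cycle} = 1.75 \times \text{IC} \times \text{Clock cycle}
    \]

The computer with no cache misses is therefore 1.75 times
faster.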

14
Cache Performance Review (3/3)
  • Miss rate is usually measured as misses per
    instruction rather than misses per memory
    reference.
  • For example, converting the previous example
    into misses per instruction (see below).
  • The latter formula is useful when you know the
    average number of memory accesses per instruction.
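The conversion, reconstructed with the previous example's
numbers (2% miss rate, 1.5 memory accesses per instruction):

    \[
    \frac{\text{Misses}}{\text{Instruction}} = \text{Miss rate} \times \frac{\text{Memory accesses}}{\text{Instruction}} = 0.02 \times 1.5 = 0.030,
    \]

i.e., 30 misses per 1000 instructions.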

15
Example (C-6)
  • To show the equivalency between the two miss rate
    equations, let's redo the example above, this
    time assuming a miss rate per 1000 instructions
    of 30. What is the memory stall time in terms of
    instruction count?
  • Answer:
  • Recompute the memory stall cycles (see below).
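The recomputation (reconstructed; it matches the 0.75 stall
cycles per instruction found earlier):

    \[
    \text{Memory stall cycles} = \text{IC} \times \frac{30}{1000} \times 25 = \text{IC} \times 0.75
    \]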

16
4 Questions for Memory Hierarchy
  • Q1: Where can a block be placed in the upper
    level?
  • Block placement
  • Q2: How is a block found if it is in the upper
    level?
  • Block identification
  • Q3: Which block should be replaced on a miss?
  • Block replacement
  • Q4: What happens on a write?
  • Write strategy

17
Q1: Where Can a Block Be Placed in the Upper
Level?
  • Block placement:
  • Direct mapped, fully associative, set associative
  • Direct mapped: (Block number) mod (Number of
    blocks in cache)
  • Set associative: (Block number) mod (Number of
    sets in cache)
  • # of sets ≤ # of blocks
  • n-way: n blocks in a set
  • 1-way = direct mapped
  • Fully associative: # of sets = 1

Direct mapped: block 12 can go only into block 4
(12 mod 8). Set associative: block 12 can go
anywhere in set 0 (12 mod 4). Fully associative:
block 12 can go anywhere.
[Figure: an 8-block cache drawn three ways (blocks
0-7; 2-way set associative gives sets 0-3), with
block-frame address 12 from memory mapping as
described above.]
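A small sketch (parameters taken from the figure) computing
the same placements:

    # 8-block cache; where can memory block 12 go?
    NUM_BLOCKS = 8

    def direct_mapped_block(block_number):
        return block_number % NUM_BLOCKS            # 12 % 8 = 4

    def set_associative_set(block_number, num_sets):
        return block_number % num_sets              # 2-way: 12 % 4 = set 0

    print(direct_mapped_block(12))                  # 4
    print(set_associative_set(12, NUM_BLOCKS // 2)) # 0
    # Fully associative: a single set, so block 12 may
    # occupy any of the 8 blocks.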
18
1 KB Direct Mapped Cache, 32B blocks
  • For a 2^N byte cache:
  • The uppermost (32 - N) bits are always the cache
    tag
  • The lowest M bits are the byte select (block
    size = 2^M)
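Plugging in this slide's numbers (N = 10 for 1 KB, M = 5
for 32-byte blocks), a 32-bit address splits as:

    \[
    \underbrace{\text{bits } 31..10}_{\text{cache tag (22 bits)}}\quad
    \underbrace{\text{bits } 9..5}_{\text{cache index (5 bits} \Rightarrow 32 \text{ blocks)}}\quad
    \underbrace{\text{bits } 4..0}_{\text{byte select (32-byte block)}}
    \]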

19
Set Associative Cache
  • N-way set associative: N entries for each cache
    index
  • N direct-mapped caches operating in parallel
  • Example: two-way set associative cache
  • Cache index selects a set from the cache
  • The two tags in the set are compared to the
    input tag in parallel
  • Data is selected based on the tag comparison
    result (a sketch follows).
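A hedged sketch (the geometry and data structures are
assumed, not from the slide) of the two-way lookup just
described:

    # 2-way set-associative cache: 4 sets, 32-byte blocks.
    WAYS, NUM_SETS, BLOCK_SIZE = 2, 4, 32

    cache = [[] for _ in range(NUM_SETS)]  # per set: up to WAYS (tag, data) pairs

    def lookup(address):
        index = (address // BLOCK_SIZE) % NUM_SETS  # cache index selects a set
        tag = address // (BLOCK_SIZE * NUM_SETS)    # remaining upper bits
        for stored_tag, data in cache[index]:       # both tags compared "in parallel"
            if stored_tag == tag:
                return data                         # data selected by the tag match
        return None                                 # miss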

20
Disadvantage of Set Associative Cache
  • N-way set associative cache versus direct mapped
    cache:
  • N comparators vs. 1
  • Extra MUX delay for the data
  • Data arrives AFTER the hit/miss decision and set
    selection
  • In a direct mapped cache, the cache block is
    available BEFORE hit/miss:
  • Possible to assume a hit and continue; recover
    later if it was a miss.

21
Q2: Block Identification
  • Tag on each block
  • No need to check index or block offset
  • Increasing associativity shrinks index, expands
    tag

[Figure: the address's index bits drive set select;
the block offset bits drive data select.]
Cache size = Associativity x 2^(index size) x
2^(offset size)
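A quick check with assumed numbers: a 2-way cache with an
8-bit index and a 5-bit offset gives

    \[
    \text{Cache size} = 2 \times 2^{8} \times 2^{5} = 16\,\text{KB}.
    \]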
22
Q3: Which block should be replaced on a miss?
  • Easy for Direct Mapped
  • Set Associative or Fully Associative
  • Random
  • LRU (Least Recently Used)
  • First in, first out (FIFO)

Data cache misses per 1000 instructions (lower is better):

Size     | 2-way               | 4-way               | 8-way
         | LRU    Ran.   FIFO  | LRU    Ran.   FIFO  | LRU    Ran.   FIFO
16 KB    | 114.1  117.3  115.5 | 111.7  115.1  113.3 | 109.0  111.8  110.4
64 KB    | 103.4  104.3  103.9 | 102.4  102.3  103.1 | 99.7   100.5  100.3
256 KB   | 92.2   92.1   92.5  | 92.1   92.1   92.5  | 92.1   92.1   92.5
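A sketch of LRU bookkeeping for one set (illustrative only;
real hardware uses per-set age bits, not a dictionary):

    from collections import OrderedDict

    class LRUSet:
        def __init__(self, ways):
            self.ways = ways
            self.blocks = OrderedDict()          # tag -> data, oldest first

        def access(self, tag):
            if tag in self.blocks:
                self.blocks.move_to_end(tag)     # hit: mark most recently used
                return True                      # hit
            if len(self.blocks) >= self.ways:
                self.blocks.popitem(last=False)  # evict the least recently used
            self.blocks[tag] = None              # fill the block
            return False                         # miss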
23
Q4: What Happens on a Write?
                                       Write-Through                Write-Back
Policy                                 Data written to the cache    Write data only to the cache;
                                       block and to lower-level     update the lower level when a
                                       memory                       block falls out of the cache
Debug                                  Easy                         Hard
Do read misses produce writes?         No                           Yes
Do repeated writes reach lower level?  Yes                          No

Additional option: let writes to an un-cached
address allocate a new cache line (write-allocate).
24
Write Buffers for Write-Through Caches
  • Q. Why a write buffer?
  • A. So the CPU doesn't stall on stores.
  • Q. Why a buffer, why not just one register?
  • A. Bursts of writes are common.
  • Q. Are Read After Write (RAW) hazards an issue
    for the write buffer?
  • A. Yes! Either drain the buffer before the next
    read, or check the write buffer and then send
    the read first.

25
Write-Miss Policy
  • Two options on a write miss:
  • Write allocate: the block is allocated on a
    write miss, followed by the write hit actions.
  • Write misses then act like read misses.
  • No-write allocate: write misses do not affect
    the cache; the block is modified only in the
    lower-level memory.
  • With no-write allocate, blocks stay out of the
    cache until the program tries to read them; with
    write allocate, even blocks that are only
    written will still be in the cache.

26
Write-Miss Policy Example
  • Example: Assume a fully associative write-back
    cache with many cache entries that starts empty.
    Below is a sequence of five memory operations
    (the address is in square brackets):
  • Write Mem[100]
  • Write Mem[100]
  • Read Mem[200]
  • Write Mem[200]
  • Write Mem[100]
  • What are the numbers of hits and misses
    (including reads and writes) when using no-write
    allocate versus write allocate?
  • Answer:

    Operation        No-write allocate   Write allocate
    Write Mem[100]   miss                miss
    Write Mem[100]   miss                hit
    Read Mem[200]    miss                miss
    Write Mem[200]   hit                 hit
    Write Mem[100]   miss                hit
    Total            4 misses, 1 hit     2 misses, 3 hits
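A quick sketch (the function and its structure are assumed,
not from the slide) that replays the sequence and confirms
the counts:

    def simulate(ops, write_allocate):
        cache, hits, misses = set(), 0, 0
        for kind, addr in ops:
            if addr in cache:
                hits += 1
            else:
                misses += 1
                # Fill on every read miss; fill on a write
                # miss only when allocating.
                if kind == "read" or write_allocate:
                    cache.add(addr)
        return hits, misses

    ops = [("write", 100), ("write", 100), ("read", 200),
           ("write", 200), ("write", 100)]
    print(simulate(ops, write_allocate=False))  # (1, 4): 1 hit, 4 misses
    print(simulate(ops, write_allocate=True))   # (3, 2): 3 hits, 2 misses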