COMP 206 Computer Architecture and Implementation
Unit 8b: Cache Misses

Transcript and Presenter's Notes


1
COMP 206 Computer Architecture and Implementation
Unit 8b: Cache Misses
  • Siddhartha Chatterjee
  • Fall 2000

2
Cache Performance
3
Block Size Tradeoff
  • In general, larger block sizes take advantage of
    spatial locality, BUT
  • Larger block size means larger miss penalty
  • It takes longer to fill up the block
  • If block size is too big relative to cache size,
    miss rate will go up
  • Too few cache blocks
  • Average Access Time
    = Hit Time + Miss Penalty x Miss Rate
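  For example, with assumed (purely illustrative) numbers: hit time
  1 cycle, miss rate 5%, miss penalty 40 cycles:
    AMAT = 1 + 0.05 x 40 = 3 cycles
  Doubling the block size might cut the miss rate to 4% but raise the
  miss penalty to 60 cycles, giving AMAT = 1 + 0.04 x 60 = 3.4 cycles:
  a larger block is not automatically a win.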

4
Sources of Cache Misses
  • Compulsory (cold start or process migration,
    first reference): first access to a block
  • Cold fact of life: not a whole lot you can do
    about it
  • Conflict/Collision/Interference
  • Multiple memory locations mapped to the same
    cache location
  • Solution 1: Increase cache size
  • Solution 2: Increase associativity
  • Capacity
  • Cache cannot contain all blocks accessed by the
    program
  • Solution 1: Increase cache size
  • Solution 2: Restructure program
  • Coherence/Invalidation
  • Other process (e.g., I/O) updates memory

5
The 3C Model of Cache Misses
  • Based on comparison with another cache
  • Compulsory: The first access to a block is not in
    the cache, so the block must be brought into the
    cache. These are also called cold start misses or
    first reference misses. (Misses even in an
    infinite cache)
  • Capacity: If the cache cannot contain all the
    blocks needed during execution of a program (its
    working set), capacity misses will occur due to
    blocks being discarded and later retrieved.
    (Misses in a fully associative cache of size X)
  • Conflict: If the block-placement strategy is
    set-associative or direct mapped, conflict misses
    (in addition to compulsory and capacity misses)
    will occur because a block can be discarded and
    later retrieved if too many blocks map to its
    set. These are also called collision misses or
    interference misses. (Misses in an A-way
    associative cache of size X but not in a fully
    associative cache of size X)

6
Sources of Cache Misses
                     Direct Mapped   N-way Set Associative   Fully Associative
  Cache Size         Big             Medium                  Small
  Compulsory Miss    Same            Same                    Same
  Conflict Miss      High            Medium                  Zero
  Capacity Miss      Low(er)         Medium                  High
  Invalidation Miss  Same            Same                    Same

If you are going to run billions of instructions,
compulsory misses are insignificant.
7
3Cs Absolute Miss Rate
8
3Cs Relative Miss Rate
9
How to Improve Cache Performance
  • Latency
  • Reduce miss rate (Section 5.3 of HP2)
  • Reduce miss penalty (Section 5.4 of HP2)
  • Reduce hit time (Section 5.5 of HP2)
  • Bandwidth
  • Increase hit bandwidth
  • Increase miss bandwidth

10
1. Reduce Misses via Larger Block Size
11
2. Reduce Misses via Higher Associativity
  • 2:1 Cache Rule
  • Miss rate of a direct-mapped cache of size N
    ≈ miss rate of a 2-way set-associative cache of
    size N/2
  • Not merely empirical
  • Theoretical justification in Sleator and Tarjan,
    "Amortized efficiency of list update and paging
    rules", CACM, 28(2):202-208, 1985
  • Beware: Execution time is the only final measure!
  • Will clock cycle time increase?
  • Hill [1988] suggested hit time increases about
    10% for an external cache and 2% for an internal
    cache for 2-way vs. 1-way

12
Example: Average Memory Access Time vs. Miss Rate
  • Example: assume clock cycle time is 1.10 for
    2-way, 1.12 for 4-way, 1.14 for 8-way vs. the clock
    cycle time of direct mapped
  • (Red means A.M.A.T. not improved by more
    associativity)
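  For instance, with an assumed miss penalty of 10 cycles, a
  direct-mapped miss rate of 5%, and a 2-way miss rate of 4%
  (illustrative numbers):
    A.M.A.T.(DM)    = 1.00 + 0.05 x 10 = 1.50
    A.M.A.T.(2-way) = 1.10 + 0.04 x 10 = 1.50
  The slower clock exactly cancels the lower miss rate here; higher
  associativity pays off only when misses are expensive enough.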

13
3. Reduce Conflict Misses via Victim Cache
  • How to combine the fast hit time of direct mapped
    yet still avoid conflict misses?
  • Add a small, highly associative buffer to hold
    data discarded from the cache (sketched in code
    below)
  • Jouppi [1990]: a 4-entry victim cache removed 20%
    to 95% of conflicts for a 4 KB direct-mapped data
    cache

(Figure: direct-mapped cache with TAG/DATA arrays, a small victim
cache beside it, and a path to memory)
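A minimal C sketch of the swap behavior, assuming a direct-mapped
cache plus a 4-entry fully associative victim buffer with round-robin
replacement; all sizes and names are illustrative, not Jouppi's
actual design:

/* Direct-mapped cache backed by a small victim cache. On a primary
 * miss that hits in the victim cache, the two blocks are swapped
 * (a slow hit); on a true miss, the displaced block moves into the
 * victim cache. */
#include <stdbool.h>
#include <stdint.h>

#define NBLOCKS        128   /* direct-mapped blocks (illustrative) */
#define VICTIM_ENTRIES 4     /* Jouppi's 4-entry victim cache */

static uint32_t cache_tag[NBLOCKS];
static bool     cache_valid[NBLOCKS];
static uint32_t victim_blk[VICTIM_ENTRIES];   /* full block addresses */
static bool     victim_valid[VICTIM_ENTRIES];

bool access_block(uint32_t blk)
{
    uint32_t index = blk % NBLOCKS;
    uint32_t tag   = blk / NBLOCKS;

    if (cache_valid[index] && cache_tag[index] == tag)
        return true;                        /* ordinary fast hit */

    /* Miss in the direct-mapped cache: probe the victim cache. */
    for (int i = 0; i < VICTIM_ENTRIES; i++) {
        if (victim_valid[i] && victim_blk[i] == blk) {
            /* Swap: promote the victim entry, demote the loser. */
            victim_blk[i]   = cache_tag[index] * NBLOCKS + index;
            victim_valid[i] = cache_valid[index];
            cache_tag[index]   = tag;
            cache_valid[index] = true;
            return true;                    /* slow hit via victim cache */
        }
    }

    /* True miss: the displaced block goes to the victim cache
     * (round-robin replacement, for brevity). */
    static int rr;
    if (cache_valid[index]) {
        victim_blk[rr]   = cache_tag[index] * NBLOCKS + index;
        victim_valid[rr] = true;
        rr = (rr + 1) % VICTIM_ENTRIES;
    }
    cache_tag[index]   = tag;
    cache_valid[index] = true;
    return false;
}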
14
4. Reduce Conflict Misses via Pseudo-Associativity
  • How to combine the fast hit time of direct mapped
    and the lower conflict misses of a 2-way SA cache?
  • Divide the cache: on a miss, check the other half
    of the cache to see if the block is there; if so,
    we have a pseudo-hit (slow hit). See the sketch
    below.
  • Drawback: CPU pipeline design is hard if a hit
    takes 1 or 2 cycles
  • Better for caches not tied directly to processor

(Figure: access timeline showing hit time, pseudo hit time, and miss
penalty)
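A minimal sketch of one common pseudo-associative lookup, assuming
the "other half" is found by inverting the most significant index
bit; names and sizes are illustrative:

#include <stdbool.h>
#include <stdint.h>

#define NBLOCKS 128              /* illustrative; must be a power of two */

static uint32_t block_at[NBLOCKS];   /* full block address per frame */
static bool     valid[NBLOCKS];

/* Returns 0 for a fast hit, 1 for a pseudo (slow) hit, 2 for a miss. */
int pseudo_access(uint32_t blk)
{
    uint32_t index = blk % NBLOCKS;

    if (valid[index] && block_at[index] == blk)
        return 0;                             /* fast hit in primary frame */

    uint32_t buddy = index ^ (NBLOCKS / 2);   /* flip top index bit */
    if (valid[buddy] && block_at[buddy] == blk) {
        /* Swap the two frames so the next access is a fast hit. */
        uint32_t b = block_at[index]; bool v = valid[index];
        block_at[index] = block_at[buddy]; valid[index] = valid[buddy];
        block_at[buddy] = b;               valid[buddy] = v;
        return 1;                             /* pseudo-hit (slow hit) */
    }
    return 2;                                 /* genuine miss */
}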
15
5. Reduce Misses by Hardware Prefetching of
Instructions and Data
  • Instruction prefetching
  • Alpha 21064 fetches 2 blocks on a miss
  • Extra block placed in stream buffer
  • On miss, check stream buffer (sketched below)
  • Works with data blocks too
  • Jouppi [1990]: 1 data stream buffer caught 25% of
    misses from a 4KB cache; 4 streams caught 43%
  • Palacharla & Kessler [1994]: for scientific
    programs, 8 streams caught 50% to 70% of misses
    from two 64KB, 4-way set-associative caches
  • Prefetching relies on extra memory bandwidth that
    can be used without penalty
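A minimal sketch of the stream-buffer check, with a single-entry
buffer and hypothetical helper routines standing in for the real
cache and memory (the 21064's actual mechanism is more involved):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the real cache and memory system. */
static bool cache_lookup(uint32_t blk) { (void)blk; return false; }
static void cache_fill(uint32_t blk)   { printf("fill  %u\n", (unsigned)blk); }
static void memory_fetch(uint32_t blk) { printf("fetch %u\n", (unsigned)blk); }

static uint32_t stream_blk;     /* block address held in the buffer */
static bool     stream_valid;

void stream_access(uint32_t blk)
{
    if (cache_lookup(blk))
        return;                              /* ordinary cache hit */

    if (stream_valid && stream_blk == blk)
        cache_fill(blk);                     /* hit in the stream buffer */
    else {
        memory_fetch(blk);                   /* true miss */
        cache_fill(blk);
    }

    /* Prefetch the next sequential block into the stream buffer. */
    stream_blk   = blk + 1;
    stream_valid = true;
    memory_fetch(stream_blk);
}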

16
6. Reducing Misses by Software Prefetching Data
  • Data prefetch
  • Load data into register (HP PA-RISC loads):
    binding
  • Cache prefetch: load into cache (MIPS IV,
    PowerPC, SPARC v9): non-binding (see the sketch
    below)
  • Special prefetching instructions cannot cause
    faults: a form of speculative execution
  • Issuing prefetch instructions takes time
  • Is the cost of prefetch issues < the savings in
    reduced misses?
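The slide's cache prefetches are ISA specific; as one concrete
illustration, GCC and Clang expose a non-binding, non-faulting
prefetch as __builtin_prefetch. The distance of 16 elements below is
a tuning assumption, not a value from the slides:

#include <stddef.h>

#define PREFETCH_DISTANCE 16   /* elements ahead; machine-dependent */

double sum_array(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        /* Non-binding hint: fetch a future line for reading (rw = 0)
         * with low temporal locality (locality = 1). Cannot fault. */
        if (i + PREFETCH_DISTANCE < n)
            __builtin_prefetch(&a[i + PREFETCH_DISTANCE], 0, 1);
        s += a[i];
    }
    return s;
}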

17
7. Reduce Misses by Compiler Optimizations
  • Instructions
  • Reorder procedures in memory so as to reduce
    misses
  • Profiling to look at conflicts
  • McFarling [1989] reduced cache misses by 75% on an
    8KB direct-mapped cache with 4-byte blocks
  • Data
  • Merging Arrays
  • Improve spatial locality by a single array of
    compound elements vs. 2 arrays
  • Loop Interchange
  • Change nesting of loops to access data in the
    order stored in memory
  • Loop Fusion
  • Combine two independent loops that have the same
    looping and some variables overlap
  • Blocking
  • Improve temporal locality by accessing blocks
    of data repeatedly vs. going down whole columns
    or rows

18
Merging Arrays Example
/* Before */
int val[SIZE];
int key[SIZE];

/* After */
struct merge {
  int val;
  int key;
};
struct merge merged_array[SIZE];
  • Reduces conflicts between val and key
  • Addressing expressions are different

19
Loop Interchange Example
/* Before */
for (k = 0; k < 100; k = k+1)
  for (j = 0; j < 100; j = j+1)
    for (i = 0; i < 5000; i = i+1)
      x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
  for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
      x[i][j] = 2 * x[i][j];
  • Sequential accesses instead of striding through
    memory every 100 words

20
Loop Fusion Example
/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    a[i][j] = 1/b[i][j] * c[i][j];
    d[i][j] = a[i][j] + c[i][j];
  }
  • Two misses per access to a and c vs. one miss per
    access

21
Blocking Example
/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    r = 0;
    for (k = 0; k < N; k = k+1)
      r = r + y[i][k] * z[k][j];
    x[i][j] = r;
  }

  • Two Inner Loops
  • Read all NxN elements of z
  • Read N elements of 1 row of y repeatedly
  • Write N elements of 1 row of x
  • Capacity misses are a function of N and cache size
  • If all 3 NxN matrices fit in the cache, no
    capacity misses; otherwise ...
  • Idea: compute on a BxB submatrix that fits

22
Blocking Example
/* After */
for (jj = 0; jj < N; jj = jj+B)
  for (kk = 0; kk < N; kk = kk+B)
    for (i = 0; i < N; i = i+1)
      for (j = jj; j < min(jj+B-1,N); j = j+1) {
        r = 0;
        for (k = kk; k < min(kk+B-1,N); k = k+1)
          r = r + y[i][k] * z[k][j];
        x[i][j] = x[i][j] + r;
      }

  • Capacity misses go from 2N^3 + N^2 to 2N^3/B + N^2
  • B is called the Blocking Factor
  • What happens to conflict misses?

23
Reducing Conflict Misses by Blocking
  • Conflict misses in non-FA caches vs. block size
  • Lam et al [1991] found that a blocking factor of
    24 had a fifth the misses of a factor of 48,
    despite the fact that both fit in the cache

24
Summary of Compiler Optimizations to Reduce Cache
Misses
25
1. Reduce Miss Penalty: Read Priority over Write
on Miss
  • Write-through with write buffers offers RAW
    conflicts with main memory reads on cache misses
  • Simply waiting for the write buffer to empty might
    increase the read miss penalty by 50% (old MIPS
    1000)
  • Check write buffer contents before a read; if no
    conflicts, let the memory access continue
    (sketched below)
  • Write Back?
  • Read miss replacing a dirty block
  • Normal: Write the dirty block to memory, and then
    do the read
  • Instead: copy the dirty block to a write buffer,
    then do the read, and then do the write
  • CPU stalls less since it restarts as soon as the
    read completes
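A minimal sketch of the "check write buffer contents before read"
step, assuming a 4-entry buffer whose entries are appended
oldest-to-newest; all names are illustrative:

#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4   /* illustrative buffer depth */

/* Entries assumed appended oldest-to-newest as writes are buffered. */
static struct { uint32_t addr; uint32_t data; bool valid; } wb[WB_ENTRIES];

bool read_check_write_buffer(uint32_t addr, uint32_t *data_out)
{
    /* RAW check: scan newest-to-oldest so the freshest store wins. */
    for (int i = WB_ENTRIES - 1; i >= 0; i--) {
        if (wb[i].valid && wb[i].addr == addr) {
            *data_out = wb[i].data;   /* forward from the write buffer */
            return true;
        }
    }
    return false;  /* no conflict: the read may bypass the queued writes */
}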

26
2. Subblock Placement to Reduce Miss Penalty
  • Don't have to load the full block on a miss
  • Have valid bits per subblock to indicate validity
  • (Originally invented to reduce tag storage)

27
3. Early Restart and Critical Word First
  • Don't wait for the full block to be loaded before
    restarting the CPU
  • Early Restart: As soon as the requested word of
    the block arrives, send it to the CPU and let
    the CPU continue execution
  • Critical Word First: Request the missed word first
    from memory and send it to the CPU as soon as it
    arrives; let the CPU continue execution while
    filling the rest of the words in the block. Also
    called wrapped fetch and requested word first
  • Generally useful only for large blocks
  • Spatial locality is a problem: we tend to want the
    next sequential word, so it is not clear whether
    early restart helps

28
4. Non-blocking Caches to Reduce Stalls on Misses
  • A non-blocking cache or lockup-free cache allows
    the data cache to continue to supply cache hits
    during a miss
  • "Hit under miss" reduces the effective miss
    penalty by being helpful during a miss instead of
    ignoring the requests of the CPU
  • "Hit under multiple miss" or "miss under miss"
    may further lower the effective miss penalty by
    overlapping multiple misses
  • Significantly increases the complexity of the
    cache controller, as there can be multiple
    outstanding memory accesses

29
Value of Hit Under Miss for SPEC
(Figure: AMAT vs. number of "hits under i misses", shown separately
for integer and floating-point SPEC programs)
  • FP programs on average: AMAT = 0.68 -> 0.52 ->
    0.34 -> 0.26
  • Int programs on average: AMAT = 0.24 -> 0.20 ->
    0.19 -> 0.19
  • 8 KB data cache, direct mapped, 32B blocks, 16
    cycle miss penalty

30
5. Miss Penalty Reduction: Second Level Cache
  • L2 Equations
  • AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
  • Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
  • AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2
    + Miss Rate_L2 x Miss Penalty_L2)
  • Definitions
  • Local miss rate: misses in this cache divided by
    the total number of memory accesses to this cache
    (Miss Rate_L2)
  • Global miss rate: misses in this cache divided by
    the total number of memory accesses generated by
    the CPU (Miss Rate_L1 x Miss Rate_L2)
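  Plugging in illustrative numbers: Hit Time_L1 = 1 cycle, Miss
  Rate_L1 = 4%, Hit Time_L2 = 10 cycles, local Miss Rate_L2 = 20%,
  Miss Penalty_L2 = 100 cycles:
    Miss Penalty_L1 = 10 + 0.20 x 100 = 30 cycles
    AMAT = 1 + 0.04 x 30 = 2.2 cycles
  and the global L2 miss rate is 0.04 x 0.20 = 0.8%.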

31
Reducing Misses: Which Apply to the L2 Cache?
  • Reducing Miss Rate
  • 1. Reduce Misses via Larger Block Size
  • 2. Reduce Conflict Misses via Higher
    Associativity
  • 3. Reducing Conflict Misses via Victim Cache
  • 4. Reducing Conflict Misses via
    Pseudo-Associativity
  • 5. Reducing Misses by HW Prefetching Instr, Data
  • 6. Reducing Misses by SW Prefetching Data
  • 7. Reducing Capacity/Conf. Misses by Compiler
    Optimizations

32
L2 Cache Block Size vs. A.M.A.T.
  • 32KB L1, 8 byte path to memory

33
Reducing Miss Penalty Summary
  • Five techniques
  • Read priority over write on miss
  • Subblock placement
  • Early Restart and Critical Word First on miss
  • Non-blocking Caches (Hit Under Miss)
  • Second Level Cache
  • Can be applied recursively to multilevel caches
  • The danger is that the time to reach DRAM grows
    with multiple levels in between

34
Review Improving Cache Performance
  • 1. Reduce the miss rate,
  • 2. Reduce the miss penalty, or
  • 3. Reduce the time to hit in the cache.

35
1. Fast Hit Times via Small, Simple Caches
  • Why does the Alpha 21164 have 8KB instruction and
    8KB data caches plus a 96KB second-level cache?
  • Direct mapped, on chip
  • Impact of dynamic scheduling?
  • Alpha 21264 has 64KB 2-way L1 data and instruction
    caches

36
2. Fast Hits by Avoiding Addr. Translation
  • Send the virtual address to the cache? Called a
    Virtually Addressed Cache or just Virtual Cache,
    vs. a Physical Cache
  • Every time the process is switched, the cache must
    logically be flushed; otherwise we get false hits
  • Cost is time to flush + compulsory misses from the
    empty cache
  • Dealing with aliases (sometimes called synonyms):
    two different virtual addresses map to the same
    physical address
  • I/O must interact with the cache, so it needs the
    virtual address
  • Solutions to aliases
  • HW: guarantee that each cache frame holds a unique
    physical address
  • SW: guarantee that the lower n bits of aliases are
    the same; as long as n covers the index field of a
    direct-mapped cache, aliases map to a unique
    frame (called page coloring)
  • Solution to cache flush
  • Add a process identifier tag that identifies the
    process as well as the address within the process:
    can't get a hit if the process is wrong
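  As a concrete (illustrative) page-coloring example: with 4KB pages
  and a 32KB direct-mapped cache with 32B blocks, the cache index
  spans 3 bits above the 12-bit page offset. If the OS guarantees
  that virtual and physical page numbers agree in those low 3 bits,
  two synonyms always map to the same cache frame, so duplicate
  copies cannot arise.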

37
Virtually Addressed Caches
38
2. Avoiding Translation: Process ID Impact
  • Black is uniprocess
  • Light gray is multiprocess when flushing the cache
  • Dark gray is multiprocess when using a process ID
    tag
  • Y axis: miss rates up to 20%
  • X axis: cache size from 2 KB to 1024 KB
  • (Fig. 5.26)

39
2. Avoiding Translation: Index with Physical
Portion of Address
  • If the index is in the physical part of the
    address, we can start the tag access in parallel
    with translation, and then compare to the physical
    tag
  • This limits the cache to the page size: what if we
    want bigger caches while using the same trick?
  • Higher associativity
  • Page coloring
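  Illustrative arithmetic: with 4KB pages, the page offset is 12
  bits, so a direct-mapped cache indexed only by those untranslated
  bits can be at most 4KB. Keeping the same 12-bit index, 2-way
  associativity allows an 8KB cache and 4-way allows 16KB, which is
  why higher associativity is one way to grow the cache.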

40
Cache Optimization Summary
  Technique                            MR   MP   HT   Complexity
  Larger Block Size                    +    -         0
  Higher Associativity                 +         -    1
  Victim Caches                        +              2
  Pseudo-Associative Caches            +              2
  HW Prefetching of Instr/Data         +              2
  Compiler Controlled Prefetching      +              3
  Compiler Reduce Misses               +              0
  Priority to Read Misses                   +         1
  Subblock Placement                        +    +    1
  Early Restart & Critical Word 1st         +         2
  Non-Blocking Caches                       +         3
  Second Level Caches                       +         2
  Small & Simple Caches                -         +    0
  Avoiding Address Translation                   +    2

  (MR = miss rate, MP = miss penalty, HT = hit time;
   + helps that factor, - hurts it)

41
Impact of Caches
  • 1960-1985: Speed = ƒ(no. operations)
  • 1997:
  • Pipelined Execution and Fast Clock Rate
  • Out-of-Order completion
  • Superscalar Instruction Issue
  • 1999: Speed = ƒ(non-cached memory accesses)
  • What does this mean for
  • Compilers, Operating Systems, Algorithms, Data
    Structures?