Title: COMP 206: Computer Architecture and Implementation
Slide 1: COMP 206: Computer Architecture and Implementation
- Montek Singh
- Mon., Nov. 8, 2004
- Topic: Caches (contd.)
Slide 2: Outline
- Cache Organization
- Cache Read/Write Policies
- Block replacement policies
- Write-back vs. write-through caches
- Write buffers
- Cache Performance
- Means of improving performance
- Reading: HP3 Sections 5.3-5.7
Slide 3: Review: 4 Questions for Memory Hierarchy
- Where can a block be placed in the upper level? (Block placement)
- How is a block found if it is in the upper level? (Block identification)
- Which block should be replaced on a miss? (Block replacement)
- What happens on a write? (Write strategy)
Slide 4: Review: Cache Shapes
- Direct-mapped (A = 1, S = 16)
- 2-way set-associative (A = 2, S = 8)
- 4-way set-associative (A = 4, S = 4)
- 8-way set-associative (A = 8, S = 2)
- Fully associative (A = 16, S = 1)
Slide 5: Example 1: 1KB, Direct-Mapped, 32B Blocks
- For a 1024 (2^10) byte cache with 32-byte blocks
- The uppermost 22 (= 32 - 10) address bits are the Tag
- The lowest 5 address bits are the Byte Select (Block Size = 2^5)
- The next 5 address bits (bit 5 - bit 9) are the Cache Index
Slide 6: Example 1a: Cache Miss, Empty Block
Slide 7: Example 1b: Read in Data
Slide 8: Example 1c: Cache Hit
Slide 9: Example 1d: Cache Miss, Incorrect Block
Slide 10: Example 1e: Replace Block
Slide 11: Cache Performance
Slide 12: Block Size Tradeoff
- In general, a larger block size takes advantage of spatial locality, BUT
- A larger block size also means a larger miss penalty
  - It takes longer to fill up the block
- If the block size is too big relative to the cache size, the miss rate will go up
  - Too few cache blocks
- Average Access Time = Hit Time + Miss Penalty x Miss Rate
Slide 13: Sources of Cache Misses
- Compulsory (cold start, process migration, first reference): first access to a block
  - Cold fact of life: not a whole lot you can do about it
- Conflict/Collision/Interference
  - Multiple memory locations mapped to the same cache location
  - Solution 1: Increase cache size
  - Solution 2: Increase associativity
- Capacity
  - Cache cannot contain all blocks accessed by the program
  - Solution 1: Increase cache size
  - Solution 2: Restructure program
- Coherence/Invalidation
  - Other process (e.g., I/O) updates memory
Slide 14: The 3C Model of Cache Misses
- Based on comparison with another cache
- Compulsory: The first access to a block is not in the cache, so the block must be brought into the cache. These are also called cold-start misses or first-reference misses. (Misses even in an infinite cache)
- Capacity: If the cache cannot contain all the blocks needed during execution of a program (its working set), capacity misses will occur due to blocks being discarded and later retrieved. (Misses in a fully associative cache of size X)
- Conflict: If the block-placement strategy is set-associative or direct-mapped, conflict misses (in addition to compulsory and capacity misses) will occur because a block can be discarded and later retrieved if too many blocks map to its set. These are also called collision misses or interference misses. (Misses in an A-way associative cache of size X, but not in a fully associative cache of size X)
Slide 15: Sources of Cache Misses

|                   | Direct Mapped | N-way Set Associative | Fully Associative |
|-------------------|---------------|-----------------------|-------------------|
| Cache Size        | Big           | Medium                | Small             |
| Compulsory Miss   | Same          | Same                  | Same              |
| Conflict Miss     | High          | Medium                | Zero              |
| Capacity Miss     | Low(er)       | Medium                | High              |
| Invalidation Miss | Same          | Same                  | Same              |

If you are going to run billions of instructions, compulsory misses are insignificant.
Slide 16: 3Cs: Absolute Miss Rate
Slide 17: 3Cs: Relative Miss Rate
(figure: miss-rate breakdown by compulsory/capacity/conflict)
Slide 18: How to Improve Cache Performance
- Latency
- Reduce miss rate
- Reduce miss penalty
- Reduce hit time
- Bandwidth
- Increase hit bandwidth
- Increase miss bandwidth
Slide 19: 1. Reduce Misses via Larger Block Size
Slide 20: 2. Reduce Misses via Higher Associativity
- 2:1 Cache Rule
  - Miss Rate (DM cache of size N) ≈ Miss Rate (2-way SA cache of size N/2)
- Not merely empirical
  - Theoretical justification in Sleator and Tarjan, "Amortized efficiency of list update and paging rules," CACM, 28(2):202-208, 1985
- Beware: execution time is the only final measure!
  - Will clock cycle time increase?
  - Hill [1988] suggested hit time ~10% higher for 2-way vs. 1-way
Slide 21: Example: Avg. Memory Access Time vs. Miss Rate
- Example: assume the clock cycle time is 1.10x for 2-way, 1.12x for 4-way, and 1.14x for 8-way vs. the clock cycle time of direct mapped
- (Red means A.M.A.T. not improved by more associativity)
Slide 22: 3. Reduce Conflict Misses via Victim Cache
- How to combine the fast hit time of direct mapped yet avoid conflict misses?
- Add a small, highly associative buffer to hold data discarded from the cache
- Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache
(figure: victim cache tag/data arrays sitting between the CPU and memory)
Slide 23: 4. Reduce Conflict Misses via Pseudo-Associativity
- How to combine the fast hit time of direct mapped with the lower conflict misses of a 2-way SA cache?
- Divide the cache: on a miss, check the other half of the cache to see if the data is there; if so, it is a pseudo-hit (slow hit)
- Drawback: the CPU pipeline is hard to design if a hit can take 1 or 2 cycles
  - Better for caches not tied directly to the processor
(figure: timeline showing Hit Time < Pseudo Hit Time < Miss Penalty)
Slide 24: 5. Reduce Misses by Hardware Prefetching
- Instruction prefetching
  - Alpha 21064 fetches 2 blocks on a miss
  - The extra block is placed in a stream buffer
  - On a miss, check the stream buffer
- Works with data blocks too
  - Jouppi [1990]: 1 data stream buffer caught 25% of misses from a 4KB cache; 4 stream buffers caught 43%
  - Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of misses from two 64KB, 4-way set-associative caches
- Prefetching relies on extra memory bandwidth that can be used without penalty
  - e.g., up to 8 prefetch stream buffers in the UltraSPARC III
Slide 25: 6. Reducing Misses by Software Prefetching
- Data prefetch
  - Compiler inserts special prefetch instructions into the program
  - Load data into a register (HP PA-RISC loads)
  - Cache prefetch: load into cache (MIPS IV, PowerPC, SPARC v9)
- A form of speculative execution
  - We don't really know if the data is needed, or whether it is already in the cache
- The most effective prefetches are semantically invisible to the program
  - Do not change registers or memory
  - Cannot cause a fault/exception
  - If they would fault, they are simply turned into NOPs
- Issuing prefetch instructions takes time
  - Is the cost of issuing prefetches < the savings in reduced misses?
Slide 26: 7. Reduce Misses by Compiler Optimizations
- Instructions
  - Reorder procedures in memory so as to reduce misses
  - Profiling to look at conflicts
  - McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks
- Data
  - Merging arrays: improve spatial locality with a single array of compound elements vs. 2 arrays
  - Loop interchange: change the nesting of loops to access data in the order stored in memory
  - Loop fusion: combine two independent loops that have the same looping and some variables overlap
  - Blocking: improve temporal locality by accessing blocks of data repeatedly vs. going down whole columns or rows
Slide 27: Merging Arrays Example

    /* Before */
    int val[SIZE];
    int key[SIZE];

    /* After */
    struct merge {
        int val;
        int key;
    };
    struct merge merged_array[SIZE];

- Reduces conflicts between val and key
- Addressing expressions are different
Slide 28: Loop Interchange Example

    /* Before */
    for (k = 0; k < 100; k++)
        for (j = 0; j < 100; j++)
            for (i = 0; i < 5000; i++)
                x[i][j] = 2 * x[i][j];

    /* After */
    for (k = 0; k < 100; k++)
        for (i = 0; i < 5000; i++)
            for (j = 0; j < 100; j++)
                x[i][j] = 2 * x[i][j];

- Sequential accesses instead of striding through memory every 100 words
Slide 29: Loop Fusion Example

    /* Before */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            a[i][j] = 1 / b[i][j] * c[i][j];
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            d[i][j] = a[i][j] + c[i][j];

    /* After */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            a[i][j] = 1 / b[i][j] * c[i][j];
            d[i][j] = a[i][j] + c[i][j];
        }

- Before: 2 misses per access to a and c
- After: 1 miss per access to a and c
Slide 30: Blocking Example

    /* Before */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            r = 0;
            for (k = 0; k < N; k++)
                r = r + y[i][k] * z[k][j];
            x[i][j] = r;
        }

- Two inner loops:
  - Read all NxN elements of z
  - Read N elements of 1 row of y repeatedly
  - Write N elements of 1 row of x
- Capacity misses are a function of N and cache size
  - If the cache can hold all 3 NxN matrices, there are no capacity misses; otherwise ...
- Idea: compute on a BxB submatrix that fits in the cache
Slide 31: Blocking Example (contd.)
- Age of accesses
- White means not touched yet
- Light gray means touched a while ago
- Dark gray means newer accesses
Slide 32: Blocking Example (contd.)

    /* After */
    for (jj = 0; jj < N; jj += B)
        for (kk = 0; kk < N; kk += B)
            for (i = 0; i < N; i++)
                for (j = jj; j < min(jj + B, N); j++) {
                    r = 0;
                    for (k = kk; k < min(kk + B, N); k++)
                        r = r + y[i][k] * z[k][j];
                    x[i][j] = x[i][j] + r;
                }

- Work with BxB submatrices
  - The smaller working set can fit within the cache
  - Fewer capacity misses
Slide 33: Blocking Example (contd.)
- Capacity required goes from (2N^3 + N^2) to (2N^3/B + N^2)
- B = blocking factor