1
COMP 206: Computer Architecture and Implementation
  • Montek Singh
  • Mon., Nov. 8, 2004
  • Topic: Caches (contd.)

2
Outline
  • Cache Organization
  • Cache Read/Write Policies
  • Block replacement policies
  • Write-back vs. write-through caches
  • Write buffers
  • Cache Performance
  • Means of improving performance
  • Reading: HP3 Sections 5.3-5.7

3
Review: 4 Questions for Memory Hierarchy
  • Where can a block be placed in the upper level? (Block placement)
  • How is a block found if it is in the upper level? (Block identification)
  • Which block should be replaced on a miss? (Block replacement)
  • What happens on a write? (Write strategy)

4
Review: Cache Shapes
(A = associativity, S = number of sets; 16 blocks total in each case)
Direct-mapped (A = 1, S = 16)
2-way set-associative (A = 2, S = 8)
4-way set-associative (A = 4, S = 4)
8-way set-associative (A = 8, S = 2)
Fully associative (A = 16, S = 1)
5
Example 1: 1 KB, Direct-Mapped, 32 B Blocks
  • For a 1024 (2^10) byte cache with 32-byte blocks:
  • The uppermost 22 (= 32 - 10) address bits are the Tag
  • The lowest 5 address bits are the Byte Select (Block Size = 2^5)
  • The next 5 address bits (bit5 - bit9) are the Cache Index
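To make the split concrete, here is a minimal C sketch (not from the slides; the address value is arbitrary) that extracts the three fields for this cache geometry:

/* Sketch: decode a 32-bit address for a 1 KB direct-mapped cache with
   32-byte blocks: 5 offset bits, 5 index bits, 22 tag bits. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t addr = 0x1234ABCD;                 /* arbitrary example address */
    uint32_t byte_select = addr & 0x1F;         /* bits 0-4: offset in block */
    uint32_t cache_index = (addr >> 5) & 0x1F;  /* bits 5-9: which of 32 sets */
    uint32_t tag         = addr >> 10;          /* bits 10-31: 22-bit tag */
    printf("tag=0x%x index=%u offset=%u\n",
           (unsigned)tag, (unsigned)cache_index, (unsigned)byte_select);
    return 0;
}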

6
Example 1a: Cache Miss, Empty Block
7
Example 1b: Read in Data
8
Example 1c: Cache Hit
9
Example 1d: Cache Miss, Incorrect Block
10
Example 1e: Replace Block
11
Cache Performance
12
Block Size Tradeoff
  • In general, a larger block size takes advantage of spatial locality, BUT
  • A larger block size means a larger miss penalty
  • Takes longer to fill up the block
  • If the block size is too big relative to the cache size, the miss rate will go up
  • Too few cache blocks
  • Average Access Time = Hit Time + Miss Penalty x Miss Rate
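The formula is easy to experiment with; the sketch below uses assumed miss rates and penalties, purely to illustrate how a larger block trades a lower miss rate against a longer refill:

/* Sketch: Average Access Time = Hit Time + Miss Penalty x Miss Rate.
   All numeric design points below are assumed for illustration. */
#include <stdio.h>

static double amat(double hit_time, double miss_rate, double miss_penalty) {
    return hit_time + miss_rate * miss_penalty;
}

int main(void) {
    /* smaller blocks: higher miss rate, shorter refill (assumed numbers) */
    printf("16B blocks: %.2f cycles\n", amat(1.0, 0.07, 40.0));
    /* larger blocks: lower miss rate, longer refill (assumed numbers) */
    printf("64B blocks: %.2f cycles\n", amat(1.0, 0.04, 65.0));
    return 0;
}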

13
Sources of Cache Misses
  • Compulsory (cold start or process migration, first reference): first access to a block
  • Cold fact of life: not a whole lot you can do about it
  • Conflict/Collision/Interference
  • Multiple memory locations mapped to the same cache location
  • Solution 1: Increase cache size
  • Solution 2: Increase associativity
  • Capacity
  • Cache cannot contain all blocks accessed by the program
  • Solution 1: Increase cache size
  • Solution 2: Restructure program
  • Coherence/Invalidation
  • Other process (e.g., I/O) updates memory

14
The 3C Model of Cache Misses
  • Based on comparison with another cache
  • Compulsory: The very first access to a block cannot hit in the cache, so the block must be brought into the cache. These are also called cold-start misses or first-reference misses. (Misses in an infinite cache)
  • Capacity: If the cache cannot contain all the blocks needed during execution of a program (its working set), capacity misses will occur due to blocks being discarded and later retrieved. (Misses in a fully associative cache of size X)
  • Conflict: If the block-placement strategy is set-associative or direct-mapped, conflict misses (in addition to compulsory and capacity misses) will occur because a block can be discarded and later retrieved if too many blocks map to its set. These are also called collision misses or interference misses. (Misses in an A-way associative cache of size X, but not in a fully associative cache of size X)

15
Sources of Cache Misses
                    Direct Mapped   N-way Set Assoc.   Fully Associative
Cache Size          Big             Medium             Small
Compulsory Miss     Same            Same               Same
Conflict Miss       High            Medium             Zero
Capacity Miss       Low(er)         Medium             High
Invalidation Miss   Same            Same               Same
If you are going to run billions of instructions, compulsory misses are insignificant.
16
3Cs: Absolute Miss Rate
17
3Cs: Relative Miss Rate
18
How to Improve Cache Performance
  • Latency
  • Reduce miss rate
  • Reduce miss penalty
  • Reduce hit time
  • Bandwidth
  • Increase hit bandwidth
  • Increase miss bandwidth

19
1. Reduce Misses via Larger Block Size
20
2. Reduce Misses via Higher Associativity
  • 2:1 Cache Rule
  • Miss Rate of a direct-mapped cache of size N ≈ Miss Rate of a 2-way set-associative cache of size N/2
  • Not merely empirical
  • Theoretical justification in Sleator and Tarjan, "Amortized efficiency of list update and paging rules", CACM, 28(2):202-208, 1985
  • Beware: Execution time is the only final measure!
  • Will clock cycle time increase?
  • Hill [1988] suggested hit time is about 10% higher for 2-way vs. 1-way

21
Example: Avg. Memory Access Time vs. Miss Rate
  • Example: assume clock cycle time is 1.10 for 2-way, 1.12 for 4-way, 1.14 for 8-way vs. the clock cycle time of direct mapped
  • (Red means A.M.A.T. not improved by more associativity)
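A sketch of the comparison behind the slide: the hit-time factors come from the slide, while the miss rates and the 25-cycle miss penalty are assumed placeholders for illustration:

/* Sketch: AMAT for 1-/2-/4-/8-way caches. Hit-time factors (1.00, 1.10,
   1.12, 1.14) are from the slide; miss rates and penalty are assumed. */
#include <stdio.h>

int main(void) {
    const char *assoc[] = {"1-way", "2-way", "4-way", "8-way"};
    double hit_time[]   = {1.00, 1.10, 1.12, 1.14};    /* clock-cycle factors */
    double miss_rate[]  = {0.050, 0.042, 0.040, 0.039}; /* assumed */
    double miss_penalty = 25.0;                          /* assumed, cycles */
    for (int i = 0; i < 4; i++)
        printf("%s: AMAT = %.3f\n", assoc[i],
               hit_time[i] + miss_rate[i] * miss_penalty);
    return 0;
}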

22
3. Reduce Conflict Misses via Victim Cache
  • How to combine the fast hit time of direct mapped, yet avoid conflict misses?
  • Add a small, highly associative buffer to hold data discarded from the cache
  • Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache

(Diagram: CPU, direct-mapped cache with TAG/DATA arrays, small fully associative victim cache with TAG/DATA arrays, and memory)
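The lookup-and-swap behavior can be sketched in software; everything below (structure names, set count) is hypothetical and not Jouppi's actual hardware, except the 4-entry size:

/* Hypothetical sketch of a victim-cache lookup; a real design does the
   swap in hardware, in parallel with the miss handling. */
#include <stdbool.h>
#include <stdint.h>

#define INDEX_BITS 7
#define SETS (1 << INDEX_BITS)   /* direct-mapped sets (assumed) */
#define VICTIMS 4                /* 4-entry victim cache, per Jouppi [1990] */

struct line { bool valid; uint32_t tag; /* data payload omitted */ };
static struct line cache[SETS];
static struct line victim[VICTIMS];  /* fully assoc.: tags are full block addrs */

static bool access_block(uint32_t tag, uint32_t index) {
    uint32_t full = (tag << INDEX_BITS) | index;    /* full block address bits */
    if (cache[index].valid && cache[index].tag == tag)
        return true;                                /* fast direct-mapped hit */
    for (int i = 0; i < VICTIMS; i++) {
        if (victim[i].valid && victim[i].tag == full) {
            /* victim hit: swap, so the requested block moves into the cache */
            struct line evicted = cache[index];
            cache[index] = (struct line){ true, tag };
            victim[i].valid = evicted.valid;
            victim[i].tag = (evicted.tag << INDEX_BITS) | index;
            return true;                            /* slow hit, no memory access */
        }
    }
    return false;                                   /* true miss: fetch from memory */
}

int main(void) {
    victim[0] = (struct line){ true, (0xABu << INDEX_BITS) | 5 };
    return !access_block(0xAB, 5);   /* exits 0: supplied by the victim cache */
}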
23
4. Reduce Conflict Misses via Pseudo-Assoc.
  • How to combine the fast hit time of direct mapped with the lower conflict misses of a 2-way SA cache?
  • Divide the cache: on a miss, check the other half of the cache to see if the block is there; if so, we have a pseudo-hit (slow hit)
  • Drawback: CPU pipeline design is hard if a hit can take 1 or 2 cycles
  • Better for caches not tied directly to the processor
(Timeline: Hit Time, then Pseudo Hit Time, then Miss Penalty)
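One common way to pick the "other half" is to flip the most significant index bit; the sketch below assumes that scheme (the exact scheme varies by design):

/* Hypothetical pseudo-associative lookup: probe the primary set, then
   the set whose top index bit is flipped. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define INDEX_BITS 7
#define SETS (1 << INDEX_BITS)

struct pline { bool valid; uint32_t tag; };
static struct pline set[SETS];

static bool probe(uint32_t index, uint32_t tag) {
    return set[index].valid && set[index].tag == tag;
}

/* Returns 1 for a fast hit, 2 for a slow pseudo-hit, 0 for a miss. */
static int lookup(uint32_t index, uint32_t tag) {
    if (probe(index, tag))
        return 1;                                       /* fast hit */
    uint32_t buddy = index ^ (1u << (INDEX_BITS - 1));  /* "other half" */
    if (probe(buddy, tag))
        return 2;                                       /* pseudo-hit (slow) */
    return 0;                                           /* miss */
}

int main(void) {
    set[69] = (struct pline){ true, 0xAB };  /* 69 = 5 ^ 64: buddy set of 5 */
    printf("%d\n", lookup(5, 0xAB));         /* prints 2 (pseudo-hit) */
    return 0;
}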
24
5. Reduce Misses by Hardware Prefetching
  • Instruction prefetching
  • Alpha 21064 fetches 2 blocks on a miss
  • Extra block placed in a stream buffer
  • On a miss, check the stream buffer
  • Works with data blocks too
  • Jouppi [1990]: 1 data stream buffer caught 25% of misses from a 4 KB cache; 4 stream buffers caught 43%
  • Palacharla and Kessler [1994]: for scientific programs, 8 stream buffers caught 50% to 70% of the misses from two 64 KB, 4-way set-associative caches
  • Prefetching relies on extra memory bandwidth that can be used without penalty
  • e.g., up to 8 prefetch stream buffers in the UltraSPARC III
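A software model of a single stream buffer (the FIFO depth and all names are assumed; real stream buffers are hardware FIFOs checked in parallel with the cache):

/* Hypothetical sketch: on a miss, the buffer either supplies the next
   sequential block or restarts the stream and prefetches ahead. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEPTH 4                       /* 4-entry stream buffer (assumed) */
static uint32_t buf[DEPTH];           /* block addresses prefetched so far */
static int head = 0, count = 0;

static void refill(uint32_t next_block) {
    while (count < DEPTH) {           /* keep fetching sequential blocks */
        buf[(head + count) % DEPTH] = next_block++;
        count++;
    }
}

/* Called on a cache miss for block blk; returns true if the stream
   buffer supplies the block, avoiding the full miss penalty. */
static bool stream_buffer_check(uint32_t blk) {
    if (count > 0 && buf[head] == blk) {
        head = (head + 1) % DEPTH;    /* consume the matching head entry */
        count--;
        refill(blk + count + 1);      /* top up with the next sequential block */
        return true;
    }
    head = 0; count = 0;              /* no match: restart the stream */
    refill(blk + 1);
    return false;
}

int main(void) {
    stream_buffer_check(100);                  /* cold miss starts a stream at 101 */
    printf("%d\n", stream_buffer_check(101));  /* prints 1: buffer supplied it */
    return 0;
}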

25
6. Reducing Misses by Software Prefetching
  • Data prefetch
  • Load data into register (HP PA-RISC loads)
  • Cache prefetch: load into cache (MIPS IV, PowerPC, SPARC v9)
  • A form of speculative execution
  • don't really know if the data is needed, or if it is not already in the cache
  • Most effective prefetches are semantically invisible to the program
  • do not change registers or memory
  • cannot cause a fault/exception
  • if they would fault, they are simply turned into NOPs
  • Issuing prefetch instructions takes time
  • Is the cost of issuing prefetches < the savings in reduced misses?
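As a concrete illustration, GCC and Clang expose a prefetch hint builtin; the loop below and the distance of 8 iterations are just an assumed sketch, not from the slides:

/* Sketch: software prefetching by hand with __builtin_prefetch (GCC/Clang).
   Arguments: address, rw (0 = read), temporal locality (0-3). */
#include <stddef.h>

long sum(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 8 < n)                                 /* stay within the array */
            __builtin_prefetch(&a[i + 8], 0, 1);       /* hint: read, low reuse */
        s += a[i];
    }
    return s;
}

Like the cache-prefetch instructions above, the builtin is only a hint: it cannot fault, and it does not change registers or memory.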

26
7. Reduce Misses by Compiler Optzns.
  • Instructions
  • Reorder procedures in memory so as to reduce misses
  • Profiling to look at conflicts
  • McFarling [1989] reduced cache misses by 75% on an 8 KB direct-mapped cache with 4-byte blocks
  • Data
  • Merging Arrays
  • Improve spatial locality by using a single array of compound elements vs. 2 arrays
  • Loop Interchange
  • Change nesting of loops to access data in the order it is stored in memory
  • Loop Fusion
  • Combine two independent loops that have the same looping structure and some variable overlap
  • Blocking
  • Improve temporal locality by accessing blocks of data repeatedly vs. going down whole columns or rows

27
Merging Arrays Example
/* Before */
int val[SIZE];
int key[SIZE];

/* After */
struct merge {
  int val;
  int key;
};
struct merge merged_array[SIZE];
  • Reduces conflicts between val and key
  • Addressing expressions are different

28
Loop Interchange Example
/* Before */
for (k = 0; k < 100; k = k+1)
  for (j = 0; j < 100; j = j+1)
    for (i = 0; i < 5000; i = i+1)
      x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
  for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
      x[i][j] = 2 * x[i][j];
  • Sequential accesses instead of striding through memory every 100 words

29
Loop Fusion Example
/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    a[i][j] = 1/b[i][j] * c[i][j];
    d[i][j] = a[i][j] + c[i][j];
  }
  • Before: 2 misses per access to a and c
  • After: 1 miss per access to a and c

30
Blocking Example
/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    r = 0;
    for (k = 0; k < N; k = k+1)
      r = r + y[i][k] * z[k][j];
    x[i][j] = r;
  }
  • Two Inner Loops:
  • Read all NxN elements of z
  • Read N elements of 1 row of y repeatedly
  • Write N elements of 1 row of x
  • Capacity Misses: a function of N and Cache Size
  • If all 3 NxN matrices fit in the cache, no capacity misses; otherwise ...
  • Idea: compute on a BxB submatrix that fits

31
Blocking Example (contd.)
  • Age of accesses
  • White means not touched yet
  • Light gray means touched a while ago
  • Dark gray means newer accesses

32
Blocking Example (contd.)
/* After */
for (jj = 0; jj < N; jj = jj+B)
  for (kk = 0; kk < N; kk = kk+B)
    for (i = 0; i < N; i = i+1)
      for (j = jj; j < min(jj+B-1,N); j = j+1) {
        r = 0;
        for (k = kk; k < min(kk+B-1,N); k = k+1)
          r = r + y[i][k] * z[k][j];
        x[i][j] = x[i][j] + r;
      }
  • Work with BxB submatrices
  • smaller working set can fit within the cache
  • fewer capacity misses

33
Blocking Example (contd.)
  • Capacity reqd. goes from (2N^3 + N^2) to (2N^3/B + N^2)
  • B = blocking factor
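  • Worked instance (illustrative numbers): with N = 1024 and B = 64, accesses drop from 2(1024)^3 + 1024^2 ≈ 2.15 x 10^9 words to 2(1024)^3/64 + 1024^2 ≈ 3.46 x 10^7 words, roughly a 62x reduction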