Transcript and Presenter's Notes

Title: CS/ECE 365 COMPUTER ARCHITECTURE

1
CS/ECE 365 COMPUTER ARCHITECTURE
  • Soundararajan Ezekiel
  • Department of Computer Science
  • Ohio Northern University

2
improving cache performance
  • average mem access time = Hit time + Miss rate ×
    Miss penalty (worked example after this list)
  • reduce the miss rate
  • reduce the miss penalty
  • reduce the time to hit in the cache
  • main memory
  • virtual memory
  • examples and issues
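
To make the formula concrete, here is a worked example with hypothetical
numbers (not from the slides): a 1-cycle hit time, a 5% miss rate, and a
40-cycle miss penalty give

```latex
\mathrm{AMAT} = 1 + 0.05 \times 40 = 3\ \text{cycles}
```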

3
  • 3 Cs: Compulsory, Capacity, Conflict
  • 1. Reduce Misses via Larger Block Size
  • 2. Reduce Misses via Higher Associativity
  • 3. Reducing Misses via Victim Cache
  • 4. Reducing Misses via Pseudo-Associativity
  • 5. Reducing Misses by HW Prefetching Instr, Data
  • 6. Reducing Misses by SW Prefetching Data
  • 7. Reducing Misses by Compiler Optimizations
  • Remember the danger of concentrating on just one
    parameter when evaluating performance

4
reduce the cache misses
  • Most cache research has concentrated on reducing
    the miss rate -- we will start there
  • we will sort all misses into 3 categories (the
    three Cs)
  • compulsory
  • capacity
  • conflict

5
  • Compulsory: The very first access to a block
    cannot be in the cache, so the block must be
    brought into the cache. These are also called
    cold-start misses or first-reference misses
  • Capacity: If the cache cannot contain all the
    blocks needed during execution of a program,
    capacity misses will occur because of blocks
    being discarded and later retrieved
  • Conflict: If the block placement strategy is set
    associative or direct mapped, conflict misses (in
    addition to compulsory and capacity misses) will
    occur because a block can be discarded and later
    retrieved if too many blocks map to its set.
    These are also called collision misses or
    interference misses

6
  • To show the benefit of associativity, conflict
    misses are divided into misses caused by each
    decrease in associativity; here are the 4
    divisions
  • Eight-way: conflict misses due to going from
    fully associative (no conflicts) to eight-way
    associative
  • Four-way: conflict misses due to going from
    eight-way associative to four-way associative
  • Two-way: conflict misses due to going from
    four-way associative to two-way associative
  • One-way: conflict misses due to going from
    two-way associative to one-way associative
    (direct mapped)

7
Total miss rate
[Figure: total miss rate vs. cache size, broken down into the 3 Cs]
8
2:1 Cache Rule
  • miss rate of a 1-way associative cache of size X
    ≈ miss rate of a 2-way associative cache of size
    X/2; the difference between them is conflict
    misses
9
3 Cs Relative Miss Rate
[Figure: the same breakdown, shown as miss rates relative to the total]
  • Flaws: for fixed block size
  • Good: insight => invention
10
How Can We Reduce Misses?
  • 3 Cs: Compulsory, Capacity, Conflict
  • In all cases, assume total cache size is not
    changed
  • What happens if we
  • 1) Change Block Size: Which of the 3 Cs is
    obviously affected?
  • 2) Change Associativity: Which of the 3 Cs is
    obviously affected?
  • 3) Change Compiler: Which of the 3 Cs is
    obviously affected?

11
1. Reduce Misses via Larger Block Size
12
Note
  • at the same time, larger blocks increase the miss
    penalty
  • since they reduce the number of blocks in the
    cache, larger blocks may increase conflict
    misses and even capacity misses if the cache is
    small

13
2. Reduce Misses via Higher Associativity
  • miss rates improve with higher associativity
    (see the last figures)
  • 2:1 Cache Rule: Miss Rate of a direct-mapped
    cache of size N ≈ Miss Rate of a 2-way cache of
    size N/2
  • Beware: execution time is the only final measure!
  • Will clock cycle time increase?
  • Hill [1988], "A Case for Direct-Mapped Caches"
    (IEEE Computer 21, pp. 25-40), suggested hit time
    for 2-way vs. 1-way: external cache +10%,
    internal +2%

14
3. Reducing Misses via a Victim Cache
  • How to combine the fast hit time of direct mapped
    yet still avoid conflict misses?
  • Add a buffer to hold data discarded from the cache
  • Jouppi [1990]: a 4-entry victim cache removed 20%
    to 95% of conflict misses for a 4 KB direct-mapped
    data cache
  • Used in Alpha, HP machines

[Figure: victim cache organization: CPU address and data in/out, tag
compare, data array, victim cache, write buffer, lower-level memory]
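
The idea can be sketched in C. This is a minimal illustrative model with
assumed structure and function names, not hardware from the slides; the
4-entry size follows Jouppi's experiment:

```c
#include <stdbool.h>
#include <stdint.h>

#define VC_ENTRIES 4  /* 4-entry victim cache, as in Jouppi [1990] */

/* Hypothetical model of a victim cache: a small, fully associative
 * buffer holding blocks recently evicted from a direct-mapped cache. */
struct victim_cache {
    uint32_t tags[VC_ENTRIES];   /* full block addresses */
    bool     valid[VC_ENTRIES];
    int      next;               /* simple FIFO replacement pointer */
};

/* Checked on a main-cache miss: a hit here means the block can be
 * swapped back in instead of paying the full memory miss penalty. */
bool victim_lookup(struct victim_cache *vc, uint32_t block_addr)
{
    for (int i = 0; i < VC_ENTRIES; i++)
        if (vc->valid[i] && vc->tags[i] == block_addr)
            return true;  /* conflict miss avoided */
    return false;         /* real miss: go to the next memory level */
}

/* Called when the direct-mapped cache evicts a block. */
void victim_insert(struct victim_cache *vc, uint32_t block_addr)
{
    vc->tags[vc->next]  = block_addr;
    vc->valid[vc->next] = true;
    vc->next = (vc->next + 1) % VC_ENTRIES;
}
```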
15
4. Miss Rate Reduction Technique:
Pseudo-Associative Cache
  • A cache access proceeds just as in a
    direct-mapped cache for a hit. On a miss,
    however, before going to the next level of the
    memory hierarchy, a second cache entry is checked
    to see if the block is there (see the sketch
    after the figure below)

[Figure: access time line: hit time, then pseudo hit time, then miss
penalty]
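
A common way to choose the second ("pseudo") entry is to invert the most
significant bit of the index field. The sketch below assumes that scheme;
the field widths and the line_matches probe are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

#define INDEX_BITS 10                          /* hypothetical: 1024 sets */
#define INDEX_MASK ((1u << INDEX_BITS) - 1)

/* Assumed helper: does the line in this set hold this tag? */
extern bool line_matches(uint32_t set, uint32_t tag);

/* Pseudo-associative lookup: try the direct-mapped set first (fast
 * hit); on a miss, flip the top index bit and probe one more set
 * (slower "pseudo hit") before going to the next memory level. */
bool pseudo_assoc_lookup(uint32_t addr)
{
    uint32_t tag = addr >> INDEX_BITS;
    uint32_t set = addr & INDEX_MASK;

    if (line_matches(set, tag))
        return true;                               /* normal fast hit */

    uint32_t alt = set ^ (1u << (INDEX_BITS - 1)); /* invert MSB of index */
    return line_matches(alt, tag);                 /* pseudo hit or miss */
}
```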
16
5. Reducing Misses by Hardware Prefetching of
Instructions and Data
  • e.g., instruction prefetching
  • Alpha 21064 fetches 2 blocks on a miss
  • the extra block is placed in a stream buffer
  • on a miss, check the stream buffer
  • works with data blocks too
  • Jouppi [1990]: 1 data stream buffer caught 25% of
    misses from a 4KB cache; 4 streams caught 43%
  • Palacharla and Kessler [1994]: for scientific
    programs, 8 streams caught 50% to 70% of misses
    from two 64KB, 4-way set-associative caches
  • prefetching relies on having extra memory
    bandwidth that can be used without penalty (a
    stream-buffer sketch follows below)
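
A stream buffer can be modeled as a small FIFO of sequentially prefetched
blocks. This is an illustrative C model with assumed names and depth, not
the Alpha 21064's actual hardware:

```c
#include <stdbool.h>
#include <stdint.h>

#define SB_DEPTH 4  /* hypothetical FIFO depth */

/* Assumed helper: starts a block fetch from the next memory level. */
extern void fetch_block(uint32_t block_addr);

/* Stream buffer: a FIFO of sequentially prefetched block addresses,
 * checked when the cache misses. */
struct stream_buffer {
    uint32_t blocks[SB_DEPTH];  /* prefetched block addresses */
    uint32_t next_prefetch;     /* next sequential block to fetch */
    int      head, count;
};

bool stream_buffer_check(struct stream_buffer *sb, uint32_t block_addr)
{
    if (sb->count > 0 && sb->blocks[sb->head] == block_addr) {
        /* hit: move this block into the cache (not shown) ... */
        sb->head = (sb->head + 1) % SB_DEPTH;
        sb->count--;
        /* ... and refill the FIFO with the next sequential block */
        fetch_block(sb->next_prefetch);
        sb->blocks[(sb->head + sb->count) % SB_DEPTH] = sb->next_prefetch++;
        sb->count++;
        return true;
    }
    return false;  /* miss in both the cache and the stream buffer */
}
```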

17
6. Reducing Misses by Compiler (Software)-Controlled
Prefetching
  • An alternative to hardware prefetching is for the
    compiler to insert prefetch instructions to
    request the data before they are needed. There
    are several flavors
  • register prefetch will load the value into a
    register
  • cache prefetch loads data only into the cache and
    not into a register
  • issuing prefetch instructions takes time
  • Is the cost of issuing prefetches < the savings
    in reduced misses?
  • wider superscalar issue reduces the difficulty of
    finding issue bandwidth (see the sketch below)
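
As a concrete illustration, here is a minimal sketch of a cache prefetch
inserted in software, using the __builtin_prefetch intrinsic available in
GCC and Clang; the prefetch distance of 16 elements is a hypothetical
tuning parameter, not a value from the slides:

```c
#include <stddef.h>

/* Software-controlled "cache prefetch": data is pulled into the cache
 * ahead of use, without being loaded into a register. */
double sum_with_prefetch(const double *a, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        /* Ask for the element 16 iterations ahead (read, high locality).
         * The distance is a tuning knob: far enough ahead to hide the
         * miss latency, near enough that the line is still resident. */
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16], 0, 3);
        sum += a[i];
    }
    return sum;
}
```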

18
7. Reducing Misses by Compiler Optimization
  • McFarling [1989] reduced cache misses by 75% on
    an 8KB direct-mapped cache with 4-byte blocks, in
    software
  • Instructions
  • reorder procedures in memory so as to reduce
    conflict misses
  • profiling to look at conflicts (using tools they
    developed)
  • Data (see the sketches below)
  • Merging Arrays: improve spatial locality by a
    single array of compound elements vs. 2 arrays
  • Loop Interchange: change nesting of loops to
    access data in the order stored in memory
  • Loop Fusion: combine 2 independent loops that
    have the same looping and some variables overlap
  • Blocking: improve temporal locality by accessing
    blocks of data repeatedly vs. going down whole
    columns or rows
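
To make two of these concrete, here is a generic C sketch of loop
interchange and blocking; the array size N and tile size B are
hypothetical, and this is illustrative code, not McFarling's:

```c
#define N 1024  /* hypothetical array dimension */
#define B 32    /* hypothetical tile size: B x B tiles fit in the cache */

/* Loop interchange: C stores x[][] row-major, so making j the inner
 * loop walks memory with stride 1 and improves spatial locality. */
void scale_interchanged(double x[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)   /* stride-1 inner loop */
            x[i][j] = 2.0 * x[i][j];
}

/* Blocking (tiling) for x = x + y*z: operate on B x B sub-blocks so
 * the touched pieces of y and z stay in the cache while being reused. */
void matmul_blocked(double x[N][N], double y[N][N], double z[N][N])
{
    for (int jj = 0; jj < N; jj += B)
        for (int kk = 0; kk < N; kk += B)
            for (int i = 0; i < N; i++)
                for (int j = jj; j < jj + B; j++) {
                    double r = 0.0;
                    for (int k = kk; k < kk + B; k++)
                        r += y[i][k] * z[k][j];  /* reuse within tile */
                    x[i][j] += r;
                }
}
```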

19
Review Improving Cache Performance
  • 1. Reduce the miss rate,
  • 2. Reduce the miss penalty,
  • 3. Reduce the time to hit in the cache.

20
Review: Who Cares About the Memory Hierarchy?
  • processor only thus far in the course:
  • CPU cost/performance, ISA, pipelined execution
  • the CPU-DRAM gap
  • 1980: no cache in µproc; 1995: 2-level cache on
    chip (1989: first Intel µproc with a cache on
    chip)

[Figure: performance (log scale, 1 to 1000) vs. year, 1980-2000: µProc
performance ("Moore's Law") grows 60%/yr while DRAM grows 7%/yr, so the
processor-memory performance gap grows about 50%/year]
21
1. Reducing Miss Penalty: Read Priority over
Write on Miss
  • write-through with write buffers can create RAW
    conflicts with main memory reads on cache misses
  • if we simply wait for the write buffer to empty,
    we might increase the read miss penalty (by 50%
    on the old MIPS 1000)
  • check write buffer contents before a read; if no
    conflicts, let the memory access continue
  • Write back?
  • read miss replacing a dirty block
  • normal: write the dirty block to memory, and then
    do the read
  • instead: copy the dirty block to a write buffer,
    then do the read, and then do the write
  • the CPU stalls less since it restarts as soon as
    the read is done (see the sketch below)
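
In pseudo-hardware terms, the conflict check can be sketched like this
(an illustrative C model with assumed names and depth, not from the
slides):

```c
#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4  /* hypothetical write-buffer depth */

/* Before servicing a read miss from memory, scan the write buffer for
 * a pending write to the same block; otherwise the read could return
 * stale data (a RAW hazard through memory). */
struct write_buffer {
    uint32_t addr[WB_ENTRIES];     /* block addresses of queued writes */
    bool     pending[WB_ENTRIES];
};

bool read_conflicts_with_writes(const struct write_buffer *wb,
                                uint32_t read_addr)
{
    for (int i = 0; i < WB_ENTRIES; i++)
        if (wb->pending[i] && wb->addr[i] == read_addr)
            return true;  /* conflict: forward or drain before reading */
    return false;         /* safe: let the read bypass the queued writes */
}
```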

22
2. Reducing Miss Penalty: Subblock Placement
  • if you are designing a cache that must fit on the
    chip,
  • you may find that your tags are too large, either
    because they don't fit on the chip or because they
    are too slow
  • a simple solution is to go to large blocks, which
    reduces tag storage without decreasing the amount
    of information you can store in the cache
  • the miss rate will improve, but the increase in
    miss penalty could make the larger blocks a bad
    decision
  • one solution is called sub-block placement

23
  • a valid bit is added to units smaller than the
    full block, called sub-blocks
  • only a single sub-block needs to be read on a
    miss
  • clearly sub-blocks have a smaller miss penalty
    than full blocks (see the sketch below)
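
A sketch of the resulting line layout, with hypothetical sizes (one tag
per block, one valid bit per sub-block):

```c
#include <stdbool.h>
#include <stdint.h>

#define SUBBLOCKS 4  /* hypothetical: 4 sub-blocks per block */

/* Sub-block placement: one tag covers the whole block, but each
 * sub-block has its own valid bit, so a miss only has to fetch the
 * one sub-block that was actually requested. */
struct cache_line {
    uint32_t tag;                   /* one tag for the full block */
    bool     valid[SUBBLOCKS];      /* per-sub-block valid bits */
    uint8_t  data[SUBBLOCKS][16];   /* hypothetical 16-byte sub-blocks */
};

/* A hit now requires both a tag match and the sub-block's valid bit. */
bool subblock_hit(const struct cache_line *line, uint32_t tag, int sub)
{
    return line->tag == tag && line->valid[sub];
}
```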

24
3. Reducing Miss Penalty: Early Restart and
Critical Word First
  • the first 2 techniques require extra hardware to
    reduce the miss penalty, but this third technique
    does not
  • it is based on the observation that the CPU needs
    just one word of the block at a time
  • this strategy is impatience: don't wait for the
    full block to be loaded before sending the
    requested word and restarting the CPU

25
  • here are 2 specific strategies
  • Early restart: as soon as the requested word of
    the block arrives, send it to the CPU and let the
    CPU continue execution
  • Critical word first: request the missed word
    first from memory and send it to the CPU as soon
    as it arrives; let the CPU continue execution
    while filling the rest of the words in the block.
    Critical-word-first fetch is also called wrapped
    fetch and requested word first
  • these help only for large cache blocks, since the
    benefit is low unless blocks are large

26
4. Reducing Miss Penalty: Non-blocking Caches to
Reduce Stalls on Misses
  • a non-blocking cache or lockup-free cache allows
    the data cache to continue to supply cache hits
    during a miss
  • requires an out-of-order execution CPU
  • "hit under miss" reduces the effective miss
    penalty by working during a miss vs. ignoring CPU
    requests
  • "hit under multiple miss" or "miss under miss"
    may further lower the effective miss penalty by
    overlapping multiple misses
  • significantly increases the complexity of the
    cache controller, as there can be multiple
    outstanding memory accesses
  • requires multiple memory banks (otherwise
    multiple outstanding accesses cannot be
    supported)
  • Pentium Pro allows 4 outstanding memory misses

27
Value of Hit Under Miss for SPEC
[Figure: AMAT under "hit under n misses" for n = 0->1, 1->2, and 2->64
vs. base, for integer and floating-point SPEC programs]
  • FP programs on average: AMAT = 0.68 -> 0.52 ->
    0.34 -> 0.26
  • integer programs on average: AMAT = 0.24 -> 0.20
    -> 0.19 -> 0.19
  • 8 KB data cache, direct mapped, 32B blocks,
    16-cycle miss penalty

28
5. Reducing Miss Penalty: Second-Level Caches
  • L2 Equations (L2 = second-level cache)
  • AMAT = Hit Time_L1 + Miss Rate_L1 x Miss
    Penalty_L1
  • Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x
    Miss Penalty_L2
  • AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2
    + Miss Rate_L2 x Miss Penalty_L2)
  • Definitions
  • Local miss rate: misses in this cache divided by
    the total number of memory accesses to this cache
    (Miss Rate_L2)
  • Global miss rate: misses in this cache divided by
    the total number of memory accesses generated by
    the CPU (Miss Rate_L1 x Miss Rate_L2)
  • the global miss rate is what matters (see the
    worked example below)
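
A worked example with hypothetical numbers (not from the slides): take
Hit Time_L1 = 1 cycle, Miss Rate_L1 = 4%, Hit Time_L2 = 10 cycles, local
Miss Rate_L2 = 50%, and Miss Penalty_L2 = 100 cycles. Then

```latex
\mathrm{AMAT} = 1 + 0.04 \times (10 + 0.5 \times 100)
             = 1 + 0.04 \times 60 = 3.4\ \text{cycles}
```

and the global miss rate of L2 is 0.04 x 0.5 = 2%, even though its local
miss rate is 50%.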

29
(No Transcript)
30
Reducing Miss Penalty Summary
  • Five techniques
  • read priority over write on miss
  • subblock placement
  • early restart and critical word first on miss
  • non-blocking caches (hit under miss, miss under
    miss)
  • second-level caches
  • these can be applied recursively in multilevel
    caches
  • the danger is that the time to DRAM will grow
    with multiple levels in between
  • first attempts at L2 caches can make things
    worse, since the increased worst case is worse

31
What is the Impact of What You've Learned About
Caches?
  • 1960-1985: Speed = f(no. of operations)
  • 1990s:
  • pipelined execution and fast clock rates
  • out-of-order execution
  • superscalar instruction issue
  • 1998: Speed = f(non-cached memory accesses)
  • superscalar, out-of-order machines hide an L1
    data cache miss (about 5 clocks) but not an L2
    cache miss (about 50 clocks)

32
Cache Optimization Summary
  • MR = miss rate, MP = miss penalty, HT = hit time

  Technique                           MR  MP  HT  Complexity
  Larger Block Size                   +   -       0
  Higher Associativity                +       -   1
  Victim Caches                       +           2
  Pseudo-Associative Caches           +           2
  HW Prefetching of Instr/Data        +           2
  Compiler Controlled Prefetching     +           3
  Compiler Reduce Misses              +           0
  Priority to Read Misses                 +       1
  Subblock Placement                      +   +   1
  Early Restart & Critical Word 1st       +       2
  Non-Blocking Caches                     +       3
  Second Level Caches                     +       2

  (the first seven techniques target miss rate; the last five target
  miss penalty)
33
Cost = A.M.A.T.
  • generally: fast hit times and fewer misses
  • since hits in the L2 are few, target miss
    reduction

34
Reducing Misses: Which Apply to the L2 Cache?
  • Reducing Miss Rate
  • 1. Reduce Misses via Larger Block Size
  • 2. Reduce Conflict Misses via Higher
    Associativity
  • 3. Reducing Conflict Misses via Victim Cache
  • 4. Reducing Conflict Misses via
    Pseudo-Associativity
  • 5. Reducing Misses by HW Prefetching Instr, Data
  • 6. Reducing Misses by SW Prefetching Data
  • 7. Reducing Capacity/Conf. Misses by Compiler
    Optimizations

35
L2 Cache Block Size and A.M.A.T.
  • 32KB L1, 8 byte path to memory