Title: Cache Memory
Slide 1: Cache Memory
- Nimi Berman
- CS 147
- Fall 2003
Slide 3: Today we will talk about
- Associative Mapping
- Direct Mapping
- Set-Associative Mapping
- Replacing and Writing Data into Cache
Slide 4: 1. So, what is cache memory?
- We want to minimize memory accesses by the processor.
- Cache memory is just like regular memory, only with a faster access time of about 10 ns.
- Constructed using static RAM or associative memory → more expensive than RAM.
A lot of cache → a lot of cash
Slide 5: Static RAM vs. associative memory
- Static RAM:
  - Receives an address from the CPU
  - Accesses the data at that address in the cache
- Associative memory:
  - Receives a portion of the data
  - Searches all cache locations in parallel
  - Returns matches
Slide 6: How is Data stored in Cache?
- Valid bit: 1 if the entry holds valid data, 0 if garbage.
- The CPU specifies the data value to be matched.
- Example: find data with 1010 as its four high-order bits.
- The CPU loads 1111 0000 0000 0000 into the mask register and 1010 XXXX XXXX XXXX into the data register, then finds all matches.
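The masked-match step above can be sketched in Python. The stored words and the linear search are illustrative only: real associative memory compares all locations at once in hardware.

```python
# Sketch of the mask/data-register match on this slide: the mask selects
# which bits must match, and X positions in the data register are ignored.
def masked_matches(words, mask, pattern):
    """Return every stored word whose masked bits equal the pattern."""
    # Hardware does all these comparisons in parallel; we just loop.
    return [w for w in words if (w & mask) == (pattern & mask)]

# Illustrative cache contents (not from the slides).
cache_words = [0b1010_0011_0000_1111,
               0b0110_0000_0000_0001,
               0b1010_1111_1111_0000]

# Find data with 1010 as the four high-order bits:
mask    = 0b1111_0000_0000_0000   # care about the top 4 bits only
pattern = 0b1010_0000_0000_0000   # 1010 XXXX XXXX XXXX

hits = masked_matches(cache_words, mask, pattern)
```

Only the first and third words survive the masked comparison, since only they begin with 1010.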
Slide 8: 2.a. Associative Mapping
- First 16 bits: the memory address
- Last 8 bits: the actual data stored at that memory address
- The CPU outputs the 16-bit address to be accessed, puts 8-bit don't-cares into the data register, and loads the value 1111 1111 1111 1111 0000 0000 into the mask register.
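A minimal sketch of the associative-mapped lookup described above, assuming each cache line holds a valid bit, the full 16-bit address, and its 8-bit data. The class and method names are hypothetical:

```python
# Associative mapping: a lookup compares the requested address against
# every stored address (in hardware, all comparisons happen at once).
class AssociativeCache:
    def __init__(self):
        self.lines = []  # list of (valid, 16-bit address, 8-bit data)

    def read(self, address):
        for valid, addr, data in self.lines:
            if valid and addr == address:
                return data          # hit
        return None                  # miss: would fetch from main memory

    def fill(self, address, data):
        self.lines.append((True, address & 0xFFFF, data & 0xFF))

cache = AssociativeCache()
cache.fill(0x1234, 0xAB)
```

Because the full address is stored, any data value can live in any cache line, which is what makes this scheme fully flexible (and expensive).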
Slide 10: Alternate Configuration
- Wider memory that acts on blocks of data, not just individual locations.
- Locality of Reference: the next instruction is likely to be in an adjacent memory location.
Slide 11: Result
- The entire block is copied to cache.
- The next instruction is likely to already be in cache.
- No need for a main memory access.
Slide 12: 2.b. Direct Mapping
- Can be larger than an associative cache but still cheaper.
- The 10 low-order bits (index) specify the location in cache where the data is stored (a 1-to-1 relation).
- The high-order bits (tag) are the same as the high-order bits of the original main memory address.
Slide 14:
- Problem: Any data from a location of the form XXXX XX11 1111 1111 in main memory maps to the same location in cache.
- Solution: The tag stores the high-order bits of the original main memory address, while the cache address is the same as the 10 low-order bits, so the CPU knows it is accessing the correct data in cache.
→ No ambiguity
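The tag/index split just described can be sketched as follows; the helper name is illustrative:

```python
# Direct-mapped address split for a 16-bit address: the 10 low-order
# bits are the cache index, the 6 high-order bits are the stored tag.
def split_address(address):
    index = address & 0x3FF     # low 10 bits: which cache line
    tag   = address >> 10       # high 6 bits: stored for verification
    return tag, index

# Two addresses ending in 11 1111 1111 land in the same cache line
# (index 0x3FF), but their tags differ, so the CPU can tell them apart:
tag_a, index_a = split_address(0b000000_1111111111)   # tag 0
tag_b, index_b = split_address(0b101010_1111111111)   # tag 0b101010
```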
Slide 15: Alternate Configuration
- Wider memory that acts on blocks of data, not just individual locations.
- Locality of Reference: the next instruction is likely to be in an adjacent memory location.
Slide 16: Direct Mapping vs. Associative Cache
- Associative mapping is more flexible than direct mapping. Consider:
  0000 0000 0000 0000 (0000H): JUMP 1000H
  0001 0000 0000 0000 (1000H): JUMP 0000H
- In direct mapping:
  - Both addresses map to the same cache location (constant overwriting)
  - Other cache locations are not in use
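A small sketch of this ping-pong effect, assuming the 10-bit-index direct-mapped cache from the earlier slides; the trace runner is hypothetical:

```python
# In a direct-mapped cache, 0000H and 1000H share the same 10-bit
# index (0), so the two JUMP targets evict each other on every access
# while the rest of the cache sits idle.
def run_trace(trace):
    lines = {}          # index -> tag, modeling a direct-mapped cache
    misses = 0
    for address in trace:
        index, tag = address & 0x3FF, address >> 10
        if lines.get(index) != tag:
            misses += 1          # conflict miss: evict and refill
            lines[index] = tag
    return misses

# The CPU ping-pongs between the two JUMP targets: every access misses.
trace = [0x0000, 0x1000] * 4
conflict_misses = run_trace(trace)
```

Repeating a single address, by contrast, misses only once and then hits, which is exactly the behavior an associative cache would also give this two-address loop.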
Slide 17: 2.c. Set-Associative Mapping
- Each address in cache contains several data values: an n-way set-associative cache contains n values.
- The tag field is the same as in direct mapping but has one extra bit (9 bits for the location, 7 bits for the tag).
Slide 18:
- Count/Valid field: a valid bit as before, plus count bits to track when the data was last accessed.
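A sketch of a 2-way set-associative lookup under the 9-bit-index/7-bit-tag split above. The class is illustrative, and it evicts the oldest way only to keep the sketch short (replacement policies are covered on the later slides):

```python
# 2-way set-associative cache: the 9 low-order bits pick a set, the
# 7 high-order bits are the tag, and each set holds up to 2 (tag, data)
# entries that are searched together.
class TwoWaySetAssociativeCache:
    def __init__(self):
        self.sets = [[] for _ in range(512)]   # 2^9 sets, 2 ways each

    def read(self, address):
        index = address & 0x1FF          # low 9 bits select the set
        tag   = address >> 9             # high 7 bits identify the block
        for stored_tag, data in self.sets[index]:
            if stored_tag == tag:
                return data              # hit in one of the 2 ways
        return None                      # miss

    def fill(self, address, data):
        index, tag = address & 0x1FF, address >> 9
        ways = self.sets[index]
        if len(ways) == 2:
            ways.pop(0)                  # evict the oldest way
        ways.append((tag, data))

cache = TwoWaySetAssociativeCache()
cache.fill(0x0000, 0x11)
cache.fill(0x1000, 0x22)   # same set (index 0), different tag
```

Unlike the direct-mapped case, the two conflicting addresses from the JUMP example can now coexist in the same set.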
Slide 19: Alternate Configuration
Slide 20: Which method is used?
Slide 21: 3. Replacing and Writing Data
- When the valid bit = 0, the cache location is free.
- Problem: What do we do when all cache locations are occupied?
- Solution: We must overwrite existing data. But which entry?
Slide 22:
- Direct Mapping: each data value can map to only one specific location, so that location must be used and the old value is written back to main memory.
- Associative Mapping:
  - FIFO: fill the cache from top to bottom, then start replacing from the top. Produces good performance.
  - Least Recently Used (LRU): using a counter, keep track of when each entry was last accessed and replace the least recently used one.
  - Random: also produces good performance.
Slide 23:
- Set-Associative Mapping: often utilizes the Least Recently Used method.
- Example access sequence: Access D, Access E, Access A.
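The count-based LRU bookkeeping can be sketched as follows, assuming a fully associative cache of four entries; the keys A–E and the capacity are illustrative, not from the slides:

```python
# LRU with count bits: on every access the touched entry's counter
# resets to 0, other entries age, and on a miss the victim is the
# entry with the largest (oldest) counter.
def access(cache, key, capacity=4):
    if key in cache:                             # hit
        age = cache[key]
        for k in cache:                          # entries younger than
            if cache[k] < age:                   # the touched one grow
                cache[k] += 1                    # older by 1
        cache[key] = 0
    else:                                        # miss
        if len(cache) >= capacity:
            victim = max(cache, key=cache.get)   # least recently used
            del cache[victim]
        for k in cache:
            cache[k] += 1
        cache[key] = 0

cache = {}
for key in ["A", "B", "C", "D", "E", "A"]:
    access(cache, key)
```

After this trace, B has been evicted (it was least recently used when A returned), and C is now the oldest surviving entry.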
Slide 24: 4. Cache Performance
- The main purpose of cache memory is to improve system performance by minimizing slow accesses to main memory.
- Cache hit: the CPU finds the requested data in cache instead of main memory.
- Cache miss: the CPU must access main memory because the data is not in cache.
- Hit ratio h: the fraction of memory accesses that are cache hits.
Slide 25:
- As h → 1 we get faster and better system performance.
Slide 26: Associative cache: h = 0.389 and TM = 40.56 ns
Slide 27: Direct-mapped cache: h = 0.167 and TM = 50.67 ns
Slide 28: Two-way set-associative cache: h = 0.389 and TM = 40.56 ns
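These TM figures are consistent with the usual average-access-time formula, assuming the 10 ns cache access time from slide 4 and a 60 ns main memory access time; the 60 ns value is an assumption, not stated in the deck:

```python
# Average memory access time: hits cost the cache time, misses cost
# the main memory time, weighted by the hit ratio h.
def avg_access_time(h, t_cache=10.0, t_main=60.0):
    """Average memory access time in ns for hit ratio h.

    t_main = 60 ns is an assumed main-memory access time."""
    return h * t_cache + (1 - h) * t_main

# Associative cache with h = 0.389 gives roughly the 40.56 ns on the slide:
tm_assoc = avg_access_time(0.389)
```

As h approaches 1, the average access time approaches the pure cache access time, which is the point slide 25 makes.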
Slide 29: Conclusion
- Cache Memory is fast (and expensive)
- Associative Mapping
- Direct Mapping
- Set-Associative Mapping
- Replacing and Writing Data into Cache
Slide 30: Bottom Line
Cache memory speeds up system performance, making the user's life a lot easier.