Cache Memory - PowerPoint PPT Presentation

1
Cache Memory
  • Nimi Berman
  • CS 147
  • Fall 2003


3
Today we will talk about
  • What is Cache Memory?
  • Types of Cache Memory
  • Associative Mapping
  • Direct Mapping
  • Set-Associative Mapping
  • Replacing and Writing Data into Cache
  • Cache Performance

4
1. So, what is cache memory?
  • We want to minimize memory accesses by the processor.
  • Cache memory is just like regular memory, only it has a faster
    access time (on the order of 10 ns).
  • Constructed using static RAM or associative memory → more
    expensive than regular RAM.

A lot of cache → a lot of cash
5
Static RAM vs. associative memory
  • Static RAM:
  • Receives an address from the CPU
  • Accesses the data at that address in cache
  • Associative memory:
  • Receives a portion of the data
  • Searches all cache locations in parallel
  • Returns matches

6
How is Data stored in Cache?
  • Valid bit: 1 if the location holds valid data, 0 if garbage.
  • The CPU specifies the data value to be matched.
  • Example:
  • Find data with 1010 as the four high-order bits.
  • The CPU loads 1111 0000 0000 0000 into the mask register
    and 1010 XXXX XXXX XXXX into the data register.
  • Find matches.
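The mask-register match above can be sketched in Python. This is a toy model only: real associative memory compares every location in parallel in hardware, and the word width and contents here are illustrative.

```python
# Toy model of associative-memory matching: a word matches when its
# bits selected by the mask register equal the data register's bits.
def find_matches(words, data_reg, mask_reg):
    """Return indices of valid words whose masked bits equal data_reg."""
    return [i for i, (valid, w) in enumerate(words)
            if valid and (w & mask_reg) == (data_reg & mask_reg)]

# Find words whose four high-order bits are 1010.
words = [(1, 0b1010_0000_0000_0001),  # valid, matches
         (0, 0b1010_1111_0000_0000),  # garbage (valid bit = 0)
         (1, 0b0110_0000_0000_0000)]  # valid, no match
mask = 0b1111_0000_0000_0000   # compare only the 4 high-order bits
data = 0b1010_0000_0000_0000   # 1010 XXXX XXXX XXXX
print(find_matches(words, data, mask))  # only index 0 matches
```

Note how the word with valid bit 0 is skipped even though its high-order bits happen to match.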

8
2.a. Associative Mapping
  • First 16 bits: the memory address.
  • Last 8 bits: the actual data stored at that memory address.
  • The CPU outputs the 16-bit address to be accessed and 8-bit
    don't-cares into the data register, and the value
  • 1111 1111 1111 1111 0000 0000
  • into the mask register.
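A fully associative lookup like the one above can be sketched as a search over (valid, address, data) entries, where the full 16-bit address serves as the tag. The cache contents here are illustrative.

```python
# Toy fully associative cache: entries are (valid, address, data);
# conceptually all entries are searched "in parallel".
def assoc_lookup(cache, addr):
    for valid, tag, data in cache:
        if valid and tag == addr:     # the 16-bit address is the tag
            return data               # hit
    return None                       # miss: go to main memory

cache = [(1, 0x1234, 0xAB),
         (1, 0x1000, 0xCD),
         (0, 0x2000, 0xEE)]          # valid bit 0: garbage entry
assert assoc_lookup(cache, 0x1000) == 0xCD   # hit
assert assoc_lookup(cache, 0x2000) is None   # valid bit 0 -> miss
```

Any address can live in any entry, which is what makes this scheme flexible but expensive.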

10
Alternate Configuration
A wider memory acts on blocks of data, not just individual locations.
Locality of Reference: the next instruction is likely to be in an
adjacent memory location.
11
Result
  • Entire block is copied to cache.
  • Next instruction is likely to already be in
    cache.
  • No need for main memory access.
  • Faster processing

12
2.b. Direct Mapping
  • Can be larger than an associative cache but still cheaper.
  • 10 low-order bits (index): the specific location in cache where
    the data is stored (a 1-to-1 relation).
  • High-order bits (tag): the same as the high-order bits of the
    original main memory address.
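The index/tag split above can be sketched with bit operations, assuming the 16-bit addresses used elsewhere in the slides (so the tag is the remaining 6 high-order bits).

```python
# Direct-mapped address split: 10 low-order bits index the cache line,
# the remaining high-order bits are stored as the tag.
INDEX_BITS = 10

def split_address(addr):
    index = addr & ((1 << INDEX_BITS) - 1)   # low-order 10 bits
    tag = addr >> INDEX_BITS                 # high-order 6 bits
    return tag, index

tag, index = split_address(0b0001_0011_1111_1111)
assert index == 0b11_1111_1111   # cache line 1023
assert tag == 0b000100
```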

14
Problem: Any data from a location XXXX XX11 1111 1111 in main memory
maps to the same location in cache.
Solution: The tag stores the high-order bits of the original main
memory address, and the cache address is the same as the 10 low-order
bits, so the CPU knows it is accessing the correct data in cache.
⇒ No ambiguity
15
Alternate Configuration
A wider memory acts on blocks of data, not just individual locations.
Locality of Reference: the next instruction is likely to be in an
adjacent memory location.
16
Direct Mapping vs. Associative Cache
  • Direct mapping: cheaper than associative.
  • Associative: more flexible than direct mapping.

0000 0000 0000 0000 (0000H): JUMP 1000H
0001 0000 0000 0000 (1000H): JUMP 0000H
  • In direct mapping:
  • Both addresses map to the same cache location (constant
    overwriting).
  • Other cache locations are not in use.
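The thrashing case above can be simulated: 0000H and 1000H share the same 10 low-order bits, so in a direct-mapped cache each JUMP evicts the other and every access misses. The loop length here is illustrative.

```python
# Count misses in a direct-mapped cache for a sequence of addresses.
INDEX_BITS = 10

def count_misses(addresses):
    cache = {}          # index -> tag currently stored at that line
    misses = 0
    for addr in addresses:
        index = addr & ((1 << INDEX_BITS) - 1)
        tag = addr >> INDEX_BITS
        if cache.get(index) != tag:
            misses += 1
            cache[index] = tag      # evict whatever was there
    return misses

# The two JUMPs ping-pong between 0000H and 1000H: same index (0),
# different tags, so every access is a miss.
assert count_misses([0x0000, 0x1000] * 4) == 8
```

An associative or set-associative cache would hold both values at once after the first pass.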

17
2.c. Set-Associative Mapping
  • Low-cost SRAM.
  • Each address in cache contains several data values: an n-way
    set-associative cache contains n values.
  • The tag field is the same as in direct mapping but has one extra
    bit (9 bits for the location, 7 bits for the tag).

18
  • Count/Valid field: the valid bit as before, plus a count field to
    track when the data was last accessed.
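Assuming the 9-bit index and 7-bit tag above, a set-associative lookup can be sketched as follows; the set contents and data values are illustrative.

```python
# Toy n-way set-associative lookup: the 9-bit index selects a set, and
# each way in the set is checked against the 7-bit tag.
INDEX_BITS = 9

def set_assoc_lookup(sets, addr):
    index = addr & ((1 << INDEX_BITS) - 1)
    tag = addr >> INDEX_BITS
    for valid, way_tag, data in sets[index]:   # n ways per set
        if valid and way_tag == tag:
            return data                        # hit
    return None                                # miss

# Two-way example: two addresses with the same index but different
# tags can now coexist in one set.
sets = [[] for _ in range(1 << INDEX_BITS)]
sets[5] = [(1, 0b0000001, 0xAA), (1, 0b0000010, 0xBB)]
assert set_assoc_lookup(sets, (0b0000010 << INDEX_BITS) | 5) == 0xBB
```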

19
Alternate Configuration
20
Which method is used?
21
3. Replacing and Writing Data
  • When the valid bit = 0, the memory location is free.

Problem: What do we do when all cache locations are occupied?
Solution: We must overwrite existing data. But which entry?
22
Direct Mapping: each data value can map to only one specific location,
so that location must be used; the old value is written back to main
memory.
  • Associative Mapping:
  • FIFO: fill the cache from top to bottom, then start replacing from
    the top again. Produces good performance.
  • Least Recently Used: using a counter, keep track of which data was
    last accessed and replace the least recently used entry.
  • Random: also produces good performance.

23
Set-Associative Mapping: often uses the Least Recently Used method.
Example access sequence: access D, access E, access A.
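A minimal LRU sketch, assuming a per-entry counter scheme like the one the slides describe; the cache capacity and the keys D, E, A are illustrative.

```python
# Toy LRU cache: each entry stores the time of its last access; on a
# miss with a full cache, the entry with the smallest time is evicted.
def access(cache, capacity, key, clock):
    """cache maps key -> last-access time. Returns 'hit' or 'miss'."""
    if key in cache:
        cache[key] = clock                # refresh the count field
        return "hit"
    if len(cache) >= capacity:
        lru = min(cache, key=cache.get)   # least recently used entry
        del cache[lru]
    cache[key] = clock
    return "miss"

cache = {}
results = [access(cache, 2, k, t)
           for t, k in enumerate(["D", "E", "A", "D"])]
# D and E fill the cache; A evicts D (least recent); D then misses.
assert results == ["miss", "miss", "miss", "miss"]
assert set(cache) == {"A", "D"}
```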
24
4. Cache Performance
  • The main purpose of cache memory is to improve system performance
    by minimizing slow accesses to main memory.
  • Cache hit: the CPU finds the data in cache instead of main memory.
  • Cache miss: the CPU must access main memory, not the cache.
  • Hit ratio h: the fraction of accesses that are cache hits.
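The average access time behind the TM figures on the next slides can be sketched with the usual formula TM = h·Tc + (1 − h)·Tm. The times Tc = 10 ns and Tm = 60 ns are assumptions (the slides only state the ~10 ns cache access time), but with them h = 0.389 lands close to the quoted 40.56 ns.

```python
# Average memory access time: a hit costs the cache time Tc, a miss
# costs the main-memory time Tm. Tc and Tm defaults are assumed values.
def avg_access_time(h, tc=10.0, tm=60.0):
    return h * tc + (1 - h) * tm   # TM = h*Tc + (1 - h)*Tm

print(round(avg_access_time(0.389), 2))   # near the quoted 40.56 ns
print(round(avg_access_time(1.0), 2))     # as h -> 1, TM -> Tc
```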

25
  • As h → 1 we get faster and better system performance.

26
Associative cache: h = 0.389 and TM = 40.56 ns
27
Direct-mapped cache: h = 0.167 and TM = 50.67 ns
28
Two-way set-associative cache: h = 0.389 and TM = 40.56 ns
29
Conclusion
  • Cache Memory is fast (and expensive)
  • Types of Cache Memory
  • Associative Mapping
  • Direct Mapping
  • Set-Associative Mapping
  • Replacing and Writing Data into Cache
  • Cache Performance

30
Bottom Line
Cache memory speeds up system performance, making the user's life a
lot easier.