Exploiting Memory Hierarchies - PowerPoint PPT Presentation

Title: Exploiting Memory Hierarchies
Slides: 20
Provided by: RowanUni8
Learn more at: http://elvis.rowan.edu

Transcript and Presenter's Notes

1
Exploiting Memory Hierarchies
  • Programs need memory to run in
  • Memory comes in two flavors:
  • fast (but small and expensive)
  • large (but slow and cheap)
  • Question: Which kind to use?
  • Answer: Both!

2
Memory Technologies (P&H 1997)
Technology      Typical access time   $/MByte
SRAM            5-25 ns               $100-250
DRAM            60-120 ns             $5-10
Magnetic disk   10-20 million ns      $0.10-0.20
3
Memory Technologies: Trends
Metric            1980    1985   1990  1995   2000   1980:2000
SRAM $/MB         19,200  2,900  320   256    100    190
SRAM access (ns)  300     150    35    15     2      100
DRAM $/MB         8,000   880    100   30     1      8,000
DRAM access (ns)  375     200    100   70     60     6
DRAM typical MB   0.064   0.256  4     16     64     1,000
Disk $/MB         500     100    8     0.30   0.05   10,000
Disk access (ms)  87      75     28    10     8      11
Disk typical MB   1       10     160   1,000  9,000  9,000
courtesy Randy.Bryant@cs.cmu.edu
4
Memory Hierarchy
(figure: a pyramid from the CPU through Cache-0, Cache-1, and
Cache-2, with axes for speed, size, and cost per bit)
  • top of the hierarchy: fastest, smallest, highest cost per bit
  • bottom of the hierarchy: slowest, largest, lowest cost per bit
5
Caching
  • Large amount of (slow) memory
  • Plus a (relatively) small, quickly accessible
    cache "closer to you"
  • Works because programs (vis-à-vis memory) exhibit:
  • Temporal Locality
  • Spatial Locality

6
Caching: Temporal Locality
  • Programs tend to reference recently accessed
    memory
  • e.g., instruction memory in loops
  • Memory hierarchies exploit this by caching
    recently-accessed data in close, fast memory

7
Caching: Spatial Locality
  • Programs tend to reference memory close to
    recently-accessed items
  • Memory hierarchies exploit this by caching
    contiguous blocks of data
  • We'll come back to this idea later
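Not from the slides: a minimal Python sketch of why a simple loop exhibits both kinds of locality, using a trace of element indices to stand in for memory addresses.

```python
def sum_array(data):
    """Sum an array while logging which indices are touched."""
    accesses = []           # trace of element indices, for illustration
    total = 0
    for i in range(len(data)):
        accesses.append(i)  # spatial locality: consecutive addresses
        total += data[i]    # temporal locality: 'total' is reused every step
    return total, accesses

total, trace = sum_array([3, 1, 4, 1, 5])
assert trace == [0, 1, 2, 3, 4]  # neighbouring memory words, in order
assert total == 14
```

The consecutive indices in the trace are what block caching (slide 14) exploits; the repeated reuse of `total` and the loop's own instructions is what slide 6 exploits.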

8
Caching: Terminology
  • Hit: the data required is found in the cache
  • Miss: the data is not in the cache and must be
    retrieved from the next level of memory
  • Hit Rate (hit ratio): fraction of memory
    accesses that result in hits
  • Miss Rate: fraction of memory accesses that
    result in misses

9
Caching: Terminology
  • Hit Time: time to access data in the cache (includes
    time to determine whether the data is cached)
  • Miss Penalty: time to replace a block (of
    memory) in the cache
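Not on the slides, but these terms combine into the standard average memory access time (AMAT) formula: every access pays the hit time, and a fraction equal to the miss rate additionally pays the miss penalty.

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time:
    AMAT = hit time + miss rate * miss penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# e.g. a 2 ns cache with a 5% miss rate backed by 60 ns DRAM:
assert amat(2, 0.05, 60) == 5.0
```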

10
Caching Models: Direct-Mapped Cache
  • Q. How do we know if an item is in cache? And if
    so, how do we find it?
  • A. Assign each memory location a unique (but not
    exclusive) place in the cache

11
Caching Models: Direct-Mapped Cache
(figure: a 4-slot cache table with Valid?, tag, and data columns;
each of the 16 memory words maps to the slot given by its low 2
address bits, with the high 2 bits stored as the tag)
  • cache: 4 1-word slots
  • memory: 16 words (addresses 0000-1111)

12
Caching Models: Direct-Mapped Cache
  • Exercise
  • How many cache hits/misses for the following
    memory access pattern?
  • 0010,0011,0110,1110,0000,1111,1001,0110
  • What if we had 8 cache slots instead of 4?
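One way to check the exercise: a few lines of Python simulating a direct-mapped cache of one-word slots, treating each pattern entry as a 4-bit word address (low bits select the slot, high bits form the tag). With this pattern, every access misses in both configurations.

```python
def simulate(addresses, num_slots):
    """Direct-mapped cache of one-word slots: the low bits of the
    address pick the slot, the remaining high bits are the tag."""
    index_bits = num_slots.bit_length() - 1  # num_slots is a power of 2
    cache = [None] * num_slots               # each slot holds a tag (or None)
    hits = 0
    for a in addresses:
        slot = a % num_slots                 # low index_bits of the address
        tag = a >> index_bits                # remaining high bits
        if cache[slot] == tag:
            hits += 1
        else:
            cache[slot] = tag                # miss: fill/replace the slot
    return hits, len(addresses) - hits

pattern = [0b0010, 0b0011, 0b0110, 0b1110,
           0b0000, 0b1111, 0b1001, 0b0110]
assert simulate(pattern, 4) == (0, 8)  # 4 slots: 0 hits, 8 misses
assert simulate(pattern, 8) == (0, 8)  # 8 slots: still 0 hits
```

With 4 slots, 0010, 0110, and 1110 keep evicting each other from slot 10; doubling the slots still doesn't help, because 0110 and 1110 share slot 110.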

13
Caching Models: Cache Misses, Reading vs. Writing
  • What if a program writes (rather than reads) a
    memory location?
  • write-through (write the cache and the next level at once)
  • write buffer (queue of pending writes to the next level)
  • write-back (write to the next level only when the entry
    is replaced)
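A hypothetical sketch (class names are illustrative, not from the slides) contrasting write-through and write-back for a single cache slot, by counting how many writes actually reach the next memory level.

```python
class WriteThroughSlot:
    """Every write is propagated to the next level immediately."""
    def __init__(self):
        self.tag = None
        self.next_level_writes = 0
    def write(self, tag):
        self.tag = tag
        self.next_level_writes += 1      # every write goes through

class WriteBackSlot:
    """Writes are deferred until the dirty entry is replaced."""
    def __init__(self):
        self.tag = None
        self.dirty = False
        self.next_level_writes = 0
    def write(self, tag):
        if self.tag != tag and self.dirty:
            self.next_level_writes += 1  # flush the old entry on replacement
        self.tag = tag
        self.dirty = True                # defer this write until eviction

wt, wb = WriteThroughSlot(), WriteBackSlot()
for tag in [1, 1, 1, 2]:                 # three writes to one block, then another
    wt.write(tag)
    wb.write(tag)
assert wt.next_level_writes == 4         # write-through pays every time
assert wb.next_level_writes == 1         # write-back pays once, on replacement
```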

14
Caching Models: Exploiting Spatial Locality
  • So far, the cache only exploits temporal locality
  • Can exploit spatial locality by caching blocks of
    contiguous words, rather than single words
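With multiword blocks, a word address splits into three fields: a word-within-block offset, a slot index, and a tag. A small Python sketch, using the next slide's configuration (64 words, 4 slots of 4 words, so 2 offset bits and 2 index bits):

```python
def split_address(addr, offset_bits, index_bits):
    """Split a word address into (tag, slot index, word-in-block offset)."""
    offset = addr & ((1 << offset_bits) - 1)          # lowest bits
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag = addr >> (offset_bits + index_bits)          # remaining high bits
    return tag, index, offset

# address 110110: tag 11, slot 01, word 10 within the block
assert split_address(0b110110, 2, 2) == (0b11, 0b01, 0b10)
```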

15
Caching Models: Direct-Mapped Cache
(figure: a 4-slot cache, each slot holding a 4-word block, with
Valid?, Written?, tag, and four data columns; the address splits
into a tag, a slot index, and a word-within-block offset 00-11)
  • cache: 4 4-word slots
  • memory: 64 words (addresses 000000-111111)

16
Caching Models: Other Mapping Models
  • Fully-associative:
  • no strict mapping from memory to cache line
  • cached entries are retained longest, but there is extra
    overhead to determine presence
  • Set-associative:
  • a mix of direct mapping and associativity

17
Caching Models: Other Mapping Models
  • How do we determine which cache entry to replace?
  • Least-recently used (LRU) (not always best, but
    simplest)
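A minimal Python sketch of one set of a set-associative cache with LRU replacement, keeping its lines in an `OrderedDict` ordered from least to most recently used (the class name is illustrative, not from the slides).

```python
from collections import OrderedDict

class LRUSet:
    """One set of a set-associative cache with LRU replacement."""
    def __init__(self, ways):
        self.ways = ways
        self.lines = OrderedDict()            # tag -> data, LRU first
    def access(self, tag):
        """Return True on a hit, False on a miss (filling the line)."""
        if tag in self.lines:
            self.lines.move_to_end(tag)       # hit: mark most recently used
            return True
        if len(self.lines) >= self.ways:
            self.lines.popitem(last=False)    # miss: evict least recently used
        self.lines[tag] = None
        return False

s = LRUSet(ways=2)
assert s.access(1) is False   # miss: fill
assert s.access(2) is False   # miss: fill
assert s.access(1) is True    # hit: tag 1 becomes most recently used
assert s.access(3) is False   # miss: evicts tag 2 (the LRU)
assert s.access(2) is False   # miss: tag 2 was evicted
```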

18
Virtual Memory
  • Caching, but for a different purpose
  • Physical memory acts as a cache for virtual memory
    (disk-resident program memory)
  • Allows program(s) to address more memory than is
    physically installed

19
Virtual Memory: Translation-Lookaside Buffer (TLB)
  • Cache of recently translated (virtual → physical)
    addresses
  • No larger (usually smaller) than the number of pages
    in memory
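A hypothetical sketch of a TLB in front of a page table; the `page_table` dict stands in for the slower in-memory page table that the TLB shields, and the 4 KiB page size is an assumption (a common choice, not stated on the slides).

```python
PAGE_BITS = 12                               # 4 KiB pages (assumed)

def translate(vaddr, tlb, page_table):
    """Return a physical address, consulting the TLB first."""
    vpn = vaddr >> PAGE_BITS                 # virtual page number
    offset = vaddr & ((1 << PAGE_BITS) - 1)  # unchanged by translation
    if vpn not in tlb:                       # TLB miss: walk the page table
        tlb[vpn] = page_table[vpn]
    return (tlb[vpn] << PAGE_BITS) | offset  # physical frame + offset

page_table = {0: 7, 1: 3}                    # virtual page -> physical frame
tlb = {}
assert translate(0x1ABC, tlb, page_table) == (3 << 12) | 0xABC
assert 1 in tlb                              # the translation is now cached
```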