Transcript and Presenter's Notes

Title: Internal Memory


1
Computer System Architecture
Chapters 6 & 12: Memory
2
Semiconductor Memory
  • RAM
  • Misnamed as all semiconductor memory is random
    access
  • Read/Write
  • Volatile
  • Temporary storage
  • Static or dynamic

3
Dynamic RAM
  • Bits stored as charge in capacitors
  • Charges leak
  • Need refreshing even when powered
  • Need refresh circuits
  • Simpler construction
  • Smaller per bit
  • Less expensive
  • Slower
  • Main memory

4
Refreshing
  • Refresh circuit included on chip
  • Disable chip
  • Count through rows
  • Read and write back each row (sketched below)
  • Takes time
  • Slows down apparent performance
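
A toy model of that sweep, assuming nothing about real DRAM row counts or timing (the values below are illustrative only): each row is read and written back, which restores its charge.

#include <stdio.h>

#define NUM_ROWS 8                        /* illustrative row count           */

/* Toy model: each row holds data plus a charge level that leaks over time. */
static int row_data[NUM_ROWS];
static int row_charge[NUM_ROWS];          /* 100 = fully charged              */

/* One refresh sweep: step through every row, read it and write it back.
   Normal accesses are blocked while this happens.                          */
void refresh_sweep(void)
{
    for (int row = 0; row < NUM_ROWS; row++) {
        int value = row_data[row];        /* read ...                         */
        row_data[row] = value;            /* ... and write back               */
        row_charge[row] = 100;            /* charge restored                  */
    }
}

int main(void)
{
    for (int row = 0; row < NUM_ROWS; row++)
        row_charge[row] = 40;             /* charge has leaked                */
    refresh_sweep();
    printf("charge on row 0 after refresh: %d\n", row_charge[0]);
    return 0;
}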

5
Static RAM
  • Bits stored as on/off switches
  • No charges to leak
  • No refreshing needed when powered
  • More complex construction
  • Larger per bit
  • More expensive
  • Faster
  • Cache

6
Read Only Memory (ROM)
  • Permanent storage; can be used for
  • Microprogramming
  • Library subroutines
  • Systems programs (BIOS)
  • Function tables

7
Characteristics of Memory system
  • Location
  • Capacity
  • Unit of transfer
  • Access method
  • Performance
  • Physical type
  • Organisation

8
Memory hierarchy
(Diagram: the memory hierarchy, from the CPU through internal memory to external memory.)
9
Unit of Transfer
  • Internal
  • Usually governed by data bus width
  • External
  • Usually a block which is much larger than a
    byte/word
  • Addressable unit
  • Smallest location which can be uniquely addressed
  • Word/byte internally
  • Block/cluster on disks

10
Access Methods
  • Random
  • Individual addresses identify locations exactly
  • Access time is independent of location or
    previous access
  • e.g. RAM
  • Associative
  • Data is located by a comparison with contents of
    a portion of the stored address
  • Access time is independent of location or
    previous access
  • e.g. cache

11
Do you want fast?
  • It is possible to build a computer which uses
    only static RAM
  • This would be very fast
  • This would need no cache
  • How can you cache the cache?
  • This would cost a very large amount

12
If you want fast and cheap (best of both)
  • A tradeoff is to integrate different memory types
    into a hierarchy
  • Why is this possible?

13
1) Program behaviour - Locality
  • During the course of the execution of a program,
    memory references tend to cluster
  • e.g. loops

14
2) Programming smartly
  • Memory is organized in a linear space
  • Programmer must be aware of this
  • These two loops can perform very differently

for (i = 0; i < 1000; i++)
    for (j = 0; j < 1000; j++)
        sum += array[j][i];

for (i = 0; i < 1000; i++)
    for (j = 0; j < 1000; j++)
        sum += array[i][j];
15
If the linear space is row based
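
In C the linear space is indeed row-based (row-major): array[i][j] lives (i * row_length + j) elements past the start of the array, so the second loop above steps through memory one element at a time while the first jumps a whole row per step. A minimal sketch of that address arithmetic, using the same 1000 x 1000 array as the loops above:

#include <stdio.h>

#define N 1000

static int array[N][N];

int main(void)
{
    int i = 3, j = 7;

    /* Row-major layout: element [i][j] sits (i * N + j) elements past the base. */
    int *by_formula = &array[0][0] + ((long)i * N + j);

    printf("same address: %d\n", by_formula == &array[i][j]);   /* prints 1 */
    return 0;
}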
16
Cache
  • Because main memory is slower than the CPU, a
    small amount of fast memory is needed
  • Sits between normal main memory and CPU
  • May be located on CPU chip or module

17
Cache operation - overview
  • CPU requests are sent to the CCU (cache control unit)
  • CCU checks the cache for this data
  • If present, get it from the cache (fast)
  • If not present, read the required block from main
    memory into the cache
  • Then deliver it from the cache to the CPU
  • Cache includes tags to identify which block of
    main memory is in each cache slot (a simplified
    sketch of this lookup follows below)
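
A minimal sketch of that lookup in C. Everything here is illustrative: a tiny 4-line cache, 4-byte blocks, and a 64-byte "main memory"; direct mapping is used only to keep the sketch short, since the slides treat mapping functions separately.

#include <stdio.h>
#include <string.h>

#define NUM_LINES   4            /* tiny cache, illustrative only           */
#define BLOCK_SIZE  4            /* bytes per block                          */
#define MEM_SIZE    64           /* tiny "main memory" in bytes              */

static unsigned char memory[MEM_SIZE];

/* One cache line: a valid flag, a tag naming the memory block it holds,
   and the block data itself.                                               */
struct line {
    int           valid;
    unsigned      tag;
    unsigned char data[BLOCK_SIZE];
};

static struct line cache[NUM_LINES];

/* Read one byte through the cache. */
unsigned char cache_read(unsigned addr)
{
    unsigned block = addr / BLOCK_SIZE;        /* which memory block         */
    unsigned index = block % NUM_LINES;        /* which cache line           */
    unsigned tag   = block / NUM_LINES;        /* identifies the block       */
    struct line *l = &cache[index];

    if (!l->valid || l->tag != tag) {          /* miss: fetch the whole block */
        memcpy(l->data, &memory[block * BLOCK_SIZE], BLOCK_SIZE);
        l->tag   = tag;
        l->valid = 1;
    }
    return l->data[addr % BLOCK_SIZE];         /* deliver from the cache      */
}

int main(void)
{
    memory[10] = 42;
    printf("%d\n", cache_read(10));            /* miss, block loaded          */
    printf("%d\n", cache_read(10));            /* hit                          */
    return 0;
}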

18
Cache operation (cont)
  • If the cache is full when a new request arrives, the
    CCU has to make room for the new data
  • Which block should be selected to dump?
  • If the selected victim has been changed since it was
    loaded into the cache, it must be written back

19
Cache Design
  • Size
  • Mapping Function
  • Replacement Algorithm
  • Write Policy
  • Block Size
  • Number of Caches

20
Size does matter
  • Cost
  • More cache is expensive
  • Speed
  • More cache is faster (up to a point), then it
    gets slower
  • Checking cache for data takes time

21
Typical Cache Organization
22
Mapping Function
  • Cache of 64 KBytes
  • Cache block of 4 bytes
  • i.e. the cache is 16K (2^14) lines of 4 bytes
  • 16 MBytes main memory
  • 24-bit address
  • (2^24 = 16M)

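
With the sizes above and direct mapping (as in the example on the next slide), the 24-bit address splits into an 8-bit tag, a 14-bit line number, and a 2-bit word offset. A minimal sketch of extracting the fields; the address value is just an illustration:

#include <stdio.h>

/* 24-bit address, 64 KByte cache, 4-byte blocks (16K = 2^14 lines):
   for direct mapping the address splits into
   tag (8 bits) | line (14 bits) | word (2 bits).                    */
int main(void)
{
    unsigned addr = 0x16339C & 0xFFFFFF;    /* any 24-bit address     */

    unsigned word = addr & 0x3;             /* lowest 2 bits          */
    unsigned line = (addr >> 2) & 0x3FFF;   /* next 14 bits           */
    unsigned tag  = (addr >> 16) & 0xFF;    /* top 8 bits             */

    printf("tag=%02X line=%04X word=%u\n", tag, line, word);
    return 0;
}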
23
Direct Mapping Example
24
Direct Mapping pros and cons
  • Simple
  • Inexpensive
  • Fixed location for given block
  • If a program accesses 2 blocks that map to the
    same line repeatedly, cache misses are very high

(Diagram: two blocks of main memory that map to the same cache line, so each access evicts the other.)
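
A small check of that worst case, assuming the 64 KByte cache and 4-byte blocks from the earlier slide: two addresses exactly one cache size apart always fall on the same line.

#include <stdio.h>

#define CACHE_SIZE (64 * 1024)                    /* 64 KByte cache      */
#define BLOCK_SIZE 4                              /* 4-byte blocks        */
#define NUM_LINES  (CACHE_SIZE / BLOCK_SIZE)      /* 16K lines            */

int main(void)
{
    unsigned a = 0x000100;                 /* two addresses 64 KBytes apart */
    unsigned b = a + CACHE_SIZE;

    unsigned line_a = (a / BLOCK_SIZE) % NUM_LINES;
    unsigned line_b = (b / BLOCK_SIZE) % NUM_LINES;

    /* Both map to the same line, so alternating accesses miss every time. */
    printf("line_a=%u line_b=%u same=%d\n", line_a, line_b, line_a == line_b);
    return 0;
}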
25
Associative Mapping
  • A main memory block can load into any line of
    cache
  • Memory address is interpreted as tag and word
  • Tag uniquely identifies block of memory
  • Every line's tag is examined for a match
  • Cache searching gets expensive

26
Fully Associative Cache Organization
27
Associative Mapping Example
(Example: the 24-bit address FFFFF4, shifted 2 bits right, gives the tag 3FFFFD.)
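
With 4-byte blocks the tag is simply the address with its 2-bit word offset shifted away; a minimal check of the figures above:

#include <stdio.h>

int main(void)
{
    unsigned addr = 0xFFFFF4;        /* 24-bit address from the example */
    unsigned word = addr & 0x3;      /* 2-bit word offset               */
    unsigned tag  = addr >> 2;       /* remaining 22 bits are the tag   */

    printf("tag=%06X word=%u\n", tag, word);     /* tag = 3FFFFD        */
    return 0;
}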
28
Replacement Algorithms (1) Direct mapping
  • No choice
  • Each block only maps to one line
  • Replace that line

29
Replacement Algorithms (2) Associative
  • Hardware implemented algorithm (speed)
  • Least recently used (LRU), sketched below
  • First in first out (FIFO)
  • replace block that has been in cache longest
  • Least frequently used
  • replace block which has had fewest hits
  • Random
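
A minimal sketch of the LRU policy for a small fully associative cache (all sizes and block numbers are illustrative): each line records when it was last used, and the oldest line is the victim.

#include <stdio.h>

#define NUM_LINES 4

/* Each line remembers which memory block it holds and when it was last used. */
struct line {
    int      valid;
    unsigned block;
    unsigned last_used;
};

static struct line cache[NUM_LINES];
static unsigned clock_ticks;

/* Access one memory block; on a miss, evict the least recently used line. */
void access_block(unsigned block)
{
    int victim = -1;

    clock_ticks++;
    for (int i = 0; i < NUM_LINES; i++) {
        if (cache[i].valid && cache[i].block == block) {   /* hit           */
            cache[i].last_used = clock_ticks;              /* refresh age    */
            return;
        }
    }
    /* Miss: pick an empty line if one exists, else the least recently used. */
    for (int i = 0; i < NUM_LINES; i++) {
        if (!cache[i].valid) { victim = i; break; }
        if (victim < 0 || cache[i].last_used < cache[victim].last_used)
            victim = i;
    }
    cache[victim].valid     = 1;
    cache[victim].block     = block;
    cache[victim].last_used = clock_ticks;
    printf("miss on block %u, loaded into line %d\n", block, victim);
}

int main(void)
{
    unsigned trace[] = { 1, 2, 3, 4, 1, 5 };   /* block 2 is LRU when 5 arrives */
    for (int i = 0; i < 6; i++)
        access_block(trace[i]);
    return 0;
}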

30
Write Policy
  • Must not overwrite a cache block unless main
    memory is up to date
  • Multiple CPUs may have individual caches
  • I/O may address main memory directly

31
Write through
  • All writes go to main memory as well as cache
  • Multiple CPUs can monitor main memory traffic to
    keep local (to CPU) cache up to date
  • Lots of traffic
  • Slows down writes

32
Write back
  • Updates are initially made in the cache only
  • The update bit for the cache slot is set (this
    mechanism is sketched below)
  • If a block is to be replaced, write it to main
    memory only if the update bit is set
  • Other caches get out of sync
  • I/O must access main memory through cache
  • N.B. 15% of memory references are writes
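
A minimal sketch of the write-back mechanism with a single cache line (sizes and names are illustrative): writes touch only the cache and set the update (dirty) bit, and the block reaches main memory only when the line is replaced.

#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4
#define MEM_SIZE   64

static unsigned char memory[MEM_SIZE];

/* A single cache line with the update bit described on the slide. */
struct line {
    int           valid;
    int           dirty;                 /* update bit                        */
    unsigned      block;                 /* which memory block is cached      */
    unsigned char data[BLOCK_SIZE];
};

static struct line line0;                /* one line is enough for the sketch */

/* Write a byte: update the cache only and mark the line dirty. */
void cache_write(unsigned addr, unsigned char value)
{
    line0.data[addr % BLOCK_SIZE] = value;
    line0.dirty = 1;
}

/* Replace the line: write the old block back only if the update bit is set. */
void replace_line(unsigned new_block)
{
    if (line0.valid && line0.dirty)
        memcpy(&memory[line0.block * BLOCK_SIZE], line0.data, BLOCK_SIZE);
    memcpy(line0.data, &memory[new_block * BLOCK_SIZE], BLOCK_SIZE);
    line0.block = new_block;
    line0.valid = 1;
    line0.dirty = 0;
}

int main(void)
{
    replace_line(2);                     /* load block 2 (bytes 8..11)        */
    cache_write(9, 42);                  /* main memory still stale here      */
    printf("before eviction: %d\n", memory[9]);
    replace_line(5);                     /* eviction writes block 2 back       */
    printf("after eviction:  %d\n", memory[9]);
    return 0;
}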

33
Virtual memory
  • Virtual memory: separation of user logical
    memory from physical memory.
  • Only part of the program needs to be in memory
    for execution.
  • Logical address space can therefore be much
    larger than physical address space.
  • Allows address spaces to be shared by several
    processes.
  • Allows for more efficient process creation.
  • Virtual memory can be implemented via
  • Demand paging (a minimal sketch follows below)
  • Demand segmentation
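
A minimal sketch of the demand-paging idea (all sizes and names are illustrative): a page-table entry carries a present bit, and referencing an absent page triggers a fault that loads the page before the address is translated.

#include <stdio.h>

#define NUM_PAGES 8
#define PAGE_SIZE 256

/* One page-table entry: is the page in physical memory, and in which frame? */
struct pte {
    int present;
    int frame;
};

static struct pte page_table[NUM_PAGES];
static int next_free_frame;

/* Translate a virtual address, loading the page on demand. */
int translate(unsigned vaddr)
{
    unsigned page   = vaddr / PAGE_SIZE;
    unsigned offset = vaddr % PAGE_SIZE;

    if (!page_table[page].present) {                    /* page fault        */
        page_table[page].frame   = next_free_frame++;   /* "load" the page   */
        page_table[page].present = 1;
        printf("page fault on page %u\n", page);
    }
    return page_table[page].frame * PAGE_SIZE + offset;
}

int main(void)
{
    printf("physical: %d\n", translate(0x123));   /* fault, then mapped      */
    printf("physical: %d\n", translate(0x130));   /* same page, no fault     */
    return 0;
}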

(Flowchart: a condition branching to code section A or B, illustrating that only the part of the program actually executed needs to be in memory.)
34
Virtual Memory That is Larger Than Physical Memory