Memory Management - Presentation Transcript
1
Memory Management
  • Jordan University of Science & Technology
  • CPE 746 Embedded Real-Time Systems
  • Prepared By: Salam Al-Mandil & Hala Obaidat
  • Supervised By: Dr. Loai Tawalbeh

2
Outline
  • Introduction
  • Common Memory Types
  • Composing Memory
  • Memory Hierarchy
  • Caches
  • Application Memory Management
  • Static memory management
  • Dynamic memory management
  • Memory Allocation
  • The problem of fragmentation
  • Memory Protection
  • Recycling techniques

3
Introduction
  • An embedded system is a special-purpose computer
    system designed to perform one or a few dedicated
    functions, sometimes with real-time computing
    constraints.
  • An embedded system is part of a larger system.
  • Embedded systems often have small memories and are
    required to run for a long time, so memory management
    is a major concern when developing real-time
    applications.

4
Common Memory Types
5
RAM
  • DRAM
  • Volatile memory.
  • Address lines are multiplexed: the first half of the
    address is sent first, called the row address strobe
    (RAS); the second half is sent later, called the
    column address strobe (CAS).
  • A capacitor and a single transistor per bit →
    better capacity.
  • Requires periodic refreshing every 10-100 ms →
    hence "dynamic".
  • Cheaper per bit → lower cost.
  • Slower → used for main memory.

6
Reading DRAM Super cell (2,1)
  • Step 1(a): Row access strobe (RAS) selects row 2.
  • Step 1(b): Row 2 is copied from the DRAM array to the
    row buffer.

[Figure: 16 x 8 DRAM chip. The memory controller drives RAS = 2 on the
2-bit address lines; row 2 of the array of super cells is copied into
the internal row buffer; 8 data lines connect back to the controller.]
7
Reading DRAM Super cell (2,1)
  • Step 2(a): Column access strobe (CAS) selects
    column 1.
  • Step 2(b): Super cell (2,1) is copied from the buffer
    to the data lines, and eventually back to the CPU.

[Figure: 16 x 8 DRAM chip. The memory controller drives CAS = 1; super
cell (2,1) is copied from the internal row buffer to the 8 data lines
and eventually back to the CPU.]
8
RAM
  • SRAM
  • Volatile memory.
  • Six transistors per bit → lower capacity.
  • No refreshing required → faster, lower power
    consumption.
  • More expensive per bit → higher cost.
  • Faster → used in caches.

9
Some Memory Types
  • ROM
  • Non-volatile memory.
  • Can be read from, but not written to, by a
    processor in an embedded system.
  • Traditionally written to (programmed) before being
    inserted into the embedded system.
  • Stores constant data needed by the system.
  • Horizontal lines are words; vertical lines are data.
  • Some embedded systems work without RAM,
    exclusively on ROM, because their programs and
    data are rarely changed.

10
Some Memory Types
  • Flash Memory
  • Non-volatile memory.
  • Can be electrically erased and reprogrammed.
  • Used in memory cards, and USB flash drives.
  • It is erased and programmed in large blocks at
    once, rather than one word at a time.
  • Examples of applications include PDAs and laptop
    computers, digital audio players, digital cameras
    and mobile phones.

11
Type       | Volatile? | Writeable?                     | Erase Size  | Max Erase Cycles            | Cost (per Byte)            | Speed
SRAM       | Yes       | Yes                            | Byte        | Unlimited                   | Expensive                  | Fast
DRAM       | Yes       | Yes                            | Byte        | Unlimited                   | Moderate                   | Moderate
Masked ROM | No        | No                             | n/a         | n/a                         | Inexpensive                | Fast
PROM       | No        | Once, with a device programmer | n/a         | n/a                         | Moderate                   | Fast
EPROM      | No        | Yes, with a device programmer  | Entire chip | Limited (consult datasheet) | Moderate                   | Fast
EEPROM     | No        | Yes                            | Byte        | Limited (consult datasheet) | Expensive                  | Fast to read, slow to erase/write
Flash      | No        | Yes                            | Sector      | Limited (consult datasheet) | Moderate                   | Fast to read, slow to erase/write
NVRAM      | No        | Yes                            | Byte        | Unlimited                   | Expensive (SRAM + battery) | Fast
12
Composing Memory
  • When available memory is larger, simply ignore
    unneeded high-order address bits and higher data
    lines.
  • When available memory is smaller, compose several
    smaller memories into one larger memory:
  • Connect side-by-side.
  • Connect top to bottom.
  • Combine techniques.

13
Connect side-by-side
  • To increase width of words.

14
Connect top to bottom
  • To increase number of words.

15
Combine techniques
  • To increase number and width of words.

16
Memory Hierarchy
  • An approach for organizing memory and storage
    systems.
  • A memory hierarchy is organized into several levels,
    each smaller, faster, and more expensive per byte
    than the next lower level.
  • For each k, the faster, smaller device at level k
    serves as a cache for the larger, slower device at
    level k+1.
  • Programs tend to access the data at level k more
    often than they access the data at level k+1.

17
An Example Memory Hierarchy
[Figure: an example memory hierarchy. L0: CPU registers (hold words
retrieved from the L1 cache); L1: on-chip L1 cache (SRAM); L2: off-chip
L2 cache (SRAM); L3: main memory (DRAM); L4: local secondary storage
(local disks); L5: remote secondary storage (distributed file systems,
Web servers). Devices get smaller, faster, and costlier per byte toward
L0, and larger, slower, and cheaper per byte toward L5.]
18
Caches
  • Cache: the first level(s) of the memory hierarchy
    encountered once the address leaves the CPU.
  • The term is generally used whenever buffering is
    employed to reuse commonly occurring items, such as
    in webpage caches, file caches, and name caches.

19
Caching in a Memory Hierarchy
[Figure: the larger, slower, cheaper storage device at level k+1 is
partitioned into blocks 0-15; the smaller, faster device at level k
caches copies of a few of them (here, blocks 4 and 10).]
20
General Caching Concepts
  • Program needs object d, which is stored in some
    block b.
  • Cache hit:
  • The program finds b in the cache at level k (e.g.,
    block 14).
  • Cache miss:
  • b is not at level k, so the level-k cache must fetch
    it from level k+1 (e.g., block 12).
  • If the level-k cache is full, then some current block
    must be replaced (evicted). Which one is the victim?
    We'll see later.

[Figure: a request for block 14 hits in the level-k cache; a request
for block 12 misses, so block 12 is fetched from level k+1 and placed
in the level-k cache, evicting a resident block.]
21
Cache Placement
  • There are 3 categories of cache organization:
  • Direct-mapped.
  • Fully-associative.
  • Set-associative.

22
Direct-Mapped
  • The block can appear in one place only.
  • Fastest and simplest organization, but highest miss
    rate due to contention.
  • The mapping is usually (block address) mod (number
    of blocks in the cache).

23
Fully-Associative
  • The block can appear anywhere in the cache.
    Slowest organization, but lowest miss rate.

24
Set-Associative
  • The block can appear anywhere within a single set
    (n-way set associative).
  • The set number is usually (block address) mod
    (number of sets in the cache), as in the sketch
    below.
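
A minimal sketch of the two index computations above, assuming block and
set counts that are powers of two (so the modulo reduces to masking low
bits); the sizes here are illustrative assumptions, not from the slides:

    #include <stdint.h>

    #define NUM_BLOCKS 64u   /* direct-mapped: 64 one-block frames (assumed) */
    #define NUM_SETS   16u   /* 4-way set-associative: 64 blocks / 4 ways    */

    /* Direct-mapped: the block can live in exactly one frame. */
    static uint32_t dm_index(uint32_t block_addr) {
        return block_addr % NUM_BLOCKS;  /* power of two -> low bits only */
    }

    /* Set-associative: the block can live anywhere within one set. */
    static uint32_t set_index(uint32_t block_addr) {
        return block_addr % NUM_SETS;
    }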

25
Cache Replacement
  • In a direct-mapped cache, only one block is checked
    for a hit, so only that block can be replaced.
  • For set-associative or fully-associative caches,
    the evicted block is chosen using one of three
    strategies:
  • Random.
  • LRU (least recently used).
  • FIFO.

26
Cache Replacement
  • As associativity increases → LRU becomes harder and
    more expensive to implement → LRU is approximated
    (see the sketch below).
  • LRU and random perform almost equally well for
    larger caches, but LRU outperforms the others for
    small caches.
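
Tree pseudo-LRU is one common way such an approximation is built; below
is a sketch for a single 4-way set. The 3-bit encoding and the names are
assumptions, since hardware conventions vary:

    #include <stdint.h>

    /* Tree-PLRU state for one 4-way set: three bits.
     * b0 = 0 -> approximate LRU way is in {0,1}; b0 = 1 -> in {2,3}.
     * b1 picks within {0,1} (0 -> way 0); b2 picks within {2,3} (0 -> way 2). */
    typedef struct { uint8_t b0, b1, b2; } plru4_t;

    /* Follow the bits to the (approximate) least recently used way. */
    static int plru_victim(const plru4_t *s) {
        if (s->b0 == 0) return s->b1 == 0 ? 0 : 1;
        return s->b2 == 0 ? 2 : 3;
    }

    /* On a hit or fill of `way`, flip the bits to point away from it. */
    static void plru_touch(plru4_t *s, int way) {
        if (way < 2) { s->b0 = 1; s->b1 = (uint8_t)(way == 0); }
        else         { s->b0 = 0; s->b2 = (uint8_t)(way == 2); }
    }

Only three bits per set are needed instead of the full recency ordering
true LRU would require, which is why the approximation stays cheap as
associativity grows.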

27
Write Policies
  • Write-back: the information is written only to the
    block in the cache.
  • Write-through: the information is written both to the
    block in the cache and to the block in lower levels.

28
Reducing the Miss Rate
  • Larger block sizes and larger caches.
  • Higher associativity.

29
Application Memory Management
  • Allocation: allocating portions of memory to
    programs at their request.
  • Recycling: freeing memory for reuse when it is no
    longer needed.

30
Memory Management
  • In many embedded systems, the kernel and
    application programs execute in the same space,
    i.e., there is no memory protection.
  • Embedded operating systems therefore make a large
    effort to reduce their memory footprint.

31
Memory Management
  • An RTOS keeps memory use small by including only
    the functionality necessary for an application.
  • We have two kinds of memory management:
  • Static
  • Dynamic

32
Static memory management
  • Provides tasks with temporary data space.
  • The system's free memory is divided into a pool of
    fixed-size memory blocks, as in the sketch below.
  • When a task finishes using a memory block, it must
    return it to the pool.
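
A minimal sketch of such a fixed-size block pool; the block size, block
count, and function names are assumptions, and a real RTOS would add
locking around the free list:

    #include <stddef.h>
    #include <stdint.h>

    #define POOL_BLOCK_SIZE 32u
    #define POOL_NUM_BLOCKS 16u

    static uint8_t pool_store[POOL_NUM_BLOCKS][POOL_BLOCK_SIZE];
    static void   *pool_free_list[POOL_NUM_BLOCKS];
    static size_t  pool_free_top;

    /* Carve the system's free memory into fixed-size blocks. */
    void pool_init(void) {
        for (size_t i = 0; i < POOL_NUM_BLOCKS; i++)
            pool_free_list[i] = pool_store[i];
        pool_free_top = POOL_NUM_BLOCKS;
    }

    /* O(1): pop a block, or NULL if the pool is exhausted. */
    void *pool_alloc(void) {
        return pool_free_top ? pool_free_list[--pool_free_top] : NULL;
    }

    /* A task must hand its block back when it is finished with it. */
    void pool_free(void *block) {
        pool_free_list[pool_free_top++] = block;
    }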

33
Static memory management
  • Another way to provide temporary space for tasks is
    via priorities:
  • A high-priority pool is sized for the worst-case
    memory demand of the system.
  • A low-priority pool is given the remaining free
    memory.

34
Dynamic memory management
  • Employs memory swapping, overlays, multiprogramming
    with a fixed number of tasks (MFT), multiprogramming
    with a variable number of tasks (MVT), and demand
    paging.
  • Overlays allow programs larger than the available
    memory to be executed by partitioning the code and
    swapping the parts between disk and memory.

35
Dynamic memory management
  • MFT: a fixed number of equal-sized code parts are in
    memory at the same time.
  • MVT: like MFT, except that the size of each partition
    depends on the needs of the program.
  • Demand paging: fixed-size pages reside in
    non-contiguous memory, unlike the partitions in MFT
    and MVT.

36
Memory Allocation
  • Memory allocation is the process of assigning blocks
    of memory on request.
  • Memory for user processes is divided into multiple
    partitions of varying sizes.
  • A hole is a block of available memory.

37
Static memory allocation
  • Means that all memory is allocated to each process
    or thread when the system starts up, so you never
    have to ask for memory while a process is being
    executed. This is very costly.
  • The advantage of this in embedded systems is that
    the whole issue of memory-related bugs (due to
    leaks, failures, and dangling pointers) simply does
    not exist; see the sketch below.
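
A sketch of what static allocation looks like in C: every buffer a task
will ever need is reserved at compile/link time, so no allocation call
can fail at run time (the names and sizes are illustrative assumptions):

    #include <stdint.h>

    /* Reserved once, at link time; never malloc'd, never freed. */
    static uint8_t  rx_buffer[128];     /* I/O scratch space       */
    static uint32_t sample_log[1024];   /* fixed-capacity data log */
    static uint8_t  task_stack[2048];   /* stack handed to a task  */

The cost is that each buffer must be sized for its worst case up front,
which is why the approach is described as very costly.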

38
Dynamic Storage-Allocation
  • How to satisfy a request of size n from a list of
    free holes: during runtime, a process asks the
    system for a memory block of a certain size to hold
    a certain data structure.
  • Some RTOSs support a timeout on a memory request:
    you ask the OS for memory and wait at most a
    prescribed time limit for it.

39
Dynamic Storage-Allocation Schemes
  • First-fit: allocate the first hole that is big
    enough; it is fast.
  • Best-fit: allocate the smallest hole that is big
    enough; this must search the entire list, unless the
    list is ordered by size. (Both are sketched below.)
  • Buddy: divides memory into power-of-two partitions
    to satisfy a memory request as suitably as possible.
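
A minimal sketch of the first two schemes over a singly linked free
list; the hole_t layout is an assumption:

    #include <stddef.h>

    typedef struct hole {
        size_t       size;   /* bytes available in this hole */
        struct hole *next;
    } hole_t;

    /* First-fit: stop at the first hole big enough -> fast. */
    static hole_t *first_fit(hole_t *free_list, size_t n) {
        for (hole_t *h = free_list; h != NULL; h = h->next)
            if (h->size >= n)
                return h;
        return NULL;         /* no hole big enough */
    }

    /* Best-fit: scan the whole list for the smallest adequate hole. */
    static hole_t *best_fit(hole_t *free_list, size_t n) {
        hole_t *best = NULL;
        for (hole_t *h = free_list; h != NULL; h = h->next)
            if (h->size >= n && (best == NULL || h->size < best->size))
                best = h;
        return best;
    }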

40
Buddy memory allocation
  • Allocates memory in powers of 2:
  • it only allocates blocks of certain sizes;
  • it has many free lists, one for each permitted size.

41
How buddy works
  • If memory is to be allocated:
  • 1. Look for a memory slot of a suitable size (the
    minimal 2^k block that is larger than the requested
    memory).
  • If one is found, it is allocated to the program.
  • If not, the system tries to make a suitable memory
    slot by doing the following:
  • Split a free memory slot larger than the requested
    memory size into halves.
  • If the lower limit is reached, allocate that amount
    of memory.
  • Go back to step 1 (look for a memory slot of a
    suitable size).
  • Repeat until a suitable memory slot is found.

42
How buddy works
  • If memory is to be freed:
  • 1. Free the block of memory.
  • 2. Look at the neighboring block (its buddy): is it
    free too?
  • 3. If it is, combine the two and go back to step 2;
    repeat until either the upper limit is reached (all
    memory is freed) or a non-free buddy is encountered.
    The sketch below shows the two calculations
    involved.
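
A sketch of the two calculations the steps above rely on: rounding a
request up to the minimal 2^k block, and locating a block's buddy by
XOR (the minimum block size is an assumption):

    #include <stddef.h>

    #define MIN_ORDER 6u   /* smallest permitted block: 2^6 = 64 bytes (assumed) */

    /* Order k of the minimal 2^k block that can hold n bytes. */
    static unsigned order_for(size_t n) {
        unsigned k = MIN_ORDER;
        while (((size_t)1 << k) < n)
            k++;
        return k;
    }

    /* A block of size 2^k at `offset` from the pool base has its
     * buddy at offset XOR 2^k; freeing checks that neighbor and,
     * if it is also free, merges the pair one order up. */
    static size_t buddy_of(size_t offset, unsigned k) {
        return offset ^ ((size_t)1 << k);
    }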

43
Example buddy system
Each column represents 64K; the pool is 1024K in total.

  t0 |                          1024K (free)                        |
  t1 | A:64K | 64K   |    128K    |        256K       |    512K    |
  t2 | A:64K | 64K   |   B:128K   |        256K       |    512K    |
  t3 | A:64K | C:64K |   B:128K   |        256K       |    512K    |
  t4 | A:64K | C:64K |   B:128K   | D:128K |   128K   |    512K    |
  t5 | A:64K | 64K   |   B:128K   | D:128K |   128K   |    512K    |
  t6 |      128K     |   B:128K   | D:128K |   128K   |    512K    |
  t7 |           256K             | D:128K |   128K   |    512K    |
  t8 |                          1024K (free)                        |

  t1: allocate A (64K). t2: allocate B (128K). t3: allocate C (64K).
  t4: allocate D (128K). t5: free C. t6: free A (the two 64K buddies
  merge into 128K). t7: free B (merge into 256K). t8: free D (all
  blocks merge back into one 1024K block).
44
The problem of fragmentation
  • Neither first fit nor best fit is clearly better
    than the other in terms of storage utilization, but
    first fit is generally faster.
  • All the previous schemes have external
    fragmentation.
  • The buddy memory system has little external
    fragmentation.

45

Fragmentation
  • External fragmentation: total memory space exists to
    satisfy a request, but it is not contiguous.
  • Internal fragmentation: allocated memory may be
    slightly larger than the requested memory; this size
    difference is memory internal to a partition that is
    not being used.

46
Example Internal Fragmentation
47
Memory Protection
  • It may not be acceptable for a hardware failure to
    corrupt data in memory, so use of a hardware
    protection mechanism is recommended.
  • This hardware protection mechanism can be found in
    the processor or the MMU.
  • MMUs also enable address translation, which is often
    not needed in real-time systems because
    cross-compilers generate position-independent code
    (PIC).

48
Hardware Memory Protection
49
Recycling techniques
  • There are many ways for automatic memory managers to
    determine what memory is no longer required.
  • Garbage collection relies on determining which
    blocks are not pointed to by any program variables.

50
Recycling techniques
  • Tracing collectors: automatic memory managers that
    follow pointers to determine which blocks of memory
    are reachable from program variables.
  • A reference count is a count of how many references
    (that is, pointers) there are to a particular memory
    block from other blocks.

51
Example: Tracing collectors
  • Mark-sweep collection:
  • Phase 1: all blocks that can be reached by the
    program are marked.
  • Phase 2: the collector sweeps all allocated memory,
    searching for blocks that have not been marked. If
    it finds any, it returns them to the allocator for
    reuse (both phases are sketched below).
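
A compact sketch of both phases over a toy object graph; the object
layout, the root handling, and the recycle() hook are all assumptions:

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_REFS 4

    typedef struct object {
        bool           marked;
        struct object *refs[MAX_REFS];  /* outgoing pointers             */
        struct object *next;            /* chains every allocated object */
    } object_t;

    /* Phase 1: mark everything reachable from a root. */
    static void mark(object_t *obj) {
        if (obj == NULL || obj->marked)
            return;
        obj->marked = true;
        for (int i = 0; i < MAX_REFS; i++)
            mark(obj->refs[i]);
    }

    extern void recycle(object_t *obj);  /* hypothetical allocator hook */

    /* Phase 2: sweep the whole allocation list; unmarked blocks are
     * returned to the allocator, marks are reset for the next cycle. */
    static void sweep(object_t **all) {
        object_t **p = all;
        while (*p != NULL) {
            object_t *obj = *p;
            if (obj->marked) {
                obj->marked = false;
                p = &obj->next;
            } else {
                *p = obj->next;          /* unlink ... */
                recycle(obj);            /* ... and reclaim */
            }
        }
    }

Note how sweep() visits the entire allocation list, which is exactly
the drawback listed on the next slide.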

52
Mark-sweep collection
  • The drawbacks:
  • It must scan the entire memory in use before any
    memory can be freed.
  • It must run to completion or, if interrupted,
    start again from scratch.

53
Example: Reference counts
  • Simple reference counting:
  • A reference count is kept for each object.
  • The count is incremented for each new reference, and
    decremented if a reference is overwritten or if the
    referring object is recycled.
  • If a reference count falls to zero, the object is no
    longer required and can be recycled, as in the
    sketch below.
  • It is hard to implement efficiently because of the
    cost of updating the counts.
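
A minimal retain/release sketch of the scheme above (the names are
assumptions); every pointer copy and overwrite pays the count-update
cost, which is exactly the inefficiency just mentioned:

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct rc_obj {
        size_t refcount;
        /* ... payload ... */
    } rc_obj_t;

    /* A new reference is created: increment the count. */
    static rc_obj_t *rc_retain(rc_obj_t *o) {
        if (o != NULL)
            o->refcount++;
        return o;
    }

    /* A reference is overwritten or its holder recycled: decrement;
     * at zero the object is no longer required and is recycled. */
    static void rc_release(rc_obj_t *o) {
        if (o != NULL && --o->refcount == 0)
            free(o);
    }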

54
References
  • http://www.memorymanagement.org/articles/recycle.html
  • http://www.dedicated-systems.com
  • http://www.Wikipedia.org
  • http://www.cs.utexas.edu
  • http://www.netrino.com
  • S. Baskiyar, Ph.D., and N. Meghanathan, "A Survey of
    Contemporary Real-time Operating Systems."