1
The Memory Fragmentation Problem: Solved?
  • Mark S. Johnstone
  • Paul R. Wilson
  • Presented by David Oren (doren@math.tau.ac.il)

2
Dynamic Memory Allocation
  • Memory is allocated by the running process and
    freed when it is no longer needed
  • The allocator handles these requests, and keeps
    track of used and unused memory
  • There is no garbage collection and no compaction

3
What Is Fragmentation?
  • Inefficient use of memory, or
  • The inability to use free memory
  • There are two types of fragmentation
  • External fragmentation
  • Internal fragmentation

4
External Fragmentation
  • Arises when free blocks are available, but cannot
    be used to hold objects of the sizes requested
  • This can happen for several reasons
  • The free blocks are too small
  • The allocator is unable to split larger blocks

5
Internal Fragmentation
  • Arises when a block is allocated but is larger
    than needed; the rest of the block is wasted
  • This can happen for several reasons
  • Architecture constraints
  • Allocator policy

6
Allocator policies
  • Policies fall into three basic categories
  • Sequential fits
  • Segregated free lists
  • Buddy systems

7
Sequential fit
  • Classic implementations use a doubly linked
    linear or circular list
  • Free blocks are kept in FIFO, LIFO or address
    order
  • First, next or best fit
  • These are instances of policies; more efficient
    implementations of the same policies exist

8
First fit
  • Search the list of free blocks from the beginning
  • Use the first block that is large enough,
    splitting it if needed (see the sketch below)
  • Easy to implement
  • Small blocks tend to accumulate at the beginning
    of the list
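
A minimal C sketch of a first-fit search over a doubly
linked free list; the names block_t and first_fit are
illustrative, not taken from the paper, and splitting of
the chosen block is omitted:

    #include <stddef.h>

    /* Illustrative free-list node; a real allocator stores this
       header inside the free block itself. */
    typedef struct block {
        size_t        size;   /* usable size of this free block  */
        struct block *next;   /* next block on the free list     */
        struct block *prev;   /* previous block on the free list */
    } block_t;

    /* First fit: walk the list from the head and take the first
       block that is large enough. */
    block_t *first_fit(block_t *head, size_t request) {
        for (block_t *b = head; b != NULL; b = b->next) {
            if (b->size >= request)
                return b;     /* first block that fits wins */
        }
        return NULL;          /* no free block is large enough */
    }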

9
Next fit
  • A common optimization of first fit
  • Each search begins where the previous one ended
  • No accumulation of small blocks
  • Generally increases fragmentation

10
Best fit
  • Use the smallest free block that is large enough
  • Minimizes the amount of wasted space
  • A sequential best-fit search is inefficient, since
    the whole list may have to be scanned (see the
    sketch below)
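
A matching best-fit sketch, using the same illustrative
node layout as above; it must in general scan the whole
list, which is why a sequential implementation is slow:

    #include <stddef.h>

    /* Same illustrative node layout as in the first-fit sketch. */
    typedef struct block {
        size_t        size;
        struct block *next;
        struct block *prev;
    } block_t;

    /* Best fit: remember the smallest block that is still large
       enough; stop early only on an exact fit. */
    block_t *best_fit(block_t *head, size_t request) {
        block_t *best = NULL;
        for (block_t *b = head; b != NULL; b = b->next) {
            if (b->size >= request &&
                (best == NULL || b->size < best->size))
                best = b;
            if (best != NULL && best->size == request)
                break;        /* exact fit: cannot do better */
        }
        return best;
    }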

11
Segregated free lists
  • Use a set of lists of free blocks
  • Each list holds blocks of a given size
  • A freed block is pushed onto the relevant list
  • When a block is needed, the relevant list is used
    (see the sketch below)
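
A rough C sketch of the idea: one free list per size
class, indexed by rounding the request up to a class.
The 16-byte class spacing and the helper names are
assumptions for illustration only:

    #include <stddef.h>

    #define NUM_CLASSES 64
    #define CLASS_STEP  16    /* assumed spacing between classes */

    /* A freed block is reused to hold the list link, so blocks
       are assumed to be at least pointer-sized. */
    typedef struct free_block {
        struct free_block *next;
    } free_block_t;

    /* One free list per size class. */
    static free_block_t *free_lists[NUM_CLASSES];

    /* Map a requested size to its size class (rounding up). */
    static size_t size_class(size_t size) {
        size_t c = (size + CLASS_STEP - 1) / CLASS_STEP;
        return c < NUM_CLASSES ? c : NUM_CLASSES - 1;
    }

    /* Freeing pushes the block onto the list for its class ... */
    static void push_free(void *block, size_t size) {
        free_block_t *b = (free_block_t *)block;
        size_t c = size_class(size);
        b->next = free_lists[c];
        free_lists[c] = b;
    }

    /* ... and allocation pops from that same list. */
    static void *pop_free(size_t size) {
        size_t c = size_class(size);
        free_block_t *b = free_lists[c];
        if (b != NULL)
            free_lists[c] = b->next;
        return b;
    }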

12
Simple segregated storage
  • All blocks in the list are of the same size
  • Large free blocks are not split
  • Smaller blocks are not coalesced
  • Efficient implementation
  • Internal fragmentation

13
Segregated fit algorithms
  • Each free list holds a range of sizes
  • When memory is requested
  • The relevant list is (sequentially) searched
  • If no block is found, the lists of larger sizes
    are searched, and the block found there is split
  • Freed blocks are coalesced
  • Approximates best fit (see the sketch below)
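
A sketch of that fallback, building on the size-class
table from the previous sketch; splitting and coalescing
are only indicated by comments:

    /* Segregated fit: try the requested class first, then fall
       back to larger classes. */
    static void *segregated_fit_alloc(size_t size) {
        for (size_t c = size_class(size); c < NUM_CLASSES; c++) {
            free_block_t *b = free_lists[c];
            if (b != NULL) {
                free_lists[c] = b->next;
                /* If b came from a larger class, the remainder
                   would be split off here and pushed back onto
                   the list for its own size class. */
                return b;
            }
        }
        return NULL;   /* nothing free: request memory from the OS */
    }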

14
Buddy systems
  • Memory is conceptually split into buddies
  • Only buddies may be coalesced
  • Coalescing is very easy (see the sketch below)
  • Internal fragmentation
  • (though several buddy systems may be combined to
    reduce it)
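
In a binary buddy system, for example, block sizes are
powers of two and a block's buddy is found with a single
XOR of its offset within the heap, which is what makes
coalescing so cheap. A minimal sketch, with the offset
bookkeeping assumed for illustration:

    #include <stddef.h>

    /* Binary buddy system: the heap is recursively split into
       power-of-two halves, and each block of size 2^k has exactly
       one buddy, namely the other half of the 2^(k+1) block it
       was split from. */

    /* Offset of a block's buddy, where `offset` is the block's
       distance from the start of the heap and `size` is its
       power-of-two size.  Example: the buddy of the block at
       offset 0x3000 with size 0x1000 is at offset 0x2000. */
    static size_t buddy_offset(size_t offset, size_t size) {
        return offset ^ size;  /* flip the bit selecting the half */
    }

    /* On free, if the buddy is also free and of the same size,
       the pair merges into one block of twice the size, and the
       check repeats one level up. */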

15
Test Method
  • Allocators were traditionally tested with
    synthetically generated allocation traces
  • However, we cannot be certain that such traces
    resemble the behavior of real programs
  • Therefore, the test was conducted using real
    programs

16
Test Program Criteria
  • Allocation-intensive programs
  • Programs with a large amount of live data
  • A wide variety of program types
  • Avoiding over-representation of one type
  • Non-leaking programs

17
Test Programs
  • In the end, eight programs were chosen
  • They present a variety of different memory usage
    requirements
  • It can still be argued whether they are
    representative of real-life problems

18
Test Programs
  • Espresso
  • gcc (2.5.1)
  • Ghostscript
  • Grobner
  • Hyper
  • LRUsim
  • P2C
  • Perl

19
Program Statistics
20
Testing Methods
  • The goal is to test policy, not implementation
  • The allocators were tested offline, not
    incorporated into running programs
  • Run time was not measured

21
Testing Methods (Cont.)
  • Testing was done in three steps
  • Replacing the memory allocation functions with
    new ones that record a trace of all calls
  • Reading the trace and extracting statistics
    about the program
  • Reading the trace and calling the allocation
    functions of the policy being tested, keeping
    track of how much memory it requests from the OS
    (see the sketch below)
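
A rough sketch of what such a trace-driven harness might
look like; the record layout, the helper names and the
use of plain malloc/free as the policy under test are
all illustrative assumptions, not details from the paper:

    #include <stdlib.h>

    /* One trace record: either an allocation of `size` bytes,
       identified by `id`, or the release of a previous `id`. */
    typedef enum { OP_MALLOC, OP_FREE } op_t;

    typedef struct {
        op_t   op;
        int    id;
        size_t size;     /* only meaningful for OP_MALLOC */
    } trace_record_t;

    #define MAX_OBJECTS 100000

    /* Replay a trace against the allocator under test.  Here the
       calls go to plain malloc/free; in the study they would go
       to the simulated allocator, which also records how much
       memory it asks the OS for. */
    static void replay(const trace_record_t *trace, size_t n) {
        static void *objects[MAX_OBJECTS];
        for (size_t i = 0; i < n; i++) {
            if (trace[i].op == OP_MALLOC) {
                objects[trace[i].id] = malloc(trace[i].size);
            } else {
                free(objects[trace[i].id]);
                objects[trace[i].id] = NULL;
            }
        }
    }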

22
Removing Overhead
  • Header and footer overhead: less memory was
    allocated in the simulation than the program
    requested
  • Alignment overhead: the amount of memory
    allocated was multiplied by 16, and the amount
    requested from the OS was divided by 16 (see the
    sketch below)
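
Written out as code, the scaling is just two lines; the
helper names are made up for illustration:

    #include <stddef.h>

    #define SCALE 16

    /* Scale each request up so that alignment padding becomes a
       much smaller fraction of every object ... */
    static size_t scaled_request(size_t requested_bytes) {
        return requested_bytes * SCALE;
    }

    /* ... and scale the allocator's total usage back down so
       that it is comparable to the unscaled trace. */
    static size_t normalized_heap_size(size_t bytes_from_os) {
        return bytes_from_os / SCALE;
    }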

23
Defining Fragmentation
  • There are different definitions of fragmentation
  • We define it as percentages over and above the
    amount of live data
  • Fragmentation can be measured in several ways

24
Measuring Fragmentation
  • We have chosen two measures (see the sketch after
    this list)
  • The amount of memory used by the allocator,
    relative to the amount requested by the program,
    at the point of maximal memory usage
  • The maximum amount of memory used by the
    allocator, relative to the maximum amount of live
    memory
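
Expressed as code, the two measures might look like the
following, with fragmentation given as a percentage over
and above the amount of live data; the names are
illustrative:

    /* Measure 1: memory used by the allocator at the point of
       its maximal memory usage, relative to the live data
       (memory requested by the program) at that same point. */
    double frag_at_max_usage(double heap_at_max,
                             double live_at_max) {
        return 100.0 * (heap_at_max - live_at_max) / live_at_max;
    }

    /* Measure 2: maximum memory ever used by the allocator,
       relative to the maximum amount of live data, which may
       occur at a different point in the trace. */
    double frag_max_vs_max_live(double max_heap, double max_live) {
        return 100.0 * (max_heap - max_live) / max_live;
    }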

25
Other Measures
  • Other measures are also interesting
  • There is no right way to measure fragmentation
  • Fragmentation should be measured for those
    conditions under which it is a problem

26
Experimental Error
  • Memory was generally requested from the OS in
    blocks of 4 KB
  • The measured heap size can therefore be an
    over-estimate, but by no more than 4 KB
  • Because of the scaling by 16 described above,
    this bound shrinks to 256 bytes

27
Results
28
Strategy
  • The successful policies have something in common
  • They all coalesce free memory immediately
  • They preferentially reallocate objects that have
    died recently
  • This can be called a good strategy

29
Best fit
  • Tries to use small free blocks
  • This gives neighbors of large blocks time to die
  • They are merged into yet larger blocks
  • When a large block is split, it is likely to be
    used again

30
Address-ordered (AO) first fit
  • Blocks are allocated from one end of the heap
  • Blocks at the other end have more time to die and
    merge into larger blocks
  • This is obviously not true for next-fit policies

31
Lifetime
  • Objects allocated at the same time tend to die at
    the same time
  • On average, after 2.5 KB of allocation, 90% of
    all objects have both neighbors free
  • It pays to wait a short time after an object is
    freed

32
Object size
  • On average, 90% of allocated objects are of only
    6 different sizes
  • Many memory requests are for objects of the same
    size as freed ones
  • Good allocators should use this fact
  • There is no reason to increase internal
    fragmentation

33
Conclusions
  • We arrive at the conclusion that the
    fragmentation problem is a problem of recognizing
    that good allocation policies already exist and
    have inexpensive implementations