1
Energy Efficient Prefetching and Caching
  • Athanasios E. Papathanasiou
  • Michael L. Scott
  • Univ. of Rochester
  • Presented by Vijay S. Kumar

2
Problem Definition
  • Traditional disk management strategies
  • Caching and prefetching
  • Goal: Maximize performance (increase throughput,
    decrease latency)
  • Mobile systems
  • Goal: Save energy by powering down the disk
  • Hard disk accounts for 9-32% of laptop energy
    consumption
  • Do common caching and prefetching strategies
    work well from the point of view of energy
    consumption?
  • No. Conflicts exist.

3
Motivation
  • Caching and Prefetching
  • Eliminate as many I/O requests as possible
  • Spread remaining requests uniformly over time.
    Avoid disk congestion.
  • Smooth access pattern → short idle intervals
  • Mobile Systems
  • Power down the disk during long idle intervals
  • Traditional prefetching ignores and frustrates
    the goal of energy efficiency.

4
Goals
  • Develop new strategies for energy-conscious
    caching and prefetching
  • Maximize power-down opportunities
  • Increase idle-time intervals
  • Without any loss of performance
  • Should work for all kinds of applications

5
Contributions
  • New rules for prefetching and caching
  • Shaping of access patterns: create bursty
    access patterns → contributes to energy
    efficiency
  • Automated extension to the memory management
    system
  • Monitors past application behavior
  • Generates hints for prefetching
  • Coordinates I/O activity across all concurrently
    running applications
  • 60-80% disk energy savings in a Linux
    implementation

6
Mobile Systems
  • Magnetic disks, network interfaces, chips provide
    low-power states
  • Devices become non-operational → saves energy
  • Modern disks: at least 4 power modes
  • Lower modes: breakeven time of several seconds
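The breakeven time mentioned above can be made concrete: spinning down only saves energy when the idle interval is long enough that the savings in the low-power state outweigh the spin-down/spin-up cost. A minimal sketch, with hypothetical power figures (not from the slides):

```python
def breakeven_time(p_idle, p_standby, e_transition):
    """Idle duration (s) beyond which spinning down saves energy.

    p_idle:       power drawn while idle but spinning (W)
    p_standby:    power drawn in the low-power state (W)
    e_transition: extra energy for spin-down + spin-up (J)
    """
    return e_transition / (p_idle - p_standby)

# Hypothetical laptop-disk figures, for illustration only.
t_be = breakeven_time(p_idle=0.9, p_standby=0.2, e_transition=3.0)
print(f"{t_be:.1f} s")  # a few seconds, matching the slide's claim
```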

7
Idle interval examples
8
Fetch-on-demand
  • Application reference string A B C D E F..
  • One access every 10 time units
  • Time to fetch a block 1 time unit
  • Buffer cache 3 blocks
  • No prefetching
  • 35 time units
  • 4 idle intervals (10 time units each)

9
Rules for optimal prefetching - Cao et al.
(SIGMETRICS '95)
  • Optimal prefetching
  • Fetch the next block in the reference stream
    that is not in the cache
  • Optimal replacement
  • Discard the block whose next reference is
    farthest in the future
  • Do no harm
  • First opportunity: prefetch
  • When a fetch completes
  • After a reference
  • 32 time units
  • 3 idle intervals (On average 9 time units each)

10
Energy-efficient prefetching
  • Rules 1, 2, and 3 still hold
  • 4. Maximize disk utilization
  • Always prefetch after a fetch if replacement is
    possible.
  • 5. Respect Idle time
  • Never interrupt an idle period unless it is
    needed to maintain optimal performance
  • Dynamically switch between first opportunity
    and just-in-time based on disk state
  • 32 time units
  • 1 idle interval (27 time units)
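The contrast between the smooth and bursty schedules can be sketched with a toy simulation in the spirit of the running example (six blocks, one consumed every 10 time units, 1 time unit per fetch; the exact schedule on the slides may differ):

```python
def idle_gaps(busy, end):
    """busy: sorted (start, finish) disk-busy periods; returns the
    lengths of the idle gaps between them, up to time `end`."""
    gaps, t = [], 0
    for s, f in busy:
        if s > t:
            gaps.append(s - t)
        t = max(t, f)
    if end > t:
        gaps.append(end - t)
    return gaps

# Six blocks, one consumed every 10 time units, 1 unit per disk fetch.
accesses = [10 * i for i in range(6)]

demand = [(t, t + 1) for t in accesses]  # fetch-on-demand: busy at every access
bursty = [(0, len(accesses))]            # fetch everything in one up-front burst

print(idle_gaps(demand, end=51))  # [9, 9, 9, 9, 9] -- many short gaps
print(idle_gaps(bursty, end=51))  # [45] -- one long, spin-down-worthy gap
```

The bursty schedule does the same amount of disk work but concentrates it, which is exactly what rule 5 ("respect idle time") is after.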

11
Challenges
  • Prefetching needs to be very aggressive and more
    speculative
  • Fetching false positives is acceptable if it
    avoids a potential disk power-up operation
  • Need to take disk activation and congestion into
    account
  • Coordinate accesses from multiple applications

12
Design: Deciding when to prefetch
Epoch-based mechanism
  • New epoch triggered by
  • Initiation of new prefetching cycle
  • Demand miss → could be costly
  • Low system memory
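The three epoch triggers above can be expressed as a single predicate; this is a sketch with illustrative names, not the kernel's actual interface:

```python
def start_new_epoch(prefetch_cycle_starting, demand_miss,
                    free_pages, low_watermark):
    """True when any epoch trigger fires: a new prefetching cycle,
    a (potentially costly) demand miss, or memory pressure."""
    return (prefetch_cycle_starting
            or demand_miss
            or free_pages < low_watermark)

# Memory pressure alone is enough to start a new epoch.
print(start_new_epoch(False, False, free_pages=100, low_watermark=256))  # True
```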

13
Design: Deciding what to prefetch
  • Prediction is based on hints: manual or
    automatic

14
Automatic hints: Monitor daemon
  • Monitor file system use of all other executing
    applications.
  • Traces open, close, read, write, execve, setpgid
    system calls
  • Prepares a DB describing file accesses for each
    application (access analysis) → done offline
  • Uses info in the DB to generate hints on behalf
    of the application: one hint per file in the DB
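A toy version of the offline access-analysis step might group traced events by application and emit one hint per file. This is a simplified sketch; the real daemon traces more calls and records richer access information:

```python
from collections import defaultdict

def build_hint_db(trace):
    """trace: (app, syscall, path) events captured from the traced
    system calls (open, close, read, write, execve, setpgid)."""
    db = defaultdict(set)
    for app, call, path in trace:
        if call in ("open", "read", "write"):  # file-touching events
            db[app].add(path)
    # One hint per file seen for each application.
    return {app: sorted(files) for app, files in db.items()}

trace = [("mplayer", "open", "/data/a.mpg"),
         ("mplayer", "read", "/data/a.mpg"),
         ("gcc", "open", "/src/main.c")]
print(build_hint_db(trace))
# {'mplayer': ['/data/a.mpg'], 'gcc': ['/src/main.c']}
```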

15
Design: Deciding what to replace
  • Goal: maximize the time to the first miss
  • Prefetching depth during an epoch
  • Use the type of the first miss to determine the
    prefetching depth for the next epoch
  • Compulsory miss: no prior information
  • Prefetch miss: miss on a page in spite of a
    prediction → increase prefetching depth
  • Eviction miss: miss on a page evicted in favor
    of prefetched data → decrease prefetching depth
  • Dynamic estimation of prefetching depth protects
    the system from incorrect hints
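The miss-type feedback loop can be sketched as follows. The multiplicative increase/decrease is an assumed adaptation policy for illustration, not necessarily the paper's exact rule:

```python
COMPULSORY, PREFETCH_MISS, EVICTION_MISS = "compulsory", "prefetch", "eviction"

def next_depth(depth, first_miss, lo=1, hi=512):
    """Adapt the prefetching depth from the type of an epoch's first miss."""
    if first_miss == PREFETCH_MISS:
        # Hinted page missed anyway: we were not prefetching deep enough.
        return min(depth * 2, hi)
    if first_miss == EVICTION_MISS:
        # Prefetched data displaced useful pages: back off.
        return max(depth // 2, lo)
    # Compulsory miss carries no information about hint quality.
    return depth

print(next_depth(8, PREFETCH_MISS))  # 16
print(next_depth(8, EVICTION_MISS))  # 4
print(next_depth(8, COMPULSORY))     # 8
```

Bounding the depth between `lo` and `hi` keeps a run of bad hints from driving the system to extremes, which is the "protects from incorrect hints" property the slide names.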

16
Implementation details
  • Prefetch thread: coordination across multiple
    applications
  • Long idle intervals: access patterns need to be
    in sync
  • Memory mgmt system has an update daemon and swap
    daemon
  • However, read and prefetch requests are generated
    within each process context
  • Prefetch daemon acts as a centralized entity to
    handle read activity
  • First miss across all applications should occur
    within a small window of time
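The coordination idea — a single disk activation serving every application whose first miss falls within a small window — might be sketched like this (the function name and window parameter are illustrative):

```python
def burst_members(first_miss_times, window):
    """first_miss_times: {app: predicted time of first miss}.
    Wake the disk at the earliest deadline and prefetch for every
    app whose miss falls within `window` of it, so one activation
    serves them all."""
    wake = min(first_miss_times.values())
    return wake, [a for a, t in first_miss_times.items() if t - wake <= window]

wake, apps = burst_members({"mpeg": 12.0, "mp3": 13.5, "make": 40.0},
                           window=5.0)
print(wake, apps)  # 12.0 ['mpeg', 'mp3']
```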

17
Implementation details
  • Prefetch cache: augmentation to the kernel's
    page cache
  • Pages requested by the prefetch thread reside
    here
  • Each such page has a timestamp
  • After reference or timestamp expiry, page is
    moved to standard LRU list
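The prefetch-cache lifecycle above can be sketched as a small class. Timestamps are passed in explicitly for clarity; the kernel uses its own clock and list structures:

```python
class PrefetchCache:
    """Pages brought in by the prefetch thread live here, stamped on
    arrival; a reference or an expired stamp moves the page onto the
    standard LRU list."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.stamped = {}  # page address -> arrival timestamp
        self.lru = []      # stand-in for the kernel's LRU list

    def insert(self, addr, now):
        self.stamped[addr] = now

    def reference(self, addr, now):
        if addr in self.stamped:
            del self.stamped[addr]
            self.lru.append(addr)   # referenced: promote to the LRU list
            return True
        return False

    def expire(self, now):
        for addr, t in list(self.stamped.items()):
            if now - t > self.ttl:  # stamp expired: demote to the LRU list
                del self.stamped[addr]
                self.lru.append(addr)

pc = PrefetchCache(ttl=30)
pc.insert(0x1000, now=0)
pc.insert(0x2000, now=0)
pc.reference(0x1000, now=5)  # hit moves the page to the LRU list
pc.expire(now=40)            # 0x2000 times out and is moved as well
print(pc.lru)                # [4096, 8192]
```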

18
Implementation details
  • Eviction cache: keeps track of pages evicted in
    favor of prefetching
  • Maintains addresses of evicted pages
  • Unique serial number: eviction number
  • Counts the number of pages evicted
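The eviction cache can be sketched alongside it: remembered addresses let a later miss be classified as an eviction miss, and the counter tracks how many pages prefetching has displaced. A minimal sketch:

```python
class EvictionCache:
    """Remembers pages evicted in favor of prefetched data."""

    def __init__(self):
        self.evicted = {}  # page address -> eviction number
        self.count = 0     # number of pages evicted so far

    def record(self, addr):
        self.count += 1                  # unique, monotonically increasing
        self.evicted[addr] = self.count  # the page's eviction number

    def caused_by_prefetch(self, miss_addr):
        # A miss on a remembered address means prefetching displaced it.
        return miss_addr in self.evicted

ec = EvictionCache()
ec.record(0x3000)
print(ec.caused_by_prefetch(0x3000), ec.count)  # True 1
```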

19
(No Transcript)
20
Workloads
  • MPEG playback (sequential access)
  • Concurrent MPEG playback MP3 encoding
    (coordination, read and write activity)
  • Linux kernel compilation (access to multiple
    files)
  • SPHINX speech recognition (random access)
  • Metrics
  • Length of idle intervals
  • Energy savings
  • Performance slowdown

21
Length of idle intervals: MPEG playback (two
76MB files)
  • Bursty has longer idle intervals than Linux
  • Idle interval length increases with memory size
  • Idle interval length increases with memory size

22
Length of idle intervals: Concurrent workload
  • Working set always greater than memory
  • Bursty has longer, more useful idle intervals
    than Linux at 256MB and 492MB
  • Request coordination and concurrent read/write
    seem to work well

23
Length of idle intervals: Kernel compilation
  • At 128 MB of memory, all accessed files are
    prefetched by Bursty, leading to increased idle
    interval lengths

24
Disk energy savings vs. memory size
  • Base case: standard Linux kernel
  • Bursty MPEG, 64MB: 40% savings even though idle
    intervals were not too long
  • Kernel compilation, 64MB: increased energy
    consumption
  • Bursty Sphinx, 492MB: 78% savings!

25
Impact on Performance
  • In almost all cases, the performance of Bursty
    is within a small factor of that on Linux
  • Prefetching technique avoids delay caused by
    disk spin-up operations

26
Conclusions
  • Energy-aware prefetching and caching
  • New prefetching strategies
  • Algorithms implemented in Linux 2.4.20
  • Epoch-based algorithm
  • Hints generator
  • Access coordination
  • 60-80% energy savings without loss of
    performance
  • Amount of savings scales well with the memory on
    the system