Title: Design and Performance Studies of an Adaptive Cache Retrieval Scheme in a Mobile Computing Environment
Slide 1: Design and Performance Studies of an Adaptive Cache Retrieval Scheme in a Mobile Computing Environment
- Paper by Wen-Chih Peng and Ming-Syan Chen
- Presented by Arseny Bogomolov
- October 25, 2005
Slide 2: Presentation Outline
- Setting the Context
- Problem Statement
- Related Work
- Preliminaries
- Cache Retrieval Models
- Cost Analysis for an Adaptive Scheme
- Performance Study
- Conclusions
Slide 3: The Typical Mobile Environment
- A traveling mobile user (MU)
- A running application
- Communication with application server
- MU moves to a new service area in a distributed server architecture: a Service Handoff
- Minimizes communication overhead
- Minimizes application delays
- Balances workload
- Fault tolerance
- Must be seamless!
Slide 4: An Example Application
- Traffic Reports
- Shortest distance routes
- Up-to-date traffic status
- Location-dependent information
- Live traffic video streaming wirelessly
- Temporal Locality?
- What and where can be cached?
Slide 5: System Architecture
- SA: Application Server
- LBn: Local Buffer
  - One per MU
  - Caches data for one user Ui
- Coordinator
  - Concurrency control
  - Transaction monitoring
  - Can also cache data for users Ui..n
Slide 6: The Problem: A Mobile User Moves
- The user changes service areas
- A Service Handoff must occur
- LBA has a cache for Ui; LBB is empty
Slide 7: The Proposed Solution
- Three caching schemes
- From Local Buffer (FLB)
- From Coordination Buffer (FCB)
- From Previous Server (FPS)
- Which one to use?
- Depends on transaction properties
- Temporal Locality
- Cost of cache miss
- Authors propose DAR
Slide 8: Dynamic Adaptive Cache Retrieval Scheme (DAR)
- The authors' main contribution
- A set of decision rules for service handoff
- Selects most appropriate of the 3 schemes
- For each phase of transaction
- Based on transaction properties
Slide 9: Paper Contributions Summary
- Evaluates properties of FLB, FCB, and FPS
- Proposes DAR scheme for service handoffs
- Comparative analysis of the 4 schemes
- Simulations to validate the results
Slide 10: Related Work
- Caching at the proxy servers
- Service handoff not considered
- Cache invalidation schemes
- Impact of client disconnects on performance
- Energy-efficient schemes
- Adaptive schemes adjust size of report
- For the most part, these schemes are static
- Previous use of coordinator
- Concurrency control
- Transaction execution monitoring
- Not used for caching
Slide 11: Preliminaries
- Temporal Locality: the tendency of data pages to be referenced again soon
- Intra-transaction locality: the same data pages are referenced within one transaction
- Inter-transaction locality: the same data pages are referenced by consecutive transactions
- The type of temporal locality influences the choice of caching scheme
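As a toy illustration (not from the paper), the sketch below classifies the pages of a reference trace by which kind of temporal locality they exhibit; `locality_kinds` is a hypothetical helper:

```python
# Toy illustration: classify page references by temporal-locality kind.
def locality_kinds(transactions):
    """transactions: list of page-id lists, in execution order.
    Returns (intra, inter): pages re-referenced within one transaction,
    and pages shared by consecutive transactions."""
    intra, inter = set(), set()
    for i, txn in enumerate(transactions):
        seen = set()
        for page in txn:
            if page in seen:
                intra.add(page)        # repeat inside the same transaction
            seen.add(page)
        if i > 0:
            # pages referenced by both this and the previous transaction
            inter |= set(transactions[i - 1]) & seen
    return intra, inter

# Page 23 repeats within the first transaction (intra-transaction);
# page 44 recurs across consecutive transactions (inter-transaction).
intra, inter = locality_kinds([[23, 26, 23], [44, 7], [44, 9]])
print(intra, inter)  # {23} {44}
```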
Slide 12: From Local Buffer
- One per MU
- Created when the MU enters a service area
- The first transaction triggers warm-up
- No pre-fetching is done
- Best when no temporal locality
- Find shortest distance to X from where I am
- Saves on pre-fetching costs
Slide 13: From Previous Server
- Data is fetched from the previous server
- Best for intra-transaction locality
- User is in service area of SA
- Find all routes to home (pages 23, 26)
- User moves to service area of SB
- Pages 23, 26 referenced twice
Slide 14: From Coordinator Buffer
- Data is fetched from a central location
- Best for inter-transaction locality
- MU Uj is in Sc
- Frequent updates of parking info (44, 39)
- MU Ui moves to service area of SB
- Still wants up-to-date parking info
- Pre-fetched data results in a cache hit
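A minimal sketch of what each of the three schemes contributes to the new service area's local buffer at handoff, modeling buffers as plain dicts; the function names are hypothetical:

```python
# Hedged sketch: buffers as dicts (page id -> data). Each function
# returns the new service area's local-buffer content right after handoff.
def flb(prev_local, coordinator):
    """From Local Buffer: start cold; the first transaction warms it up."""
    return {}

def fps(prev_local, coordinator):
    """From Previous Server: copy the old local buffer, which pays off
    when the in-flight transaction re-reads its own pages (intra)."""
    return dict(prev_local)

def fcb(prev_local, coordinator):
    """From Coordinator Buffer: pre-fetch centrally cached pages, which
    pays off when consecutive transactions share pages (inter)."""
    return dict(coordinator)

prev = {23: "route-a", 26: "route-b"}   # pages cached at the old server
cb = {44: "parking", 39: "parking"}     # pages cached at the coordinator
print(fps(prev, cb))  # {23: 'route-a', 26: 'route-b'}
print(fcb(prev, cb))  # {44: 'parking', 39: 'parking'}
```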
Slide 15: Dynamic Adaptive Retrieval
- Need to pick the best scheme for each stage, based on transaction properties
- Initial Stage
- Consider FLB and FCB
- Execution Stage
- FLB, FCB, or FPS?
- Termination
- Coordinator buffer write
Slide 16: Cost Analysis and DAR
- Want to derive rules for picking the best scheme
- What's considered?
- Number of cache entries per user: N
- Cost of a cache miss: Cm
- Cost of cache replacement
- Transaction properties (temporal locality)
- Pintra: probability a page has intra-transaction temporal locality
- Pinter: probability a page has inter-transaction temporal locality
- PCB: probability a page with temporal locality is in the CB
Slide 17: DAR Rules for Initial Phase
- FCB or FLB? Compare cache miss costs
- FLB: N x Cm
- FCB:
  - N x (Pintra + Pinter): data with temporal locality
  - N x (Pintra + Pinter) x (1 - PCB) -> Ntl (CB misses for temporal pages)
  - N x (1 - (Pintra + Pinter)) -> N-tl (CB misses for non-temporal pages)
  - Cost: (Ntl + N-tl) x 2Cm
Slide 18: Rules for Initial Phase (cont'd)
- FCB is profitable if:
  - Cost of FCB < Cost of FLB
  - (Ntl + N-tl) x 2Cm < N x Cm
  - which simplifies to (Pintra + Pinter) > 0.5 / PCB
- If temporal locality is prominent, FCB is used (the CB has the more frequently used pages)
- If it is not, use FLB to avoid unnecessary pre-fetching costs
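A minimal sketch of this cost comparison in Python; `flb_cost`/`fcb_cost` and the sample numbers are hypothetical, while the formulas follow the slides (each FCB miss is charged 2 x Cm):

```python
# Initial-phase cost comparison, per the slide formulas.
def flb_cost(N, Cm):
    # Cold local buffer: all N entries miss once.
    return N * Cm

def fcb_cost(N, Cm, Pintra, Pinter, PCB):
    P = Pintra + Pinter                  # fraction with temporal locality
    N_tl = N * P * (1 - PCB)             # temporal pages absent from the CB
    N_ntl = N * (1 - P)                  # pages with no temporal locality
    return (N_tl + N_ntl) * 2 * Cm       # each FCB miss costs 2 * Cm

def prefer_fcb(Pintra, Pinter, PCB):
    # Closed form of fcb_cost < flb_cost
    return (Pintra + Pinter) > 0.5 / PCB

# Strong locality plus a warm coordinator buffer favors FCB:
N, Cm = 100, 1.0
print(fcb_cost(N, Cm, 0.5, 0.3, 0.9) < flb_cost(N, Cm))  # True
print(prefer_fcb(0.5, 0.3, 0.9))                         # True
```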
Slide 19: DAR Rules for Execution Phase
- FCB, FLB, or FPS?
- If there is no temporal locality, go with FLB (same as for the Initial Phase)
- Otherwise, decide based on the ratio of intra- to inter-transaction locality
- Use a threshold f to decide:
  - Pintra / Pinter >= f -> FPS
  - Pintra / Pinter < f -> FCB
Slide 20: DAR Rules Summary
- Initial Phase
- Execution Phase
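The per-phase rules above can be collected into one selector; the function name and return labels below are hypothetical, while the decision logic follows the slides:

```python
# Hypothetical selector collecting DAR's per-phase rules from the slides.
def dar_select(phase, Pintra, Pinter, PCB, f):
    if phase == "initial":
        # FCB pays off only when temporal locality is prominent enough
        return "FCB" if (Pintra + Pinter) > 0.5 / PCB else "FLB"
    if phase == "execution":
        if Pintra + Pinter == 0:
            return "FLB"             # no temporal locality at all
        if Pinter == 0:
            return "FPS"             # locality is purely intra-transaction
        return "FPS" if Pintra / Pinter >= f else "FCB"
    if phase == "termination":
        return "CB-write"            # flush results to the coordinator buffer

print(dar_select("initial", 0.4, 0.3, 0.9, f=1.75))    # FCB
print(dar_select("execution", 0.5, 0.2, 0.9, f=1.75))  # FPS
print(dar_select("execution", 0.2, 0.5, 0.9, f=1.75))  # FCB
```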
Slide 21: Performance Study
- 64 servers in an 8x8 mesh topology
- The MU moves randomly between servers
- TXNSIZE: number of data objects (4K pages) per transaction (20-30)
- Cache sizes:
  - Local Buffer: 1% of DBSIZE
  - Coordinator Buffer: varied
- The DB has K data objects:
  - Pintra x K intra-transaction objects
  - Pinter x K inter-transaction objects
  - Ppseudo x K objects with no temporal locality
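A rough sketch of this workload model: each transaction draws TXNSIZE pages from intra-, inter-, and no-locality pools. Pool sizes and all constants here are illustrative, not the paper's exact settings:

```python
import random

# Illustrative workload generator (constants are not the paper's).
random.seed(7)

K = 1000                                 # database size in objects
intra_pool = list(range(0, 100))         # objects with intra-txn locality
inter_pool = list(range(100, 200))       # objects with inter-txn locality
pseudo_pool = list(range(200, K))        # objects with no temporal locality

def make_transaction(txnsize):
    """Draw txnsize page references, mixing the three locality pools."""
    pages = []
    for _ in range(txnsize):
        pool = random.choice([intra_pool, inter_pool, pseudo_pool])
        pages.append(random.choice(pool))
    return pages

txn = make_transaction(txnsize=25)       # TXNSIZE on the slides is 20-30
print(len(txn))  # 25
```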
Slide 22: Impact of Temporal Locality
- Pinter fixed at 10%, Pintra varied
- Pintra fixed at 10%, Pinter varied
- The crossover follows the initial-phase rule: (Pintra + Pinter) > 0.5 / PCB
Slide 23: Performance of DAR
- Ppseudo set at 20%
- Vary the Pintra/Pinter ratio
Slide 24: Performance of DAR (cont'd)
- Can adapt to transaction properties
- Higher cache hit ratio regardless of whether the MU's transactions have more inter- or intra-transaction pages
- Employs the appropriate cache retrieval method dynamically:
  - Pintra / Pinter >= f -> FPS
  - Pintra / Pinter < f -> FCB
- What should the value of f be?
Slide 25: Performance of DAR (cont'd)
- Threshold f read off the hit-ratio curves (figure):
  - f = 1.60 with CB = 20% of DBSIZE
  - f = 1.75 with CB = 50% of DBSIZE
  - f = 2.25 with CB = 80% of DBSIZE
Slide 26: Performance of DAR (cont'd)
- f is at the intersection of the FCB and FPS cache hit ratio curves
- As the coordinator buffer grows, the FCB hit ratio is expected to increase; therefore f increases
- But f decreases if Ppseudo increases
- In the end, the value depends on estimating Pintra and Pinter
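The intersection reading can be sketched as follows; the curve values below are synthetic stand-ins, not the paper's measurements:

```python
# Estimate f as the crossover of the FCB and FPS hit-ratio curves
# sampled over the Pintra/Pinter ratio (synthetic example data).
def crossover(ratios, fcb_hits, fps_hits):
    """Return the first ratio at which FPS's hit ratio reaches FCB's."""
    for r, fcb, fps in zip(ratios, fcb_hits, fps_hits):
        if fps >= fcb:
            return r
    return None

ratios   = [1.00, 1.50, 1.75, 2.00, 2.50]
fcb_hits = [0.60, 0.55, 0.52, 0.50, 0.46]  # falls as intra dominates
fps_hits = [0.40, 0.48, 0.52, 0.58, 0.66]  # rises as intra dominates
print(crossover(ratios, fcb_hits, fps_hits))  # 1.75
```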
Slide 27: Paper Summary
- Examines several cache retrieval schemes to improve performance during service handoff
- Presents a cost model for evaluating the schemes
- Analyzes the impact of temporal locality
- Analyzes the impact of coordinator buffer size
- Presents a performance analysis of all schemes, including validation by simulation
Slide 28: Conclusion
- FPS is best for intra-transaction locality
- FCB is best for inter-transaction locality
- The adaptive algorithm (DAR) is presented:
  - Higher cache hit ratio for inter-transaction pages when transactions are dominated by inter-transaction pages
  - Higher cache hit ratio for intra-transaction pages when transactions are dominated by intra-transaction pages
- This makes DAR advantageous over the static schemes