Transcript and Presenter's Notes

Title: Improving On-demand Data Access Efficiency with Cooperative Caching in MANETs


1
Improving On-demand Data Access Efficiency with
Cooperative Caching in MANETs
PhD Dissertation Defense, 11.21.05, CSE, ASU. Yu Du
Chair: Dr. Sandeep Gupta
Committee: Dr. Partha Dasgupta, Dr. Arunabha Sen, Dr. Guoliang Xue
Supported in part by NSF grants ANI-0123980,
ANI-0196156, and ANI-0086020, and by the Consortium
for Embedded Systems.
2
Roadmap
  • 1. Introduction
  • 2. Cooperative caching
  • 3. Related work
  • 4. Proposed approach COOP
  • 5. Performance evaluation
  • 6. Conclusions and future work

3
1.1 Problems of data access in MANETs
1. Introduction
  • MANETs: Mobile Ad hoc Networks
  • Wireless medium
  • Multi-hop routes
  • Dynamic topologies
  • Resource constraints
  • On-demand data access: the client/server model.

4
1.2. Reducing data access costs in MANETs
1. Introduction
  • The locality principle [Denning]
  • Computer programs tend to repeat referencing a
    subset of data/instructions.
  • Used in processor caches, storage hierarchies,
    Web browsers, and search engines.
  • Zipf's law [Zipf]
  • P(i) ∝ 1/i^α (with α close to unity): common
    interest in popular data (see the sketch after
    this list).
  • The 80-20 rule: 80% of data accesses go to 20%
    of the data.
  • Cooperative caching
  • Multiple nodes share and cooperatively manage
    their cached contents.
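To make the Zipf/80-20 intuition concrete, here is a minimal sketch (the item count and exponent are illustrative assumptions, not parameters from the dissertation) that builds a Zipf popularity distribution and checks how much traffic the most popular 20% of items attract:

```python
# Sketch: Zipf popularity P(i) ~ 1/i^alpha and the resulting 80-20-style skew.
# n_items and alpha are illustrative choices, not values from the dissertation.

def zipf_probs(n_items, alpha=1.0):
    weights = [1.0 / (i ** alpha) for i in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = zipf_probs(1000, alpha=1.0)      # items ranked from most to least popular
top20_share = sum(probs[:200])           # access share of the top 20% of items
print(f"Top 20% of items receive about {top20_share:.0%} of accesses")
```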

5
1.3. Cooperative caching
1. Introduction
  • Cooperative caching
  • A caching node not only serves its own data
    requests but also the requests from others.
  • A caching node not only stores data for its own
    needs but also for others.
  • Shorter paths, less expensive links, fewer
    conflicts, and lower risk of route breakage.
  • Saves time, energy, and bandwidth, and improves
    data availability.
  • Why?
  • Data locality and commonality in users' interests.
  • Client/server communication vs. inter-cache
    communication.
  • Users around the same location tend to have
    similar interests.
  • People gathered around a food court: the menus.
  • An exploration team: environmental information.

6
Roadmap
  • 1. Introduction
  • 2. Cooperative caching
  • 2.1. Overview
  • 2.2. Cache resolution
  • 2.3. Cache management
  • 2.4. Cache consistency control
  • 3. Related work
  • 4. Proposed approach COOP
  • 5. Performance evaluation
  • 6. Conclusions and future work

7
2.1 Overview
2. Cooperative caching
  • Cooperative caching
  • Multiple nodes share and cooperatively manage
    their cached contents.
  • Cache resolution
  • Cache management
  • Cache consistency control
  • Used in Web caches/proxy servers on the Internet.
  • The goal is to alleviate server overload and
    response delay.
  • These schemes did not consider the special
    features of MANETs.

8
2.2 Cache resolution
2. Cooperative caching
  • How to find a cache storing the requested data?

Hierarchical, e.g., Harvest [Chank96]
Directory-based, e.g., Summary Cache [Fan00]
Hash-table based, e.g., Squirrel [Iyer02]
(Figure: an example network of nodes 1-5 and a
directory table mapping caching nodes (Node 1, Node 2,
Node 3) to the data items they hold, e.g., Item1 and
Item2.)
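To make one of these styles concrete, below is a minimal sketch of hash-table-based resolution in the spirit of Squirrel's home-node idea (the node list and function name are illustrative assumptions, not the actual protocol):

```python
import hashlib

# Hash-based cache resolution sketch: every node maps a data ID to the same
# "home" caching node, so no directory or hierarchy has to be consulted.
# The static node list stands in for real membership/discovery.

def home_node(data_id, nodes):
    digest = hashlib.sha1(data_id.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["node-1", "node-2", "node-3"]
print(home_node("item-42", nodes))   # every node computes the same answer
```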
9
2.3 Cache management
2. Cooperative caching
  • What to cache?
  • Admission control.
  • Cache replacement algorithm.
  • LRU
  • Extended LRU (Squirrel)
  • Any access has the same impact, whether it comes
    from the local node or from other nodes.
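For reference, a minimal LRU cache sketch (plain Python OrderedDict; the extended variant would simply apply the same recency update to remote accesses as to local ones):

```python
from collections import OrderedDict

# Minimal LRU cache: any access refreshes recency; when full, the least
# recently used entry is evicted. In the extended LRU mentioned above,
# accesses from remote nodes update recency exactly like local ones.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                      # cache miss
        self.items.move_to_end(key)          # refresh recency on any access
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict least recently used
```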

10
2.4 Cache consistency control
2. Cooperative caching
  • How to maintain the consistency between server
    and cache?
  • Strong/weak consistency: whether consistency is
    always guaranteed.
  • Pull/push-based: who (client or server) initiates
    the consistency verification.
  • TTL is used in this research.
  • Each data item has a Time-To-Live (TTL) field: the
    allowed caching time.
  • TTL is widely adopted in real applications, e.g.,
    HTTP (a minimal freshness check is sketched after
    the table below).
  • Lower cost than strong-consistency protocols.

         Pull-based   Push-based
Weak     TTL          Synchronous Invalidation
Strong   Lease        Asynchronous Invalidation
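A minimal sketch of the TTL-based (weak, pull-style) freshness check from the table above; the entry fields are illustrative:

```python
import time

# Weak consistency via TTL: a cached copy is served only while its
# Time-To-Live has not expired; afterwards it must be re-fetched from
# (or re-validated with) the origin server. Field names are illustrative.

def is_fresh(cached_at, ttl_seconds, now=None):
    now = time.time() if now is None else now
    return (now - cached_at) < ttl_seconds

entry = {"data": b"...", "cached_at": time.time(), "ttl": 300.0}
if is_fresh(entry["cached_at"], entry["ttl"]):
    print("serve from cache")
else:
    print("expired: fetch from the server")
```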
11
3. Related work
Scheme              Cache resolution                    Cache management   Consistency control   Network model
Harvest [Chank96]   Hierarchical                        Not specified      TTL                   WAN
Summary [Fan00]     Directory-based                     LRU                TTL                   WAN
Squirrel [Iyer02]   Hash-based                          Extended LRU       TTL                   LAN
Cao04 [Cao04]       CacheData, CachePath, HybridCache   LRU                TTL                   MANET
12
Roadmap
  • 1. Introduction
  • 2. Cooperative caching
  • 3. Related work
  • 4. Proposed approach COOP
  • 4.1. System architecture
  • 4.2. Cache resolution
  • 4.3. Cache management
  • 5. Performance evaluation
  • 6. Conclusions and future work

13
4.1 System architecture
4. Proposed approach COOP
  • Each node runs a COOP instance.
  • The running COOP instance
  • Receives data requests from user applications.
  • Resolves requests using the cocktail cache
    resolution scheme.
  • Decides what data to cache using COOP cache
    management scheme.
  • Uses the underlying protocol stack.
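A rough skeleton of how such a per-node instance could be organized, mirroring the bullets above (class and method names are hypothetical, not taken from the dissertation):

```python
# Hypothetical skeleton of a per-node COOP instance: requests arrive from
# local applications, are resolved via the cocktail resolution scheme, and
# the cache-management policy decides what to keep, all on top of the
# underlying protocol stack.

class CoopInstance:
    def __init__(self, cache, resolver, network):
        self.cache = cache          # local cache + COOP management policy
        self.resolver = resolver    # cocktail cache resolution scheme
        self.network = network      # underlying routing / transport stack

    def request(self, data_id):
        data = self.cache.get(data_id)
        if data is not None:
            return data                            # local cache hit
        data = self.resolver.resolve(data_id)      # nearby caches, else the server
        self.cache.admit(data_id, data)            # management policy decides whether to keep it
        return data
```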

14
4.2. Cache Resolution
4. Proposed approach COOP
  • 4.2.1. Hop-by-Hop
  • 4.2.2. Zone-based
  • 4.2.3. Profile-based
  • 4.2.4. COOP cache resolution: a cocktail approach

15
4.2.1 Hop-by-Hop cache resolution
4. Proposed approach COOP, 4.2 Cache resolution
  • The forwarding nodes try to resolve a data
    request before relaying it to the next hop.
  • Reduces the travel distance of requests/replies.
  • Helps to avoid expensive/unreliable network
    channels.

16
4.2.2 Zone-based cache resolution
4. Proposed approach COOP, 4.2 Cache resolution
  • Users around the same location tend to share
    common interests.
  • Cooperation zone: the surrounding nodes within
    r-hop range.
  • r: the radius of the cooperation zone.
  • To find an item within the cooperation zone:
  • Reactive approach: flooding within the
    cooperation zone.
  • Proactive approach: record previously heard
    requests.

17
4.2.3 Profile-based cache resolution
4. Proposed approach COOP, 4.2 Cache resolution
  • Records received requests to assist future cache
    resolution.
  • RRT: Recent Request Table.
  • An entry is deleted if the recorded requester
    fails to reply with the corresponding data item.
  • When the table is full, LRU decides which entry to
    replace (see the sketch after the table).

Requester Time Requested Data ID
192.168.0.11 15265908162005 D1
192.168.0.15 15255908162005 D2
192.168.0.18 15205908162005 D3
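A small sketch of such a Recent Request Table, following the rules above (drop an entry when its requester fails to supply the item, LRU replacement when full); the structure and names are illustrative, not the dissertation's exact design:

```python
from collections import OrderedDict
import time

# Recent Request Table (RRT) sketch: remembers which node recently requested
# each data item, so that node can be asked for the item later.

class RecentRequestTable:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = OrderedDict()           # data_id -> (requester, timestamp)

    def record(self, data_id, requester):
        self.entries[data_id] = (requester, time.time())
        self.entries.move_to_end(data_id)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # LRU replacement when full

    def lookup(self, data_id):
        return self.entries.get(data_id)       # (requester, timestamp) or None

    def invalidate(self, data_id):
        # Called when the recorded requester fails to reply with the item.
        self.entries.pop(data_id, None)
```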
18
4.2.4 COOP cache resolution: a cocktail approach
4. Proposed approach COOP, 4.2 Cache resolution
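The cocktail scheme itself was presented as a diagram; the control flow below is only a hedged reconstruction of how the ingredients from 4.2.1-4.2.3 could be combined (the exact ordering and fallbacks are assumptions, and the collaborating objects are illustrative):

```python
# Hedged sketch of cocktail-style resolution: local cache first, then the
# profile-based RRT hint, then zone-limited flooding, and finally the origin
# server. Forwarding nodes would apply the same local checks hop by hop.

def resolve(data_id, local_cache, rrt, zone, server):
    data = local_cache.get(data_id)
    if data is not None:
        return data                             # resolved locally

    hint = rrt.lookup(data_id)                  # profile-based: a recent requester
    if hint is not None:
        requester, _ = hint
        data = requester.fetch(data_id)
        if data is not None:
            return data
        rrt.invalidate(data_id)                 # stale hint: drop the entry

    data = zone.flood_request(data_id)          # zone-based: reactive, r-hop limited
    if data is not None:
        return data

    return server.fetch(data_id)                # last resort: the data server
```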
19
4.3. Cache Management
4. Proposed approach COOP
  • 4.3.1. Primary and secondary data
  • 4.3.2. Inter-category and intra-category rules

20
4.3.1. Primary and secondary data
4. Proposed approach COOP, 4.3 Cache management
  • Different cache misses may introduce different
    costs.
  • Example (figure below): the miss cost for an item
    that must be fetched from the remote server is
    higher than for one available from a nearby
    neighbor.
  • Primary data and secondary data:
  • Primary data: not available within the
    cooperation zone.
  • Secondary data: available within the cooperation
    zone.

(Figure: X can be obtained from a neighbor, while Y
has to be obtained from the data server.)
21
4.3.2. Inter-category and intra-category rules
4. Proposed approach COOP, 4.3 Cache management
  • Inter-category rule: applied when a replacement
    decision is made between different categories.
  • Primary data take precedence over secondary data.
  • Intra-category rule: applied when a replacement
    decision is made within the same category.
  • LRU.
  • Example: A1-A5 (primary), B1-B6 (secondary); a
    combined policy is sketched below.
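A compact sketch combining the two rules (secondary data are evicted before primary data; within a category, LRU picks the victim). The class is illustrative, not COOP's actual implementation:

```python
from collections import OrderedDict

# COOP-style replacement sketch: primary data (not available in the
# cooperation zone) take precedence over secondary data (available nearby),
# so victims come from the secondary category first; within a category the
# least recently used item is evicted.

class CoopCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.primary = OrderedDict()      # data_id -> item, in LRU order
        self.secondary = OrderedDict()

    def admit(self, data_id, item, is_primary):
        bucket = self.primary if is_primary else self.secondary
        bucket[data_id] = item
        bucket.move_to_end(data_id)
        if len(self.primary) + len(self.secondary) > self.capacity:
            victim_pool = self.secondary or self.primary   # inter-category rule
            victim_pool.popitem(last=False)                # intra-category rule: LRU
```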

22
Roadmap
  • 1. Introduction
  • 2. Cooperative caching
  • 3. Related work
  • 4. Proposed approach COOP
  • 5. Performance evaluation
  • 5.1. The impact of different zone radius
  • 5.2. The impact of data access pattern
  • 5.3. The impact of cache size
  • 5.4. Data availability
  • 5.5. Time cost: average travel distance
  • 5.6. Cache miss ratio
  • 5.7. Energy cost: message overhead
  • 6. Conclusions and future work

23
5.1 The impact of different zone radius
5. Performance evaluation
  • (1) Average probability of finding a requested
    item d within the zone.
  • (2) Average time cost,
    assuming time cost is proportional to the number
    of covered hops.
  • (3) Average energy cost,
    assuming energy cost is proportional to the number
    of messages.

(Equations (1)-(3) appeared as figures on the slide.)
P_d: the average probability that a node caches item d.
ρ: the average node density.
L: the distance (in hops) between the requesting node and the server.
r: the cooperation zone radius.
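The equations themselves only appeared as images. Purely as an illustration of how such quantities are often modeled, and not the dissertation's actual derivation, one might assume nodes cache d independently with probability P_d and that roughly ρπr² nodes lie inside an r-hop zone, giving forms such as:

```latex
% Hypothetical forms only; the slide's equations (1)-(3) were not recoverable.
P_{\mathrm{hit}}(r) \approx 1 - (1 - P_d)^{\rho \pi r^2}                              % (1) find d within the zone
\bar{T}(r) \propto P_{\mathrm{hit}}(r)\, r + \bigl(1 - P_{\mathrm{hit}}(r)\bigr)\, L  % (2) time ~ hops covered
\bar{E}(r) \propto \rho \pi r^2 + \bigl(1 - P_{\mathrm{hit}}(r)\bigr)\, L             % (3) energy ~ messages sent
```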
24
5.1 The impact of different zone radius
5. Performance evaluation
  • If an item is not found within a cooperation zone
    of a certain size, it is unlikely to be found
    within a larger zone.
  • The saturation point.

25
5.2 The impact of access pattern
5. Performance evaluation
  • Varied factor: the Zipf parameter α.
  • Cache miss ratio: CT-3, CT-2, CT-1, HBH, SC.
  • Average travel distance: CT-3, CT-2, CT-1, HBH, SC.
  • Average messages: HBH, CT-1, SC, CT-2, CT-3.

26
5.3 The impact of cache size
5. Performance evaluation
  • Varied factor: cache size.
  • Cache miss ratio: CT-3, CT-2, CT-1, HBH, SC.
  • Average travel distance: CT-3, CT-2, CT-1, HBH, SC.
  • Average messages: HBH, CT-1, SC, CT-2, CT-3.

27
5.4 Data availability
5. Performance evaluation
  • Varied factors
  • node number
  • pause time
  • node velocity
  • Data availability: CT-2, CT-1, HBH, SC.

28
5.5 Time cost: average travel distance
5. Performance evaluation
  • Varied factors
  • node number
  • pause time
  • node velocity
  • Average travel distance: CT-2, CT-1, HBH, SC.

29
5.6 Cache miss ratio
5. Performance evaluation
  • Varied factors
  • node number
  • pause time
  • node velocity
  • Cache miss ratio: CT-2, CT-1, HBH, SC.

30
5.7 Energy cost: average messages
5. Performance evaluation
  • Varied factors
  • node number
  • pause time
  • node velocity
  • Average messages: CT-1, HBH, SC, CT-2.

31
6. Conclusions and future work
  • Cooperative caching is supported by data locality
    and the commonality in users' interests.
  • Proposed approach: COOP
  • Higher data availability
  • Less time cost
  • Smaller cache miss ratio
  • The tradeoff is message overhead.
  • The tradeoff depends on the cooperation zone
    radius.
  • Future work
  • Adapt the cooperation zone radius based on user
    requirements.
  • Explore different cooperation structures.
  • Enforce fairness in cooperative caching.

32
References
  • [Cao04] L. Yin and G. Cao, "Supporting cooperative
    caching in ad hoc networks," INFOCOM, 2004.
  • [Chank96] A. Chankhunthod et al., "A hierarchical
    Internet object cache," USENIX Annual Technical
    Conference, 1996.
  • [Denning] P. Denning, "The locality principle,"
    Communications of the ACM, July 2005.
  • [Fan00] L. Fan et al., "Summary cache: A scalable
    wide-area Web cache sharing protocol," SIGCOMM,
    1998.
  • [Iyer02] S. Iyer et al., "Squirrel: A decentralized
    peer-to-peer Web cache," PODC, 2002.
  • [Zipf] G. Zipf, Human Behavior and the Principle of
    Least Effort, Addison-Wesley, 1949.

33
Q & A
Thank You!