Ivy: A Read/Write Peer-to-Peer File System - PowerPoint PPT Presentation
1
Ivy A Read/Write Peer-to-Peer File System
  • A. Muthitacharoen, R. Morris, T. Gil, and B. Chen
  • Presented by Matthew Allen

2
Introduction
  • A cooperative file system where multiple users
    can share a directory structure
  • Data is stored on distributed hosts
  • Built on DHash (Chord with some modifications)
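DHash stores immutable blocks keyed by a content hash. A minimal in-memory sketch of that idea (the class name and API here are hypothetical; real DHash distributes blocks across Chord nodes and replicates them):

```python
import hashlib

class DHash:
    """Toy stand-in for a DHash-style content-addressed block store."""

    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        # Blocks are keyed by the SHA-1 hash of their content,
        # so any fetcher can verify a block against its key.
        key = hashlib.sha1(data).hexdigest()
        self.blocks[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self.blocks[key]
        # Integrity check: the data must hash back to its key.
        assert hashlib.sha1(data).hexdigest() == key
        return data
```

Because the key is derived from the content, a block can never be modified in place; updating data means writing a new block under a new key.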

3
I-Nodes
[Figure: a file name maps to an i-node block stored on the Chord ring, in place of a local disk]
  • Concurrent access is a problem
  • Security and integrity are problems
4
Logs
[Figure: a per-user log, newest to oldest - Log Head -> Write(536, 2, <data>) -> Link(536, 2, "tmp") -> Inode(File, 536, <attrs>) -> End]
Log Records
  • The log head is equivalent to a username and has a
    key derived from its owner's public key
  • Log records are immutable records, keyed on a
    content hash, that store all the operations the
    user has performed
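The chain of records above can be sketched as follows (the record fields and `put_record` helper are hypothetical; real Ivy signs the log head and stores records in DHash):

```python
import hashlib
import json

def put_record(store, record):
    """Store an immutable record keyed by the hash of its content."""
    data = json.dumps(record, sort_keys=True).encode()
    key = hashlib.sha1(data).hexdigest()
    store[key] = data
    return key

store = {}
head = None  # the mutable log-head pointer (keyed by public key in real Ivy)
# Append records oldest-first; the head always names the newest record.
for op in [{"op": "inode", "inum": 536, "attrs": {"mode": 0o644}},
           {"op": "link", "inum": 536, "name": "tmp"},
           {"op": "write", "inum": 536, "off": 2, "data": "x"}]:
    op["prev"] = head          # each record points at the previous one
    head = put_record(store, op)
```

Following `prev` pointers from the head replays the user's operations from newest to oldest, matching the Log Head -> ... -> End chain in the figure.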

5
Views and Combining Logs
[Figure: a view block holds the root i-number (2) and the participant list (Athie, Robbie, Thom, Benjie); the participants' log records (A1-A3, R1-R3, T1-T2, B1) are combined, newest to oldest]
  • Conflicts are possible!
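Combining the participants' logs into one ordered history can be sketched as below (a simplification: this orders records by a per-record sequence number with an arbitrary tie-break, whereas real Ivy orders records using version vectors):

```python
def combine(logs):
    """Merge per-participant logs into one ordered record list.

    logs: {participant: [records]}, each record carrying a "seq" field.
    """
    records = [(rec["seq"], who, rec)
               for who, recs in logs.items() for rec in recs]
    # Order by sequence number, breaking ties by participant name.
    return [rec for _, _, rec in sorted(records, key=lambda t: (t[0], t[1]))]

logs = {
    "Athie":  [{"seq": 1, "op": "mkdir /a"}, {"seq": 3, "op": "write /a/f"}],
    "Robbie": [{"seq": 2, "op": "mkdir /b"}, {"seq": 3, "op": "write /a/f"}],
}
ordered = combine(logs)
# Both seq-3 records are concurrent writes to /a/f: the tie-break is
# arbitrary, so this is exactly the kind of conflict the slide warns about.
```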
6
Snapshots
[Figure: a snapshot - a root block points at directory (D1-D2) and file (F1-F6) blocks built from each participant's log records (Athie A1-A4, Robbie B1-B3, Thom T1-T4, Benjie B1)]
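The point of a snapshot is that a reader replays only the records written since the snapshot, rather than the whole log. A toy sketch (function names and the single-field "write" semantics are hypothetical):

```python
def apply(state, rec):
    """Apply one toy 'write' record: file name -> data."""
    state[rec["file"]] = rec["data"]
    return state

def read_with_snapshot(snapshot, log, since):
    """Reconstruct state from a snapshot plus the log tail after it."""
    state = dict(snapshot)          # start from the folded state
    for rec in log[since:]:         # replay only the new records
        apply(state, rec)
    return state

log = [{"file": "f1", "data": "v1"},
       {"file": "f1", "data": "v2"},
       {"file": "f2", "data": "v1"}]

snapshot = {}
for rec in log[:2]:                 # fold the first two records
    apply(snapshot, rec)

state = read_with_snapshot(snapshot, log, since=2)
```

This is why the snapshot interval matters so much in the results below: the longer the gap, the more records every read must replay.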
7
Other Issues
  • Security: can always fall back on the logs
  • Cache consistency: the cache is updated from DHash on
    all reads, but writes are withheld until there is
    a close
  • Concurrent operations: can occur on writes and
    partitions, and must be resolved explicitly with
    the lc tool
  • Exclusive create: directory modifications need to
    be synchronized, so two-phase commits are used

8
Results: Single-user LAN
  • Used the Modified Andrew Benchmark (MAB) to drive the
    experiments
  • Compared a local Ivy server with an NFS server
    connected via a 100 Mb/s LAN
  • Ivy is 150% slower, with Compile and Write/Create
    being the most expensive phases
  • Ivy uses 8.8 MB to manage the 1.6 MB generated by
    the MAB

9
Results: Single-user WAN
  • 3 Ivy servers with RTTs of 9, 16, and 82 ms, and 79
    ms to the NFS server
  • Operations are dominated by the time to collect three log
    heads, one stored on each of the servers
  • Ivy's performance is about 30% worse, and is
    dominated by Write/Create and Mkdir
  • The local computer sends 7 MB to Ivy

10
Other Results
  • Many logs do not seem to change the results of
    the last simulation
  • Using between 8 and 32 DHash servers on random
    planetlab hosts, the performance does not change
    dramatically (20 degradation from best to worst)
  • With 32 DHash servers and multiple instances of
    the MAB running on different hosts, degradation
    is less than linear

11
Other Results II
  • Snapshot intervals, not surprisingly, have a huge
    impact on the performance
  • With 4 log heads, 1 MAB instance, and 32 DHash
    servers, the average time to complete the MAB
    almost doubles when the number of logs between
    snapshots goes above 70
  • CVS works poorly on IVY because it reads lots of
    files on each commit