1
Strong Security for Network-Attached Storage
  • Course: CS795
  • Advisor: Dr. Ravi Mukkamala
  • Speaker: Sridhar Panigrahi
  • Date: March 24, 2004

Department of Computer Science, Old Dominion
University
2
Table of Contents
  • Introduction
  • Storage Architecture and Secure Network-Attached
    Disks (SNAD)
  • SNAD System Design and Operations
  • Basic Mechanism and Features
  • SNAD Data Structure
  • SNAD Schemes 1, 2, and 3
  • SNAD Security and Operation
  • SNAD Performance
  • Conclusions
  • References

3
Introduction
  • Why secure storage?
  • A file system stored on a centralized server is vulnerable to anyone who can obtain super-user access
  • Files are stored in clear text
  • NFS provides limited security
  • Network-Attached Storage (NAS) can replace the traditional centralized storage system

4
Storage Architecture
5
Secure Network-Attached Disks (SNAD)
  • Disks are attached directly to the network and rely on their own security rather than a server's protection
  • Provides authentication and encryption
  • The system stores and transfers encrypted data across the network; data are decrypted only at the client workstation
  • An administrator backing up the storage system has access only to encrypted data
  • Only the authorized users/groups of a particular file have access to its unencrypted contents

6
SNAD System Design
  • Three (3) security alternatives
  • The first two are CPU-intensive because they use public-key encryption, and are more secure
  • The third alternative avoids public-key encryption, resulting in higher performance
  • Desirable features:
  • The disk must not contain enough information to decrypt data
  • Data integrity: the user must be sure that data received from the disk are those originally stored
  • Flexibility: file sharing must be feasible
  • High performance and scalability

7
SNAD Basic Mechanism
  • The system uses a symmetric key for data encryption and decryption at the client
  • The server is provided with sufficient information to authenticate the writer/reader and to verify end-to-end data integrity
  • Public-key cryptography allows disks to store the information needed to decrypt their files without being able to decrypt it themselves
  • Symmetric keys are stored on the disk encrypted with the user's public key

8
SNAD Basic Mechanism contd
  • The disk provides encrypted data blocks and encrypted keys to the client upon request
  • The user decrypts the symmetric key with his private key to obtain the plaintext
  • Strong password/smartcard authentication is employed to access the system (client)
  • The private key is stored at the client
  • SNAD uses 128-bit keys for the symmetric algorithm
  • Keys longer than 128 bits can also be used

9
SNAD Basic Mechanism
  • The client uses the RC5 algorithm (Rijndael is also acceptable) to encrypt data before it leaves the client
  • SNAD uses cryptographic hashes and keyed hashes such as MD4, MD5, and SHA-1 HMAC
  • An HMAC uses a cryptographic hash in conjunction with a shared secret key to check integrity and authenticate the writer
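The keyed-hash idea above can be sketched with Python's standard hmac module (a minimal illustration; the key and ciphertext values are made up):

```python
import hmac
import hashlib

# Shared secret known to both the client and the disk (hypothetical value).
shared_key = b"client-disk-shared-secret"

# Ciphertext of a secure block, as it would leave the client.
ciphertext = b"...encrypted block contents..."

# Client: compute an HMAC over the ciphertext with SHA-1 (one of the
# hashes the slides mention) and send it along with the block.
tag = hmac.new(shared_key, ciphertext, hashlib.sha1).hexdigest()

# Disk: recompute the HMAC with its copy of the shared key; a match
# both verifies integrity and authenticates the writer.
expected = hmac.new(shared_key, ciphertext, hashlib.sha1).hexdigest()
assert hmac.compare_digest(tag, expected)
```

A party without the shared key can neither forge a valid tag nor tamper with the ciphertext undetected.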

10
SNAD Data Structure
  • Four basic structures:
  • Secure blocks
  • File objects
  • Key objects
  • Certificate objects
  • Data need not be stored contiguously on the disk

11
Relationships between Objects in a Secure Network-Attached Disk (SNAD)
  Certificate Object (one per disk)
    → Key Objects
      → File Objects (several file objects may share one key object)
        → Secure Blocks (several per file object)
12
SNAD Data Structure contd.
  • Multiple file objects use a single key object.
  • This corresponds to a situation where two files
    have the same access control.
  • Relatively few key objects on the disk just as
    there are few unique groups in a standard unix
    file system
  • Each data object requires 36-100 bytes overhead
    depending on the security scheme
  • File objects require little overhead just as a
    pointer to a key object.
  • Key objects require 76 bytes for header and 72
    bytes for each user.

13
SNAD Data Structure contd.
  • If each of 10,000 users is a member of 200 different groups,
  • 148 MB of key objects are needed, or 0.37% of a 40 GB disk
  • The certificate object requires less than 300 bytes/user (3 MB)
  • SNAD occupies less than 3% overhead on a 40 GB disk
  • (A Unix file system typically occupies 1-2% of total storage)
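The overhead figures above can be roughly reproduced from the 72-bytes-per-user cost on the previous slide (a back-of-the-envelope check; the paper's exact header accounting may differ by a few MB):

```python
# Key-object entries cost 72 bytes per (user, group) membership.
users = 10_000
groups_per_user = 200
entry_bytes = 72

key_object_bytes = users * groups_per_user * entry_bytes
print(key_object_bytes)  # 144,000,000 bytes ~ 144 MB

# Fraction of a 40 GB disk; headers add a few MB more, giving the
# quoted ~148 MB, i.e. roughly 0.37%.
disk_bytes = 40 * 10**9
print(round(100 * key_object_bytes / disk_bytes, 2))  # ~0.36
```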

14
File Objects
  • Composed of one or more secure blocks along with per-file metadata
  • Metadata:
  • block pointers
  • file size
  • timestamps
  • a pointer to the key object

15
Secure Blocks
  • The minimum unit of data that can be read or written in the secure file system
  • Block ID: a unique identifier formed from the file identifier and the block number within the file
  • User ID: determines which public key or writer-authentication key to use to check the security of the block
  • Timestamp: used to prevent replay attacks
  • Data is encrypted using a symmetric key, obtained from the key object associated with the file

16
Secure Blocks
Secure block layout:
  Block security information:
    Block ID (may be part of file metadata)
    User ID (may be part of file metadata)
    Timestamp
  Data (encrypted)
17
Key Objects
Key object layout:
  Header:  Key File ID | User ID | Signature | Ref Count
  Tuples (one per user or group):
    User ID | Encrypted Key | Permissions
    User ID | Encrypted Key | Permissions
    ...
18
Key Objects
  • Key object header:
  • Key File ID: unique identifier for the key object
  • User ID: the last user who modified the key object
  • Signature: when a user writes the object, he hashes the entire object except for the reference count, signs the hash with his private key, and stores the result in the signature field
  • Signature = { H(key object − ref count) } signed with the private key
  • Ref Count: lets the system know when the key object is no longer needed
  • Any authorized user who modifies the key object must sign it, so that an illegitimate modification can be traced to a particular user
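The signature input described above (everything except the reference count) can be sketched as follows; the field serialization is a hypothetical stand-in, and the actual private-key signing step is omitted:

```python
import hashlib

# Hypothetical serialized key-object fields.
key_file_id = b"KF-0001"
user_id = b"U-42"
tuples = b"U-42|<encrypted-key>|rw"
ref_count = 3  # deliberately excluded from the hash

# Hash everything except the reference count, so the disk can adjust
# the ref count without invalidating the writer's signature.
digest = hashlib.sha1(key_file_id + user_id + tuples).hexdigest()

# In SNAD, this digest would now be signed with the writer's private
# key and stored in the Signature field of the key-object header.
print(digest)
```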

19
Key Objects contd
  • Each tuple of the key object includes a User ID, an encrypted key, and the permissions for that user
  • A User ID need not belong to a single user; it may denote a group of users with shared access to a single private key
  • Encrypted Key: the key for the symmetric RC5 algorithm, encrypted with the user's public key
  • The disk cannot decrypt it unless it obtains the user's private key (private keys are kept on the client machine and never sent to the disk)

20
Key Objects contd
  • The permission field is used by the disk to determine whether the user is allowed to write the key object
  • A key object may be used for more than one file
  • All files that use the same key object are encrypted with the same symmetric key and are accessible by the same set of users; this corresponds to a group
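The key-object structure just described might be modeled like this (a minimal sketch; the field names are made up, and the encrypted keys are placeholder bytes):

```python
from dataclasses import dataclass, field

@dataclass
class KeyTuple:
    user_id: str          # user or group ID
    encrypted_key: bytes  # RC5 key, encrypted with this user's public key
    permissions: str      # e.g. "r", "rw"

@dataclass
class KeyObject:
    key_file_id: str
    last_writer: str
    signature: bytes
    ref_count: int
    tuples: list = field(default_factory=list)

    def key_for(self, user_id: str) -> bytes:
        """Return the encrypted symmetric key for a given user/group."""
        for t in self.tuples:
            if t.user_id == user_id:
                return t.encrypted_key
        raise KeyError(user_id)

# Two files with identical access control can both point at this object
# (ref_count == 2).
ko = KeyObject("KF-0001", "alice", b"sig", ref_count=2,
               tuples=[KeyTuple("alice", b"<enc-key-A>", "rw"),
                       KeyTuple("staff", b"<enc-key-G>", "r")])
print(ko.key_for("staff"))  # b'<enc-key-G>'
```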

21
Certificate Objects
  • Each Network-attached disk contains a single
    certificate object

Certificate object layout (one row per user or group):
  User ID | Public Key | HMAC Key | Timestamp
  User ID | Public Key | HMAC Key | Timestamp
  ...
22
Certificate Objects contd
  • The disk uses the information in the certificate object to authenticate users and perform basic storage management
  • User ID: identifies a user or group
  • Public key:
  • stored on the disk for convenience, so a centralized key server need not be consulted
  • used for writer authentication

23
Certificate Objects contd
  • The HMAC key is used for Schemes 1 and 2
  • It is used to verify the identity of the user who is writing data, and is stored encrypted
  • The decryption key for the HMAC keys is held in non-volatile memory on the disk
  • On disk startup, the HMAC keys are decrypted and cached in volatile memory
  • The timestamp field is updated each time a user writes a file object, and is used to prevent replay attacks
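The timestamp-based replay protection can be sketched as a monotonicity check against the certificate object (a simplified illustration; the actual SNAD timestamp encoding is not specified here):

```python
# Per-user timestamp table, as the disk might cache it from the
# certificate object (hypothetical contents).
cert_timestamps = {"alice": 100}

def accept_write(user_id: str, ts: int) -> bool:
    """Reject replayed (non-increasing) timestamps, else record the new one."""
    if ts <= cert_timestamps.get(user_id, 0):
        return False  # replayed or stale request
    cert_timestamps[user_id] = ts
    return True

print(accept_write("alice", 101))  # True  - fresh write
print(accept_write("alice", 101))  # False - replay of the same request
```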

24
SNAD Security Scheme 1
  • Disk write operation:
  • at the client, H(encrypted data block + file object) is signed with the private key
  • the disk recomputes the hash and uses the user's public key to authenticate the user before writing to disk
  • Read operation:
  • no cryptographic operation at the SNAD server CPU
  • the client CPU checks the hash and signature and then decrypts the data

25
SNAD Security Scheme 1
Cryptographic operations used in Scheme 1 (NAS = network-attached storage):

                 Read           Write
  Operation      Client  NAS    Client  NAS
  En/Decrypt     X              X
  Hash           X              X       X
  Signature                     X
  Verification   X                      X
26
SNAD Security Scheme 2
  • Same as Scheme 1, except:
  • signature verification is NOT done at the SNAD server
  • Advantage: reduces the load on the SNAD disk CPU
  • The client verifies the signature on a read operation

27
SNAD Security Scheme 2
Cryptographic operations used in Scheme 2 (NAS = network-attached storage):

                 Read           Write
  Operation      Client  NAS    Client  NAS
  En/Decrypt     X              X
  Hash           X              X       X
  Signature                     X
  Verification   X
28
SNAD Security Scheme 3
  • Write:
  • uses a keyed-hash (HMAC) approach to authenticate the writer of a data block and to verify block integrity
  • the client encrypts the secure block, computes an HMAC over the ciphertext, and sends it to the disk
  • the disk recomputes the HMAC for authentication using the shared secret key
  • Read:
  • the disk calculates an HMAC using the key of the requesting user and sends the data object along with the new HMAC to the client

29
SNAD Security Scheme 3

Cryptographic operations used in Scheme 3 (NAS = network-attached storage):

                 Read           Write
  Operation      Client  NAS    Client  NAS
  En/Decrypt     X              X
  Hash           X       X      X       X
  Signature
  Verification
30
Block Write Operation
  • Client (user UID):
  • 1. Generate the RC5 key
  • 2. Encrypt this key
  • 3. Send it to the drive
  • 4. Break the file into 64 KB blocks
  • 5. Encrypt each block a to obtain Ba
  • 6. Append the timestamp
  • 7. Append the keyed hash
  • 8. Send the block to the disk
  • 9. Modify the key object if needed
  • Network disk:
  • 1. Verify the keyed hash
  • 2. Verify the timestamp
  • 3. Store the new timestamp
  • 4. Verify write permission
  • 5. Write the block
  Secure block on the wire: HMAC | Block ID | UID | Timestamp | IV | Data (encrypted)
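The disk-side checks in the write path can be combined into one sketch (a simplified model; the block layout, key values, and permission table are invented for illustration):

```python
import hmac
import hashlib

# Disk-side state (hypothetical): per-user HMAC keys, last timestamps,
# and write permissions, as cached from the certificate object.
hmac_keys = {"u1": b"secret-u1"}
timestamps = {"u1": 10}
can_write = {"u1": True}
stored_blocks = {}

def disk_write(uid, block_id, ts, ciphertext, tag):
    """Steps 1-5 of the disk's write handling, in order."""
    expected = hmac.new(hmac_keys[uid], ciphertext, hashlib.sha1).digest()
    if not hmac.compare_digest(tag, expected):   # 1. verify keyed hash
        return False
    if ts <= timestamps[uid]:                    # 2. verify timestamp
        return False
    timestamps[uid] = ts                         # 3. store new timestamp
    if not can_write[uid]:                       # 4. verify write permission
        return False
    stored_blocks[(uid, block_id)] = ciphertext  # 5. write block
    return True

ct = b"encrypted-64KB-block"
tag = hmac.new(b"secret-u1", ct, hashlib.sha1).digest()
print(disk_write("u1", 0, ts=11, ciphertext=ct, tag=tag))  # True
```

Note that the disk only ever sees ciphertext; the HMAC lets it authenticate the writer without being able to decrypt the data.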
31
Block Read Operation
  • Client (user UID) sends a request for a block, then on receipt:
  • 1. Verify the timestamp
  • 2. Verify the HMAC
  • 3. Decrypt to obtain Ba
  • Network disk:
  • 1. Receive the request for a particular block
  • 2. Calculate the HMAC based on the user's authentication key (KEYmac)
  • 3. Update the timestamp in the certificate object
  • 4. Send the secure data object
  Secure block on the wire: HMAC | Block ID | UID | Timestamp | IV | Data (encrypted)
32
Performance
  • Strong security can be achieved without sacrificing performance
  • Sequential data transfer (read/write):
  • 15-20% performance loss over raw data transfer
  • a penalty of between 1-20% depending on the workload; it can be reduced by introducing encryption hardware on CPUs
  • Random I/O operations (write/read):
  • suffer almost no performance penalty

33
Performance
  300 MHz PowerPC G3   Schemes 1, 2 bandwidth   Scheme 3 bandwidth
  Read                 6.4 MB/sec               10.0 MB/sec
  Write                1.4 MB/sec               12.7 MB/sec
34
Cryptographic Overhead for SNAD with a 300 MHz MPC 750 (PowerPC G3 processor)
[Chart of per-operation times in ms; not reproduced in this transcript]
Conclusions
  • SNAD offers strong security without drastically reducing performance
  • It provides 88% of maximum bandwidth for writes and 81% of maximum for reads in sequential operation
  • Random-access reads and writes suffer almost no penalty
  • The system provides confidentiality and integrity for user data from the moment it leaves the client computer
  • SNAD performs better with distributed storage than with centralized storage
  • Adding a user to an access group is straightforward, but revocation is still an issue
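If the 88%/81% figures refer to the Scheme 3 sequential bandwidths on the performance slide (an assumption on my part), the raw-transfer ceilings they imply can be back-computed:

```python
# Scheme 3 sequential bandwidths from the performance table (MB/sec).
write_bw, read_bw = 12.7, 10.0

# Back out the raw (no-crypto) bandwidths these percentages imply.
raw_write = write_bw / 0.88
raw_read = read_bw / 0.81

print(round(raw_write, 1))  # ~14.4 MB/sec raw write ceiling
print(round(raw_read, 1))   # ~12.3 MB/sec raw read ceiling
```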

36
References
  • [1] Ethan L. Miller, Darrell D. E. Long, William E. Freeman, and Benjamin C. Reed. "Strong Security for Network-Attached Storage." In Proceedings of the FAST 2002 Conference on File and Storage Technologies, Monterey, California, USA, January 28-30, 2002.
  • [2] William Freeman and Ethan Miller. "Design for a Decentralized Security System for Network Attached Storage." 8th NASA Goddard Conference on Mass Storage Systems, held jointly with the 17th IEEE Symposium on Mass Storage Systems, College Park, MD, March 2001.
  • [3] Scott A. Brandt, Ethan L. Miller, Darrell D. E. Long, and Lan Xue (Storage Systems Research Center, UC Santa Cruz). "Efficient Metadata Management in a Large Distributed Storage System." 20th IEEE / 11th NASA Goddard Conference on Mass Storage Systems and Technologies, San Diego, CA, April 2003.