Title: CMPUT429/CMPE382 Winter 2001
1CMPUT429/CMPE382 Winter 2001
- Topic: I/O
- (Adapted from David A. Patterson's CS252,
- Spring 2001 Lecture Slides)
2Motivation: Who Cares About I/O?
- CPU performance improves 60% per year
- I/O system performance is limited by mechanical delays (disk I/O) - improves less than 10% per year (I/Os per second)
- Amdahl's Law: system speed-up is limited by the slowest part! (see the sketch below)
- 10% I/O and 10x faster CPU => 5x performance (lose 50% of CPU gain)
- 10% I/O and 100x faster CPU => 10x performance (lose 90% of CPU gain)
- I/O bottleneck:
- Diminishing fraction of time in CPU
- Diminishing value of faster CPUs
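A quick way to check the speed-up numbers above is to plug the I/O fraction into Amdahl's Law. This is an illustrative sketch only; the function name is made up, and the 10% I/O fraction and CPU factors are the values from the bullets above.

```python
# Amdahl's Law applied to an I/O-bound fraction (illustrative sketch).
def overall_speedup(io_fraction, cpu_speedup):
    """Speed-up of the whole system when only the CPU part gets faster."""
    return 1.0 / (io_fraction + (1.0 - io_fraction) / cpu_speedup)

# 10% of time in I/O, CPU made 10x and 100x faster (numbers from the slide):
print(overall_speedup(0.10, 10))    # ~5.3x  -> roughly the "5x" on the slide
print(overall_speedup(0.10, 100))   # ~9.2x  -> roughly the "10x" on the slide
```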
3I/O Systems
[Figure: processor with cache attached to a memory-I/O bus, connecting main memory and I/O controllers for disks, graphics, and a network; devices signal the processor via interrupts]
4Outline
- Disk Basics
- Disk History
- Disk options in 2000
- Disk fallacies and performance
- Tapes
- RAID
5Disk Device Terminology
- Several platters, with information recorded magnetically on both surfaces (usually)
- Bits recorded in tracks, which are in turn divided into sectors (e.g., 512 bytes)
- Actuator moves the head (at the end of the arm, one per surface) over the track (seek), selects the surface, waits for the sector to rotate under the head, then reads or writes
- Cylinder: all tracks under the heads
6Photo of Disk Head, Arm, Actuator
[Photo: spindle, arm, head, and actuator of a disk drive]
7Disk Device Performance
[Figure: platter, spindle, arm, head, actuator, sectors, inner and outer tracks, controller]
- Disk Latency = Seek Time + Rotation Time + Transfer Time + Controller Overhead
- Seek Time? Depends on the number of tracks the arm must move and the seek speed of the disk
- Rotation Time? Depends on how fast the disk rotates and how far the sector is from the head
- Transfer Time? Depends on the data rate (bandwidth) of the disk (bit density) and the size of the request
8Disk Device Performance
- Average distance of sector from head?
- 1/2 the time of a rotation
- 10,000 Revolutions Per Minute => 166.67 Rev/sec
- 1 revolution = 1/166.67 sec => 6.00 milliseconds
- 1/2 rotation (revolution) => 3.00 ms
- Average number of tracks the arm moves?
- Sum all possible seek distances from all possible tracks / number of possible seeks
- Assumes average seek distance is random
- Disk industry standard benchmark
9Data Rate Inner vs. Outer Tracks
- To keep things simple, disks originally kept the same number of sectors per track
- Since the outer track is longer, it had lower bits per inch
- Competition => decided to keep bits per inch (BPI) the same for all tracks (constant bit density)
- => More capacity per disk
- => More sectors per track towards the edge
- => Since the disk spins at constant speed, outer tracks have a faster data rate
- Bandwidth of the outer track is 1.7X the inner track!
- Inner track has the highest density, outer track the lowest, so not really constant
- 2.1X length of track outer / inner, 1.7X bits outer / inner
10Devices Magnetic Disks
- Purpose
- Long-term, nonvolatile storage
- Large, inexpensive, slow level in the storage
hierarchy - Characteristics
- Seek Time (8 ms avg)
- positional latency
- rotational latency
- Transfer rate
- 10-40 MByte/sec
- Blocks
- Capacity
- Gigabytes
- Quadruples every 2 years (aerodynamics)
[Figure: track, sector, cylinder, platter, head]
7200 RPM = 120 RPS => 8 ms per rev; ave. rot. latency = 4 ms; 128 sectors per track => 0.25 ms per sector; 1 KB per sector => 16 MB/s
11Disk Performance Model /Trends
- Capacity
- +100%/year (2X / 1.0 yrs)
- Transfer rate (BW)
- +40%/year (2X / 2.0 yrs)
- Rotation + Seek time
- -8%/year (1/2 in 10 yrs)
- MB/$
- > 100%/year (2X / 1.0 yrs)
- Fewer chips + areal density
12State of the Art Barracuda 180
- 181.6 GB, 3.5 inch disk
- 12 platters, 24 surfaces
- 24,247 cylinders
- 7,200 RPM (4.2 ms avg. latency)
- 7.4/8.2 ms avg. seek (r/w)
- 64 to 35 MB/s (internal)
- 0.1 ms controller time
- 10.3 watts (idle)
[Figure: track, sector, cylinder, track buffer, platter, arm, head]
source: www.seagate.com
13Disk Performance Example (will fix later)
- Calculate time to read 64 KB (128 sectors) for the Barracuda 180 X using advertised performance; the sector is on an outer track
- Disk latency = average seek time + average rotational delay + transfer time + controller overhead
- = 7.4 ms + 0.5 x 1/(7200 RPM) + 64 KB / (65 MB/s) + 0.1 ms
- = 7.4 ms + 0.5 / (7200 RPM / (60000 ms/min)) + 64 KB / (65 KB/ms) + 0.1 ms
- = 7.4 + 4.2 + 1.0 + 0.1 ms = 12.7 ms
14Areal Density
- Bits recorded along a track
- Metric is Bits Per Inch (BPI)
- Number of tracks per surface
- Metric is Tracks Per Inch (TPI)
- Disk designs brag about bit density per unit area
- Metric is Bits Per Square Inch
- Called Areal Density
- Areal Density = BPI x TPI
15Areal Density
- Areal Density = BPI x TPI
- Change in slope from 30%/yr to 60%/yr around 1991
16MBits per square inch: DRAM as % of Disk over time
[Chart data points: 0.2 v. 1.7 Mb/sq. in.; 9 v. 22 Mb/sq. in.; 470 v. 3000 Mb/sq. in.]
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"
17Historical Perspective
- 1956 IBM Ramac; early 1970s Winchester
- Developed for mainframe computers, proprietary interfaces
- Steady shrink in form factor: 27 in. to 14 in.
- Form factor and capacity drive the market, more than performance
- 1970s: Mainframes => 14 inch diameter disks
- 1980s: Minicomputers, Servers => 8 inch, 5.25 inch diameter
- Late 1980s/Early 1990s: PCs, workstations
- Mass market disk drives become a reality
- industry standards: SCSI, IPI, IDE
- Pizzabox PCs => 3.5 inch diameter disks
- Laptops, notebooks => 2.5 inch disks
- Palmtops didn't use disks, so 1.8 inch diameter disks didn't make it
- 2000s
- 1 inch for cameras, cell phones?
18Disk History
Data density (Mbit/sq. in.) and capacity of unit shown (MBytes):
1973: 1.7 Mbit/sq. in., 140 MBytes
1979: 7.7 Mbit/sq. in., 2,300 MBytes
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"
19Disk History
1989: 63 Mbit/sq. in., 60,000 MBytes
1997: 1450 Mbit/sq. in., 2,300 MBytes
1997: 3090 Mbit/sq. in., 8,100 MBytes
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"
20 1 inch disk drive!
- 2000 IBM MicroDrive
- 1.7" x 1.4" x 0.2"
- 1 GB, 3600 RPM, 5 MB/s, 15 ms seek
- Digital camera, PalmPC?
- 2006 MicroDrive?
- 9 GB, 50 MB/s!
- Assuming it finds a niche in a successful
product - Assuming past trends continue
21Disk Characteristics in 2000
22Disk Characteristics in 2000
23Disk Characteristics in 2000
24Disk Characteristics in 2000
[Tables of 2000-era disk characteristics not reproduced]
25Fallacy Use Data Sheet Average Seek Time
- Manufacturers needed a standard for fair comparison (benchmark)
- Calculate all seeks from all tracks, divide by the number of seeks => average
- A real average would be based on how data is laid out on the disk and where real applications seek, then measure performance
- Usually, applications tend to seek to nearby tracks, not to a random track
- Rule of Thumb: observed average seek time is typically about 1/4 to 1/3 of the quoted seek time (i.e., 3X-4X faster)
- Barracuda 180 X avg. seek: 7.4 ms => 2.5 ms
26Fallacy Use Data Sheet Transfer Rate
- Manufacturers quote the speed of the data rate off the surface of the disk
- Sectors contain an error detection and correction field (can be 20% of sector size) plus a sector number as well as data
- There are gaps between sectors on a track
- Rule of Thumb: disks deliver about 3/4 of the internal media rate (1.3X slower) for data
- For example, the Barracuda 180X quotes 64 to 35 MB/sec internal media rate
- => 47 to 26 MB/sec external data rate (74%)
27Disk Performance Example
- Calculate time to read 64 KB for the Barracuda 180X again, this time using 1/3 of the quoted seek time and 3/4 of the internal outer-track bandwidth (12.7 ms before)
- Disk latency = average seek time + average rotational delay + transfer time + controller overhead
- = (0.33 x 7.4 ms) + 0.5 x 1/(7200 RPM) + 64 KB / (0.75 x 65 MB/s) + 0.1 ms
- = 2.5 ms + 0.5 / (7200 RPM / (60000 ms/min)) + 64 KB / (47 KB/ms) + 0.1 ms
- = 2.5 + 4.2 + 1.4 + 0.1 ms = 8.2 ms (64% of 12.7 ms)
28Future Disk Size and Performance
- Continued advance in capacity (60%/yr) and bandwidth (40%/yr)
- Slow improvement in seek, rotation (8%/yr)
- Time to read the whole disk:
- Year  Sequentially  Randomly (1 sector/seek)
- 1990  4 minutes     6 hours
- 2000  12 minutes    1 week(!)
- Does the 3.5" form factor make sense in 5 yrs?
- What is capacity, bandwidth, seek time, RPM?
- Assume today: 80 GB, 30 MB/sec, 6 ms, 10000 RPM
29Tape vs. Disk
- Longitudinal tape uses the same technology as hard disks; it tracks their density improvements
- Disk head flies above the surface; tape head lies on the surface
- Disk is fixed, tape is removable
- Inherent cost-performance based on geometries:
- fixed rotating platters with gaps
- (random access, limited area, 1 medium / reader)
- vs.
- removable long strips wound on a spool
- (sequential access, "unlimited" length, multiple media / reader)
- Helical Scan (VCR, Camcorder, DAT)
- Spins the head at an angle to the tape to improve density
30Current Drawbacks to Tape
- Tape wears out:
- Helical: 100s of passes; up to 1000s for longitudinal
- Head wears out:
- 2000 hours for helical
- Both must be accounted for in the economic / reliability model
- Bits stretch
- Readers must be compatible with multiple generations of media
- Long rewind, eject, load, spin-up times: not inherent, just no need in the marketplace
- Designed for archival use
31Automated Cartridge System StorageTek Powderhorn
9310
Dimensions: 7.7 feet x 10.7 feet; 8200 pounds, 1.1 kilowatts
- 6000 x 50 GB 9830 tapes = 300 TBytes in 2000 (uncompressed)
- Library of Congress: all information in the world in 1992; ASCII of all books = 30 TB
- Exchange up to 450 tapes per hour (8 secs/tape)
- 1.7 to 7.7 MByte/sec per reader, up to 10 readers
32Library vs. Storage
- Getting books today as quaint as programming in
the 1970s - punch cards, batch processing
- wander thru shelves, anticipatory purchasing
- Cost: $1 per book to check out
- $30 for a catalogue entry
- 30% of all books never checked out
- Write only journals?
- Digital library can transform campuses
33Whither tape?
- Investment in research
- 90% of disks are shipped in PCs; 100% of PCs have disks
- 0% of tape readers are shipped in PCs; 0% of PCs have tapes
- Before: N disks / tape; today: N tapes / disk
- 40 GB/DLT tape (uncompressed)
- 80 to 192 GB/3.5" disk (uncompressed)
- Cost per GB:
- In the past, 10X to 100X advantage for tape cartridge vs. disk
- Jan 2001: 40 GB for $53 (DLT cartridge), $2800 for reader
- $1.33/GB cartridge; $2.03/GB with 100 cartridges + 1 reader
- ($10,995 for 1 reader + 15-tape autoloader, $10.50/GB)
- Jan 2001: 80 GB for $244 (IDE, 5400 RPM), $3.05/GB
- Will $/GB of tape vs. disk cross in 2001? 2002? 2003?
- Storage field is based on tape backup; what should we do? Discussion if time permits
34Use Arrays of Small Disks?
- Katz and Patterson asked in 1987
- Can smaller disks be used to close gap in
performance between disks and CPUs?
Conventional: 4 disk designs (14", 10", 5.25", 3.5"), spanning high end to low end
Disk Array: 1 disk design (3.5")
35Advantages of Small Formfactor Disk Drives
Low cost/MB, high MB/volume, high MB/watt, low cost/actuator
Cost and environmental efficiencies
36Replace Small Number of Large Disks with Large
Number of Small Disks! (1988 Disks)
                IBM 3390K      IBM 3.5" 0061   x70 (array)     (array vs. 3390K)
Capacity        20 GBytes      320 MBytes      23 GBytes
Volume          97 cu. ft.     0.1 cu. ft.     11 cu. ft.      9X smaller
Power           3 KW           11 W            1 KW            3X less
Data Rate       15 MB/s        1.5 MB/s        120 MB/s        8X faster
I/O Rate        600 I/Os/s     55 I/Os/s       3900 I/Os/s     6X faster
MTTF            250 KHrs       50 KHrs         ??? Hrs
Cost            $250K          $2K             $150K
Disk Arrays have potential for large data and I/O
rates, high MB per cu. ft., high MB per KW, but
what about reliability?
37Array Reliability
- Reliability of N disks = Reliability of 1 Disk / N
- 50,000 Hours / 70 disks = 700 hours (see the sketch below)
- Disk system MTTF: drops from 6 years to 1 month!
- Arrays (without redundancy) are too unreliable to be useful!
Hot spares support reconstruction in parallel with access; very high media availability can be achieved
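A two-line sketch of the array MTTF arithmetic above (assuming independent, exponentially distributed disk failures):

```python
# MTTF of an N-disk array with no redundancy = MTTF of one disk / N (sketch).
disk_mttf_hours, n_disks = 50_000, 70
print(disk_mttf_hours / n_disks)   # ~714 hours, i.e. roughly one month
```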
38Redundant Arrays of (Inexpensive) Disks
- Files are "striped" across multiple disks
- Redundancy yields high data availability
- Availability: service still provided to the user, even if some components have failed
- Disks will still fail
- Contents are reconstructed from data redundantly stored in the array
- => Capacity penalty to store redundant info
- => Bandwidth penalty to update redundant info
39Redundant Arrays of Inexpensive Disks: RAID 1 - Disk Mirroring/Shadowing
[Figure: each disk and its mirror form a recovery group]
- Each disk is fully duplicated onto its mirror
- Very high availability can be achieved
- Bandwidth sacrifice on write:
- Logical write = two physical writes
- Reads may be optimized
- Most expensive solution: 100% capacity overhead
- (RAID 2 not interesting, so skip)
40Redundant Array of Inexpensive Disks: RAID 3 - Parity Disk
P contains the sum of the other disks per stripe, mod 2 (parity). If a disk fails, subtract P from the sum of the other disks to find the missing information.
41RAID 3
- Sum computed across the recovery group to protect against hard disk failures, stored in the P disk (see the parity sketch below)
- Logically, a single high capacity, high transfer rate disk: good for large transfers
- Wider arrays reduce capacity costs, but decrease availability
- 33% capacity cost for parity in this configuration
42Inspiration for RAID 4
- RAID 3 relies on parity disk to discover errors
on Read - But every sector has an error detection field
- Rely on error detection field to catch errors on
read, not on the parity disk - Allows independent reads to different disks
simultaneously
43Problems of Disk Arrays: Small Writes
RAID-5 Small Write Algorithm
1 Logical Write = 2 Physical Reads + 2 Physical Writes
[Figure: to replace D0 with D0' in a stripe (D0, D1, D2, D3, P): (1) read old data D0, (2) read old parity P, XOR the old data with the new data and with the old parity to form new parity P', then (3) write D0' and (4) write P']
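The small-write sequence in the figure can be sketched directly: read the old data and old parity, XOR both with the new data to get the new parity, then write both. Names are illustrative, not a real array-controller API.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(old_data, old_parity, new_data):
    """The 2 reads happen before this call; returns what the 2 writes must store."""
    new_parity = xor_bytes(xor_bytes(old_parity, old_data), new_data)
    return new_data, new_parity

# Sanity check with toy blocks: the updated parity matches parity recomputed from scratch.
d0, d1, p = b"\x0f", b"\xf0", b"\xff"      # p = d0 XOR d1
new_d0 = b"\x55"
_, new_p = raid5_small_write(d0, p, new_d0)
assert new_p == xor_bytes(new_d0, d1)
```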
44System Availability Orthogonal RAIDs
[Figure: an array controller drives several string controllers, each with its own string of disks; a data recovery group spans disks on different strings]
Data Recovery Group: unit of data redundancy
Redundant Support Components: fans, power supplies, controller, cables
End-to-End Data Integrity: internal parity protected data paths
45System-Level Availability
[Figure: fully dual redundant configuration - two hosts, two I/O controllers, two array controllers, with duplicated paths down to shared recovery groups of disks]
Goal: No Single Points of Failure
With duplicated paths, higher performance can be obtained when there are no failures
46Berkeley History RAID-I
- RAID-I (1989)
- Consisted of a Sun 4/280 workstation with 128 MB
of DRAM, four dual-string SCSI controllers, 28
5.25-inch SCSI disks and specialized disk
striping software
- Today RAID is a $19 billion industry; 80% of non-PC disks are sold in RAIDs
47Summary: RAID Techniques - goal was performance, popularity due to reliability of storage
- Disk Mirroring/Shadowing (RAID 1): each disk is fully duplicated onto its "shadow"; logical write = two physical writes; 100% capacity overhead
- Parity Data Bandwidth Array (RAID 3): parity computed horizontally; logically a single high-data-bandwidth disk
- High I/O Rate Parity Array (RAID 5): interleaved parity blocks; independent reads and writes; logical write = 2 reads + 2 writes
48Summary Storage
- Disks
- Extraordinary advance in capacity/drive and $/GB
- Currently 17 Gbit/sq. in.; can it continue past 100 Gbit/sq. in.?
- Bandwidth and seek time are not keeping up: does the 3.5 inch form factor make sense? 2.5 inch form factor in the near future? 1.0 inch form factor in the long term?
- Tapes
- No investment, must be backwards compatible
- Are they already dead?
- What is a tapeless backup system?
49Reliability Definitions
- Examples of why precise definitions are so important for reliability
- Is a programming mistake a fault, error, or failure?
- Are we talking about the time it was designed or the time the program is run?
- If the running program doesn't exercise the mistake, is it still a fault/error/failure?
- If an alpha particle hits a DRAM memory cell, is it a fault/error/failure if it doesn't change the value?
- Is it a fault/error/failure if the memory doesn't access the changed bit?
- Did a fault/error/failure still occur if the memory had error correction and delivered the corrected value to the CPU?
50IFIP Standard terminology
- Computer system dependability: the quality of delivered service such that reliance can be placed on the service
- Service is the observed actual behavior as perceived by other system(s) interacting with this system's users
- Each module has an ideal specified behavior, where the service specification is an agreed description of the expected behavior
- A system failure occurs when the actual behavior deviates from the specified behavior
- The failure occurred because of an error, a defect in a module
- The cause of an error is a fault
- When a fault occurs it creates a latent error, which becomes effective when it is activated
- When the error actually affects the delivered service, a failure occurs (the time from error to failure is the error latency)
51Fault v. (Latent) Error v. Failure
- A fault creates one or more latent errors
- Properties of errors are
- a latent error becomes effective once activated
- an error may cycle between its latent and
effective states - an effective error often propagates from one
component to another, thereby creating new errors
- Effective error is either a formerly-latent error
in that component or it propagated from another
error - A component failure occurs when the error affects
the delivered service - These properties are recursive, and apply to any
component in the system
- An error is the manifestation in the system of a fault; a failure is the manifestation on the service of an error
52Fault v. (Latent) Error v. Failure
- An error is the manifestation in the system of a fault; a failure is the manifestation on the service of an error
- Is a programming mistake a fault, error, or failure?
- Are we talking about the time it was designed or the time the program is run?
- If the running program doesn't exercise the mistake, is it still a fault/error/failure?
- A programming mistake is a fault
- the consequence is an error (or latent error) in the software
- upon activation, the error becomes effective
- when this effective error produces erroneous data which affect the delivered service, a failure occurs
53Fault v. (Latent) Error v. Failure
- An error is the manifestation in the system of a fault; a failure is the manifestation on the service of an error
- If an alpha particle hits a DRAM memory cell, is it a fault/error/failure if it doesn't change the value?
- Is it a fault/error/failure if the memory doesn't access the changed bit?
- Did a fault/error/failure still occur if the memory had error correction and delivered the corrected value to the CPU?
- An alpha particle hitting a DRAM can be a fault
- if it changes the memory, it creates an error
- the error remains latent until the affected memory word is read
- if the affected word's error affects the delivered service, a failure occurs
54Fault v. (Latent) Error v. Failure
- An error is the manifestation in the system of a fault; a failure is the manifestation on the service of an error
- What if a person makes a mistake, data is altered, and service is affected?
- fault
- error
- latent
- failure
55Fault Tolerance vs Disaster Tolerance
- Fault-Tolerance (or more properly, Error-Tolerance): masks local faults (prevents errors from becoming failures)
- RAID disks
- Uninterruptible Power Supplies
- Cluster Failover
- Disaster Tolerance: masks site errors (prevents site errors from causing service failures)
- Protects against fire, flood, sabotage, ...
- Redundant system and service at a remote site
- Use design diversity
From Jim Gray's talk at UC Berkeley on Fault Tolerance, 11/9/00
56Defining reliability and availability
quantitatively
- Users perceive a system alternating between 2 states of service with respect to the service specification:
- 1. service accomplishment, where service is delivered as specified
- 2. service interruption, where the delivered service is different from the specified service, measured as Mean Time To Repair (MTTR)
- Transitions between these 2 states are caused by failures (from state 1 to state 2) or restorations (2 to 1)
- Module reliability: a measure of continuous service accomplishment (or of time to failure) from a reference point, e.g., Mean Time To Failure (MTTF)
- The reciprocal of MTTF is the failure rate
- Module availability: a measure of service accomplishment with respect to the alternation between the 2 states of accomplishment and interruption = MTTF / (MTTF + MTTR)
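A one-line sketch makes the availability definition concrete (the MTTF and MTTR values here are arbitrary examples, not from the slide).

```python
# Module availability = MTTF / (MTTF + MTTR)   (sketch with made-up numbers)
def availability(mttf_hours, mttr_hours):
    return mttf_hours / (mttf_hours + mttr_hours)

print(availability(100_000, 10))   # 0.9999..., i.e. roughly "four nines"
```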
57Fail-Fast is Good, Repair is Needed
Lifecycle of a module: fail-fast gives short fault latency. High Availability is low UN-Availability. Unavailability = MTTR / (MTTF + MTTR)
- As MTTF >> MTTR, improving either MTTR or MTTF gives a benefit
- Note: Mean Time Between Failures (MTBF) = MTTF + MTTR
From Jim Gray's talk at UC Berkeley on Fault Tolerance, 11/9/00
58Dependability The 3 ITIES
- Reliability / Integrity: does the right thing. (Also large MTTF)
- Availability: does it now. (Also small MTTR.) Availability = MTTF / (MTTF + MTTR)
- System Availability: if 90% of terminals are up and 99% of the DB is up? (=> 89% of transactions are serviced on time)
[Figure: Security, Integrity, Reliability, Availability]
From Jim Gray's talk at UC Berkeley on Fault Tolerance, 11/9/00
59Reliability Example
- If we assume a collection of modules has exponentially distributed lifetimes (the age of a component doesn't matter to its failure probability) and modules fail independently, the overall failure rate of the collection is the sum of the failure rates of the modules
- Calculate the MTTF of a disk subsystem with:
- 10 disks, each rated at 1,000,000 hour MTTF
- 1 SCSI controller, 500,000 hour MTTF
- 1 power supply, 200,000 hour MTTF
- 1 fan, 200,000 hour MTTF
- 1 SCSI cable, 1,000,000 hour MTTF
- Failure Rate = 10 x 1/1,000,000 + 1/500,000 + 1/200,000 + 1/200,000 + 1/1,000,000 = (10 + 2 + 5 + 5 + 1)/1,000,000 = 23/1,000,000
- MTTF = 1/Failure Rate = 1,000,000/23 = 43,500 hrs (see the sketch below)
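The same calculation as a sketch, assuming independent, exponentially distributed lifetimes so that failure rates simply add:

```python
# Overall failure rate of a collection = sum of the component failure rates (sketch).
component_mttf_hours = [1_000_000] * 10 + [500_000, 200_000, 200_000, 1_000_000]
failure_rate = sum(1.0 / mttf for mttf in component_mttf_hours)   # 23 per million hours
print(1.0 / failure_rate)   # ~43,478 hours, the ~43,500 hrs quoted above
```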
60What's wrong with MTTF?
- 1,000,000 hr MTTF > 100 years: infinity?
- How is it calculated?
- Put, say, 2000 units in a room, count failures in 60 days, and then calculate the rate
- As long as < 3 failures => 1,000,000 hr MTTF
- Suppose we did this with people?
- 1998 deaths per year in the US ("Failure Rate")
- Deaths of 5 to 14 year olds: 20/100,000
- MTTF_human = 100,000/20 = 5,000 years
- Deaths of > 85 year olds: 20,000/100,000
- MTTF_human = 100,000/20,000 = 5 years
source: "Deaths: Final Data for 1998," www.cdc.gov/nchs/data/nvs48_11.pdf
61What's wrong with MTTF?
- 1,000,000 hr MTTF > 100 years: infinity?
- But disk lifetime is 5 years!
- => if you replace a disk every 5 years, on average it wouldn't fail until the 21st replacement
- A better unit: % that fail
- % fail over lifetime: if we had 1000 disks for 5 years: (1000 disks x 365 x 24 x 5 hrs) / 1,000,000 hrs/failure = 43,800,000 / 1,000,000 = 44 failures = 4.4% fail with 1,000,000 hr MTTF (see the sketch below)
- Detailed disk specs list failures/million/month
- Typically about 800 failures per month per million disks at 1,000,000 hr MTTF, or about 1% per year for a 5 year disk lifetime
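The "% fail over lifetime" arithmetic above, written out as a sketch:

```python
# Expected failures for a fleet of disks over its service lifetime (sketch).
disks, years, mttf_hours = 1000, 5, 1_000_000
fleet_hours = disks * years * 365 * 24           # 43,800,000 disk-hours of exposure
expected_failures = fleet_hours / mttf_hours     # ~44 failures
print(expected_failures, 100.0 * expected_failures / disks)   # ~43.8 failures, ~4.4% of the fleet
```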
62Dependability Big Idea No Single Point of Failure
- Since hardware MTTF is often 100,000 to 1,000,000 hours and MTTR is often 1 to 10 hours, there is a good chance that if one component fails it will be repaired before a second component fails
- Hence design systems with sufficient redundancy that there is No Single Point of Failure
63HW Failures in Real Systems Tertiary Disks
- A cluster of 20 PCs in seven 7-foot high, 19-inch
wide racks with 368 8.4 GB, 7200 RPM, 3.5-inch
IBM disks. The PCs are P6-200MHz with 96 MB of
DRAM each. They run FreeBSD 3.0 and the hosts are
connected via switched 100 Mbit/second Ethernet
64When To Repair?
- Chances of tolerating a fault are 1000:1 (class 3)
- A 1995 study: processor and disc rated at 10k hr MTTF
- Computed single failures vs. observed double fails, and their ratio:
- 10k processor fails, 14 double => about 1000 : 1
- 40k disc fails, 26 double => about 1000 : 1
- Hardware Maintenance:
- On-line maintenance "works" 999 times out of 1000
- The chance a duplexed disc will fail during maintenance? 1:1000
- Risk is 30x higher during maintenance
- => Do it at off-peak hours
- Software Maintenance:
- Repair only virulent bugs
- Wait for the next release to fix benign bugs
From Jim Gray's talk at UC Berkeley on Fault Tolerance, 11/9/00
65Sources of Failures
- (MTTF, MTTR)
- Power Failure: 2000 hr, 1 hr
- Phone Lines:
- Soft: >0.1 hr, 0.1 hr
- Hard: 4000 hr, 10 hr
- Hardware Modules: 100,000 hr, 10 hr (many failures are transient)
- Software:
- 1 bug / 1000 lines of code (after vendor-user testing)
- => Thousands of bugs in the system!
- Most software failures are transient: dump and restart the system
- Useful fact: 8,760 hrs/year ~ 10k hr/year
From Jim Gray's talk at UC Berkeley on Fault Tolerance, 11/9/00
66Case Study - Japan: "Survey on Computer Security", Japan Info Dev Corp., March 1986 (trans. Eiichi Watanabe)
[Pie chart of outage causes: Vendor 42%, Tele Comm lines 12%, Environment 25%, Application Software 9.3%, Operations 11.2%]
- MTTF by cause:
- Vendor (hardware and software): 5 months
- Application software: 9 months
- Communications lines: 1.5 years
- Operations: 2 years
- Environment: 2 years
- Overall: 10 weeks
- 1,383 institutions reported (6/84 - 7/85)
- 7,517 outages, MTTF ~ 10 weeks, avg duration ~ 90 minutes
- To get a 10-year MTTF, must attack all of these areas
From Jim Gray's talk at UC Berkeley on Fault Tolerance, 11/9/00
67Case Studies - Tandem Trends Reported MTTF by
Component
- 1985 1987 1990
- SOFTWARE 2 53 33 Years
- HARDWARE 29 91 310 Years
- MAINTENANCE 45 162 409 Years
- OPERATIONS 99 171 136 Years
- ENVIRONMENT 142 214 346 Years
- SYSTEM 8 20 21 Years
- Problem: Systematic Under-reporting
From Jim Gray's talk at UC Berkeley on Fault Tolerance, 11/9/00
68Is Maintenance the Key?
- Rule of Thumb: Maintenance costs 10X the HW
- so over a 5 year product life, ~95% of the cost is maintenance
69OK So Far
- Hardware fail-fast is easy
- Redundancy plus Repair is great (Class 7
availability) - Hardware redundancy repair is via modules.
- How can we get instant software repair?
- We Know How To Get Reliable Storage
- RAID Or Dumps And Transaction Logs.
- We Know How To Get Available Storage
- Fail Soft Duplexed Discs (RAID 1...N).
- How do we get reliable execution?
- How do we get available execution?
From Jim Gray's talk at UC Berkeley on Fault Tolerance, 11/9/00
70Does Hardware Fail Fast? 4 of 384 Disks that
failed in Tertiary Disk
71High Availability System Classes - Goal: Build Class 6 Systems
Availability: 90%, 99%, 99.9%, 99.99%, 99.999%, 99.9999%, 99.99999%
UnAvailability = MTTR/MTBF; can cut it in half by cutting MTTR or MTBF
From Jim Gray's talk at UC Berkeley on Fault Tolerance, 11/9/00
72How Realistic is "5 Nines"?
- HP claims HP-9000 server HW and HP-UX OS can deliver a 99.999% availability guarantee in certain pre-defined, pre-tested customer environments
- Application faults?
- Operator faults?
- Environmental faults?
- Collocation sites (lots of computers in 1 building on the Internet) have:
- 1 network outage per year (~1 day)
- 1 power failure per year (~1 day)
- Microsoft Network was unavailable recently for a day due to a problem in the Domain Name Server; if that is the only outage per year, that is 99.7%, or 2 Nines
73Summary Dependability
- Fault => Latent errors in system => Failure in service
- Reliability: quantitative measure of time to failure (MTTF)
- Assuming exponentially distributed, independent failures, we can calculate the system MTTF from the MTTF of the components
- Availability: quantitative measure of time delivering desired service
- Can improve Availability via greater MTTF or smaller MTTR (such as using standby spares)
- No single point of failure is a good hardware guideline, as everything can fail
- Components often fail slowly
- Real systems: problems in maintenance and operation as well as hardware and software
75Introduction to Queueing Theory
[Figure: black box with arrivals entering and departures leaving]
- More interested in long term, steady state than in startup => Arrivals = Departures
- Little's Law: Mean number of tasks in system = arrival rate x mean response time
- Observed by many, Little was the first to prove it
- Applies to any system in equilibrium, as long as nothing in the black box is creating or destroying tasks
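Little's Law is easy to apply by direct substitution; the arrival rate and response time below are made-up example values.

```python
# Little's Law: mean number of tasks in system = arrival rate x mean response time (sketch).
arrival_rate = 50.0          # tasks per second (made-up example)
mean_response_time = 0.04    # seconds per task (made-up example)
print(arrival_rate * mean_response_time)   # 2.0 tasks in the system on average
```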
76A Little Queuing Theory Notation
- Queuing models assume a state of equilibrium: input rate = output rate
- Notation:
- r = average number of arriving customers/second
- Tser = average time to service a customer (traditionally µ = 1/Tser)
- u = server utilization (0..1): u = r x Tser (or u = r / µ)
- Tq = average time/customer in queue
- Tsys = average time/customer in system: Tsys = Tq + Tser
- Lq = average length of queue: Lq = r x Tq
- Lsys = average length of system: Lsys = r x Tsys
- Little's Law: Length_system = rate x Time_system (mean number of customers = arrival rate x mean time in system)
77A Little Queuing Theory
- Service time completions vs. waiting time for a busy server: a randomly arriving event joins a queue of arbitrary length when the server is busy, otherwise it is serviced immediately
- Unlimited-length queues are a key simplification
- A single server queue: the combination of a servicing facility that accommodates 1 customer at a time (the server) and a waiting area (the queue); together they are called a system
- Server spends a variable amount of time with customers; how do you characterize variability?
- Distribution of a random variable: histogram? curve?
78A Little Queuing Theory
- Server spends a variable amount of time with customers
- Weighted mean m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F, where F = f1 + f2 + ...
- variance = (f1 x T1^2 + f2 x T2^2 + ... + fn x Tn^2)/F - m1^2
- Must keep track of unit of measure (100 ms^2 vs. 0.1 s^2)
- Squared coefficient of variance: C = variance/m1^2
- Unitless measure (100 ms^2 vs. 0.1 s^2 give the same C)
- Exponential distribution: C = 1; most are short relative to average, a few others are long; 90% < 2.3 x average, 63% < average
- Hypoexponential distribution: C < 1; most are close to average; C = 0.5 => 90% < 2.0 x average, only 57% < average
- Hyperexponential distribution: C > 1; further from average; C = 2.0 => 90% < 2.8 x average, 69% < average
79A Little Queuing Theory Variable Service Time
- Server spends a variable amount of time with customers
- Weighted mean m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F, where F = f1 + f2 + ...
- Squared coefficient of variance C
- Disk response times: C ~ 1.5 (majority of seeks < average)
- Yet we usually pick C = 1.0 for simplicity
- Another useful value is the average time one must wait for the server to complete the task in progress: m1(z)
- Not just 1/2 x m1, because that doesn't capture the variance
- Can derive m1(z) = 1/2 x m1 x (1 + C)
- No variance => C = 0 => m1(z) = 1/2 x m1
80A Little Queuing Theory: Average Wait Time
- Calculating average wait time in queue, Tq:
- If something is at the server, it takes on average m1(z) to complete
- Chance the server is busy = u; average delay is u x m1(z)
- All customers in line must complete, each taking on average Tser
- Tq = u x m1(z) + Lq x Tser = 1/2 x u x Tser x (1 + C) + Lq x Tser
- Tq = 1/2 x u x Tser x (1 + C) + r x Tq x Tser
- Tq = 1/2 x u x Tser x (1 + C) + u x Tq
- Tq x (1 - u) = Tser x u x (1 + C) / 2
- Tq = Tser x u x (1 + C) / (2 x (1 - u))
- Notation:
- r = average number of arriving customers/second
- Tser = average time to service a customer
- u = server utilization (0..1): u = r x Tser
- Tq = average time/customer in queue
- Lq = average length of queue: Lq = r x Tq
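The closed form at the end of the derivation can be wrapped in a small helper for the examples that follow; this is an illustrative sketch whose names mirror the slide's notation (r, Tser, u, C, Tq).

```python
# Mean wait in queue: Tq = Tser x u x (1 + C) / (2 x (1 - u))   (sketch)
def queue_stats(r, tser, c=1.0):
    """r = arrivals/sec, tser = service time in seconds, c = squared coefficient of variance."""
    u = r * tser                               # server utilization; must be < 1 for equilibrium
    tq = tser * u * (1 + c) / (2 * (1 - u))    # mean time waiting in the queue
    tsys = tq + tser                           # mean time in the system (response time)
    return {"u": u, "Tq": tq, "Tsys": tsys, "Lq": r * tq, "Lsys": r * tsys}
```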
81A Little Queuing Theory M/G/1 and M/M/1
- Assumptions so far:
- System in equilibrium
- Time between two successive arrivals in line is random
- Server can start on the next customer immediately after the prior one finishes
- No limit to the queue: works First-In-First-Out
- Afterward, all customers in line must complete, each taking on average Tser
- Described as memoryless or Markovian request arrival (M for C = 1, exponentially random), General service distribution (no restrictions), 1 server: M/G/1 queue
- When service times have C = 1, M/M/1 queue: Tq = Tser x u x (1 + C) / (2 x (1 - u)) = Tser x u / (1 - u)
- Tser = average time to service a customer; u = server utilization (0..1): u = r x Tser; Tq = average time/customer in queue
82A Little Queuing Theory An Example
- Processor sends 10 x 8 KB disk I/Os per second; requests and service are exponentially distributed; avg. disk service time = 20 ms
- On average, how utilized is the disk?
- What is the number of requests in the queue?
- What is the average time spent in the queue?
- What is the average response time for a disk request?
- Notation:
- r = average number of arriving customers/second = 10
- Tser = average time to service a customer = 20 ms (0.02 s)
- u = server utilization (0..1): u = r x Tser = 10/s x 0.02 s = 0.2
- Tq = average time/customer in queue = Tser x u / (1 - u) = 20 x 0.2/(1 - 0.2) = 20 x 0.25 = 5 ms (0.005 s)
- Tsys = average time/customer in system: Tsys = Tq + Tser = 25 ms
- Lq = average length of queue: Lq = r x Tq = 10/s x 0.005 s = 0.05 requests in queue
- Lsys = average tasks in system: Lsys = r x Tsys = 10/s x 0.025 s = 0.25
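Plugging this example's numbers into the illustrative queue_stats sketch above reproduces the slide's answers (C = 1 for exponential service times).

```python
# 10 disk I/Os per second, 20 ms average service time, C = 1 (exponential service).
print(queue_stats(r=10, tser=0.020, c=1.0))
# -> u = 0.2, Tq = 5 ms, Tsys = 25 ms, Lq = 0.05, Lsys = 0.25 (matching the slide)
```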
83A Little Queuing Theory Another Example
- Processor sends 20 x 8 KB disk I/Os per second; requests and service are exponentially distributed; avg. disk service time = 12 ms
- On average, how utilized is the disk?
- What is the number of requests in the queue?
- What is the average time spent in the queue?
- What is the average response time for a disk request?
- Notation:
- r = average number of arriving customers/second = 20
- Tser = average time to service a customer = 12 ms
- u = server utilization (0..1): u = r x Tser = __/s x __ s = __
- Tq = average time/customer in queue = Tser x u / (1 - u) = __ x __/(__) = __ x __ = __ ms
- Tsys = average time/customer in system: Tsys = Tq + Tser = 16 ms
- Lq = average length of queue: Lq = r x Tq = __/s x __ s = __ requests in queue
- Lsys = average tasks in system: Lsys = r x Tsys = __/s x __ s = __
84A Little Queuing Theory Another Example
- Processor sends 20 x 8 KB disk I/Os per second; requests and service are exponentially distributed; avg. disk service time = 12 ms
- On average, how utilized is the disk?
- What is the number of requests in the queue?
- What is the average time spent in the queue?
- What is the average response time for a disk request?
- Notation:
- r = average number of arriving customers/second = 20
- Tser = average time to service a customer = 12 ms
- u = server utilization (0..1): u = r x Tser = 20/s x 0.012 s = 0.24
- Tq = average time/customer in queue = Tser x u / (1 - u) = 12 x 0.24/(1 - 0.24) = 12 x 0.32 = 3.8 ms
- Tsys = average time/customer in system: Tsys = Tq + Tser = 15.8 ms
- Lq = average length of queue: Lq = r x Tq = 20/s x 0.0038 s = 0.076 requests in queue
- Lsys = average tasks in system: Lsys = r x Tsys = 20/s x 0.016 s = 0.32
85A Little Queuing Theory: Yet Another Example
- Suppose the processor sends 10 x 8 KB disk I/Os per second; squared coefficient of variance C = 1.5; avg. disk service time = 20 ms
- On average, how utilized is the disk?
- What is the number of requests in the queue?
- What is the average time spent in the queue?
- What is the average response time for a disk request?
- Notation:
- r = average number of arriving customers/second = 10
- Tser = average time to service a customer = 20 ms
- u = server utilization (0..1): u = r x Tser = 10/s x 0.02 s = 0.2
- Tq = average time/customer in queue = Tser x u x (1 + C) / (2 x (1 - u)) = 20 x 0.2 x 2.5 / (2 x (1 - 0.2)) = 20 x 0.3125 = 6.25 ms
- Tsys = average time/customer in system: Tsys = Tq + Tser = 26 ms
- Lq = average length of queue: Lq = r x Tq = 10/s x 0.006 s = 0.06 requests in queue
- Lsys = average tasks in system: Lsys = r x Tsys = 10/s x 0.026 s = 0.26
86Pitfall of Not using Queuing Theory
- 1st 32-bit minicomputer (VAX-11/780)
- How big should write buffer be?
- Stores are 10% of instructions; 1 MIPS
- Buffer size = 1
- => Avg. Queue Length = 1 vs. low response time