A System Perspective: Processors, Memory and Input/Output

Transcript and Presenter's Notes

1
A System Perspective: Processors, Memory and Input/Output
  • Last Time
  • Interrupt-driven Input/Output
  • Direct Memory Access, slave processors, block
    Input/Output
  • This Time
  • A perspective on the entire computer
  • Relative speeds, rates, and relationships
  • Reminders/Announcements
  • HW 4 due Friday 6/4 (in class)
  • Course review on Friday
  • Final Exam, 6/7/99, 8-11am
  • A - O Center 113
  • P - S Center 217A
  • T - Z Center 217B

2
Input/Output Perspective
  • How does all of this really fit together?
  • Bus arbitration, interrupts, and DMA
  • Typically, memory and I/O buses are multi-master
  • Arbitration precedes all of the major bus
    activity (a toy arbiter sketch follows this list)
  • Let's trace the lifetime of a computation, and
    then of an I/O operation
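
A minimal sketch of what "arbitration" means here, assuming a
simple fixed-priority scheme (real buses may use daisy-chaining or
round-robin instead); the function and master IDs are invented for
illustration:

    # Hypothetical fixed-priority arbiter: of all masters currently
    # requesting the bus, the lowest-numbered one is granted it.
    def arbitrate(requesting_masters):
        return min(requesting_masters) if requesting_masters else None

    # CPU = master 0, DMA controller = master 1, both request at once:
    print(arbitrate([1, 0]))   # -> 0, so the CPU gets the bus first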

3
Basic Computation
  • The processor computes out of its cache -- ideally
    many hits and local computation
  • On a cache miss, it must go out over the memory
    bus
  • 1st, it arbitrates for the bus
  • 2nd, it accesses the memory
  • 3rd, the memory arbitrates for the bus (if split
    transaction; otherwise the bus is held during the
    memory latency)
  • 4th, the data is returned
  • Bus arbitration increases the cache miss penalty
    (rough arithmetic below)
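
A rough miss-penalty calculation, just to show where arbitration
fits in; all of the cycle counts below are illustrative assumptions,
not figures from the lecture:

    # Illustrative cache-miss penalty at a 500 MHz (2 ns) clock.
    arbitration = 5     # cycles to win the memory bus (assumed)
    address     = 1     # cycle to drive the address (assumed)
    dram        = 25    # 50 ns DRAM access = 25 cycles at 2 ns/cycle
    transfer    = 4     # cycles to return the cache block (assumed)
    print(arbitration + address + dram + transfer)   # 35 cycles total,
                                                     # 5 of them arbitration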

4
Basic Input/Output
  • Polled I/O -- as before, the processor arbitrates
    for the bus and mediates everything (sketched
    after this list)
  • Interrupt-driven I/O -- basically as before
  • DMA based I/O -- the Direct memory access
    controller must be able to master the bus, to
    transfer the data to memory
  • Both processor and DMA controller must arbitrate
    for the bus before using it.
  • Both must release the bus in a reasonable amount
    of time to avoid long waits.
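
A toy sketch of polled I/O, to contrast with DMA: the processor
itself busy-waits and copies every word, so it ties up the bus for
the whole transfer. FakeDevice and its methods are invented for
illustration, not a real device interface:

    # Polled I/O: the CPU loops on a status flag and copies one word at
    # a time; with DMA, this loop runs in the DMA controller instead.
    class FakeDevice:
        def __init__(self, words): self.words = list(words)
        def ready(self):           return bool(self.words)
        def data(self):            return self.words.pop(0)

    def polled_read(dev, n):
        buf = []
        while len(buf) < n:
            while not dev.ready():    # busy-wait: each check is a bus access
                pass
            buf.append(dev.data())    # each word crosses the bus via the CPU
        return buf

    print(polled_read(FakeDevice([10, 20, 30]), 3))   # [10, 20, 30]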

5
Entire DMA based I/O Operation
  • Processor arbitrates, writes I/O Control regs,
    I/O begins
  • I/O interrupts processor, Processor arbitrates,
    writes DMA control regs
  • DMA arbitrates, moves data into memory
  • DMA interrupts processor, processor arbitrates,
    updates memory structures
  • Arbitration is buried in many of these steps -- it
    must be done fast (the whole sequence is sketched
    below)
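
The same sequence as toy Python, just to make the repeated
arbitrations visible; the Bus class and the print statements are
stand-ins made up for illustration, not a real driver interface:

    # Each acquire() below stands for one bus arbitration in the
    # sequence above.
    class Bus:
        def acquire(self): print("arbitrate for bus")
        def release(self): print("release bus")

    def dma_input(bus):
        bus.acquire(); print("write I/O control regs");   bus.release()
        print("... device interrupts when its data is ready ...")
        bus.acquire(); print("write DMA control regs");   bus.release()
        print("... DMA arbitrates, moves data to memory, interrupts ...")
        bus.acquire(); print("update memory structures"); bus.release()

    dma_input(Bus())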

6
Input/Output Summary
  • Processor-mediated Input/Output
  • Interrupt-driven Input/Output
  • DMA based Input/Output
  • Most of this is done in the operating systems
    (kernel and device drivers)
  • In a few low-level systems (Macs, PCs), some of
    this can be done by the user (if desired).
  • => Major reason for PC hardware compatibility
  • Early MS-DOS I/O was implemented poorly (slow)
  • Applications bypassed the OS, to get better
    performance
  • Now, they depend critically on the underlying
    hardware

7
System Perspective
  • Computer elements
  • Relative Speeds and Rates
  • Where are the bottlenecks?

8
Processing Speeds
  • Pipelined Instruction processor
  • Built from extremely fast (and small) silicon
    transistors
  • Executes at 500 MHz, with multiple instruction
    issue
  • 2x10^9 instructions per second (billions) -- see
    the arithmetic below
  • ... except, sometimes it has to access the memory
    ...
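
The peak rate above, worked out; the 4-wide issue width is an
assumption (the slide only says "multiple issue"):

    # Peak instruction rate: 500 MHz clock x 4 instructions per cycle.
    clock_hz    = 500e6
    issue_width = 4                  # assumed issue width
    print(clock_hz * issue_width)    # 2e9 = 2x10^9 instructions/second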

9
Memory Speeds
  • Memory speed varies with size and cost
  • The fastest memory matches the processor's speed
  • Bulk memory is much slower -- 100s of megabytes of
    storage
  • 50 ns typical access time, 20 million accesses
    per second
  • There must be 100s to 1000s of instructions of
    useful work per slow memory access to achieve good
    performance
  • i.e., lots of cache hits, and a system bandwidth
    of about 1 GB/s (see the arithmetic below)
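
The arithmetic behind the "lots of cache hits" requirement, using
the 2x10^9 instructions/s figure from the previous slide:

    # One 50 ns DRAM access costs the time of ~100 peak-rate
    # instructions, so misses must be rare to keep the processor busy.
    peak_ips = 2e9                    # from the previous slide
    access_s = 50e-9                  # 50 ns
    print(1 / access_s)               # ~2e7 = 20 million accesses/second
    print(peak_ips * access_s)        # ~100 instruction times per access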

10
Input/Output Speeds
  • I/O devices are generally larger (in capacity)
  • Wide variance in properties, but all much slower
  • Disks -> 10 ms seeks, roughly 100 operations per
    second; at least ten million instruction times per
    disk input/output operation
  • Monitor (frame buffer) -> 60 Hz refresh; tens of
    millions of instructions between screen refreshes
  • Keyboard -> at most ~10 Hz; hundreds of millions
    of instructions per keystroke (these budgets are
    worked out below)
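
The instruction-time budgets above, computed at the peak rate of
2x10^9 instructions/s:

    # Instructions' worth of time available per I/O event.
    peak_ips = 2e9
    for event, period_s in [("disk seek",      10e-3),
                            ("screen refresh", 1 / 60),
                            ("keystroke",      1 / 10)]:
        print(f"{event}: {peak_ips * period_s:.1e} instruction times")
    # disk seek: 2.0e+07, screen refresh: 3.3e+07, keystroke: 2.0e+08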

11
Input/Output Perspective
  • Input/Output events are usually extremely slow and
    rare -- not glamorous, the poor stepchild
  • Handling them efficiently is important to free
    the processor to do other work.
  • Problem: processors and memories are scaling up
    in performance much faster than input/output
    devices (disks, in particular)
  • Many systems / computations are becoming
    input/output bound. Further, many new
    applications are I/O intensive.
  • High performance I/O is a hot topic of research!
  • parallelism, prefetching, caching, storage
    capacity

12
Relative Time (area)
  [Figure: the relative times of one instruction, one
   memory access, and one average disk seek (access),
   drawn to scale as areas]

13
Parallel Input/Output
[Figure: banks of parallel disks and parallel tapes]
  • Parallelism for higher bandwidth and lower
    transfer latency
  • How to organize data across the parallel units?
  • Prefetching (anticipating to reduce latency)
  • Caching (same idea as before), but over petabytes
    (10^15 bytes) of storage
  • a terabyte of disk costs only ~$25K, with
    aggregate transfer bandwidth of ~1 GB/s (see
    below)
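
A rough sketch of how parallel disks reach the 1 GB/s aggregate
figure, plus the simplest answer to the data-organization question
(striping). The 10 MB/s per-disk rate and the 100-disk count are
assumptions for the period, not numbers from the slide:

    # Aggregate bandwidth from parallelism (per-disk rate is assumed).
    per_disk_MB_per_s = 10
    n_disks           = 100
    print(per_disk_MB_per_s * n_disks, "MB/s")   # 1000 MB/s ~ 1 GB/s

    # One common way to organize data across the parallel units is
    # striping: consecutive blocks go to consecutive disks.
    def disk_for_block(block_number, n_disks=100):
        return block_number % n_disks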

14
Multi-media Servers
  • Audio servers
  • Answering machines, Voice manipulation
  • Video servers
  • Picture phones (answering machines)
  • Video editing and general manipulation
  • How much data is in a single NTSC signal?
  • ~100 Mbits/sec; HDTV? 2-4x that
  • A 2-hour movie on NTSC?
  • ~100 GB uncompressed, ~2 GB compressed
  • HDTV?
  • ~400 GB uncompressed, ~8 GB compressed
  • Data volumes are HUGE -- these will be I/O
    intensive systems (arithmetic below)
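
The movie-size arithmetic behind those numbers:

    # A 2-hour NTSC movie at ~100 Mbit/s, uncompressed.
    bits_per_second = 100e6
    seconds         = 2 * 3600
    print(bits_per_second * seconds / 8 / 1e9)   # 90 GB, i.e. roughly the
    # slide's ~100 GB; HDTV at 2-4x gives 200-400 GB, and compression
    # brings these down to the ~2 GB and ~8 GB figures above.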

15
Video on Demand (Blockbuster in your dorm room)
[Figure: a video or multimedia server attached to a
 high-speed network, backed by parallel disks and
 parallel tapes]
  • How do we build a system to support 100
    simultaneous video streams? 1000? (bandwidth
    arithmetic below)
  • What about when everyone wants to watch at the
    same time?
  • What if everyone wants to watch the same movie?
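
A back-of-the-envelope bandwidth estimate for those stream counts,
assuming each compressed NTSC stream is the earlier slide's 2 GB
spread over 2 hours:

    # Per-stream rate for a 2 GB, 2-hour compressed movie, scaled up.
    stream_bits_per_s = 2e9 * 8 / (2 * 3600)     # ~2.2 Mbit/s per viewer
    for viewers in (100, 1000):
        print(viewers, "streams:",
              round(viewers * stream_bits_per_s / 1e6), "Mbit/s")
    # 100 streams: 222 Mbit/s; 1000 streams: 2222 Mbit/s -- the disks,
    # memory system, and network all have to sustain this continuously,
    # though identical streams of the same movie can share buffers.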

16
Large Scale Web Servers
[Figure: a WWW server with the same structure -- a
 high-speed network front end, backed by parallel disks
 and parallel tapes]
  • Serves multimedia, but perhaps less predictably
    and in smaller chunks
  • What do these workloads look like? Hot pages,
    fresh news, free software?
  • The next generation of machine design problems
    appears to be I/O intensive and information
    intensive, NOT compute intensive...

17
Next time....
  • Whirlwind review of the course