1 CS 351 Computer Architecture
- Instructor: Bala Ravikumar (Ravi), 116 I Darwin Hall
- Office hours: TBA
2 Catalog Description
Lecture, 4 hours. Instruction set design; stages of instruction execution; data and control path design; CISC, RISC, and stack architectures; pipelining; program optimization techniques; memory hierarchy: cache models and design issues; virtual memory and secondary storage; I/O interfacing; advanced topics to include some of the following: parallel architectures, DSP or other special-purpose architectures, FPGAs, reconfigurable architectures, asynchronous circuit design. Prerequisite: CS 215 and CS 252, or consent of instructor.
3 Course Goals
- Computer architecture deals with the functionality of all the major components of a computer:
  - instruction set architecture
  - ALU
  - control and data paths
  - cache and main memory
  - I/O
  - network and bus interconnection
- The main focus will be on the following topics:
  - performance measures of computer systems
  - MIPS assembly language
  - computer arithmetic and ALU circuit design
  - CPU design
  - pipelining
  - cache memory
  - multicore CPUs and parallel programming (as time permits)
4 Text
- Computer Organization and Design: The Hardware/Software Interface, 4th edition, Morgan Kaufmann Publishers (ISBN 978-0-12-374493-7).
- Other references:
  - Parhami, Behrooz, Computer Architecture: From Microprocessors to Supercomputers, Oxford University Press, 556 + xx pp., February 2005 (ISBN 0-19-515455-X).
  - M. Murdocca and V. Heuring, Computer Architecture, Prentice-Hall.
  - W. Stallings, Computer Organization and Architecture: Designing for Performance, Prentice-Hall.
5 Assigned Work and Evaluation
- Quizzes (10-15%): There will be a quiz every class, lasting 10 to 15 minutes.
- Two mid-term tests (20-25%): Both tests will be in class and will be about 100 minutes long. Each test will have a closed-book and an open-book section.
- Homework assignments (20-30%): These will include some implementation (in MIPS assembly language, a hardware description language, etc.) as well as other design and problem-solving exercises.
- Final examination (35-40%): The final examination will be comprehensive and will have a closed-book and an open-book section.
6 Useful online sources
- http://www.cs.wisc.edu/arch/www/
- http://www.cs.utexas.edu/users/dburger/teaching/cs382m-f03/homework/papers.html
7 History of computer architecture
- The abacus is probably more than 3000 years old.
- It is still in use in many countries in Asia, Africa and the Middle East. (Some people can calculate faster with it than with a calculator.)
- It is also used by visually impaired people.
8 Pascal's work on calculating machines
- Pascal began work on his calculator in 1642, when he was only 19 years old. He had been assisting his father, who worked as a tax commissioner, and sought to produce a device that could reduce some of his workload.
- By 1652 Pascal had produced fifty prototypes and sold just over a dozen machines, but the cost and complexity of the Pascaline, combined with the fact that it could only add and subtract (the latter with difficulty), were a barrier to further sales, and production ceased in that year.
9 Babbage's calculators
- Babbage built the difference engine.
- It had all the features seen in modern computers, with means for:
  - reading input data,
  - storing data,
  - performing calculations,
  - producing output, and
  - automatically controlling the operations of the machine.
- Babbage's analytical engine is a model of a general-purpose computer, but it was too complex to be built in his lifetime.
10 Turing's work on computers
- The process of encryption used by Enigma (a German code) had been known for a long time, but decoding was a much harder task.
- Alan Turing, a British mathematician, and others in England built an electromechanical machine to decode the messages sent by Enigma.
- The Colossus was a successful code-breaking machine that came out of Turing's research.
- Colossus had all the features of an electronic computer:
  - vacuum tubes stored the contents of a paper tape that was fed into the machine;
  - computations took place among the vacuum tubes, and programming was performed with plug boards.
11 Shannon's work on digital computers
- Shannon showed in his Master's thesis (at MIT) how to use Boolean algebra to design circuits that realize any Boolean function.
- Shannon also developed information theory, which formed the basis for error-correcting codes. This concept became a central idea in networking and storage devices.
- Watch this YouTube video clip on Shannon:
- http://www.youtube.com/watch?v=z2Whj_nL-x8&feature=PlayList&p=688DB457F0B24F42&playnext=1&playnext_from=PL&index=12
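Shannon's observation can be made concrete with a small sketch (my own illustration, not from the slides): any Boolean function given by its truth table can be built from AND, OR and NOT gates via its sum-of-products form.

```python
from itertools import product

def sop_circuit(truth_table):
    """Realize a Boolean function (given as a truth table mapping input
    tuples to 0/1) using only AND, OR and NOT logic: OR together one
    AND-term (minterm) per input combination that outputs 1."""
    minterms = [inp for inp, out in truth_table.items() if out == 1]

    def circuit(*bits):
        # Each minterm ANDs the inputs, negating those that should be 0.
        return int(any(all(b if v else not b for b, v in zip(bits, m))
                       for m in minterms))
    return circuit

# Example: the 3-input majority function, built purely from its truth table.
majority_table = {bits: int(sum(bits) >= 2) for bits in product((0, 1), repeat=3)}
majority = sop_circuit(majority_table)
print(majority(1, 1, 0))  # -> 1
print(majority(1, 0, 0))  # -> 0
```

The same construction works for any number of inputs, which is exactly the "any Boolean function" claim in Shannon's thesis.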
12 ENIAC
- Built at the University of Pennsylvania around 1940.
- ENIAC consisted of 18,000 vacuum tubes, which made up the computing section of the machine; programming and data entry were performed by setting switches.
- There was no concept of a stored program.
- But these were not serious limitations, since ENIAC was intended to do special-purpose calculations.
- ENIAC was not ready until the war was over, but it was successfully used for nine years after the war (1946-1955).
13 Chapter 1
- Computer Abstractions and Technology
14 The Computer Revolution
1.1 Introduction
- Progress in computer technology
  - Underpinned by Moore's Law: every two years, the number of transistors on an IC doubles.
- Makes novel applications feasible:
  - Computers in automobiles: pollution control, ABS, airbags, etc.
  - Cell phones
  - Human Genome Project
  - World Wide Web: about 1 billion web sites, 10 billion web pages
  - Search engines
- Computers are pervasive.
15 Moore's Law
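A doubling every two years compounds quickly; the short sketch below (my own numbers, not from the slides) projects a transistor count forward under that assumption, starting from the Intel 4004's roughly 2,300 transistors in 1971.

```python
def projected_transistors(count_now, years):
    """Project a transistor count forward, assuming a doubling every
    2 years (Moore's Law): multiply by 2**(years / 2)."""
    return count_now * 2 ** (years / 2)

# 2,300 transistors (Intel 4004, 1971) projected 40 years ahead:
print(f"{projected_transistors(2_300, 40):,.0f}")  # -> 2,411,724,800
```

Forty years of doubling turns a few thousand transistors into a few billion, which is roughly what chips of the early 2010s actually carried.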
16 Classes of Computers
- Desktop computers
  - General purpose, variety of software
  - Subject to cost/performance tradeoff
- Server computers
  - Network based
  - High capacity, performance, reliability
  - Range from small servers to building-sized
- Embedded computers
  - Hidden as components of systems
  - Stringent power/performance/cost constraints
17 The Processor Market
18 What You Will Learn
- How programs are translated into machine language
  - And how the hardware executes them
- The hardware/software interface
- What determines program performance
  - And how it can be improved
- How hardware designers improve performance
- What parallel processing is
19 Understanding Performance
- Algorithm
  - Determines the number of operations executed
- Programming language, compiler, architecture
  - Determine the number of machine instructions executed per operation
- Processor and memory system
  - Determine how fast instructions are executed
- I/O system
  - Determines how fast I/O operations are executed
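The processor-side factors combine in the classic CPU-time equation from the course textbook: CPU time = instruction count x CPI x clock cycle time. The numbers below are made up for illustration.

```python
def cpu_time_seconds(instruction_count, cpi, clock_rate_hz):
    """CPU time = (instructions x average cycles per instruction) / clock rate."""
    return instruction_count * cpi / clock_rate_hz

# A 10-billion-instruction program with an average CPI of 2.0
# running on a 4 GHz processor:
t = cpu_time_seconds(10e9, 2.0, 4e9)
print(f"{t:.1f} s")  # -> 5.0 s
```

The equation shows why all three layers above matter: the compiler and algorithm set the instruction count, the architecture sets CPI, and the technology sets the clock rate.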
20 Below Your Program
1.2 Below Your Program
- Application software
  - Written in a high-level language
- System software
  - Compiler: translates HLL code to machine code
  - Operating system: service code for
    - handling input/output
    - managing memory and storage
    - scheduling tasks and sharing resources
- Hardware
  - Processor, memory, I/O controllers
21 Levels of Program Code
- High-level language
  - Level of abstraction closer to the problem domain
  - Provides for productivity and portability
- Assembly language
  - Textual representation of instructions
- Hardware representation
  - Binary digits (bits)
  - Encoded instructions and data
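To make the three levels concrete: a C statement like `a = b + c;` might compile to the MIPS instruction `add $t0, $s1, $s2`, which the hardware sees as 32 encoded bits. The sketch below packs the standard MIPS R-format fields into a word (this worked example is mine, not from the slides).

```python
def encode_r_type(opcode, rs, rt, rd, shamt, funct):
    """Pack the six MIPS R-format fields (6+5+5+5+5+6 bits) into one
    32-bit instruction word."""
    return (opcode << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add $t0, $s1, $s2  ->  opcode 0, rs=$s1(17), rt=$s2(18), rd=$t0(8), funct 0x20
word = encode_r_type(0, 17, 18, 8, 0, 0x20)
print(f"{word:#010x}")  # -> 0x02324020
```

Reading the hex back as six bit fields recovers the assembly form, which is exactly the "textual representation of instructions" relationship above.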
22 Components of a Computer
1.3 Under the Covers
- Same components for all kinds of computers
  - Desktop, server, embedded
- Input/output includes:
  - User-interface devices (display, keyboard, mouse)
  - Storage devices (hard disk, CD/DVD, flash)
  - Network adapters (for communicating with other computers)
The BIG Picture