1
Lecture 1: Flynn's Taxonomy
2
The Global View of Computer Architecture
[Diagram: computer architecture (instruction set design, organization, hardware/software boundary) at the center, shaped by applications, parallelism, history, and technology, and interacting with programming languages, operating systems, compilers, measurement and evaluation, and interface design (ISA)]
3
The Task of A Computer Designer
  • Determine what attributes are important for a
    new machine.
  • Design a machine to maximize performance while
    staying within cost constraints.

4
Flynn's Taxonomy
  • Michael Flynn (from Stanford)
  • Characterized computer systems in a scheme that
    became known as Flynn's Taxonomy

5
Flynn's Taxonomy
  • SISD: Single Instruction, Single Data Systems
[Diagram: one instruction stream (SI) driving a single processing element (SISD) on one data stream (SD)]
6
Flynn's Taxonomy
  • SIMD: Single Instruction, Multiple Data Systems
    (Array Processors)
[Diagram: one instruction stream (SI) broadcast to several processing elements, each operating on its own data stream (SD)]
7
Flynn's Taxonomy
  • MIMD: Multiple Instruction, Multiple Data Systems
    (Multiprocessors)
[Diagram: several processing elements, each with its own instruction stream (SI) and its own data stream (SD)]
8
Flynn's Taxonomy
  • MISD: Multiple Instruction, Single Data Systems
  • Some people say pipelining lies here, but this
    is debatable.
[Diagram: several instruction streams (SI) applied to a single data stream (SD)]
9
Abbreviations (SISD one-address machine)
  • IP: Instruction Pointer
  • MAR: Memory Address Register
  • MDR: Memory Data Register
  • A: Accumulator
  • ALU: Arithmetic Logic Unit
  • IR: Instruction Register
  • OP: Opcode
  • ADDR: Address

[Diagram: one-address machine with IP, MAR, MDR, IR (OP/ADDR), decoder, ALU, accumulator A, and memory]
10
  • LOAD X
  • MAR ← IP
  • MDR ← M[MAR] ; IP ← IP + 1
  • IR ← MDR
  • DECODER ← IR.OP
  • MAR ← IR.ADDR
  • MDR ← M[MAR]
  • A ← MDR

One-address instruction format:  | OP | ADDRESS |
[Diagram: one-address machine, as above]
11

One address format
  • ADD X
  • MAR ← IP
  • MDR ← M[MAR] ; IP ← IP + 1
  • IR ← MDR
  • DECODER ← IR.OP
  • MAR ← IR.ADDR
  • MDR ← M[MAR]
  • A ← A + MDR

[Diagram: one-address machine, as above]
12

One address format
  • STORE X
  • MAR ← IP
  • MDR ← M[MAR] ; IP ← IP + 1
  • IR ← MDR
  • DECODER ← IR.OP
  • MAR ← IR.ADDR
  • MDR ← A
  • M[MAR] ← MDR

[Diagram: one-address machine, as above]
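A minimal Python sketch of the LOAD/ADD/STORE register transfers above. The (opcode, address) instruction encoding and the HALT opcode are assumptions added so the loop terminates; they are not part of the slides.

    # One-address accumulator machine: IP, MAR, MDR, IR, A, and a single memory.
    def run(memory, start=0):
        ip, a = start, 0                               # instruction pointer, accumulator
        while True:
            mar = ip                                   # MAR <- IP
            mdr = memory[mar]; ip = ip + 1             # MDR <- M[MAR] ; IP <- IP + 1
            ir = mdr                                   # IR <- MDR
            op, addr = ir                              # DECODER <- IR.OP ; IR.ADDR
            if op == "LOAD":                           # A <- M[ADDR]
                mar = addr; mdr = memory[mar]; a = mdr
            elif op == "ADD":                          # A <- A + M[ADDR]
                mar = addr; mdr = memory[mar]; a = a + mdr
            elif op == "STORE":                        # M[ADDR] <- A
                mar = addr; mdr = a; memory[mar] = mdr
            elif op == "HALT":                         # assumed stop instruction
                return memory

    # Usage: M[12] <- M[10] + M[11]
    mem = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", 0),
           10: 5, 11: 7, 12: 0}
    print(run(mem)[12])   # prints 12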
13
SISD Stack Machine
  • One of the first stack machines: the B5000

[Diagram: stack machine with IP, MAR, MDR, IR (OP/ADDR), decoder, ALU, memory, and a four-entry stack ST1 through ST4]
14
PUSH: ST4 ← ST3 || ST3 ← ST2 || ST2 ← ST1 || ST1 ← MDR (the old ST4 value is lost)
(|| means in parallel)

LOAD X:
  MAR ← IP
  MDR ← M[MAR] ; IP ← IP + 1
  IR ← MDR
  DECODER ← IR.OP
  MAR ← IR.ADDR
  MDR ← M[MAR]
  PUSH
[Diagram: stack machine, as above]
15
  • POP
  • MDR ← ST1
  • ST1 ← ST2
  • ST2 ← ST3
  • ST3 ← ST4
  • ST4 ← 0

(|| means in parallel)
STORE X:
  MAR ← IP
  MDR ← M[MAR] ; IP ← IP + 1
  IR ← MDR
  DECODER ← IR.OP
  MAR ← IR.ADDR
  POP
  M[MAR] ← MDR
[Diagram: stack machine, as above]
16
Zero-address instruction format:  | OP | (address field not used) |
  • ADD
  • MAR ← IP
  • MDR ← M[MAR] ; IP ← IP + 1
  • IR ← MDR
  • DECODER ← IR.OP
  • ST2 ← ST1 + ST2
  • ST1 ← ST2
  • ST2 ← ST3
  • ST3 ← ST4
  • ST4 ← 0

[Diagram: stack machine, as above]
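A minimal Python sketch of this stack machine. The loadi (load immediate) and store helpers are assumptions standing in for the Loadi/Store X instructions used in the trace on the following slides; the stack itself behaves exactly as the PUSH, POP, and zero-address ADD register transfers above.

    class StackMachine:
        def __init__(self):
            self.st = [0, 0, 0, 0]            # ST1, ST2, ST3, ST4
            self.memory = {}

        def push(self, mdr):
            # old ST4 is lost; ST4<-ST3, ST3<-ST2, ST2<-ST1, ST1<-MDR
            self.st = [mdr] + self.st[:3]

        def pop(self):
            mdr = self.st[0]                  # MDR <- ST1
            self.st = self.st[1:] + [0]       # ST1<-ST2, ST2<-ST3, ST3<-ST4, ST4<-0
            return mdr

        def loadi(self, value):               # assumed: push an immediate operand
            self.push(value)

        def add(self):                        # zero-address ADD
            self.st[1] = self.st[0] + self.st[1]   # ST2 <- ST1 + ST2
            self.st = self.st[1:] + [0]            # then pop the stack

        def store(self, addr):                # STORE X: M[X] <- popped top of stack
            self.memory[addr] = self.pop()

    # Reproduces the stack trace worked through on the next slides:
    m = StackMachine()
    m.loadi(1)        # stack: 1 _ _ _
    m.loadi(2)        # stack: 2 1 _ _
    m.add()           # stack: 3 _ _ _
    m.store("X")      # stack: _ _ _ _ ; M[X] = 3
    print(m.memory["X"])   # prints 3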
17
Example
  • Stack Trace
  • Loadi 1 _ _ _ _
  • Loadi 2
  • Add
  • Store X

18
Cont
  • Stack Trace
  • Loadi 1      1 _ _ _
  • Loadi 2
  • Add
  • Store X
[Diagram: stack now holds 1]
19
Cont
  • Stack Trace
  • Loadi 1 1 _ _ _
  • Loadi 2 2 1 _ _
  • Add
  • Store X

[Diagram: stack now holds 2, 1]
20
ADD Step 1



  • Stack Trace
  • Loadi 1      1 _ _ _
  • Loadi 2      2 1 _ _
  • Add          2 3 _ _
  • Store X
[Diagram: stack now holds 2, 3 (ST2 ← ST1 + ST2 has executed)]
21
ADD step 2



  • Stack Trace
  • Loadi 1      1 _ _ _
  • Loadi 2      2 1 _ _
  • Add          3 _ _ _
  • Store X
[Diagram: stack now holds 3 (the stack has been popped)]
22
Before Store X is executed
  • Stack Trace
  • Loadi 1      1 _ _ _
  • Loadi 2      2 1 _ _
  • Add          3 _ _ _
  • Store X      3 _ _ _
[Diagram: stack still holds 3]
23
After Store X is executed
  • Stack Trace
  • Loadi 1      1 _ _ _
  • Loadi 2      2 1 _ _
  • Add          3 _ _ _
  • Store X      _ _ _ _
[Diagram: stack is empty; M[X] = 3]
24
SIMD (Array Processor)
[Diagram: a single control unit (IP, MAR, MDR, decoder) and memory driving N processing elements; element i holds registers Ai, Bi, Ci and has its own ALU]
25
Array Processors
  • One of the first array processors was the ILLIAC IV.
  • Load A1, V1
  • Load B1, Y1
  • Load A2, V2
  • Load B2, Y2
  • ...
  • Load An, Vn
  • Load Bn, Yn
  • ADD
  • Store C1, W1
  • Store C2, W2
  • Store C3, W3
  • ...
  • Store Cn, Wn
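A minimal Python sketch of the idea on this slide: a single ADD instruction is carried out by every processing element on its own pair of operands. The names V and Y, and the returned list standing in for the W locations, are assumptions taken from the Load/Store sequence above.

    def simd_add(V, Y):
        n = len(V)
        A = [V[i] for i in range(n)]              # Load Ai, Vi  (all PEs at once)
        B = [Y[i] for i in range(n)]              # Load Bi, Yi
        C = [A[i] + B[i] for i in range(n)]       # one ADD, executed by every PE
        return C                                  # Store Ci, Wi

    print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))   # prints [11, 22, 33, 44]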

26
Matrix Access
  • Parallel access by Rows
  • Parallel access by Columns
  • Parallel access by Diagonals
  • Skewed matrix representation

27
Parallel access by Rows
        P1    P2    P3    P4
  •    a11   a12   a13   a14
  •    a21   a22   a23   a24
  •    a31   a32   a33   a34
  •    a41   a42   a43   a44
  Serial access by Columns (each bank Pi holds column i of the matrix)
28
Parallel access by Columns (reordering is necessary)
        P1    P2    P3    P4
  •    a11   a21   a31   a41
  •    a12   a22   a32   a42
  •    a13   a23   a33   a43
  •    a14   a24   a34   a44
  Serial access by Rows (each bank Pi now holds row i of the matrix)
29
Parallel access by Diagonals
        P1    P2    P3    P4
  •    a11   a12   a13   a14
  •    a24   a21   a22   a23
  •    a33   a34   a31   a32
  •    a42   a43   a44   a41
  (each row is rotated one position further than the row above)
30
Parallel Column/Row Access
        P1    P2    P3    P4
  •    a11   a12   a13   a14
  •    a24   a21   a22   a23
  •    a33   a34   a31   a32
  •    a42   a43   a44   a41
  Skewed matrix representation: with each row rotated one position further than the last, the elements of any row and of any column fall into four different banks, so either can be fetched in parallel.
[Diagram: banks P1 through P4 each supplying one element of a requested row or column]
31
Parallel Column/Row/Diagonal Access
Skewed matrix representation (5 banks)
  •    a11   a12   a13   a14
  •    a21   a22   a23   a24
  •    a34   a31   a32   a33
  •    a43   a44   a41   a42
[Diagram: processors P1 through P4 connected to the five memory banks through an interconnection network]
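A minimal Python sketch of skewed storage for an n x n matrix across n banks. The bank(i, j) formula is an assumption chosen to match the rotated rows on the previous slides; it is enough to see why rows and columns can both be fetched in parallel.

    # Element a[i][j] (0-based) is stored in bank (i + j) mod n, so every row
    # and every column touches each of the n banks exactly once.
    def bank(i, j, n=4):
        return (i + j) % n

    def banks_for_row(i, n=4):
        return [bank(i, j, n) for j in range(n)]

    def banks_for_column(j, n=4):
        return [bank(i, j, n) for i in range(n)]

    print(banks_for_row(2))      # prints [2, 3, 0, 1]  (all different banks)
    print(banks_for_column(1))   # prints [1, 2, 3, 0]  (all different banks)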
32
Pipelining
  • Definition: Pipelining is an implementation
    technique whereby multiple instructions are
    overlapped in execution, taking advantage of
    the parallelism that exists among the actions
    needed to execute an instruction.
  • Example: Pipelining is similar to an automobile
    assembly line.

33
Pipelining: Automobile Assembly Line
  • T0: Frame
  • T1: Frame + Wheels
  • T2: Frame + Wheels + Engine
  • T3: Frame + Wheels + Engine + Body → New Car
  • If it takes 1 hour to complete one car, and each
    of the above stages takes 15 minutes, then
    building the first car takes 1 hour, but after
    that one car can be produced every 15 minutes.
    The same principle can be applied to computer
    instructions.
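A minimal sketch of that arithmetic, assuming k pipeline stages of t minutes each: the first result appears after k*t, and every later result one stage time after the previous one.

    def total_time(n_items, k=4, t=15):
        # k*t for the first item to emerge, then one more item every t thereafter
        return k * t + (n_items - 1) * t

    print(total_time(1))   # prints 60  (the first car takes 1 hour)
    print(total_time(5))   # prints 120 (5 cars: 1 hour + 4 more at 15 minutes each)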

34
Pipelining: Floating-Point Addition
  • Suppose a floating-point addition operation could
    be divided into 4 stages, each completing 1/4 of
    the total addition operation.
  • Then the following chart would be possible.

[Chart: successive additions overlapped across STAGE1 through STAGE4, with a new addition entering the pipeline at each stage time]
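A minimal Python sketch of such a chart, printing which addition occupies which stage at each time step; the counts of 4 stages and 6 back-to-back additions are only illustrative.

    STAGES, OPS = 4, 6
    for t in range(OPS + STAGES - 1):
        occupancy = [(t - s) if 0 <= t - s < OPS else None for s in range(STAGES)]
        print("t=%d:" % t, ["op%d" % i if i is not None else "-" for i in occupancy])
    # From t=3 onward the pipeline is full and one addition completes per step.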
35
Example
  • 0.9056 × 10² + 3.7401 × 10⁴ = 0.3749156 × 10⁵
  • After alignment: 0.9056 × 10² + 374.01 × 10² = 374.9156 × 10²
  • Stages: Compare Exponents → Align → Add → Normalize

36
Floating-Point Unit
[Diagram: pipelined floating-point adder with stages Compare Exponents → Alignment → Add (ALU) → Normalization]
37
The steps
  • Step one: Compare the exponents and choose the larger one.
  • Step two: Align both numbers to that exponent.
  • Step three: Perform the addition/subtraction on the two numbers.
  • Last step: Normalize the result.
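A minimal Python sketch of these four steps, using decimal (mantissa, exponent) pairs so it reproduces the example two slides back; real hardware works on binary significands, so this is only an illustration of the algorithm.

    def fp_add(a, b):
        (ma, ea), (mb, eb) = a, b
        e = max(ea, eb)               # step one: compare and choose the exponent
        ma = ma * 10 ** (ea - e)      # step two: align both numbers to exponent e
        mb = mb * 10 ** (eb - e)
        m = ma + mb                   # step three: add the aligned significands
        while abs(m) >= 1.0:          # last step: normalize to the 0.xxx form
            m = m / 10.0
            e = e + 1
        return m, e

    print(fp_add((0.9056, 2), (3.7401, 4)))   # prints roughly (0.3749156, 5)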

38
Vector processors
  • Vector processor characteristics:
  • - Pipelining is used in processing.
  • - Vector registers are used to keep the ALU supplied
      with input.
  • - Memory interleaving is used to load input into
      the vector registers.
  • - Also known as supercomputers.
  • - The CRAY-1 is regarded as the first machine of
      this type.

39
Vector Processor Diagram
[Diagram: IP, MAR, MDR, decoder, and memory feeding vector registers A and B, whose elements stream through the ALU into result vector C]
40
Vector Processing Compilers
  • The addition of vector processing has led to
    vectorization in compiler design.
  • Example: the following loop construct
  •   FOR I = 1 to N
  •     C[I] ← A[I] + B[I]
  • can be unrolled into the following instructions:
  •   C[1] ← A[1] + B[1]
  •   C[2] ← A[2] + B[2]
  •   C[3] ← A[3] + B[3] ...
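A minimal Python sketch contrasting the scalar loop with its unrolled, whole-vector form; the array names A, B, C follow the slide, and the data values are arbitrary.

    A = [1, 2, 3, 4]
    B = [10, 20, 30, 40]

    # scalar loop: FOR I = 1 to N, C[I] <- A[I] + B[I]
    C = [0] * len(A)
    for i in range(len(A)):
        C[i] = A[i] + B[i]

    # vectorized form: the whole vector computed as one operation
    C_vector = [a + b for a, b in zip(A, B)]

    print(C, C_vector)   # prints [11, 22, 33, 44] [11, 22, 33, 44]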

41
Memory Interleaving
  • Definition: Memory interleaving is a design used
    to gain faster access to memory by organizing
    memory into separate memory banks, each with
    its own MAR (Memory Address Register). This
    allows parallel access and eliminates the
    wait for a single MAR to finish a memory
    access.
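A minimal Python sketch of low-order interleaving: consecutive addresses fall in consecutive banks, so a sequential stream of accesses can keep several banks busy at once. The address mod number-of-banks mapping is the standard scheme and is assumed here rather than taken from the slide.

    NUM_BANKS = 4

    def bank_and_offset(addr):
        # bank chosen by the low-order bits, word offset within the bank by the rest
        return addr % NUM_BANKS, addr // NUM_BANKS

    for addr in range(8):
        print(addr, bank_and_offset(addr))
    # addresses 0..7 hit banks 0,1,2,3,0,1,2,3, so up to 4 accesses can overlap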

42
Memory Interleaving Diagram
[Diagram: the address in MAR is distributed to MAR1 through MAR4, one per bank MEMORY1 through MEMORY4; data returns through MDR1 through MDR4 into a common MDR]
43
Vector Processor with Memory Interleaving Diagram
[Diagram: interleaved memory loads vector registers A and B through MAR/MDR; the decoder and ALU stream through the vector elements to produce result vector C]
44
Multiprocessor Machines (MIMD)
[Diagram: several CPUs sharing a common MEMORY]