1
CSCI 330Vector Processors
  • Spring, 2009
  • Doug L Hoffman, PhD

2
Outline
  • Review
  • Flynn's Taxonomy
  • Vector Super Computers
  • Multi-Media Extension
  • Conclusion

3
Review from last time
  • No silver bullet for ILP: no obvious overall leader in performance.
  • The AMD Athlon leads on SPECint performance, followed by the Pentium 4, Itanium 2, and Power5.
  • Itanium 2 and Power5 perform similarly on SPECfp and clearly dominate the Athlon and Pentium 4 there.
  • Itanium 2 is the most inefficient processor for both floating-point and integer code on all but one efficiency measure (SPECfp/Watt).
  • Athlon and Pentium 4 both make good use of transistors and area in terms of efficiency.
  • IBM Power5 is the most effective user of energy on SPECfp and essentially tied on SPECint.

4
Review (continued)
  • Itanium architecture does not represent a
    significant breakthrough in scaling ILP or in
    avoiding the problems of complexity and power
    consumption
  • Instead of pursuing more ILP, architects are
    increasingly focusing on TLP implemented with
    single-chip multiprocessors
  • In 2000, IBM announced the 1st commercial
    single-chip, general-purpose multiprocessor, the
    Power4, which contains 2 Power3 processors and an
    integrated L2 cache
  • Since then, Sun Microsystems, AMD, and Intel have
    switched their focus to single-chip multiprocessors
    rather than more aggressive uniprocessors.
  • Right balance of ILP and TLP is unclear today
  • Perhaps right choice for server market, which can
    exploit more TLP, may differ from desktop, where
    single-thread performance may continue to be a
    primary requirement

5
Flynn's Taxonomy
CSCI 330 Computer Architecture
  • Concepts and Terminology

6
von Neumann Architecture
  • For over 40 years, virtually all computers have
    followed a common machine model known as the von
    Neumann computer. Named after the Hungarian
    mathematician John von Neumann.
  • A von Neumann computer uses the stored-program
    concept. The CPU executes a stored program that
    specifies a sequence of read and write operations
    on the memory.
  • Basic design
  • Memory is used to store both program and data
    instructions
  • Program instructions are coded data which tell
    the computer to do something
  • Data is simply information to be used by the
    program
  • A central processing unit (CPU) gets instructions
    and/or data from memory, decodes the instructions
    and then sequentially performs them.

7
Flynn's Classical Taxonomy
  • There are different ways to classify parallel
    computers. One of the more widely used
    classifications is Flynn's Taxonomy, proposed by
    Michael J. Flynn in 1966.
  • Flynn's taxonomy distinguishes multi-processor
    computer architectures along two independent
    dimensions: Instruction and Data. Each dimension
    can have one of two states: Single or Multiple.
  • The matrix below defines the 4 possible
    classifications according to Flynn.

                       Single Data   Multiple Data
Single Instruction     SISD          SIMD
Multiple Instruction   MISD          MIMD
8
Single Instruction, Single Data (SISD)
  • A serial (non-parallel) computer (ILP doesn't count).
  • Single instruction: only one instruction stream
    is being acted on by the CPU during any one clock
    cycle
  • Single data: only one data stream is being used
    as input during any one clock cycle
  • Deterministic execution
  • This is the oldest and, until recently, the most
    prevalent form of computer
  • Examples: most PCs, single-CPU workstations and
    mainframes

9
Single Instruction, Multiple Data (SIMD)
  • A type of parallel computer
  • Single instruction: all processing units execute
    the same instruction at any given clock cycle
  • Multiple data: each processing unit can operate
    on a different data element
  • This type of machine typically has an instruction
    dispatcher, a very high-bandwidth internal
    network, and a very large array of very
    small-capacity instruction units.
  • Best suited for specialized problems
    characterized by a high degree of regularity,
    such as image processing.
  • Synchronous (lockstep) and deterministic execution

10
Single Instruction, Multiple Data (SIMD)
  • Two varieties: Processor Arrays and Vector
    Pipelines
  • Examples:
  • Processor Arrays: Connection Machine CM-2, MasPar
    MP-1, MP-2
  • Vector Pipelines: IBM 9000, Cray C90, Fujitsu VP,
    NEC SX-2, Hitachi S820

11
Multiple Instruction, Single Data (MISD)
  • A single data stream is fed into multiple
    processing units.
  • Each processing unit operates on the data
    independently via independent instruction
    streams.
  • Few actual examples of this class of parallel
    computer have ever existed. One is the
    experimental Carnegie-Mellon C.mmp computer
    (1971).
  • Some conceivable uses might be
  • multiple frequency filters operating on a single
    signal stream
  • multiple cryptography algorithms attempting to
    crack a single coded message.

12
Multiple Instruction, Multiple Data (MIMD)
  • Currently, the most common type of parallel
    computer. Most modern computers fall into this
    category.
  • Multiple Instruction: every processor may be
    executing a different instruction stream
  • Multiple Data: every processor may be working
    with a different data stream
  • Execution can be synchronous or asynchronous,
    deterministic or non-deterministic
  • Examples: most current supercomputers, networked
    parallel computer "grids" and multi-processor SMP
    computers, including some types of PCs.

13
Vector Computers
CSCI 330 Computer Architecture
14
Supercomputers
  • Definition of a supercomputer:
  • Fastest machine in the world at a given task
  • A device to turn a compute-bound problem into an
    I/O-bound problem
  • Any machine costing $30M
  • Any machine designed by Seymour Cray
  • CDC 6600 (Cray, 1964) regarded as the first
    supercomputer

15
Supercomputer Applications
  • Typical application areas
  • Military research (nuclear weapons,
    cryptography)
  • Scientific research
  • Weather forecasting
  • Oil exploration
  • Industrial design (car crash simulation)
  • All involve huge computations on large data sets
  • In the 70s-80s, Supercomputer ≡ Vector Machine

16
Vector Supercomputers
  • Epitomized by Cray-1, 1976
  • Scalar Unit + Vector Extensions
  • Load/Store Architecture
  • Vector Registers
  • Vector Instructions
  • Hardwired Control
  • Highly Pipelined Functional Units
  • Interleaved Memory System
  • No Data Caches
  • No Virtual Memory

17
Cray-1 (1976)
18
[Cray-1 datapath diagram: eight 64-element vector registers (Vi, Vj, Vk
operands) feed pipelined FP Add, FP Multiply, and FP Reciprocal units;
scalar S registers (backed by 64 T registers) and address A registers
(backed by 64 B registers) feed Address Add/Multiply units; four
instruction buffers (16 x 64-bit words each) with NIP/LIP issue registers;
single-port memory of 16 banks of 64-bit words, 8-bit SECDED,
80 MW/sec data load/store, 320 MW/sec instruction buffer refill.
Memory bank cycle: 50 ns; processor cycle: 12.5 ns (80 MHz).]
19
(No Transcript)
20
Vector Code Example
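The transcript for this slide is empty; the standard example at this point in the material is the DAXPY loop (Y = a*X + Y). Below is a minimal C sketch of it; the `daxpy` function name and the commented VMIPS-style vector translation are illustrative, not from the original slide.

```c
/* DAXPY: Y = a*X + Y -- the classic vectorizable loop.
   A vector machine would execute it as roughly (VMIPS-style sketch):
     LV    V1, Rx        ; load vector X
     MULVS V2, V1, F0    ; vector-scalar multiply by a
     LV    V3, Ry        ; load vector Y
     ADDV  V4, V2, V3    ; vector-vector add
     SV    V4, Ry        ; store result
   i.e., one short instruction per N element operations. */
void daxpy(int n, double a, const double *x, double *y) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```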
21
Vector Instruction Set Advantages
  • Compact
  • one short instruction encodes N operations
  • Expressive, tells hardware that these N
    operations
  • are independent
  • use the same functional unit
  • access disjoint registers
  • access registers in the same pattern as previous
    instructions
  • access a contiguous block of memory (unit-stride
    load/store)
  • access memory in a known pattern (strided
    load/store)
  • Scalable
  • can run same object code on more parallel
    pipelines or lanes

22
Vector Arithmetic Execution
[Diagram: vector registers V1 and V2 feed a six-stage multiply pipeline
producing V3 <- V1 * V2, one element pair per cycle.]
  • Use deep pipeline (=> fast clock) to execute
    element operations
  • Simplifies control of deep pipeline because
    elements in vector are independent (=> no
    hazards!)
23
Vector Memory System
  • Cray-1: 16 banks, 4 cycle bank busy time, 12
    cycle latency
  • Bank busy time: cycles between accesses to the
    same bank

24
Vector Instruction Execution
ADDV C,A,B
25
Vector Unit Structure
[Diagram: vector registers feed four parallel lanes; lane 0 handles
elements 0, 4, 8, ..., lane 1 elements 1, 5, 9, ..., lane 2 elements
2, 6, 10, ..., lane 3 elements 3, 7, 11, ...; all lanes connect to the
memory subsystem.]
26
T0 Vector Microprocessor (1995)
27
Vector Memory-Memory versus Vector Register
Machines
  • Vector memory-memory instructions hold all vector
    operands in main memory
  • The first vector machines, CDC Star-100 ('73) and
    TI ASC ('71), were memory-memory machines
  • Cray-1 ('76) was the first vector register machine

28
Vector Memory-Memory vs. Vector Register Machines
  • Vector memory-memory architectures (VMMA) require
    greater main memory bandwidth, why?
  • All operands must be read in and out of memory
  • VMMAs make it difficult to overlap execution of
    multiple vector operations, why?
  • Must check dependencies on memory addresses
  • VMMAs incur greater startup latency
  • Scalar code was faster on CDC Star-100 for
    vectors < 100 elements
  • For Cray-1, vector/scalar breakeven point was
    around 2 elements
  • Apart from CDC follow-ons (Cyber-205, ETA-10) all
    major vector machines since Cray-1 have had
    vector register architectures
  • (we ignore vector memory-memory from now on)

29
Automatic Code Vectorization
for (i=0; i<N; i++)  C[i] = A[i] + B[i];
Vectorization is a massive compile-time
reordering of operation sequencing => requires
extensive loop dependence analysis
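As a concrete illustration of what that dependence analysis must decide, here is a minimal C sketch (function names are illustrative): the first loop's iterations are independent and can be vectorized, while the second carries a dependence from iteration i-1 to i and cannot.

```c
/* Vectorizable: no iteration reads a value written by another. */
void vectorizable(int n, int *c, const int *a, const int *b) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];        /* independent element operations */
}

/* Not vectorizable: iteration i reads a[i-1], written by iteration i-1,
   so the elements cannot all be computed in parallel. */
void not_vectorizable(int n, int *a) {
    for (int i = 1; i < n; i++)
        a[i] = a[i - 1] + 1;       /* loop-carried dependence */
}
```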
30
Vector Strip-mining
  • Problem: Vector registers have finite length
  • Solution: Break loops into pieces that fit into
    vector registers ("strip-mining")

      ANDI   R1, N, 63     # N mod 64
      MTC1   VLR, R1       # Do remainder
loop: LV     V1, RA
      DSLL   R2, R1, 3     # Multiply by 8
      DADDU  RA, RA, R2    # Bump pointer
      LV     V2, RB
      DADDU  RB, RB, R2
      ADDV.D V3, V1, V2
      SV     V3, RC
      DADDU  RC, RC, R2
      DSUBU  N, N, R1      # Subtract elements
      LI     R1, 64
      MTC1   VLR, R1       # Reset full length
      BGTZ   N, loop       # Any more to do?
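The same strip-mining structure can be mirrored in plain C; this sketch (the function name and the MVL value are illustrative) runs an odd-sized first strip of n mod MVL elements, then full-length strips, just as the assembly manipulates VLR.

```c
#define MVL 64  /* maximum vector length of the hypothetical machine */

/* Strip-mined version of: for (i = 0; i < n; i++) c[i] = a[i] + b[i]; */
void stripmined_add(int n, double *c, const double *a, const double *b) {
    int low = 0;
    int vl = n % MVL;            /* odd-size first strip (may be 0) */
    while (low < n) {
        for (int i = low; i < low + vl; i++)  /* one "vector" op of length vl */
            c[i] = a[i] + b[i];
        low += vl;
        vl = MVL;                /* all later strips run at full length */
    }
}
```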
31
Vector Instruction Parallelism
  • Can overlap execution of multiple vector
    instructions
  • example machine has 32 elements per vector
    register and 8 lanes

[Diagram: instruction issue over time; the load unit, multiply unit, and
add unit each execute a different vector instruction concurrently across
the 8 lanes.]
Complete 24 operations/cycle while issuing 1
short instruction/cycle
32
Vector Chaining
  • Vector version of register bypassing
  • introduced with Cray-1

      LV   v1
      MULV v3, v1, v2
      ADDV v5, v3, v4
33
Vector Chaining Advantage
34
Vector Startup
  • Two components of vector startup penalty
  • functional unit latency (time through pipeline)
  • dead time or recovery time (time before another
    vector instruction can start down pipeline)

[Diagram: the first vector instruction passes through the functional unit
latency; dead time must then elapse before the second vector instruction
can start down the pipeline.]
35
Dead Time and Short Vectors
[Diagram: 64 cycles active, 4 cycles dead time per vector instruction.]
Cray C90, two lanes, 4 cycle dead time. Maximum
efficiency 94% with 128-element vectors.
36
Vector Scatter/Gather
  • Want to vectorize loops with indirect accesses:
      for (i=0; i<N; i++)
        A[i] = B[i] + C[D[i]];
  • Indexed load instruction (Gather):
      LV     vD, rD       # Load indices in D vector
      LVI    vC, rC, vD   # Load indirect from rC base
      LV     vB, rB       # Load B vector
      ADDV.D vA, vB, vC   # Do add
      SV     vA, rA       # Store result

37
Vector Scatter/Gather
  • Scatter example:
      for (i=0; i<N; i++)
        A[B[i]]++;
  • Is the following a correct translation?
      LV   vB, rB       # Load indices in B vector
      LVI  vA, rA, vB   # Gather initial A values
      ADDV vA, vA, 1    # Increment
      SVI  vA, rA, vB   # Scatter incremented values
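For reference, here is a scalar C emulation of the gather and scatter primitives (function names illustrative). Note the answer to the question above: the vectorized translation is correct only if the indices in B are all distinct; with duplicates, the gather reads the same stale value into several elements and increments are lost.

```c
/* Gather (indexed load): dst[i] = base[idx[i]] */
void gather(int n, double *dst, const double *base, const int *idx) {
    for (int i = 0; i < n; i++)
        dst[i] = base[idx[i]];
}

/* Scatter (indexed store): base[idx[i]] = src[i].
   A real vector scatter performs these element stores in parallel,
   which is why duplicate indices make the slide's translation wrong. */
void scatter(int n, double *base, const int *idx, const double *src) {
    for (int i = 0; i < n; i++)
        base[idx[i]] = src[i];
}
```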

38
Vector Conditional Execution
  • Problem: Want to vectorize loops with conditional
    code
      for (i=0; i<N; i++)
        if (A[i] > 0) A[i] = B[i];
  • Solution: Add vector mask (or flag) registers
  • vector version of predicate registers, 1 bit per
    element
  • and maskable vector instructions
  • vector operation becomes NOP at elements where
    mask bit is clear
  • Code example:
      CVM               # Turn on all elements
      LV vA, rA         # Load entire A vector
      SGTVS.D vA, F0    # Set bits in mask register where A > 0
      LV vA, rB         # Load B vector into A under mask
      SV vA, rA         # Store A back to memory under mask
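The masked sequence above behaves like the scalar C sketch below (function name illustrative): the comparison builds the mask, and the operation is suppressed (a NOP) wherever the mask bit is clear, so masked-off elements of A are left untouched.

```c
/* Emulation of: set mask where A > 0, then copy B into A under mask. */
void masked_copy(int n, double *a, const double *b) {
    for (int i = 0; i < n; i++) {
        int mask = (a[i] > 0);   /* SGTVS.D: set mask bit where A > 0 */
        if (mask)                /* masked LV/SV: NOP where bit clear */
            a[i] = b[i];
    }
}
```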

39
Masked Vector Instructions
40
Compress/Expand Operations
  • Compress packs non-masked elements from one
    vector register contiguously at start of
    destination vector register
  • population count of mask vector gives packed
    vector length
  • Expand performs inverse operation

Used for density-time conditionals and also for
general selection operations
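A scalar C sketch of the compress operation (function name illustrative): elements whose mask bit is set are packed contiguously at the start of the destination, and the returned length is the population count of the mask, as the slide states.

```c
/* Compress: pack non-masked-off elements of src contiguously into dst.
   Returns the packed length (popcount of mask). */
int compress(int n, double *dst, const double *src, const int *mask) {
    int len = 0;
    for (int i = 0; i < n; i++)
        if (mask[i])
            dst[len++] = src[i];
    return len;
}
```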
41
Vector Reductions
  • Problem: Loop-carried dependence on reduction
    variables
      sum = 0;
      for (i=0; i<N; i++)
        sum += A[i];              # Loop-carried dependence on sum
  • Solution: Re-associate operations if possible,
    use binary tree to perform reduction
  • Rearrange as:
      sum[0:VL-1] = 0                   # Vector of VL partial sums
      for (i=0; i<N; i+=VL)             # Stripmine VL-sized chunks
        sum[0:VL-1] += A[i:i+VL-1];     # Vector sum
  • Now have VL partial sums in one vector register
      do {
        VL = VL/2;                      # Halve vector length
        sum[0:VL-1] += sum[VL:2*VL-1];  # Halve no. of partials
      } while (VL > 1);
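The same strip-mine-then-halve scheme in plain C (the function name and the VL0 value, assumed a power of two, are illustrative): accumulate VL0 partial sums, then halve the vector length each step, adding the upper half of the partials into the lower half.

```c
#define VL0 4   /* initial vector length, small for illustration */

/* Tree reduction of sum += A[i], mirroring the slide's pseudocode. */
double vector_sum(int n, const double *a) {
    double partial[VL0] = {0};                 /* VL0 partial sums */
    for (int i = 0; i < n; i += VL0)           /* strip-mine */
        for (int j = 0; j < VL0 && i + j < n; j++)
            partial[j] += a[i + j];
    for (int vl = VL0 / 2; vl >= 1; vl /= 2)   /* binary-tree combine */
        for (int j = 0; j < vl; j++)
            partial[j] += partial[j + vl];
    return partial[0];
}
```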

42
A Modern Vector Super: NEC SX-6 (2003)
  • CMOS Technology
  • 500 MHz CPU, fits on single chip
  • SDRAM main memory (up to 64GB)
  • Scalar unit
  • 4-way superscalar with out-of-order and
    speculative execution
  • 64KB I-cache and 64KB data cache
  • Vector unit
  • 8 foreground VRegs + 64 background VRegs
    (256 x 64-bit elements/VReg)
  • 1 multiply unit, 1 divide unit, 1 add/shift unit,
    1 logical unit, 1 mask unit
  • 8 lanes (8 GFLOPS peak, 16 FLOPS/cycle)
  • 1 load store unit (32x8 byte accesses/cycle)
  • 32 GB/s memory bandwidth per processor
  • SMP structure
  • 8 CPUs connected to memory through crossbar
  • 256 GB/s shared memory bandwidth (4096
    interleaved banks)

43
Mid-Term Problem 4
CSCI 6380 Advanced Computer Architecture
44
Vector Reductions
  • Midterm ILP problem (4) Solution

45
Multi-Media Extension
CSCI 330 Computer Architecture
  • Vectors for Everyone

46
Multimedia Extensions (MMX)
  • Very short vectors added to existing ISAs for
    micros
  • Usually 64-bit registers split into 2x32b or
    4x16b or 8x8b
  • Newer designs have 128-bit registers (Altivec,
    SSE2)
  • Limited instruction set
  • no vector length control
  • no strided load/store or scatter/gather
  • unit-stride loads must be aligned to 64/128-bit
    boundary
  • Limited vector register length
  • requires superscalar dispatch to keep
    multiply/add/load units busy
  • loop unrolling to hide latencies increases
    register pressure
  • Trend towards fuller vector support in
    microprocessors

47
MMX Programming Environment
  • The MMX architecture extends the Pentium
    architecture by adding the following
  • Eight 64-bit MMX registers (MM0..MM7). Data only
    (no addresses).
  • Four MMX data types (packed bytes, packed words,
    packed double words, and quad word).
  • 57 MMX Instructions.
  • Each of the eight 64-bit MMX registers is
    physically equivalent to the low-order 64 bits of
    one of the FPU's registers

48
MMX Registers
Although MM0..MM7 appear as separate registers in
the Intel Architecture, the Pentium processors
alias these registers with the FPU's registers
(ST0..ST7).
49
MMX Data Types
The MMX instruction set supports four different
data types: an eight-byte array, a four-word
array, a two-element double-word array, and a
quadword object.
50
MMX Operations
  • The MMX instruction set does not extend the
    32-bit Pentium processor to 64 bits.
  • Intel added only those 64-bit instructions that
    were useful for multimedia operations.
  • You cannot add or subtract two 64-bit integers
    with the MMX instruction set.
  • Only the logical and shift operations directly
    manipulate 64 bits.
  • The MMX instruction set provides the Pentium with
    the capability of performing multiple eight-,
    sixteen-, or thirty-two bit operations
    simultaneously.
  • In other words, the MMX instructions are
    generally SIMD (Single Instruction Multiple Data)
    instructions.

51
MMX Operations Example
  • Graphics pixel data are generally represented as
    8-bit integers, or bytes.
  • With MMX technology, eight of these pixels are
    packed together in a 64-bit quantity and moved
    into an MMX register.
  • When an MMX instruction executes, it takes all
    eight of the pixel values at once from the MMX
    register, performs the arithmetic or logical
    operation on all eight elements in parallel, and
    writes the result into an MMX register.
  • The degree of parallelism that can be achieved
    with MMX technology depends on the data size,
    ranging from 8 (using 8-bit data) down to 1
    (i.e., no parallelism, using 64-bit data).
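The packed-pixel idea can be emulated in plain C; the sketch below models a PADDB-style packed byte add (the function name is illustrative): eight 8-bit lanes inside one 64-bit word are added independently, with wraparound and no carry propagating between lanes.

```c
#include <stdint.h>

/* Emulated MMX packed-byte add: treat a 64-bit word as eight 8-bit
   lanes and add lane-by-lane, discarding any carry out of each lane. */
uint64_t paddb(uint64_t a, uint64_t b) {
    uint64_t r = 0;
    for (int lane = 0; lane < 8; lane++) {
        uint8_t x = (a >> (8 * lane)) & 0xFF;
        uint8_t y = (b >> (8 * lane)) & 0xFF;
        r |= (uint64_t)(uint8_t)(x + y) << (8 * lane);  /* wraps per lane */
    }
    return r;
}
```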

52
Summary
CSCI 330 Computer Architecture
53
Conclusions
  • Vector Processors are a type of SIMD computer.
  • They are best suited to applications with a great
    deal of data regularity: image processing,
    scientific number crunching, etc.
  • Not good for general purpose use.
  • MMX is a limited form of vector processing that
    is now found in most microprocessor architectures
    (PCs).
  • Useful for multi-media type processing
  • Music
  • Video
  • Image processing
  • Data encryption

54
Next Time
  • Multiprocessors