Transcript and Presenter's Notes

Title: Chapter 2 Computer Systems Organization


1
Chapter 2 - Computer Systems Organization
  • CSIS 321
  • Evans J. Adams

2
PROCESSORS
  • CPU - executes programs stored in main memory by
    fetching their instructions, examining them, and
    executing them

3
  • Basic Components (see Figure 2-1)
  • Control Unit -
  • Fetches instructions from main memory and
    determines their type.
  • Arithmetic and Logic Unit -
  • Performs operations needed to carry out the
    instructions.
  • Registers -
  • High-Speed memory used to store temporary
    results and certain control information

4
  • Special Purpose Registers
  • Program Counter - Register which points to the
    next instruction to be executed.
  • Instruction Register - Holds the instruction
    currently being executed.
  • Data Path Cycle (see Figure 2-2)
  • The process of running two operands through the
    ALU and storing the result into a register
  • The faster the data path cycle, the faster the
    machine runs

5
  • Instruction Execution (Fetch, Decode, Execute
    Cycle) (draw extension to Fig 2-2)
  • 1. Fetch the next instruction from main
  • memory into the instruction register (IR).
  • 2. Change the program counter (PC) to point to
    the next instruction.
  • 3. Determine the type of the instruction
  • just fetched.

6
  • Instruction Execution (Contd)
  • 4. If the instruction uses data in memory,
  • determine where they are.
  • 5. Fetch the data, if any, into internal
  • CPU registers.
  • 6. Execute the instruction.
  • 7. Store the results in the proper place.
  • 8. Go to Step 1 to begin executing the
  • next instruction.
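The eight-step cycle above can be sketched as a tiny interpreter loop. This is only an illustrative sketch, not the text's machine; the accumulator design and the opcode names (LOAD, ADD, STORE, HALT) are assumptions made for the example.

```python
# Sketch of the fetch-decode-execute cycle for a made-up accumulator
# machine; the instruction format and opcodes are assumptions.

def run(memory):
    pc = 0                        # program counter: next instruction
    acc = 0                       # a single accumulator register
    while True:
        ir = memory[pc]           # 1. fetch into the instruction register
        pc += 1                   # 2. advance the program counter
        op, operand = ir          # 3. determine the instruction type
        if op == "LOAD":          # 4-6. locate/fetch data and execute
            acc = memory[operand]
        elif op == "ADD":
            acc += memory[operand]
        elif op == "STORE":       # 7. store the result in the proper place
            memory[operand] = acc
        elif op == "HALT":
            return memory
        # 8. go back to step 1 for the next instruction

# Example program: memory[12] = memory[10] + memory[11]
mem = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12),
       3: ("HALT", None), 10: 2, 11: 3, 12: 0}
print(run(mem)[12])   # 5
```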

7
Speed Metrics
  • Speed of light - 186,282 miles per second (about
    one foot in one nanosecond)
  • Millisecond - thousandth of a second
  • Microsecond - millionth of a second
  • Nanosecond - billionth of a second
  • Picosecond - trillionth of a second
  • MIPS - millions of instructions per second
    (processor speed)
  • Megaflops - millions of floating point
    operations per second
  • Megahertz - millions of electric pulses per
    second (clock speed)
  • Gigahertz - billions of electric pulses per
    second (clock speed)

8
4 Main Computer Speed Determinants
  • System Clock
  • A circuit that generates electronic pulses at a
    fixed rate to synchronize processing activities
  • With each pulse, a new data path cycle begins
  • Bus Width
  • Number of bits that can be moved in one bus cycle
  • Larger bus width means faster movement of data
  • Address Lines determine how much memory the CPU
    can address
  • Data Lines determine how many bits of data can be
    transferred in one cycle

9
Speed Determinants
  • Word Size
  • Number of bits a CPU can process in one clock
    cycle
  • The size of the data path within the CPU
  • A 64-bit CPU is approximately twice as fast as a
    32-bit CPU
  • Memory Size is often more important than the
    clock speed of the CPU
  • Number of memory words
  • Word Size of Memory (should be the same as the
    word size of the CPU)
  • A 2 GHz chip with 1 Gigabyte of memory might
    outperform a 3 GHz chip with 1 Megabyte of memory

10
  • A program may be executed by a hardware CPU
    consisting of a box of electronics or
  • by another program which fetches, decodes and
    executes its instructions, i.e., an Interpreter.
  • Computer architects must decide whether to build
    a hardware CPU or write an interpreter to execute
    a given machine language L.

11
  • Depending upon the complexity of L,
  • it may be more efficient to build a simple
    hardware CPU and
  • to write an interpreter to execute L's
    instructions.
  • Programs at the ISA level of CISC computers
  • are executed by an interpreter running on a
    totally different and much more primitive Level 1
    machine that is called the Microarchitecture
    Level.

12
Instruction Set Examples
Data Path Hardware
See Figures 4-17 and 4-30 for example instruction
set and different interpreters with different
hardware capability
13
  • INSTRUCTION SET
  • The collection of all instructions that are
    available to a programmer at a specific level.
  • A large instruction set (CISC architecture) often
    means that instructions are not very general.
  • RISC machines
  • contain very small instruction sets,
  • do not use microprogramming and
  • are VERY FAST.

14
  • The instruction set and organization of the
    Microarchitecture Level is the instruction set
    and organization of the hardware (CPU).
  • The instruction set and organization of the ISA
    Level is determined
  • mainly by a microprogram interpreter in CISC
    architectures,
  • mainly by the hardware in RISC architectures

15
  • Von-Neumann CPU Data Path (see Figure 2.2).
  • Data Path consists of Registers and ALU and
    defines the operating characteristics of a
    machine.
  • Instruction Categories (See Figure 2.1)
  • Register-Memory -
  • Allows memory words to be fetched into general
    registers.

16
  • Register-Register -
  • Fetches two operands from general registers into
    the ALU input registers, performs the operation
    and stores the result back into a register.
  • Fastest Execution
  • Memory-Memory -
  • Fetches two operands from memory directly into
    the ALU input registers, performs the operation
    and stores the result back into memory.
  • Slowest Execution

17
RISC vs. CISC
  • RISC and RISC II designs by Patterson at UC
    Berkeley (1980)
  • New designs that did not require backward
    compatibility
  • Design Goal is simple instructions that can be
    issued (started) quickly (not necessarily
    executed quickly) and executed in one cycle of
    the data path

18
  • Typically around 50 instructions compared to 200
    to 300 for CISC designs
  • Justification for RISC over CISC
  • Even though it may take 4 or 5 RISC instructions
    to do what one CISC instruction does,
  • since RISC instructions are 10 times as fast
    (because they are not interpreted),
  • then RISC wins

19
  • Intel's chips (starting with the 486)
  • contain a RISC core that executes the simplest
    (and most common) instructions in one data path
    cycle
  • interpret the more complicated instructions in
    the usual CISC manner
  • thereby making the most common instructions much
    faster than the less common ones.
  • This hybrid approach is not as fast as pure RISC
    chips, but gives competitive overall performance
    while supporting backward compatibility

20
RISC Design Principles
  • All common instructions executed directly by the
    hardware
  • Maximize the rate at which instructions are
    issued (started)
  • and use parallelism to execute them
  • Instructions should be easy to decode
  • Only Loads and Stores should reference memory
  • Provide lots of Registers

21
Instruction-Level Parallelism
  • Instruction Pre-Fetch (Fig 4-30 has pre-fetch
    hardware)
  • Pre-fetch several instructions into a pre-fetch
    buffer
  • Execute instructions from the buffer instead of
    main memory
  • Pipelining
  • Extends the concept of pre-fetch by dividing
    instruction execution into several steps,
  • with each step handled by a dedicated piece of
    hardware, all of which can run in parallel

22
  • Pipeline Stages (See Figure 2-4-a)
  • Stage 1
  • Fetch an instruction from memory and load it into
    a buffer
  • Stage 2
  • Decode the instruction (determine its type and
    its operands)
  • Stage 3
  • Locate and fetch the operands
  • Stage 4
  • Run the Operands through the data path
  • Stage 5
  • Store the result into the proper register
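The way instructions march through these five stages can be sketched clock-by-clock. This is only an illustration of the idea behind Figure 2-4b, using the stage order listed above.

```python
# Which instruction occupies each of the 5 pipeline stages on each clock.
# Instruction i (0-based) sits in stage s during clock i + s.

def pipeline_schedule(num_instructions, num_stages=5):
    for clock in range(num_instructions + num_stages - 1):
        row = []
        for s in range(num_stages):
            i = clock - s
            row.append(f"I{i + 1}" if 0 <= i < num_instructions else "--")
        print(f"clock {clock + 1}: " + "  ".join(row))

pipeline_schedule(4)
# Once the pipe is full, one instruction completes on every clock cycle.
```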

23
  • Pipeline Stages over time (see Figure 2-4b)
  • Trace instructions 1 and 2 through the pipe
  • If the data path cycle time is 2 nsec, then it
    takes 10 nsec for each instruction to finish (2
    nsec × 5 stages), but
  • In each 2 nsec clock cycle, an instruction
    completes its execution if the pipe is full
  • => an instruction finishes execution every 2 nsec,
    instead of every 10 nsec

24
  • Pipelining allows a trade off between
  • Latency - how long it takes to execute an
    instruction, and
  • Processor Bandwidth - how many MIPS the CPU
    executes
  • MIPS = Millions of Instructions Per Second

25
  • Example (see Page 51 of the text)
  • For a cycle time of T nsec and an n-stage pipeline:
  • Latency = n × T nsec
  • Processor Bandwidth = 1000 / T MIPS
  • In the text's example, T = 2 nsec and n = 5
  • => Latency = 2 × 5 = 10 nsec
  • => Processor Bandwidth = 1000 / 2 = 500 MIPS
  • Note: 1,000,000,000 ns = 1 sec
  • or 1,000 million ns = 1 sec

26
  • MIPS Calculation Details
  • We want to know how many instructions per second
    are equivalent to 1 instruction per 2 Nanoseconds
    (our example cycle time).
  • Therefore the equation is:
  • X instructions / 1 second = 1 instruction / 2
    nanoseconds, or
  • X = 1 / (2 / 1,000,000,000)
  • Therefore, X = 1,000,000,000 / 2
  • Or, X = 500,000,000 Instructions Per Second = 500
    MIPS
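These formulas are easy to check with a few lines of arithmetic; the variable names below are only for the example.

```python
# Latency and processor bandwidth for an n-stage pipeline, cycle time T.
T_nsec = 2                                 # data path cycle time
n_stages = 5

latency_nsec = n_stages * T_nsec           # 5 * 2 = 10 nsec per instruction
mips = 1000 / T_nsec                       # 1000 / 2 = 500 MIPS
instr_per_sec = 1 / (T_nsec * 1e-9)        # one instruction every 2 ns

print(latency_nsec, mips, instr_per_sec)   # 10 500.0 500000000.0
```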

27
  • Dual Pipelines (see Figure 2.5)
  • If one pipeline is good, surely two are better
  • A single instruction fetch unit fetches pairs of
    instructions and puts each one onto its own
    pipeline.
  • To run in parallel, the 2 instructions must
  • not conflict over resource usage (registers, etc)
  • not depend on the result of the other
  • either the compiler or special-purpose hardware
    must guarantee these conditions

28
  • The Intel 486 had one pipeline
  • The Pentium had two pipelines similar to the ones
    in Figure 2-5
  • A Pentium is exactly twice as fast as a 486 under
    optimal conditions
  • Going to 4 pipelines is conceivable, but
    typically duplicates too much hardware to be
    economically feasible

29
Superscalar Architecture
  • Basic idea is to have a single pipeline with
    multiple functional units (see Figure 2-6)
  • Pentium II has a similar architecture
  • For this approach to be productive, the S3 stage
    must deliver instructions to the S4 stage much
    faster than they can be executed in S4
  • In reality, most instructions in S4 take longer
    than one clock cycle to execute, particularly the
    load, store and floating point instructions

30
Processor-Level Parallelism
  • Array and Vector Processors (see Figure 2-7)
  • Good for executing a sequence of instructions on
    arrays of data
  • Multiprocessors and Multicomputers (see Figure 2-8)
  • multiprocessors are easier to program
  • multicomputers are easier to build, but require
    complex message-passing techniques to communicate
    among the multiple CPUs
  • Hybrid systems are promising
  • see Chapter 8 and Parallel Processing Course for
    details

31
Assignment
  • Problems 1, 2, 3, 6, 7

32
Primary Memory (See Fig 2.9)
  • Used for Storage of Programs and Data.
  • Bit - Basic Unit (0 or 1).
  • Memory Cell (Location) - a number of bits used to
    model data.
  • Memory Address - a unique number assigned to a
    memory cell.
  • A memory of N cells has addresses 0 to N-1.

33
  • Memory addresses are expressed as binary numbers
  • An address having M bits can directly access a
    maximum of 2^M cells.
  • The number of bits in the address is Independent
    of the number of bits per cell.
  • All cells in a memory contain the same number of
    bits.
  • A cell consisting of K bits can hold any of 2^K
    different bit combinations.
  • See Figure 2.9 (three different organizations for
    a 96-bit memory).
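A quick sanity check of the 2^M and 2^K relationships; the numbers below are chosen only as an example.

```python
# M address bits can reach 2**M cells; K bits per cell hold 2**K values.
address_bits = 16
bits_per_cell = 8

max_cells = 2 ** address_bits          # 65,536 addressable cells
values_per_cell = 2 ** bits_per_cell   # 256 possible contents per cell

print(max_cells, values_per_cell)      # 65536 256
```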

34
  • Powers of Two (N, 2^N)
  • 0 - 1
  • 1 - 2
  • 2 - 4
  • 3 - 8
  • 4 - 16
  • 5 - 32
  • 6 - 64
  • 7 - 128
  • 8 - 256
  • 9 - 512
  • 10 - 1,024 (Kilo)
  • 11 - 2,048
  • 12 - 4,096
  • 13 - 8,192
  • 14 - 16,384
  • 15 - 32,768
  • 16 - 65,536

35
  • Bits per cell for a few commercial computers (see
    Figure 2-10)
  • Burroughs B1700 - 1
  • IBM PC - 8
  • DEC PDP-8 - 12
  • Honeywell 6180 - 36
  • CDC Cyber - 60

36
  • Most computer manufacturers have standardized on
    an 8-bit cell which is called a Byte.
  • Word - a group of bytes.
  • A computer with a 16-bit word has 2 bytes/word.
  • A computer with a 32-bit word has 4 bytes/word.
  • Most CPU instructions operate on entire words.
  • A 16-bit machine will have 16-bit registers for
    manipulating 16-bit words.
  • A 32-bit machine will have 32-bit registers for
    manipulating 32-bit words.
  • Show Figure 2-1

37
Assignment
  • Problems 11, 13 and the Cheap Computer Problem

38
  • Ordering of Bytes (see Figure 2.11).
  • Big Endian - Bytes in a Word are ordered from
    Left to Right (Motorola Chip Family)
  • Little Endian - Bytes in a Word are ordered from
    Right to Left (Intel Chip Family)
  • Integers are right-justified in the low-order
    (rightmost) bits in both (for the value 6 in 32
    bits)
  • 00000000000000000000000000000110
  • Character Data is stored differently (See Figure
    2.12)
  • Transferring data between machines having
    different byte orderings over a network is a
    major nuisance and is typically handled by
    special-purpose hardware (See Figure 2.12)
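Python's struct module can show both byte orders for the value 6 stored in 32 bits; this mirrors the idea of Figure 2.11, not its exact layout.

```python
import struct

# The same 32-bit integer 6 laid out in each byte order.
big    = struct.pack(">I", 6)   # big endian (Motorola style)
little = struct.pack("<I", 6)   # little endian (Intel style)

print(big.hex(" "))      # 00 00 00 06
print(little.hex(" "))   # 06 00 00 00
```

The numeric value is the same either way; only the order of the bytes in memory differs, which is exactly what makes cross-machine transfers a nuisance.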

39
Error Detecting And Correcting Codes
  • Voltage transients may cause errors in memory.
  • Extra bits may be added to detect/correct errors.
  • When a word is read from memory, the extra bits
    are checked to see if an error occurred
  • Codeword = data bits + check bits (See Figure 2-13)
  • Omit details of this section

40
  • Example - One Parity Bit
  • Chosen so that the number of 1 bits in the
    codeword is even (even parity) or odd (odd
    parity)
  • A single bit error produces a codeword with the
    wrong parity
  • Two single bit errors produce another valid
    codeword
  • When an invalid codeword is read from memory, an
    error condition is signaled by the hardware
  • ASCII is a 7-bit code, so an 8-bit byte allows 1
    parity bit
  • More sophisticated schemes provide
    error-detection and error-correction at the
    hardware level (See Figure 2-14)
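A minimal even-parity sketch for a 7-bit ASCII character stored in an 8-bit byte; placing the parity bit in the low-order position is an arbitrary choice made for the example.

```python
def add_even_parity(seven_bits: int) -> int:
    """Return an 8-bit codeword: 7 data bits plus 1 even-parity bit."""
    parity = bin(seven_bits).count("1") % 2   # 1 if the number of 1 bits is odd
    return (seven_bits << 1) | parity         # append parity as the low bit

def parity_ok(codeword: int) -> bool:
    return bin(codeword).count("1") % 2 == 0  # even number of 1 bits?

code = add_even_parity(ord("A"))              # 'A' = 0x41 = 1000001
print(parity_ok(code))                        # True
print(parity_ok(code ^ 0b00000100))           # False: single-bit error detected
print(parity_ok(code ^ 0b00000110))           # True: a double error goes unseen
```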

41
Cache Memory
  • Historically, CPUs have always been faster than
    memories
  • => After the CPU issues a memory request, it
    will not get the word it needs for several CPU
    cycles
  • => The slower the memory, the longer the CPU
    must wait

42
  • Two ways to deal with the speed imbalance
  • Start memory reads when they are encountered, but
    continue executing and stall the CPU if it tries
    to use the memory word before it arrives
  • Require compilers to generate code to not use
    memory words until they arrive
  • both lead to performance degradation in most
    cases
  • Cache Memory Technique (See Figure 2-16)
  • Combine a small amount of fast memory (the cache)
  • With a large amount of slow memory
  • And attempt to approximate the speed of the fast
    memory by keeping the most heavily used memory
    words in the cache
  • Success (or failure) depends on what fraction of
    the words are in the cache

43
  • Using the Locality Principle as a guide, main
    memories and caches are divided into fixed-size
    blocks
  • Blocks inside the cache are called cache lines
  • If a cache miss occurs, an entire cache line is
    loaded from main memory, not just the word needed
  • Some of the other words in the cache line will
    most likely be needed
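A rough sketch of the "load the whole cache line on a miss" idea; the line size and the simple dictionary-based, fully associative organization are arbitrary choices for illustration, not a real cache design.

```python
# On a miss the whole fixed-size block (cache line) is loaded, so nearby
# words hit afterwards -- the locality principle at work.

LINE_SIZE = 16                             # bytes per cache line (assumed)
cache = {}                                 # line number -> line contents
main_memory = bytearray(range(256)) * 4    # stand-in for slow main memory

def read_byte(addr):
    line = addr // LINE_SIZE
    if line not in cache:                                   # cache miss
        start = line * LINE_SIZE
        cache[line] = main_memory[start:start + LINE_SIZE]  # load whole line
        print(f"miss at {addr}: loaded line {line}")
    return cache[line][addr % LINE_SIZE]                    # cache hit path

read_byte(100)   # miss: bytes 96..111 are brought in
read_byte(101)   # hit: the neighbouring byte is already cached
```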

44
  • Cache Design Issues
  • Cache Size - larger cache performs better but
    costs more
  • Cache Line Size - 1K lines of 16 bytes? 2K lines
    of 8 bytes?
  • Cache Organization - How does the cache know
    which memory words are currently being held?
  • A Unified Cache for both instructions and data or
    Two Split Caches (one for each).
  • Split Caches permit parallel access
  • Instruction Cache does not have to be written
    back to memory
  • Number of Caches
  • Primary Cache on CPU chip
  • Secondary Cache in same package as CPU chip
  • A third cache further away from CPU

45
SECONDARY MEMORY
  • Slower, Cheaper, Larger Capacity, Non-Volatile
  • Used for permanent storage of larger amounts of
    data than main memory can hold.
  • Memory Hierarchies (see Figure 2-18)
  • Increasing Parameters (down the hierarchy)
  • Access Time
  • CPU Registers and Cache Memories (a few nsec)
  • Main Memory (tens of nsec)
  • Disk (at least ten msec)
  • Tape or optical disk (potentially seconds if
    unmounted)

46
  • Storage Capacity
  • CPU Registers (< 128 bytes)
  • Cache (a few megabytes)
  • Main Memory (tens to thousands of megabytes)
  • Magnetic Disks (< tens of gigabytes)
  • Tapes and Optical Disks (limited only by the
    owner's budget)
  • Number of bits per dollar spent
  • Main Memory ($ / megabyte)
  • Magnetic Disk (pennies / megabyte)
  • Magnetic Tape ($ / gigabyte or less)

47
Sample Hard Drive
48
  • Magnetic Disks (see Figures 2-19, 2-20)
  • Ferrous oxide coating (easily magnetized) on both
    sides of a round metal platter.
  • Typically platters are stacked together via a
    spindle.
  • Track - concentric circle on disk surface for
    recording data.
  • Typically hundreds or thousands of tracks per
    surface.
  • Cylinder - all of the tracks which line up
    vertically on each surface.
  • Sector - a subdivision of a track (typically 512
    to 2048 bytes).
  • Read/Write Head (movable across tracks) one per
    surface with (typically) only one head active
  • => the data stream is bit-serial to and from the
    surface.

49
  • Operating Characteristics
  • Rotation Speed - typically 3600 to 7200 rpm or 1
    revolution in 17 to 8 ms.
  • Typical Sector Organization (see Figure 2-19)
  • To access a sector, the R/W head must be moved to
    its track; this action is called a seek
  • Seek times range from 1 to 15 msec depending upon
    how far the R/W heads must move
  • Once the R/W head is over the correct track, the
    drive must wait for the desired sector to pass
    under the R/W head; this is called Rotational
    Latency (average of 4 - 8 msec)
  • Typical Data transfer rates are 5 to 20 MB/sec
  • Seek Time and Rotational Latency dominate
    transfer time (Discuss Problems 18 and 19)
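The figures above combine into a back-of-the-envelope access time for one sector; the specific values below are assumptions picked from the quoted ranges.

```python
# Rough time to read one 512-byte sector.
seek_ms       = 8.0                   # average seek, from the 1-15 msec range
rpm           = 7200
rotation_ms   = 60_000 / rpm          # ~8.33 ms per revolution
latency_ms    = rotation_ms / 2       # average rotational latency ~4.17 ms
transfer_MBps = 10.0                  # from the 5-20 MB/sec range
transfer_ms   = 512 / (transfer_MBps * 1e6) * 1000   # ~0.05 ms

total_ms = seek_ms + latency_ms + transfer_ms
print(round(total_ms, 2))             # ~12.22 ms; seek + latency dominate
```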

50
  • Disk Drive Electronics
  • Provides a physical interface to a standard bus
  • Provides a logical interface that allows the
    processor to treat the disk as an (extremely slow)
    memory device.

51
  • Include a disk controller (a special purpose
    processor) which
  • Seeks and finds the requested data.
  • Streams data off of the surface.
  • Error checks and possibly error-corrects the data
    on-the-fly.
  • Assembles the data into bytes.
  • Stores the data into an on-board buffer.
  • Signals the processor that the data is available
    in the buffer.

52
  • Transferring Data from a Disk requires a program
    to provide
  • Cylinder and Head Number (which defines a unique
    track).
  • Sector number where the desired data resides.
  • Number of sectors to transmit (data is
    transferred by sector).
  • Main memory source/destination of the data.
  • Operation to perform (read or write).
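One common convention for relating these parameters to a single linear sector number is the classic CHS-to-LBA formula; the geometry below is made up for illustration and is not from the text.

```python
# Classic cylinder/head/sector (CHS) to logical block address conversion.
HEADS_PER_CYLINDER = 16    # surfaces per cylinder   -- assumed geometry
SECTORS_PER_TRACK  = 63    # sectors per track       -- assumed geometry

def chs_to_lba(cylinder, head, sector):
    """Sectors are conventionally numbered from 1 within a track."""
    return ((cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK
            + (sector - 1))

print(chs_to_lba(0, 0, 1))   # 0 -> the first sector on the disk
print(chs_to_lba(2, 3, 5))   # (2*16 + 3) * 63 + 4 = 2209
```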

53
  • The Disk Access Process
  • Operating System communicates a Logical Block
    Address to the disk controller and issues a read
    (or write) command.
  • The drive Seeks the correct track by moving the
    heads to the correct position and enabling the
    head on the specified surface.

54
  • The read head senses sector numbers as they
    travel by until the requested sector is found.
  • Sector data and ECC stream into a buffer on the
    drive interface. ECC is done on-the-fly.
  • The drive communicates "data ready" to the OS
  • The OS reads the data and places it into a main
    memory buffer.

55
RAID
  • Redundant Array of Independent Disks (see Figure
    2-23)
  • The gap between CPU and disk performance has
    become wider over time
  • RAID uses the concept of parallel I/O to improve
    disk performance and reliability
  • The RAID controller allows the O/S to address the
    RAID as though it were a single disk

56
  • RAID Level 1 duplicates all disks so that each
    one has a backup
  • On a Write, each strip is written twice
  • On a Read, either copy can be used, distributing
    the load over more drives
  • Write performance is no better
  • Read performance can be twice as good due to
    potential parallelism
  • Fault tolerance is excellent (if a drive crashes,
    the copy is used instead)
  • The various RAID Levels provide different
    performance and reliability characteristics with
    relevant tradeoffs
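A toy sketch of the RAID 1 behavior described above: every write goes to both drives, and a read may be served by either copy. The two Python lists standing in for drives are obviously just an illustration.

```python
import random

drive_a = [None] * 8          # toy "drives": one strip per slot
drive_b = [None] * 8

def raid1_write(strip, data):
    drive_a[strip] = data     # every strip is written twice...
    drive_b[strip] = data     # ...so write performance is no better

def raid1_read(strip):
    # Either copy can serve the read, spreading load over both drives;
    # if one drive crashes, the surviving copy is used instead.
    return random.choice([drive_a, drive_b])[strip]

raid1_write(3, b"strip-3")
print(raid1_read(3))          # b'strip-3'
```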

57
  • CD ROM (Compact Disk Read Only Memory)
  • Data is laid down in one continuous spiral track
    that begins at the center of the disk and
    proceeds to the outside (see Figure 2-24).
  • Bits are encoded as small pits in the reflective
    surface of the disk which are burned in by a
    high-powered laser.
  • Transitions between pits and lands represent 1
    bits.
  • The length of the interval between two
    transitions indicates how many 0 bits are present
    between the 1 bits.

58
  • An infrared laser in the playback head is used to
    bounce light off of the surface.
  • A detector in the head senses the reflections
    from the pits as they pass under the head.
  • 2K bytes of data is typically the basic unit of
    I/O.
  • Manufacturing process is error-prone, hence a
    complex Reed-Solomon error correcting code is
    used.
  • Storage Capacity approximately 650 Mbytes.

59
  • Storage Format (see Figure 2.25)
  • Every byte (8 bits) is encoded into a 14 bit
    symbol with 6 bits of ECC
  • 42 consecutive symbols make up a 588 bit frame
  • Each frame holds 192 data bits (24 bytes) and 396
    ECC bits
  • 98 frames make up the data portion of a sector
    (the basic unit of I/O)
  • A sector also contains a 16 byte preamble
    (containing the sector number), and
  • 288 bytes of ECC
  • Error Detection and Correction are performed by
    hardware in the controller
  • Data storage efficiency is only 28% due to the
    three levels of ECC bits
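The 28% figure follows directly from the numbers on this slide:

```python
# CD-ROM data efficiency, computed from the frame/sector numbers above.
frames_per_sector = 98
bits_per_frame    = 588          # 42 symbols x 14 bits
data_bytes_frame  = 24           # 192 data bits per frame

raw_bits     = frames_per_sector * bits_per_frame        # 57,624 bits on disc
sector_bytes = frames_per_sector * data_bytes_frame      # 2,352-byte sector
user_bytes   = sector_bytes - 16 - 288                   # minus preamble + ECC

print(user_bytes, f"{user_bytes * 8 / raw_bits:.0%}")    # 2048 28%
```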

60
  • Advantages
  • Can be replicated much more inexpensively than
    magnetic disks.
  • Removable, cheap, and reliable.
  • Disadvantages
  • Read-Only vs. Read-Write for disks.
  • Much slower access time than disk (even at 32x).
  • Modern hard disks have larger storage capacity
    (but removable disks do not).

61
  • CD Recordables
  • Becoming a common peripheral for PCs
  • WORM (Write Once Read Many Times)
  • CD ReWritables
  • Require more complex chemical media and
  • lasers with three power levels to read / write
    and re-write
  • DVD
  • Use smaller pits, a tighter spiral and a more
    sensitive laser to achieve 4.7 GB storage
    capacity and a 1.4 MB/sec data transfer rate
    (vs. 150 KB/sec for CDs)
  • Can achieve up to 17 GB in double-sided,
    dual-layer format
  • DVD devices can be built to also read CDROMs

62
Assignment
  • Problems 18, 19, 22, 24

63
INPUT/OUTPUT
  • Buses (see Figures 2-28, 2-29, 2-30)
  • A set of parallel wires etched onto a motherboard
  • contains sockets into which I/O boards are
    inserted
  • I/O boards typically contain electronics (the
    controller) and the device itself (e.g., a disk
    drive)
  • Modern high-performance systems contain multiple
    buses

64
  • I/O Controller - controls an I/O device and
    handles bus access
  • finds data on the device
  • receives data in a serial stream from the device
    and
  • converts the bit stream into units of one or
    more words and
  • transfers the words into main memory via the bus
  • DMA controllers read / write data to memory
    without CPU intervention
  • Typically, an interrupt is signaled to the CPU
    once the data have been transferred into main
    memory
  • The CPU executes an interrupt handler to process
    the data
  • The CPU then resumes the task that was suspended
    when the interrupt occurred

65
  • Bus Arbiter
  • A chip which decides who gets the bus when
    there are multiple requests
  • The CPU also uses the bus for main memory access
    to programs and data
  • In general, I/O devices are given preference over
    the CPU and are said to steal cycles from the
    CPU when they are transferring data
  • ISA (EISA) Bus - based upon the old IBM PC Bus
    (discuss microchannel)
  • PCI (Peripheral Component Interconnect) Bus
  • faster bus designed by Intel as a successor to
    the ISA Bus
  • patents in the public domain to encourage
    adoption
  • (see Figure 2-30 for typical configuration)

66
  • Common I/O Devices (read about a few of these and
    skim the rest)
  • Terminals
  • Keyboards
  • Monitors
  • Flat-Panel Displays
  • Character-Map Terminals
  • Bit-Map Terminals
  • RS-232-C Terminals
  • Mice
  • Printers
  • Modems
  • ISDN

67
Pentium 4, 2.6 GigaHertz ($499)
  • Specifications:
  • Microsoft Windows XP Home Edition
  • 512K Cache / 400 MHz FSB
  • 256MB PC2100 DDR
  • 40GB Ultra ATA Hard Drive
  • CD-RW Drive
  • 1.44MB Floppy Drive
  • ATI Radeon 7000 64MB DDR w/TV Out
  • 56K v.92 Fax/Modem
  • Intel 10/100Mbps Ethernet
  • 104-Key Keyboard / Scroll Mouse
  • 1 Year Limited Warranty

68
  • Character Codes - Standards for mapping
    characters onto integers so that computers may
    share information

69
  • ASCII (American Standard Code for Information
    Interchange)
  • 7 bits => maximum of 128 characters
  • developed in the U.S. and works OK for English but
    not for other languages with different character
    sets
  • Examples (the Hex alphabet is
    0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F)
  • '#' is 23 Hex or 0100011 Binary or 35 Base 10
  • where 010 = 2 and 0011 = 3
  • '*' is 2A Hex or 0101010 Binary or 42 Base 10
  • where 010 = 2 and 1010 = A
  • '?' is 3F Hex or 0111111 Binary or 63 Base 10
  • where 011 = 3 and 1111 = F
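Python's ord() and string formatting make these conversions easy to verify:

```python
# Verify the ASCII examples: character -> hex, 7-bit binary, decimal.
for ch in "#*?":
    code = ord(ch)
    print(ch, format(code, "02X"), format(code, "07b"), code)
# '#' 23 0100011 35
# '*' 2A 0101010 42
# '?' 3F 0111111 63
```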

70
  • UNICODE
  • 16 bits => maximum of 65,536 code points
  • Since the world's languages collectively contain
    about 200,000 symbols, code points are a scarce
    resource
  • an International Consortium assigns code points
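Code points beyond ASCII are just larger integers; the characters below were chosen arbitrarily to show values inside the 16-bit range.

```python
# Unicode code points are integers; 16 bits span 0x0000-0xFFFF (65,536 values).
for ch in ["A", "é", "Ω", "€"]:
    print(ch, hex(ord(ch)))   # A 0x41, é 0xe9, Ω 0x3a9, € 0x20ac

print(2 ** 16)                # 65536 possible 16-bit code points
```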

71
Assignment
  • Problem 34
  • Binary Numbers Assignment
  • Reference Appendix A in the text
  • Use 2's complement for negative numbers
  • Do your work by hand and check with a calculator
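A small sketch for checking 2's complement hand work; the 8-bit width is an arbitrary choice for the example.

```python
# 2's complement in a fixed width: invert the bits and add 1, which is
# the same as reducing the value modulo 2**bits.

def to_twos_complement(value, bits=8):
    return value & ((1 << bits) - 1)         # e.g. -6 -> 0b11111010

def from_twos_complement(pattern, bits=8):
    if pattern & (1 << (bits - 1)):          # sign bit set => negative
        pattern -= 1 << bits
    return pattern

print(format(to_twos_complement(-6), "08b"))   # 11111010
print(from_twos_complement(0b11111010))        # -6
```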
