Chapter 1: Machine Architecture - PowerPoint PPT Presentation

1
Chapter 1 Machine Architecture
  • CS as science
  • science requires the construction of theories
    which are confirmed or rejected by
    experimentation
  • sometimes theories are dormant because technology
    is not yet available to test them
  • in other cases, current technology influences the
    theories
  • In CS, both of these have been true
  • neural network theories lay dormant for 20 years
    while hardware caught up to them
  • cheap processors are influencing the development
    of modern Artificial Neural Networks

2
The Machine
  • Central to Computer Science is the algorithm
  • But it is the computer that allows us to
    experiment, test our algorithms
  • So it is important to understand this machine so
    that we understand how it will influence our
    algorithms and theories
  • Here, we will concentrate on the representation
    of information in binary
  • In the next chapter, we will examine how the
    machine manipulates and processes this
    information

3
Binary
  • 1 bit (binary digit)
  • a 0 or a 1
  • Basic unit of storage for a computer
  • We use binary because of the nature of digital
    circuits
  • A circuit is either in an on state (1) or off
    state (0)
  • Examples
  • Flip-flop
  • circuit that continuously stores an electrical
    current or no electrical current until a new
    input causes its state to change
  • Relay
  • in one of two physical positions
  • Magnetic core
  • stores a magnetic charge in one of two directions
  • Capacitor
  • stores an electrical charge or no electrical
    charge (for a short duration)

4
Gates and Logical Operations
  • Electronic circuits that perform a logical
    operation
  • Based on boolean algebra
  • Often called Boolean operations (see the gates on
    p. 20)
  • AND -- output is 1 if both inputs are 1
  • OR -- output is 1 if either input is 1
  • NOT -- output is 1 if input is 0
  • XOR -- output is 1 if inputs differ
  • These 4 operations form the basis for all
    computer operations!
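As a quick illustration of these four operations, here is a minimal Python sketch (Python is not part of the slides; the functions simply mirror the gate definitions above):

```python
# The four basic gates modeled as functions on bits (0 or 1).
def AND(a, b): return 1 if a == 1 and b == 1 else 0   # 1 only if both inputs are 1
def OR(a, b):  return 1 if a == 1 or b == 1 else 0    # 1 if either input is 1
def NOT(a):    return 1 - a                           # 1 if input is 0
def XOR(a, b): return 1 if a != b else 0              # 1 if inputs differ

# Print the full truth table.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```

Running the loop prints one row per input combination, reproducing the truth tables of the four gates.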

5
Circuits Built with Gates
  • Flip-flops combine 3 logic gates
  • AND, OR, NOT
  • Look at the flip-flop in figure 1.3, p. 21 and
    the examples of setting and clearing the
    flip-flop in figure 1.4 p. 22
  • The Flip-flop is perfect for storing 1 bit
  • Half Adder (or 2-bit adder)
  • Adds together 2 binary bits, 00, 01, 10, 11
  • Uses two logic gates, an AND gate (for carry out)
    and an XOR gate (for sum)
  • Full Adder
  • Built from Half Adders; Full Adders are combined
    by connecting the Carry Out of one to the Carry
    In of the next
  • This forms a chain of n Full Adders to add two
    n-bit numbers

6
More on Adders
  • Adding two bits
  • a + b
  • SUM computed by a XOR b
  • CARRY OUT computed by a AND b
  • Adding n sets of two bits
  • a1 a2 a3 a4 ... an + b1 b2 b3 b4 ... bn
  • For a given two bits, say ai bi, and a previous
    CARRY IN, C
  • SUMi = (NOT(ai) AND NOT(bi) AND C) OR (NOT(ai)
    AND bi AND NOT(C)) OR (ai AND NOT(bi) AND
    NOT(C)) OR (ai AND bi AND C)
  • CARRYi = (ai AND bi AND C) OR (ai AND bi AND
    NOT(C)) OR (ai AND NOT(bi) AND C) OR (NOT(ai)
    AND bi AND C)
  • The first Full Adder will have a CARRY IN of 0
    (hardwired into it)
  • The last Full Adder will have a CARRY OUT that
    leads to the overflow bit (we'll talk about this
    later)

[Diagram: two Full Adders chained -- inputs a1 b1 and a0 b0, CarryIn0 into the first Adder, the Carry of each Adder feeding the next, outputs Sum1 and Sum0]
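The adder chain described above can be sketched in Python (an illustrative model only; the slides describe hardware gates, not software):

```python
def half_adder(a, b):
    # sum = a XOR b, carry = a AND b
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    # a Full Adder built from two Half Adders plus an OR for the carry
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def ripple_add(a_bits, b_bits):
    # add two n-bit numbers given least-significant bit first;
    # the first adder's carry-in is hardwired to 0
    carry = 0
    result = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry  # the final carry out is the overflow bit

# 1101 (13) + 0011 (3), bits listed least-significant first:
print(ripple_add([1, 0, 1, 1], [1, 1, 0, 0]))  # ([0, 0, 0, 0], 1) -- 16 overflows 4 bits
```

Chaining the carry from one `full_adder` call into the next mirrors the hardware Carry Out / Carry In connection.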
7
Types of Memory
  • For a computer to work, it needs storage for the
    current instruction and current data
  • There are many types of memory, each different in
    speed and cost, with the faster memory costing
    more so there is less of it in the computer
  • This forms a memory hierarchy
  • Registers -- fast memory stored in the CPU made
    out of flip-flops
  • we will look at the CPU and registers in the next
    chapter
  • Cache -- expensive, nearly as fast as registers
  • Main memory -- cheaper, slower, made out of
    capacitors
  • Secondary storage -- very cheap, very slow

8
What to store?
  • In the earliest computers, storage was only used
    for data
  • Von Neumann conceived of the idea of a stored
    program
  • place the program in the same memory as data
  • This allowed computers to have flexibility
  • no need to hard code the program
  • And speed
  • no need to input program instructions from an
    external source while executing the program
  • Programs and data are both stored in binary

9
Main Memory
  • Think of main memory as a very long set of
    mailboxes where each box can store 1 item
  • Each cell can be written to or read from
  • Reads are not destructive, so it's more like a
    copy
  • Repository for program instructions and data
  • Large collection of circuits, sometimes called
    cells
  • each capable of storing 1 byte (8 bits)
  • 8 bits can store 2^8 = 256 different combinations
    of 1s and 0s
  • each memory location has a unique address (a
    number)

10
Some Memory Terminology
  • Types of memory
  • Volatile memory - requires power to retain its
    contents
  • memory made out of flip-flops and capacitors is
    volatile
  • Random Access Memory (RAM)
  • memory that can be accessed by address (most
    memory is RAM)
  • volatile memory
  • RAM is readable and writable
  • Read Only Memory (ROM)
  • main memory that is readable but not writable
  • made of different hardware than flip-flops and
    capacitors
  • non-volatile memory
  • Storage Capacities
  • 1 byte = 8 bits
  • 1 kilobyte (1KB or 1K) = 1024 bytes (2^10 bytes)
  • 1 megabyte (1MB or 1M) = 1024 kilobytes
    (approximately 1 million bytes or 2^20 bytes)
  • 1 gigabyte (1GB or 1G) = 1024 megabytes
    (approximately 1 billion bytes or 2^30 bytes)

11
Storage Devices
  • Main memory
  • limited in size due to cost
  • volatile
  • Additional storage is required
  • Secondary storage devices (mass storage) are
  • non-volatile, cheap (compared to main memory) and
    long-term (years to decades at least)
  • These devices are added on to the computer
  • Require physical motion of parts and therefore
    are slower and more liable to fail
  • Types: magnetic disk, optical disk, magnetic tape

12
Magnetic Disk
  • Most common form of storage
  • A disk of either flexible or rigid material with
    magnetic coating that can store magnetic charges
  • Disk spins around and read/write head of drive
    reads/writes magnetic charges on disk
  • Multiple platters are stacked in one drive, each
    surface with its own read/write head; the heads
    all move together, and the set of tracks at one
    head position is called a cylinder
  • Disk composed of tracks and sectors
  • Tracks and sectors are created when the disk is
    formatted (often in the factory these days)

13
More on Magnetic Disk
  • A directory (file allocation table) stores the
    location of a file on the disk
  • Location -- cylinder, track, sector
  • Read/Write head now accesses the file
  • Seek time -- time for R/W head to move to proper
    track
  • Rotational delay (or latency time) -- time for
    R/W head to wait for proper sector to move under
    it
  • Access time -- sum of Seek time and Rotational
    delay
  • Transfer time -- rate at which data can be
    transferred
  • Typically, a file is broken into individual
    blocks scattered across the disk, so the R/W head
    will have to move from one block to another,
    lengthening the access time

14
Types of Magnetic Disks
  • Floppy
  • slow, cheap, portable
  • 1.44 Megabyte of storage
  • R/W head touches the surface of these disks
  • Zip
  • removable like floppies, but sealed like hard
    disks
  • offers a compromise in price and capacity (100
    Megabytes)
  • Hard
  • disk enclosed with drive, not removable
  • multiple disk platters
  • offers up to 10 Gigabyte of storage
  • each platter has its own R/W heads
  • More expensive than floppy disks
  • but prices are much cheaper today than 5 or 10
    years ago, and hard disks are much faster than
    floppies
  • R/W head glides above/below disk surface
  • If the R/W head touches the disk, it's called a
    head crash and can permanently damage the disk
    and head

15
Two More Storage Devices
  • Optical Disks
  • Information stored as burn marks over a
    reflective surface
  • Information read by laser
  • Once written, disks cannot be erased or rewritten
  • CD-ROM -- information comes on the disk from
    factory
  • WORM -- blank disk, written once, read only
    afterward
  • Optical disks offer
  • portability
  • large storage capacity (740MB)
  • relatively cheap but not erasable
  • Other optical technology does offer erasable disk
  • Variations of optical disks include CD-DA
    (digital audio) and DVD (digital versatile disk)
  • Magnetic Tape
  • Oldest form of storage
  • Magnetic charges placed on tape much like on disk
  • Access is sequential leading to very slow access
    times and therefore inefficient compared to disk
  • Offers large storage capacity for cheap cost
  • Not used much today, mostly backups and archives
  • Types: reel-to-reel, cassette, video, DAT,
    high-speed tape cartridge

16
Units of Storage
  • In main memory, we refer to bytes
  • In mass storage, we refer to files
  • Files are broken up into blocks and distributed
    across the disk
  • usually, files are placed wholly on tape and
    unbroken
  • Since secondary storage access is slow compared
    to main memory, it is common for storage devices
    to have buffers (RAM)
  • Or access RAM directly to temporarily store
    information in memory buffers

17
Binary Representations
  • Now that we know how information is stored in the
    computer and storage devices, we need to
    determine how to represent our information using
    binary
  • need representations for
  • text (characters)
  • integer numbers (both positive and negative)
  • fractions and real numbers
  • program code
  • pictures, sounds, music

18
Text and ASCII
  • ASCII
  • Most common text representation, used by all PCs
    and most other computers
  • Each character is stored in 1 byte using ASCII
  • Need codes for
  • Upper case letters, lower case letters, 10
    digits, punctuation marks, various keyboard
    commands
  • Example
  • H = 01001000
  • e = 01100101
  • l = 01101100
  • o = 01101111
  • . = 00101110
  • Hello. = 01001000 01100101 01101100 01101100
    01101111 00101110
  • In 1 byte, we have 256 different combinations of
    1s and 0s
  • Let's assign each combination to be a code that
    uniquely identifies one character
  • Codes
  • ASCII (American Standard Code for Information
    Interchange) uses 7 bits (128 combinations)
  • Unicode is a newer method using 16 bits (65,536
    combinations)
  • An ISO standard uses 32 bits (17 million
    combinations)
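The "Hello." encoding above can be reproduced in a couple of lines of Python (illustrative only; `format(..., "08b")` renders each ASCII code as an 8-bit pattern):

```python
# Encode "Hello." one byte per character, as on the slide.
text = "Hello."
codes = [format(ord(ch), "08b") for ch in text]
print(" ".join(codes))
# 01001000 01100101 01101100 01101100 01101111 00101110
```

Each pattern matches the per-character codes listed above (H = 01001000, e = 01100101, and so on).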

19
Representing Numbers
  • In decimal, each digit represents a unit
    multiplied by a power of ten
  • 375 = 3×100 + 7×10 + 5×1
  • In binary, units multiplied by a power of 2
    where each unit is a 0 or a 1
  • 10101001₂ = 1×2^7 + 0×2^6 + 1×2^5 + 0×2^4 + 1×2^3
    + 0×2^2 + 0×2^1 + 1×2^0 = 128+32+8+1 = 169
  • We have columns for 1s, 2s, 4s, 8s, 16s, etc
  • We similarly can have 1/2s column, 1/4s column,
    etc on the opposite side of the decimal point for
    fractions
  • Example: 01100.100₂ is 12.5 in decimal
  • To convert from binary to decimal
  • Sum up each digit × 2^i (where digit = 1 or 0, and
    i is the position or column of the digit)
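The binary-to-decimal rule above is a direct loop in Python (a sketch for illustration, not part of the slides):

```python
def binary_to_decimal(bits):
    # sum digit * 2^i for each digit, with i counted from the right
    total = 0
    for i, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** i
    return total

print(binary_to_decimal("10101001"))  # 128 + 32 + 8 + 1 = 169
```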

20
Converting from Decimal to Binary
  • Given the decimal value
  • Divide the value by 2 and record the remainder
  • As long as the quotient is not 0, continue
  • The binary value is the list of remainders in the
    order they were obtained, listed from right to
    left
  • Example: 13
  • 13/2 = 6 with a remainder of 1
  • 6/2 = 3 with a remainder of 0
  • 3/2 = 1 with a remainder of 1
  • 1/2 = 0 with a remainder of 1
  • Or, 13 = 1101₂
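The repeated-division procedure can be sketched in Python (illustrative; `divmod` returns the quotient and remainder in one step):

```python
def decimal_to_binary(n):
    # divide by 2, record remainders, read them back right to left
    remainders = []
    while True:
        n, r = divmod(n, 2)
        remainders.append(str(r))
        if n == 0:
            break
    return "".join(reversed(remainders))

print(decimal_to_binary(13))  # 1101
```

The loop records remainders 1, 0, 1, 1 for 13, exactly matching the worked example above.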

21
Representing Negative Numbers
  • Our previous binary representation only stores
    positive numbers
  • We could assign the first bit to be the sign bit
  • if the leading bit is a 1, the number is
    negative; if the leading bit is a 0, the number
    is positive
  • Example: 3 is 00000011, -3 is 10000011
  • This has two drawbacks
  • there are two ways to represent 0 (10000000 and
    00000000) so that 8 bits can only represent 255
    different numbers
  • we have to be careful when adding numbers to not
    add the sign bits of the two numbers
  • A better representation is two's complement

22
Two's Complement
  • The first bit is still a sign bit
  • But here, the entire number is different if
    negative
  • For instance, 3 = 00000011, but -3 is not
    10000011
  • To obtain the two's complement version
  • Start from the right side and copy the number
    down exactly until you reach the first 1, then
    negate each bit to the left of that first 1
  • 3 = 00000011; -3 is formed by working from right
    to left, leaving the rightmost 1 (and any 0s
    after it) alone, but inverting all remaining bits
  • Giving 11111101
  • To convert back to the positive version, apply
    the same technique, giving 00000011
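The copy-then-invert rule can be written as a short Python sketch (illustrative; it operates on bit strings rather than hardware):

```python
def twos_complement(bits):
    # copy from the right up to and including the rightmost 1,
    # then invert every bit to its left
    i = bits.rfind("1")
    if i == -1:
        return bits  # negating zero gives zero
    flipped = "".join("1" if b == "0" else "0" for b in bits[:i])
    return flipped + bits[i:]

print(twos_complement("00000011"))  # 11111101  (-3)
print(twos_complement("11111101"))  # 00000011  (back to +3)
```

Applying the function twice returns the original pattern, matching the slide's observation that the same technique converts in both directions.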

23
More on Two's Complement
  • Advantages of Two's Complement
  • Two's complement has only one way to represent 0,
    00000000
  • so that 8 bits can store 256 different numbers
  • When adding two's complement numbers, we no
    longer have to worry about the sign bit
  • In fact, subtraction becomes simplified
  • X - Y becomes X + (-Y)
  • That is, negate Y and add the result to X
  • So we don't need a subtraction circuit, just an
    adder and a negation circuit
  • See figure 1.21 on page 50
  • Notice that a positive number is the same whether
    represented using two's complement, unsigned
    binary, or signed magnitude binary

24
Overflow
  • Notice that a carry out of the last bit in a
    binary addition is an overflow
  • e.g., 100 + 100 > 127 and so causes an overflow
  • The CPU monitors the last carry out for such
    circumstances
  • However, with a positive and negative number, the
    carry out is not an overflow
  • So, an overflow occurs if the sign bit of the
    result != the sign bit of the 2 numbers
  • Example: 0011 + 1110 = 1 0001 -- no overflow
    since the sign bits of the two numbers differ
  • Example: 1100 + 1010 = 1 0110 -- overflow since
    the sign bit of the result differs from the sign
    bits of the two numbers
  • Example: 1100 + 1100 = 1 1000 -- no overflow
    since the sign bit of the result is the same as
    the sign bits of the two numbers
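The sign-bit overflow rule can be checked with a small Python sketch (illustrative; it treats the operands as 4-bit two's complement patterns, as in the examples above):

```python
def add_with_overflow(a, b, width=4):
    # add two's-complement bit strings, discarding any carry out of the top bit
    result = (int(a, 2) + int(b, 2)) % (2 ** width)
    bits = format(result, "0{}b".format(width))
    # overflow iff the operands share a sign bit that differs from the result's
    overflow = a[0] == b[0] and a[0] != bits[0]
    return bits, overflow

print(add_with_overflow("0011", "1110"))  # ('0001', False) -- operand signs differ
print(add_with_overflow("1100", "1010"))  # ('0110', True)  -- overflow
print(add_with_overflow("1100", "1100"))  # ('1000', False)
```

The three calls reproduce the three slide examples, including the case where a carry out occurs without an overflow.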

25
Excess Notation
  • Another binary representation is Excess notation
  • Here, there is no special bit for the sign
  • Instead, each number is one greater than the
    previous number with the first number in the
    sequence having the value 0
  • For instance, excess eight (4-bit excess
    notation) has -8 = 0000 and 7 = 1111
  • See figures 1.22 and 1.23 for examples of Excess
    notations for 3 bits and for 4 bits
  • Excess notation is not commonly used for numbers
    themselves but we will see that it is useful to
    use excess notation when representing the
    exponent portion of a floating point number
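In the 3-bit case (excess 4), the stored pattern is simply the value plus 4 -- a conversion small enough to sketch in Python (illustrative only):

```python
# 3-bit excess notation (excess 4): stored pattern = value + 4.
def to_excess4(value):
    return format(value + 4, "03b")

def from_excess4(bits):
    return int(bits, 2) - 4

print(to_excess4(2))        # 110
print(from_excess4("111"))  # 3
print(to_excess4(-4))       # 000 -- the smallest representable value
```

These two values (110 = 2 and 111 = 3) are exactly the exponents used in the floating point examples a few slides later.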

26
Binary Addition
  • Earlier, we examined the Binary Adder
  • There, we saw that a sum of two bits has four
    possibilities
  • 001, 011, 101, 110 with a carry of 1
  • Binary addition is the same process as in decimal
    addition except that we must carry an 1 into the
    next addition (see section 1.5)
  • 10011110 01010101
    11100011

27
Fractions and Reals
  • We saw earlier that we can use the normal binary
    notation for fractional components
  • 1100.1001₂ = 12.5625 (8 + 4 + 1/2 + 1/16)
  • However, it becomes difficult adding fractions
    together when their decimal points are not
    aligned
  • e.g., 1100.1001 + 1.1100011
  • Also, we are not used to dealing with fractions
    in many cases
  • like dollars and cents
  • So, we will use floating point numbers when
    dealing with real numbers and store them using
    scientific notation
  • floating point signifies that the decimal point
    is movable (it floats)

28
Scientific Notation
  • 38.2839 = (+1) × .382839 × 10^2
  • +1 is the sign
  • we will represent this as (-1)^0 or (-1)^1, that
    is, a 0 for positive, a 1 for negative
  • .382839 is the mantissa
  • Decimal point shifted all the way to the left
  • 2 is the exponent
  • How many decimal places we shifted the decimal
    point
  • Floating point numbers are represented as three
    values
  • sign, mantissa, exponent
  • The radix (10 in the above example) is implied
  • The mantissa is normalized so that the decimal
    point precedes the first digit, and then is
    removed
  • The sign bit will be a 0 for positive, 1 for
    negative

29
8-bit Floating Point Numbers
  • We will use 1 byte for a floating point number
  • 1st bit is sign bit
  • 2nd-4th bits are exponent in excess 3-bit
    notation (refer to figure 1.23) with an implied
    radix of 2
  • 5th-8th bits are normalized mantissa
  • Example: 01101011
  • sign is 0, so positive
  • exponent is 110, which in excess 3-bit notation
    is 2
  • mantissa is 1011
  • 01101011 = +.1011 × 2^2 = 10.11₂ = 2¾ = 2.75
  • Convert -6.5 to binary
  • Sign bit is 1
  • Mantissa is 6.5 = 110.1₂
  • To normalize the mantissa
  • Change 110.1 to .1101
  • By shifting the decimal point three places to the
    left
  • This gives an exponent of 3 (i.e., × 2^3), or 011
    in binary
  • In excess 3-bit notation, 3 is 111
  • So we have a sign bit of 1, an exponent of 111,
    and a mantissa of 1101, or a floating point
    number of 11111101
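The 1-byte format above (1 sign bit, 3-bit excess-4 exponent, 4-bit mantissa) can be decoded with a short Python sketch (illustrative only; this is the textbook's toy format, not IEEE floating point):

```python
def decode_float8(bits):
    # 1 sign bit, 3-bit exponent in excess 4, 4-bit mantissa .mmmm
    sign = -1 if bits[0] == "1" else 1
    exponent = int(bits[1:4], 2) - 4
    mantissa = int(bits[4:], 2) / 16  # .mmmm read as a fraction
    return sign * mantissa * 2 ** exponent

print(decode_float8("01101011"))  # 2.75 = +.1011 x 2^2
print(decode_float8("11111101"))  # -6.5 = -.1101 x 2^3
```

The two calls decode exactly the slide's worked examples, 01101011 = 2.75 and 11111101 = -6.5.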

30
Floating Point Errors
  • In our 1-byte example, we can't store much in the
    range of numbers
  • positive values run only from about 1/32 (.1000 ×
    2^-4) up to 7 1/2 (.1111 × 2^3)
  • Consider 2 5/8 which is 10.101
  • After normalizing, we have a mantissa of .1010
  • we lost a bit off the mantissa
  • an exponent of 110
  • giving us 01101010
  • If we convert back, we get 10.10 or 2.5
  • we have lost 1/8 off of our value!
  • This is a round-off or truncation error caused by
    lack of precision
  • Try another
  • 2 1/2 + 1/8 + 1/8
  • 2 1/2 = 01101010
  • 1/8 = 00101000 (1/8 = .0010 = .1000 × 2^-2)
  • If we add 2 1/2 + 1/8 we get 2 5/8, or 10.101
  • But in our floating point notation, we lose the
    last bit, so that 01101010 + 00101000 = 01101010
  • 2 1/2 + 1/8 = 2 1/2!
  • We add our current sum of 2 1/2 to the final 1/8
    and again we get 2 1/2
  • But if we instead add 1/8 + 1/8 = 1/4 (00111000)
  • Followed by 1/4 + 2 1/2 = 01101011 = 2 3/4, the
    exact answer
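The order-of-addition effect can be demonstrated with a Python sketch of the same toy format (illustrative; the encoder below handles positive values only and truncates the mantissa to 4 bits, as the slides describe):

```python
def encode_float8(value):
    # positive values only; normalize so the mantissa is .1xxx,
    # then truncate the mantissa to 4 bits
    exponent = 0
    while value >= 1:
        value /= 2; exponent += 1
    while value < 0.5:
        value *= 2; exponent -= 1
    mantissa = int(value * 16)  # keep 4 bits, truncating the rest
    return "0" + format(exponent + 4, "03b") + format(mantissa, "04b")

def decode_float8(bits):
    exponent = int(bits[1:4], 2) - 4
    return (int(bits[4:], 2) / 16) * 2 ** exponent

# Left to right: (2 1/2 + 1/8) + 1/8 -- each 1/8 is truncated away.
s = decode_float8(encode_float8(2.5 + 0.125))
s = decode_float8(encode_float8(s + 0.125))
print(s)  # 2.5

# Small values first: (1/8 + 1/8) + 2 1/2 survives intact.
t = decode_float8(encode_float8(0.125 + 0.125))
t = decode_float8(encode_float8(t + 2.5))
print(t)  # 2.75
```

Adding the small values first gives the exact answer 2 3/4; adding left to right loses both 1/8s, just as the slide shows.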

31
Representing Images
  • Two common approaches
  • Bit maps
  • each bit represents a pixel of the picture, a 0
    is black and 1 is white (or vice versa)
  • for a color image, we can represent each pixel as
    a code that stands for its color, or we could
    represent the color using a percentage of red,
    green and blue (3 bytes, 1 byte per color)
  • Bit maps can be very large and cannot be zoomed
    in on
  • Vector
  • represent the image as a sequence of lines and
    curves
  • Vector graphics may not be as realistic but
    usually require much less memory and can be
    zoomed in on

32
Data Compression
  • Used to reduce the size of files
  • primarily for image and sound files for storage
    and transmission
  • Techniques revolve around
  • run-length encoding
  • relative encoding
  • used in music and video compression
  • frequency-dependent encoding
  • Huffman codes give shorter codes (fewer bits) to
    letters that occur more commonly (e, t, a, i)
  • Lempel-Ziv encoding
  • LZ-77: find repetition in the data and store the
    repetition instead of the actual data
  • Ex: abac (3, 2, d) (5, 4, e)
  • after abac
  • count back 3 spaces and repeat the next two
    characters, followed by a d
  • then count back 5 spaces, repeat the next 4
    characters, followed by an e
  • gives the string abacbadacbae
  • See further examples on pages 61-62
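A decoder for triples of this (count-back, length, next-character) form can be sketched in Python (illustrative; this mirrors the worked example, not any particular LZ77 implementation):

```python
def lz77_decode(initial, triples):
    # each triple: (distance to count back, number of chars to copy, literal)
    out = list(initial)
    for distance, length, ch in triples:
        start = len(out) - distance
        for i in range(length):
            out.append(out[start + i])  # copy forward from the back-reference
        out.append(ch)
    return "".join(out)

print(lz77_decode("abac", [(3, 2, "d"), (5, 4, "e")]))  # abacbadacbae
```

Running the decoder on the slide's triples reproduces the string abacbadacbae.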

33
Image Compression
  • Bit maps tend to create enormous files,
    especially color bit maps
  • 3 bytes per pixel, each byte represents the
    degree of red, blue or green in the image
  • Special purpose compression formats exist to
    reduce the size of bitmaps
  • GIF (Graphics Interchange Format)
  • reduces the number of colors by using a palette
    and then encodes each pixel's palette number in 1
    byte, reducing bit maps to 1/3 their original
    size
  • JPEG (Joint Photographic Experts Group) -- used
    for photo quality images
  • lossless mode stores changes between adjacent
    pixels
  • lossy mode blurs the image by combining 2x2
    pixels into 2 colors and a brightness quality
    reducing the image by half or more

34
Communication Errors
  • There is always a chance that information can be
    corrupted during transmission
  • over MODEM or LAN lines
  • between memory and CPU or I/O/storage device
  • Parity bit
  • added to every byte to determine error
  • parity bit is 1 if the number of 1 bits in the
    byte is odd (or even)
  • every byte + parity bit should have an even (or
    odd) number of 1 bits, or else an error has
    arisen
  • Notice that the parity bit can detect an error
    but not determine what bit was accidentally
    flipped
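An even-parity scheme can be sketched in Python (illustrative; `word` and `flipped` are hypothetical names for this demo):

```python
def add_parity(byte_bits):
    # even parity: the appended bit makes the total number of 1s even
    parity = byte_bits.count("1") % 2
    return byte_bits + str(parity)

def check_parity(bits):
    return bits.count("1") % 2 == 0

word = add_parity("01001000")  # 'H' has two 1 bits, so the parity bit is 0
print(check_parity(word))      # True -- no error detected

# Flip one bit in transit: the check fails, but cannot say WHICH bit flipped.
flipped = word[:3] + ("0" if word[3] == "1" else "1") + word[4:]
print(check_parity(flipped))   # False -- error detected
```

As the slide notes, the failed check reveals that some bit flipped but not which one, so the error can be detected but not corrected.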

35
Error-Correcting Codes
  • Another approach is to use a code that permits
    error detection and correction
  • Hamming Distance Codes allow this
  • In these codes, there is at least a three-bit
    difference between any two codes, so that a 1-bit
    error can be corrected by finding the nearest
    valid code
  • A code with five-bit differences between codes
    permits up to 2 errors per transmission with the
    ability to correct
  • See figures 1.28 (3 bit) and 1.29 (5 bit)
  • Notice that the 5-bit code gets around a problem
    that the 3-bit code and parity bit cannot solve --
    more than 1 error in a given byte

36
Hexadecimal and other notations
  • Binary notation is awkward
  • information in this notation consists of long
    strings of 0s and 1s which are hard to
    understand
  • Hexadecimal notation is often used
  • Hex is base 16 where 4 bits are combined together
    to form a single Hex digit
  • Every hex digit is really a number between 0 and
    15
  • In order to permit single digits for 10-15,
    letters are used (A-F)
  • See figure 1.6 p. 25 for a Hex-binary conversion
    table
  • Other notations are also available, Octal (base
    8) is sometimes used, and of course we all use
    decimal (base 10) notation every day
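The grouping of 4 bits per hex digit can be sketched in Python (illustrative only):

```python
# Convert a binary string to hexadecimal by grouping bits in fours.
def binary_to_hex(bits):
    # pad on the left so the length is a multiple of 4
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    # each 4-bit group becomes one hex digit (A-F for 10-15)
    return "".join(format(int(bits[i:i+4], 2), "X")
                   for i in range(0, len(bits), 4))

print(binary_to_hex("10011110"))  # 9E
```

The 8-bit pattern 10011110 splits into 1001 (9) and 1110 (14 = E), giving the much more readable 9E.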