CSCI13003A - PowerPoint PPT Presentation

1
Lecture-3 (1.2 - 2.9)
  • CSCI130-03A
  • Instructor Dr. Imad Rahal

2
What Computers Do
  • General-purpose computers
  • Many tasks using a limited set of operations
  • Computing an answer to an equation
  • e.g., 6³ + 5×3 + 33/12 − 7
  • Complicated math functions built from simpler ones
  • Raise to a power: 6³ = 6 × 6 × 6
  • Multiplication and division can even be reduced to + and −
  • 5×3 = (5+5+5) or (3+3+3+3+3)
  • 33/12
  • Keep subtracting the denominator from the numerator until denominator > numerator
  • 33 − 12 = 21
  • 21 − 12 = 9
  • 9 < 12 → quit and return how many subtractions we've done → 2
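The repeated-subtraction division just described can be sketched in Python (the function name `divide_by_subtraction` is my own label, not from the slides):

```python
def divide_by_subtraction(numerator, denominator):
    """Integer division using only subtraction and comparison."""
    count = 0
    while numerator >= denominator:   # keep going while a subtraction still fits
        numerator -= denominator
        count += 1
    return count                      # number of subtractions = quotient

# 33/12: 33-12=21, 21-12=9, 9<12 -> stop after 2 subtractions
print(divide_by_subtraction(33, 12))  # 2
```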

3
Programs & Algorithms
  • Assume a computer can't directly multiply
  • Multiplication task:
  • 1. Get 1st number, Number_1
  • 2. Get 2nd number, Number_2
  • 3. Answer = 0
  • 4. Add Number_1 to Answer
  • 5. Subtract 1 from Number_2
  • 6. If Number_2 > 0, go to step 4
  • 7. Stop
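The seven steps map almost line-for-line onto a short Python sketch (function and variable names are mine):

```python
def multiply_by_addition(number_1, number_2):
    """Multiply two non-negative integers using only addition and
    subtraction, following the 7-step algorithm above."""
    answer = 0                 # step 3
    while number_2 > 0:        # step 6 test
        answer += number_1     # step 4
        number_2 -= 1          # step 5
    return answer              # step 7

print(multiply_by_addition(5, 3))  # 15
```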

4
Programs & Algorithms
  • What we really need for this example:
  • Store numbers
  • Add numbers (Subtract?)
  • Compare numbers
  • In general, all tasks done by a general-purpose computer can be accomplished through the following set of operations:
  • Store data
  • numbers (positive, negative, or fractional), text, pictures, etc.
  • Compare data (numbers, pictures, sounds, letters)
  • Add
  • Move data from one storage (memory) location to another
  • e.g., editing a text document
  • Input/output
  • Not mentioned in the book but important

5
Programs & Algorithms
  • CAVEAT: Not all computers are restricted to this set
  • Most likely not!
  • Why only a limited set?
  • There are potentially infinite tasks we'd like to do
  • Not possible to hard-wire a specific operation for each task
  • Like the Arabic numeral system: 10 digits (0 through 9)
  • Develop programs for non-hard-wired tasks
  • Program: a formal representation of a method for performing some task
  • Written in a code/programming language understood by a computer
  • Detailed and very well-organized (computers just do what they are told)
  • Follows an algorithm: a method for fulfilling the task
  • A plan to do something vs. the actual performance

6
Programs & Algorithms
  • Characteristics of an algorithm:
  • A list of steps that complete a task
  • Each step is PRECISELY defined and suitable for the machine used
  • "Add 5 to variable X" is precise; these are not:
  • "Make the color brighter"
  • "Increase the speed"
  • "Eat a sandwich!"
  • A finite number of steps
  • The process terminates in a finite amount of time
  • No infinite loops

7
Programs & Algorithms
  • Algorithm to compute the average of the first 20 numbers:
  • 1. sum = 0, value = 1, average = 0
  • 2. Add value to sum
  • 3. Add 1 to value
  • 4. If value is small, go to step 2
  • 5. Set average = sum/20
  • 6. Display average
  • 7. Stop
  • How many steps?
  • Finite? Precisely defined?
  • Is this an algorithm?
  • No: step 4 is not precise; fix it with "if value < 21, go to step 2"
  • How would you make it compute the average of the first x numbers, where x is specified as input?
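One answer to that last question, as a minimal Python sketch with the loop bound generalized from 21 to x + 1 (the function name is mine):

```python
def average_of_first(x):
    """Average of the first x numbers 1..x, following the slide's steps."""
    total = 0          # step 1: sum = 0
    value = 1          #         value = 1
    while value < x + 1:   # step 4, made precise and generalized
        total += value     # step 2
        value += 1         # step 3
    return total / x       # step 5

print(average_of_first(20))  # 10.5
```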

8
Programs & Algorithms
  • We can program a task if and only if:
  • an algorithm exists, and
  • each step is mechanically possible
  • (you can't ask a computer to eat a sandwich!)
  • How intelligent are computers?
  • They are not!
  • They just follow the steps that we define for them
  • Why use them then?
  • Extremely powerful
  • Fast: every step typically takes far less than a second (a nanosecond is 10⁻⁹ seconds)
  • Flexible enough to solve complicated tasks by combining programs that solve simpler tasks
  • Powers from multiplications, multiplications from additions!

9
The History of Computing
  • A "computer" originally meant a human being who did computations on paper
  • The abacus was the first computing device

10
The History of Computing
  • Pascal (17th century), Leibniz (18th century)
  • Charles Babbage's (19th century) machine was the closest to today's computers
  • It was programmable (not special-purpose) using punched cards
  • A punched card is a medium on which data are recorded by punching out holes
  • Analogously, data are recorded on a DVD by stamping or burning much smaller holes

11
An Example of a Punched Card
12
The History of Digital Computing
  • First-generation computers
  • The first actual computers were developed during WWII
  • to crack German coded messages
  • and to calculate trajectories for launching shells
  • They used vacuum tubes
  • A vacuum tube can switch electricity on/off or amplify a current
  • Much like a light bulb,
  • it generates a lot of heat and has a tendency to burn out
  • Slow, big (4-5 inches), and bulky

13
The History of Computing
  • ENIAC (shown on the previous slide)
  • weighed over 30 tons and consumed 200 kilowatts of electrical power
  • had around 18,000 vacuum tubes that constantly burned out, making it very unreliable
  • Computers were large, programmed in machine code, suffered from mechanical malfunctions, and were expensive and not easy to maintain

14
The History of Computing
  • Second-generation computers
  • The transistor was invented in 1947
  • Small (½ inch), fast, reliable, and effective, it quickly replaced the vacuum tube (by the mid-1960s in computers)
  • A vacuum tube had a size of 4 to 5 inches
  • Machines got smaller, faster, and more reliable (transistors vs. vacuum tubes), cheaper, and more energy-efficient, with much more memory (more transistors could be connected in a small space)
  • Development of advanced programming languages and applications
  • No machine language anymore
  • Computers became much more widespread
  • Initially they had been restricted to governments and big businesses

15
The History of Computing
  • Third-generation computers
  • Late 1960s, with the advent of integrated circuits (ICs)
  • Scientists found a way to reduce the size of transistors so they could place hundreds of them on small silicon chips
  • Rather than using (discrete) transistors as separate units, transistors were miniaturized and placed on silicon chips
  • ICs are often classified by the number of transistors they hold
  • More → better
  • More complicated operations

16
The History of Computing
  • Increased the speed and efficiency of computers and decreased their sizes
  • Enabled whole computers to sit on a desktop instead of requiring a whole room (desktops)
  • More reliable, cheaper, and easier to maintain, with more memory
  • Operating systems
  • We are still in the 3rd generation
  • IC chips are getting smaller and faster

17
18
Analog and Digital Signals
  • Examples of signals:
  • lights on traffic signals (G, R, Y), grades (performance), etc.
  • Signals are the basis of all communications
  • Computers
  • Phones
  • A piece of data moving from one place to another = a signal

19
Analog and Digital Signals
  • Analog signals take on a continuous set of values
  • e.g., grades
  • Between any two analog signal values, you can always find a 3rd value, no matter what!
  • Instruments for measurement usually give estimates, since any value is possible

20
Analog and Digital Signals
  • Digital signals take on a discrete/finite set of values
  • Traffic signals: R, Y, and G
  • Letter grades
  • Between any two adjacent digital signal values, you can't find a 3rd value
  • We can measure the exact value, there are no fractions, and we can store the values in a fixed amount of space (continuous values can't be stored this way)
  • e.g., the # of students at CSBSJU can surely be represented by fewer than 6 digits (0 - 100,000)
  • This is what makes them attractive for use in computers, hence the name digital computers

21
Numeric Data Representation
22
2's Complement Representation
  • Used by almost all computers today
  • All places hold the value they would in binary, except for the leftmost place, which is negative
  • 8-bit integer: −128 64 32 16 8 4 2 1
  • Range: [−128, 127]
  • If the leftmost bit
  • is 0, then positive → reads just as before
  • is 1, then negative → add the values of the remaining 7 digits and then subtract 128
  • 1000 1101 = 1 + 4 + 8 − 128 = −115
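The "leftmost place counts as −128" rule can be checked with a small Python helper (the name is mine):

```python
def twos_complement_value(bits):
    """Value of an 8-bit two's-complement pattern: every place holds its
    usual binary weight except the leftmost, which counts as -128."""
    assert len(bits) == 8
    value = -128 if bits[0] == '1' else 0
    value += int(bits[1:], 2)   # remaining 7 bits are ordinary binary
    return value

print(twos_complement_value('10001101'))  # 1 + 4 + 8 - 128 = -115
```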

23
2's Complement Representation
  • Converting from decimal to 2's complement:
  • For positive numbers, find the binary representation
  • For negative numbers:
  • Find the binary representation of its positive equivalent
  • Flip all bits
  • Add 1
  • 43:
  • 43 = 0010 1011
  • −43:
  • 43 = 0010 1011 → 1101 0100 → 1101 0101
  • Check: 1101 0101 = −128 + 64 + 16 + 4 + 1 = −43
  • We can use the usual laws of binary addition
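The flip-and-add-1 recipe, as a Python sketch (the helper name is mine):

```python
def to_twos_complement(n, bits=8):
    """Encode n as a bits-wide two's-complement string: positives are
    plain binary; negatives are 'flip all bits, then add 1'."""
    if n >= 0:
        return format(n, f'0{bits}b')
    positive = format(-n, f'0{bits}b')                       # binary of |n|
    flipped = ''.join('1' if b == '0' else '0' for b in positive)
    return format(int(flipped, 2) + 1, f'0{bits}b')          # add 1

print(to_twos_complement(43))   # 00101011
print(to_twos_complement(-43))  # 11010101
```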

24
2's Complement Representation
  • 125 − 19 = 106
  • 0111 1101 + 1110 1101 = 0110 1010 = 106 !
  • What happened to the 1 that we carried at the end?
  • It got lost, but we still got the right answer!
  • We carried into and carried out of the leftmost column
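The dropped carry can be simulated in Python by keeping only the low 8 bits of the raw sum (the helper name is mine):

```python
def add_8bit(a, b):
    """Add two values as 8-bit two's complement: take the raw sum, keep
    only the low 8 bits (the carry out of the leftmost column is simply
    dropped), and reinterpret the resulting pattern as signed."""
    raw = (a + b) & 0xFF                       # keep low 8 bits only
    return raw - 256 if raw >= 128 else raw    # leftmost bit set -> negative

print(add_8bit(125, -19))  # 106 -- the dropped carry does no harm here
```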

25
2's Complement Representation
  • 125 + 65 = 190
  • 0111 1101 + 0100 0001 = 1011 1110 = −66 !!!
  • We only carried into the leftmost column → overflow
  • The number is too big to fit in 8 bits, since the range is [−128, 127]
  • 125 + 65 = 190 > 127

26
2's Complement Representation
  • −125 − 65 = −190
  • 1000 0011 + 1011 1111 = 0100 0010 = 66 !!!
  • We only carried out of the leftmost column → underflow
  • −190 < −128
  • Solution:
  • use larger registers (more than 8 bits)
  • For very large positive and very small negative numbers we might still have a problem → combine two registers (double precision)

27
Conversion Practice
  • Binary to decimal
  • 1100 00102 0001 11002
  • 19410 2810
  • Decimal to binary
  • 24510,10210 310
  • 1111 01012, 0110 01102 , 000 000112
  • Hexa to binary
  • BA1016, CA0D16 12FE16
  • 1011 1010 0001 00002 , 1100 1010 0000 11012 ,
    0001 0010 1111 11102
  • Binary to hexa
  • 1100 00102 0001 11002
  • C216 1C16
  • Decimal to 2s complement
  • 102, -102, 3, -3. 245 -245
  • 0110 01102, 1001 10102, 0000 00112, 1111 11012,
    outside range outside
  • 2s complement to decimal
  • 1100 0010 0001 1100
  • -6210 2810

28
Real Number Representation
  • We deal with a lot of real numbers in our lives (e.g., a class average) and while using computers
  • Fractions or numbers with decimal parts
  • 3/10 = 0.3, or 82.34
  • 82.34 = 8×10 + 2×1 + 3×1/10 + 4×1/100
  • To represent such a number in binary, we use a radix point instead of a decimal point
  • 1101.11 = 8 + 4 + 1 + ½ + ¼ = 13.75
  • But how can we represent the radix point on the computer? As a 0? A 1?

29
Real Number Representation
  • We resort to the floating-point representation
  • We represent the number without the radix point!!!
  • Based on a popular scientific notation that has three parts:
  • Sign
  • Mantissa (one digit before the decimal point, and two or more after), and
  • Exponent (written with an E followed by a sign and an integer, which represents what power to raise 10 to)
  • 6.124E5
  • 6.124 × 10⁵ = 612,400
  • Sign? Mantissa? Exponent?
  • Scientific to decimal:
  • −9.14E−3 = −9.14 × 10⁻³ = −9.14/1000 = −0.00914

30
Real Number Representation
  • Decimal to scientific:
  • Divide (or multiply) by 10 until you have only one digit (usually non-zero) before the decimal point
  • Add (or subtract) one to the exponent every time you divide (or multiply)
  • 123.8 →
  • 1.238E2
  • 0.2348 →
  • 2.348E−1

31
Real Number Representation
  • Floating-point representation is very similar
  • but uses only 0s and 1s
  • For an eight-bit number:
  • 0 000 0000
  • Sign, exponent, mantissa (not a standard format; it's the one used in lab 2)
  • Sign: 0 for positive and 1 for negative
  • The exponent has three bits (values 0-7), but must cover positive and negative exponents
  • shift by 4 → range [−4, 3]
  • 000 → −4, 001 → −3, 010 → −2, …, 111 → 3
  • i.e., subtract 4 from the stored value
  • Use base two now (i.e., 2 raised to the power of the exponent value)

32
Real Number Representation
  • The mantissa has one (non-zero) place before the decimal point and a number of places after it
  • In binary our digits are 0 and 1, so we always have 1.xxxx
  • No need to store the leading 1 (just add it back to the result)
  • 11100010 → 1 110 0010
  • Negative
  • 110 = 6, 6 − 4 = 2 → exponent value 2²
  • Mantissa 0010 = 0×½ + 0×¼ + 1×⅛ + 0×1/16 = ⅛
  • without the initial 1 which we've omitted → 1 + ⅛
  • Or: 1 + decimal equivalent/16 = 1 + 2/16
  • −(2²)(1 + ⅛) = −4½

33
Real Number Representation
  • Floating point to decimal conversion:
  • 1. Break the bit pattern into sign (1 bit), exponent (3 bits), mantissa (4 bits)
  • 2. Sign is − if 1 and + otherwise
  • 3. Exponent = decimal equivalent − 4
  • 4. Mantissa = 1 + decimal equivalent/16
  • 5. Number = sign (mantissa × 2^exponent)
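The five steps can be written directly as a Python sketch of this 8-bit format (the helper name is mine):

```python
def float8_to_decimal(bits):
    """Decode the course's 8-bit floating-point format: 1 sign bit,
    3 exponent bits shifted by 4, 4 mantissa bits with a hidden 1."""
    sign = -1 if bits[0] == '1' else 1        # step 2
    exponent = int(bits[1:4], 2) - 4          # step 3: undo the shift
    mantissa = 1 + int(bits[4:], 2) / 16      # step 4: hidden leading 1
    return sign * mantissa * 2 ** exponent    # step 5

print(float8_to_decimal('11100010'))  # -(2**2)*(1 + 2/16) = -4.5
```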

34
Real Number Representation
  • What about from decimal to floating point?
  • Format the number into the form 1.xxxx × 2^exponent
  • Multiply (or divide) by 2 until we have a 1 before the decimal point
  • Subtract (or add) 1 from (to) the exponent for every such multiplication (or division)
  • Add 4 to the resulting exponent (in floating-to-decimal conversion we subtracted 4)
  • Convert the exponent to binary
  • Subtract 1 from the resulting mantissa
  • Multiply the mantissa by 16
  • Round to the nearest integer
  • Convert the mantissa to binary
35
Real Number Representation
  • −8.48
  • Sign is negative → 1
  • 8.48 → 4.24 → 2.12 → 1.06 → 3 divisions
  • exponent = 3 + 4 = 7, or 111₂
  • mantissa = mantissa − 1 = 0.06
  • multiply the mantissa by 16 (write it in terms of 16ths) → 0.96 ≈ 1, or 0001
  • 1 111 0001

36
Real Number Representation
  • Decimal to floating-point conversion:
  • Sign bit = 1 if negative and 0 otherwise
  • Exponent = 0, mantissa = absolute(number)
  • While mantissa < 1 do
  • mantissa = mantissa × 2
  • exponent = exponent − 1
  • While mantissa ≥ 2 do
  • mantissa = mantissa / 2
  • exponent = exponent + 1
  • Exponent = exponent + 4
  • Change to binary
  • Mantissa = (mantissa − 1) × 16
  • Round off to the nearest integer and change to binary
  • Assemble the number: sign exponent mantissa
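The pseudocode above, made runnable for nonzero inputs (the helper name is mine):

```python
def decimal_to_float8(number):
    """Encode a nonzero decimal in the course's 8-bit float format:
    normalize to 1.xxxx * 2**e, bias the exponent by 4, and keep
    4 rounded mantissa bits (the leading 1 is dropped)."""
    sign = '1' if number < 0 else '0'
    mantissa, exponent = abs(number), 0
    while mantissa < 1:            # too small: shift left
        mantissa *= 2
        exponent -= 1
    while mantissa >= 2:           # too big: shift right
        mantissa /= 2
        exponent += 1
    exp_bits = format(exponent + 4, '03b')                # biased exponent
    man_bits = format(round((mantissa - 1) * 16), '04b')  # drop hidden 1
    return sign + exp_bits + man_bits

print(decimal_to_float8(0.319))   # 00100100
print(decimal_to_float8(-8.48))   # 11110001
```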

37
Real Number Representation
  • 0.319
  • Positive → sign bit = 0₂
  • mantissa = 0.319 × 2 = 0.638 → exponent = −1
  • mantissa = 0.638 × 2 = 1.276 → exponent = −2
  • exponent = −2 + 4 = 2 = 010₂
  • mantissa = (mantissa − 1) × 16 = 4.416 ≈ 4 = 0100₂
  • 0 010 0100
  • −0.319
  • Negative → sign bit = 1₂
  • exponent = 010₂
  • mantissa = 0100₂
  • 1 010 0100
  • 0
  • Sign bit = 0₂
  • mantissa? It is never going to change to 1.xxxx, no matter what!

38
Real Number Representation
  • There is no representation for 0
  • 0 000 0000 = +1.0 × 2⁻⁴ (we assumed there is a 1 before the mantissa)
  • We assume this pattern to be zero!
  • Another issue: is the method exact?
  • Rounding? Truncation? (close numbers give the same floating-point values)
  • Mantissas of 4.416, 4.0123, and 4.400 all become 4 = 0100₂
  • The problem is also due to using only 8 bits (a 4-bit mantissa)
  • Floating-point numbers usually require 32 or even 64 bits
  • But we will still have to round off and truncate
  • That happens regularly even when we're not using computers
  • π ≈ 22/7
  • 2/3 or 1/3
  • We approximate a lot, but we should know it!
  • Higher-precision applications use much larger registers

39
Practice
  • Scientific to decimal
  • −1.23E−10, 9.01E3
  • −0.000000000123, 9010
  • Decimal to scientific
  • 123.23, 0.001911
  • 1.2323E2, 1.911E−3
  • Floating point to decimal
  • 1 101 1010 = −2¹ × 1.625 = −3.25
  • 0 001 1110 = 2⁻³ × 1.875 = 0.234375
  • 1 000 1111 = −2⁻⁴ × 1.9375 = −0.12109375
  • Decimal to floating point
  • 3.23 = 0 101 1010
  • 0.2911 = 0 010 0011

40
Non-numeric Data Representation
41
Non-numeric Data Representation
  • Text data
  • The most common type of data people use on computers
  • How can we transform it to binary? Not as intuitive as numbers!
  • Words can be divided into characters
  • Each character can then be encoded by some binary code
  • Every language has its own set of letters → we will limit ourselves to the Latin alphabet
  • Numbers were (relatively) easy to map to binary
  • a decimal-to-binary change
  • What about letters and other symbols?

42
Non-numeric Data Representation
  • Many transformations exist, and all are arbitrary
  • Two popular ones:
  • EBCDIC (Extended Binary Coded Decimal Interchange Code), by IBM
  • ASCII (American Standard Code for Information Interchange), by the American National Standards Institute (ANSI)
  • The most widely used (table in book, page 22)
  • Every letter/symbol is represented by 7 bits
  • How many letters/symbols do we have in total?
  • A-Z (26), a-z (26), 0-9 (10), symbols (33), control characters (33) = 128
  • If using 1 byte/character → we have one extra bit
  • Extended ASCII-8 (more mathematical and graphical symbols or phonetic marks) --- 256 characters
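For instance, Python's built-in `ord` gives each character's ASCII code, which we can show as 7 bits:

```python
# Each character maps to a 7-bit ASCII code; a text string is just the
# concatenation of those codes.
for ch in 'Hi!':
    print(ch, ord(ch), format(ord(ch), '07b'))
# H 72 1001000
# i 105 1101001
# ! 33 0100001
```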

43
44
Non-numeric Data Representation
  • Given the bit pattern 00110100:
  • as 2's complement → 52
  • as floating point → 0.625
  • as ASCII → the character 4 (check the ASCII table in the book)
  • There must be some code blocks preceding such values informing the computer of the type
  • Sometimes called meta-data
  • Which would you use (2's complement, floating point, or ASCII) to
  • do calculations?
  • store phone numbers?

45
Non-numeric Data Representation
  • Picture/Image Data
  • Divide the screen into a grid of cells, each referred to as a pixel
  • A 512×256 image → the grid has 512 columns and 256 rows
  • Pixel values and sizes depend on the type of the image
  • For black & white images, we can use 1 bit for every pixel, with 1 → black and 0 → white
  • For grayscale images, we use 1 byte, where 255 → black and 0 → white, and anything in between is gray (higher/lower values are closer to black/white)
  • For color images, we need three values per pixel (depending on the color scheme used)
  • Red/Green/Blue
  • We need 1 byte per value → 3 bytes per pixel
  • For a 512×256 image:
  • 512 × 256 × 3 = 384K bytes
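The size arithmetic as a tiny Python check (the function name is mine):

```python
def image_bytes(width, height, bytes_per_pixel):
    """Uncompressed image size: one pixel value set per grid cell."""
    return width * height * bytes_per_pixel

# 512x256 RGB image at 3 bytes/pixel:
print(image_bytes(512, 256, 3))          # 393216 bytes
print(image_bytes(512, 256, 3) / 1024)   # 384.0 -> "384K bytes"
```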

46
Non-numeric Data Representation
  • Movies are built from a number of images (or frames) that are displayed in a certain sequence at high speed
  • 30 frames per second
  • A 2-hour movie needs (assuming the previous image size):
  • 384K × 30 × 60 × 120 ≈ 83 GB (billion bytes)!
  • People use compression to reduce large movie sizes
  • Usually the change between two consecutive images is small → store only the difference between frames (temporal compression)
  • Large areas with the same color can be stored by saving only the boundary pixels (everything within the boundaries has the same value) (spatial compression)
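The back-of-the-envelope movie size works out as follows (pure arithmetic; variable names are mine):

```python
frame_bytes = 512 * 256 * 3                # one uncompressed RGB frame (384 KB)
movie_bytes = frame_bytes * 30 * 60 * 120  # 30 fps, 60 s/min, 120 minutes
print(movie_bytes)  # 84934656000 bytes, i.e. roughly the slide's ~83 GB
```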

47
Non-numeric Data Representation
  • Sound/Audio Data
  • An object produces sound when it vibrates in
    matter (e.g. air)
  • A vibrating object sends a wave of pressure
    fluctuations through the atmosphere
  • We hear different sounds from different vibrating
    objects because of variations in the sound wave
    frequency
  • Higher wave frequency means air pressure
    fluctuation switches back and forth more quickly
    during a period of time
  • We hear this as a higher pitch (intensity)
  • When there are fewer fluctuations in a period of
    time, the pitch is lower
  • The level of air pressure in each fluctuation,
    the wave's amplitude or height, determines how
    loud the sound is

48
Non-numeric Data Representation
  • Numbers are used to represent the amplitude of the sound wave
  • Analog is continuous, and we need digital
  • Digitize the sound signal:
  • measure the voltage of the signal at certain intervals (e.g., 40,000 per second)
  • then reconstruct the wave
  • Compression can also be used for audio files
  • MP3 reduces the size to 1/10th
  • → faster transfer over the Internet

49
Non-numeric Data Representation
  • Digital images and audio have a lot of advantages over non-digital ones
  • They can easily be modified by changing the bit pattern
  • Image enhancement, noise/distortion removal, etc.
  • Superimposing one sound on another, or one image on another, produces new ones

50
Non-numeric Data Representation
  • It makes you wonder if images can be trusted
  • adapted from a course offered at BU

51
Non-numeric Data Representation