Networking and the Internet 2 - PowerPoint PPT Presentation
Provided by: EricB114

Transcript and Presenter's Notes
1
Networking and the Internet (2)
Useful textbook: Coope, S. et al (2002) Computer
Systems, McGraw Hill
  • Last Week
  • Why does Networking matter?
  • Some thoughts about e-Business
  • Hardware foundation: Computer architecture
  • Week 2 Focus
  • Computer architecture: working through
    operations
  • Software: Operating Systems, File storage
  • Operating system foundations
  • Interrupts, concurrency and multi-programming
  • Scheduling and Dispatching
  • Time-sharing and Online systems
  • Graphical Operating Systems, including Windows

2
Computer Architecture
[Slide diagram: Processor connected directly to Memory; Input (keyboard,
Data) and Output (Information), plus Disk Storage and other long-term
Storage, attached via the Bus]
  • Processor executes instructions from memory,
  • and works on data in memory
  • Other data flows through the bus

3
Architecture Notes
  • The machines you're working with have at least
    512MB of memory. That is, there are more than
    five hundred million separately-addressable units
    of memory, each consisting of eight electronic
    switches. These groups of eight binary digits
    (bits) are called bytes and can hold 2^8 values
    (0 up to 255). Because the whole machine works
    in binary, things usually come in powers of two:
    a K is 2^10 or 1024, and an M is 2^20 or
    1,048,576. Inconsistently, G usually means 10^9
    for disk drives, and 2^30 for memory sizes.
  • The memory has a cycle-time of under 20
    nanoseconds, which is slower than the processor,
    but faster than the bus. That's why the
    processor talks to memory without going through
    the bus. It's also why modern computers use a
    local bus called PCI to connect to the fastest
    devices, and some have an even faster
    connection, AGP, to the display. PCI runs faster
    than the normal ISA (Industry Standard
    Architecture) bus I've illustrated here.
  • Most machines have a diskette drive called A and
    a fixed disk drive called C. You can have more
    than one of each, so you may have a B drive
    (diskette) and a D drive (fixed). Other
    devices, such as CD-ROM, usually have the next
    spare letter after the last disk.
  • Windows 95/98, NT, 2000, XP and Vista allow you
    to connect computers together in a network, and
    to share disks. These can be linked as if they
    were physically on the machine, and then usually
    have a letter after the CD-ROM. You will be
    familiar with keeping all your personal data on a
    network disk (your My Documents).
  • Because data is often more valuable than the
    computer it's on, and hard disks have been known
    to fail, back-up devices are needed. Tape gives
    the best combination of speed, convenience and
    cheapness of medium, but DVDs are probably more
    affordable for home.

4
Computer Processor: Machine code
[Slide diagram: Central Processor Unit containing Clock and Arithmetic
and Logic Unit; instruction set 1 Load, 2 Store, 3 Add, 4 Subtract,
5 Multiply; Memory contents:
  310: 1 1 600
  314: 1 2 602
  318: 5 1 2
  31A: 2 1 604]
5
Machine code example
  • To simplify this example, I've chosen a
    processor that addresses memory by the word (of
    32 bits). This means theres room at a single
    memory address for an instruction code,
    register-number and an address, or for a
    full-word number. Such machines were common
    before the IBM 360, but most modern machines
    actually address each byte.
  • The diagram shows two areas of memory, one
    containing instructions, and the other data.
    There's no need for this separation, but it makes
    things easier to understand. The first area
    contains program, and is laid out to show
    instruction code, register affected, and finally
    the address of memory accessed (or second
    register). That is
  • Instruction Code Meaning
  • 1 r s Load register r from memory location s
  • 2 r s Store register r into location s
  • 3 r1 r2 Add r1 to r2 and put result in r1
  • 4 r1 r2 Subtract r2 from r1 and put result in r1
  • 5 r1 r2 Multiply r1 by r2 and put result in r1
  • We'll work through the program, writing in the
    changes that occur during execution
  • Homework
  • Please study the paper at the end of the
    hand-out, to understand the (slightly) more
    realistic byte-oriented computer described there.
  • If you can't do that, try working through the
    example here, noting the value of each register
    and each word in the 600 range of memory at the
    end of each instruction (as read from memory
    locations 310-31A)
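The example above can also be tried in code. Below is a minimal Python sketch of the five-instruction, word-addressed machine; the encoding follows the instruction table on the slide, but the data values at 600 and 602 (and the initial register contents) are invented for illustration.

```python
# Minimal simulator for the 5-instruction word-addressed machine.
# Each instruction word holds (opcode, register, operand):
# 1 Load, 2 Store take a memory address; 3 Add, 4 Subtract,
# 5 Multiply take a second register number as the operand.

def run(program, memory, registers):
    for opcode, r, operand in program:
        if opcode == 1:                        # Load r from memory[operand]
            registers[r] = memory[operand]
        elif opcode == 2:                      # Store r into memory[operand]
            memory[operand] = registers[r]
        elif opcode == 3:                      # Add register operand into r
            registers[r] += registers[operand]
        elif opcode == 4:                      # Subtract
            registers[r] -= registers[operand]
        elif opcode == 5:                      # Multiply
            registers[r] *= registers[operand]
    return memory, registers

# The program from the slide: load r1 from 600, load r2 from 602,
# multiply r1 by r2, store r1 at 604.
program = [(1, 1, 0x600), (1, 2, 0x602), (5, 1, 2), (2, 1, 0x604)]
memory = {0x600: 6, 0x602: 7}                  # assumed sample data
registers = {1: 0, 2: 0}
run(program, memory, registers)
print(memory[0x604])                           # 42
```

Stepping through it by hand, as the homework asks, should give the same register values after each instruction.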

6
Operating Systems
  • How we control this hardware

7
Operating Systems
  • Though the processor is simple and serial, we
    want to do more complex things, often several at
    once
  • An operating system is a program that provides
    the building blocks of complex systems
  • Some simply encapsulate function to save every
    application from having to include a copy
  • Others handle specific hardware, presenting a
    generic interface that hides behaviour unique to
    that hardware
  • Sometimes the interface is so generic that it has
    little to do with the hardware: file structures
    are the best example
  • Modern operating systems make it look as if the
    computer is doing several things at the same time
  • Our operating system is Windows XP; how many of
    you are now on Vista?

8
Operating Systems Notes
  • The original goal of operating systems was to
    save programmers from having to do the repetitive
    task of writing machine-code to do common
    operations such as reading cards, driving
    printers, or typing a message to the operator.
    This was achieved by producing a series of
    routines (or macros) to do these actions, and
    packaging them with the computer. So you'd have
    routines to compute a square root, rewind a tape,
    punch a card, and all the other common
    activities. This is what we'd now call
    encapsulation of function. You get it right
    once, put an envelope round it, and forget what's
    inside. Through the '60s, operating systems
    encapsulated ever higher levels of function. For
    example, OS/360 abstracted the process of
    printing, so that the program's print commands
    actually wrote data to a spool file, and only
    when the program closed its printer did the
    operating system actually start moving the
    records from disk to the physical printer.
    OS/360's descendants still dominate the world of
    enterprise computing (MVS, OS/390, now called
    z/OS)
  • Under DOS and its descendants (OS/2 and Windows),
    disk I/O is done by reading and writing named
    files, which are logically linear collections of
    records. The operating system takes care of
    mapping from the file identifier to the place on
    disk where the records are physically stored, and
    glues together a non-contiguous string of
    physical blocks (allocation units) to present
    logical records to the program. The operating
    system delivers the data via buffers in memory.
  • Because the application doesn't need to know the
    structure of the disk, you can run programs with
    a physical disk replaced by a CD or a memory key.
  • A key service of all operating systems is to make
    use of a single processor resource to do
    several things concurrently. We'll cover that in
    some detail in this module.
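The file-mapping idea in the second bullet can be sketched in a few lines of Python. The tiny "disk", the 4-byte block size, and the file name here are all invented; the point is only that the operating system glues scattered allocation units into one logical stream for the program.

```python
# Sketch of how an OS presents non-contiguous allocation units as a
# logical byte stream. Block size and contents are invented for
# illustration; real file systems use far larger allocation units.

BLOCK_SIZE = 4
disk = {0: b"HELL", 3: b"O WO", 7: b"RLD!"}    # physical blocks, scattered
file_table = {"GREETING.TXT": [0, 3, 7]}       # file id -> ordered block list

def read_file(name):
    """Return the file's contents as one logical stream."""
    return b"".join(disk[block] for block in file_table[name])

print(read_file("GREETING.TXT"))               # b'HELLO WORLD!'
```

Because the program only ever sees the logical stream, the same read works whether the blocks live on a hard disk, a CD, or a memory key.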

9
Concurrent Operations
  • To give the appearance of doing several things at
    once
  • OS must stay ready to accept work
  • keystrokes, mouse clicks, signals from modem,
    printer ready to receive another buffer of data
  • These can interrupt a computation already being
    run
  • It then does a bit of the required work,
  • then goes back to an interrupted task, and so on.
  • We say the machine is doing things concurrently:
    they're not simultaneous, but they look it!
  • The key is switching the CPU between logical
    processes
  • In theory, you could go round polling (high
    overhead)
  • In practice, concurrency depends on hardware
    interrupts
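A toy Python comparison may make the polling-overhead point concrete. The devices, tick counts, and event times are invented; the contrast to note is that polling pays a cost on every tick whether or not there is work, while the interrupt model is activated only when an event actually occurs.

```python
# Toy comparison of polling vs interrupt-driven handling. Device
# events arrive at known ticks; polling asks every device on every
# tick, while the interrupt model does work only per event.

events = {3: "keystroke", 9: "printer ready"}  # tick -> event (invented)
TICKS, DEVICES = 12, 4

def polled():
    checks, handled = 0, []
    for tick in range(TICKS):
        for _ in range(DEVICES):               # ask each device: anything?
            checks += 1
        if tick in events:
            handled.append(events[tick])
    return checks, handled                     # checks happen regardless

def interrupt_driven():
    # hardware raises an interrupt only at the event ticks
    handled = [events[t] for t in sorted(events)]
    return len(events), handled                # one activation per event

print(polled())            # (48, ['keystroke', 'printer ready'])
print(interrupt_driven())  # (2, ['keystroke', 'printer ready'])
```

Both approaches handle the same events; the polled version just burns 48 device checks to do it.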

10
Essentials of an operating system
  • Controls the hardware
  • Lets applications be less hardware-specific by
    abstracting operations (who cares how big a track
    is!)
  • Reduces havoc that can be done by rogue programs
    by restricting use of risky instructions (such
    as those giving direct access to hardware)
  • Allows processes to update files with integrity
  • Encapsulates commonly-used functions
  • Manages resources, including storage
  • Supports concurrent operations
  • Success judged by performance, in terms of
    Availability, Reliability, Response-time,
    Throughput

11
In the beginning...
  • 21 June 1948 First stored-program computer
    (Manchester University Baby) ran its first
    program
  • Program keyed directly into memory
  • Results displayed as dots on a CRT
  • When program finished, it stopped
  • Next machines used tape or card for I/O
  • Monitors developed in 1950s
  • to encapsulate standard functions (for example,
    I/O)
  • to automate running of programs one after another
  • still one program at a time
  • then added SPOOLING to overlap input and output

Too much investment to let it sit around waiting
for humans to press buttons
12
The Early Days of Computing
  • The MU Baby was a kind of 0th generation
    computer to prove that a stored program computer
    could be built. The idea of writing machine-code
    using a binary keyboard made of Spitfire radio
    buttons was not seen as a practical option for
    real computers! It was the prototype of
    first-generation machines like the Ferranti Mark
    I, which had a real keyboard and paper-tape
    reader, allowing you to prepare programs
    off-line. Given the high cost of building and
    running a machine containing thousands of valves,
    you didn't want it to stand idle waiting for
    humans to input work.
  • Early computers were dedicated to a single user,
    who supplied instructions via a typewriter
    (slowly) or punched cards (faster). In the UK,
    paper tape was more usual than cards, but the
    principle was the same.
  • With the second-generation of machines, using
    transistors instead of valves, processing was
    much faster, but the electro-mechanical devices
    like readers and printers were faster by a
    smaller factor. So it made sense not to hold up
    the processor waiting for these devices to
    finish, and SPOOLing was introduced. This was
    performed under the control of a Monitor:
    programs wrote their output to a SPOOL file on
    disk, and the monitor transferred data from the
    disk to printer or punch at times, stealing
    processing time from the program being run.
  • This meant that each program took a little longer
    than it would have done for its processing alone,
    but the system could start the next job without
    waiting to complete output from the previous one.
    This increased throughput, though an individual
    user might have to wait a bit longer, since
    printing didn't begin until the SPOOL file was
    closed at the end of the job.
  • Input of jobs and data was handled in the same
    way, so the monitor could switch to the next job
    in sequence without waiting for human action.
  • These monitors introduced the concept of
    concurrent processing: appearing to do two
    things at once by switching the processor from
    one task to another.
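The SPOOLing described above can be sketched as a queue between jobs and the printer. This Python sketch invents the jobs and their output lines; what it shows is that JOB2 starts before JOB1's lines have all reached the printer.

```python
# Sketch of output SPOOLing as the notes describe it: each job writes
# its print lines to a spool file, and the monitor moves spooled lines
# to the (slow) printer while the next job computes. Jobs, lines, and
# drain timing are invented for illustration.

from collections import deque

spool = deque()            # lines waiting on disk for the printer
printed = []               # lines the physical printer has produced
log = []                   # order in which things happened

def run_job(name, lines):
    log.append(f"start {name}")
    for line in lines:
        spool.append(line)              # write to spool file, not printer
    log.append(f"end {name}")

def monitor_drain(n):
    """Steal some cycles to move up to n spooled lines to the printer."""
    for _ in range(min(n, len(spool))):
        printed.append(spool.popleft())

run_job("JOB1", ["A1", "A2", "A3"])
monitor_drain(2)                        # printing overlaps the next job...
run_job("JOB2", ["B1"])                 # ...which starts without waiting
monitor_drain(len(spool))

print(printed)   # ['A1', 'A2', 'A3', 'B1']
print(log)       # ['start JOB1', 'end JOB1', 'start JOB2', 'end JOB2']
```

Throughput rises because the processor never waits for the printer; the cost, as the notes say, is that an individual user's printout appears a little later.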

13
True Operating Systems
  • Introduced with Ferranti Atlas and IBM System/360
  • Applied concurrency to user work as well as to
    SPOOL
  • Potential to run complementary jobs alongside
    each other
  • OS became a resource manager
  • Sharing processor resource between jobs
  • Providing separate memory for use by each job
  • Controlling allocation of tapes and other
    hardware
  • Scheduling jobs to fit resources available
  • Used interrupts to switch control between
    processes
  • Need to be sure we understand how they work
  • Foundation for on-line systems with terminals

14
Interrupts
  • The hardware feature called an interrupt was key
    to most operating systems developed from the '60s
    onwards. An interrupt swaps the current
    instruction counter value for the address of an
    interrupt handler, preserving the old value at an
    address defined for each class of interrupt.
    This avoids the need to poll to see if (say)
    the printer is ready for another line from SPOOL,
    and makes it efficient to switch between
    activities.
  • So the printer driver can write a line to the
    printer interface, then enter a wait state until
    the printer interrupts to show that it's finished
    writing that line and is ready for another.
    Under the covers, waiting on an event simply
    means that the operating system has built a
    control block containing information to let it
    revive the waiting task when a particular
    interrupt event arises: for example, when the
    printer interface returns a device-end interrupt.
  • The Operating System became the real job running
    on the computer. User work was divided into
    tasks (what we'd now call processes), which
    were attached by the Operating System to be given
    resources when they were available. At the end
    of interrupt handling, instead of returning to
    the instruction that was interrupted, control
    would be returned to the operating system, which
    would allocate the processor resource to the most
    appropriate task. So you could run several jobs
    at once, possibly matching a compute-intensive
    analysis with a data-intensive billing job.
  • This concept of task-switching can also be used
    at a lower level. For example, if you have a
    business system with a thousand users sitting at
    terminals, you could have a task running for each
    terminal (plus a few more to control the whole
    system). Then each time my task writes to the
    terminal, it could go into a wait state, waiting
    for an event that shows the terminal has sent
    some new data (usually, this won't happen until
    the user has read the last output, thought about
    it, typed in a response, and hit the Enter key).
    Thus the development of efficient online systems
    was dependent on the concept of tasks and events.
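The wait-on-event mechanism can be sketched as a table of control blocks keyed by event. This is an illustrative Python model, not any real operating system's API; the task and event names are invented.

```python
# Sketch of tasks waiting on events, as described above: "waiting"
# just means the OS keeps a control block keyed by the event, and
# revives the task when that interrupt arrives.

waiting = {}     # event name -> list of waiting task names (control blocks)
ready = []       # tasks eligible to be given the processor

def wait_on(task, event):
    """Put the task to sleep until the named event occurs."""
    waiting.setdefault(event, []).append(task)

def interrupt(event):
    """Hardware signals an event: revive every task waiting on it."""
    for task in waiting.pop(event, []):
        ready.append(task)

wait_on("printer-driver", "device-end")   # driver sleeps between lines
wait_on("terminal-17", "enter-key")       # user task sleeps between inputs
interrupt("device-end")                   # printer finished a line

print(ready)             # ['printer-driver']
print(list(waiting))     # ['enter-key']
```

With a thousand terminals, the table simply holds a thousand sleeping tasks, which is why this scheme scales to large online systems.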

15
On-line Computing
  • Terminal attached to mainframe computer
  • Operating system time-shared processor among
    users
  • Developed initially with slow lines and
    typewriter-like terminals or teleprinters
  • It was expensive to read every keystroke
  • so switched to using Block mode
  • User types into terminal buffer, presses Enter to
    transmit
  • Most transaction processing is done that way even
    today
  • Weaknesses of mainframe terminal are
  • poor bandwidth: can't track mouse, write graphics
  • can't take shortcuts based on every keystroke

16
Working interactively
  • Early computers were dedicated to a single user,
    who supplied instructions via a typewriter
    (slowly) or punched cards (faster). As Operating
    Systems developed, they used the typewriter as
    Operator console: the control point from which
    programs were started and stopped. This console
    was the model for the time-sharing systems of the
    late '60s: implicitly as in Unix, or explicitly as
    in IBM's CP/67, which shared out system resources
    into Virtual Machines, with console, disks,
    card-punches etc. The Control Program created
    virtual machines that could run any program that
    would run on a hardware System/360, and the other
    component was a simple monitor to support a
    single user on that virtual machine.
  • Initially, each virtual console was a typewriter
    terminal with a direct wire to the mainframe, but
    this gets expensive, so techniques developed to
    attach many terminals down a single line. But
    it's costly to transmit every keystroke with
    controls to indicate which device it's from, and
    to have the processor waiting to unpick them. So
    terminals were provided with buffers for the user
    to type into, and data was sent in a block when
    s/he pressed the Enter key. This mode of
    operation took off with the IBM 3270 terminal,
    which held 24 lines of 80 characters.
  • Although PC screens look similar, the connection
    from keyboard to processor is more direct and
    very fast, so programs quickly took advantage of
    ability to see every keystroke (again!). The
    high bandwidth to the screen was exploited for
    graphics and typographical fonts, and for WIMP
    interfaces such as MacOS, GEM, OS/2 and Windows.
  • PCs remain inferior to mainframes for handling
    shared data, and have a high cost of maintenance.
    How do you keep all your PC software up to date
    and compatible?

17
Basic Concepts Covered So Far
  • Faking concurrency
  • multiprogramming: appearing to do several things
    at once
  • processes and threads
  • For multiple users or one user with several
    balls in the air
  • Interrupts
  • There's more to come on
  • I/O buffering
  • Spooling (offline, online)
  • Multiprocessing
  • Using multiple processors to get more power
  • symmetrical, clustering or master-slave

18
Summary of Concepts
  • This race through the history of Operating
    systems has introduced most of the basic concepts
    on which modern computer systems rest.
  • The concept of concurrency depends on switching
    the attention of a processor between many
    different tasks, such that each of them makes
    enough progress to satisfy its human user, or to
    handle real-world events sufficiently quickly (if
    you have a computer controlling the flying
    surfaces of your Airbus, you can't afford to wait
    too long before responding to a change!). The
    computer in your cell-phone is capable of
    detecting a call, or switching cells at the same
    time as you are entering a number into memory or
    writing a text message.
  • In most systems, there are two levels of tasks,
    called processes and threads. The higher-level
    task is a process, and is associated with a
    complete system state (address-space, control
    blocks, processor state). It's quite
    time-consuming to restore this whole state, so
    processes are often divided into threads. These
    lower-level tasks can share characteristics, such
    as the process address-space, so it's not
    necessary to reload this when switching threads.
    We'll see later how high-performance transaction
    systems depend on thread-based despatching.
  • When we need more power than a processor can
    provide, the obvious solution is to add more
    processors. Sometimes it's on a very low level,
    with separate engines inside the processor for
    different functions such as instruction decoding,
    floating point, and graphics. Where an
    instruction passes through several of these, we
    say the processor is pipelined. Where there's
    division of labour between different engines, we
    call them coprocessors.
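A toy cost model may clarify why thread switching is cheaper than process switching. The cost units here are entirely invented; the point is only that a switch between threads of one process skips the address-space reload.

```python
# Toy model of switch costs: switching processes reloads the whole
# address-space state, while threads of one process share it, so a
# thread switch only saves/restores register state. Units invented.

ADDRESS_SPACE_COST, REGISTER_COST = 100, 5

def switch_cost(old, new):
    cost = REGISTER_COST                   # always save/restore registers
    if old["process"] != new["process"]:
        cost += ADDRESS_SPACE_COST         # reload address space too
    return cost

t1 = {"process": "P1", "thread": "a"}
t2 = {"process": "P1", "thread": "b"}      # sibling thread, same process
p2 = {"process": "P2", "thread": "a"}      # different process

print(switch_cost(t1, t2))   # 5   (thread switch within P1)
print(switch_cost(t1, p2))   # 105 (full process switch)
```

This is why, as the notes say, high-performance transaction systems prefer thread-based despatching.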

19
What Operating Systems Do
  • OS/360 and Batch Scheduling
  • How Online Computing is different

20
OS/360
  • Betting the Business for IBM in late 1960s
  • Environment
  • Batch processing
  • Real memory only (Ferranti Atlas had paging, but
    the idea hadn't yet crossed the Atlantic)
  • Physical cards used for job entry
  • Line-printers used for output (up to 1000
    lines/min)
  • Tapes and disks expensive but heavily used
  • Concepts
  • Job Control Language (JCL) to describe each job
  • Each program ran in a Job Step

21
Structure of a Job
  • Must run steps in sequence
  • Don't run step if previous one failed
  • Must have input available at start of step
  • Need somewhere to write results
  • Usually generate spool output
  • Normally expects the Program run in each step to
    be on disk

//ERIC JOB
//COMP EXEC PGM=PLIOPT
//SYSIN DD
//SYSPRINT DD
//SYSOUT DD DSN=ERIC.PLI.OBJ
//LINK EXEC PGM=LKEDIT
... and so on
22
Scheduling Jobs
  • Back in the days of monitors, scheduling was easy
  • When a job finishes, load the next one and run it
  • If I/O is spooled, next one will be loaded from
    spool file that contains images of cards read in
    earlier..
  • ..and CPU time needs to be shared between the
    running job and spool I/O (which uses predictably
    little CPU)
  • Gets harder when more than one real job can run
  • Have to match resource requirements with
    availability
  • Need to be concerned with sequence of jobs
  • Optimization needs awareness of job type
    (processor-heavy, I/O-heavy, etc.)
  • Specified with complex Job Control Language

23
OS/360 Batch Scheduling
  • Memory was the critical resource (>1000 per
    kilobyte)
  • Divided into partitions or regions, each with a
    job initiator
  • OS reads each job in and places it on a
    (prioritized) queue
  • When a job completes in a region, the initiator
    looks on the queue for more work
  • Must fit in available memory
  • Necessary resources must be available
  • Otherwise continue looking down the queue
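The initiator's search down the queue can be sketched as follows. The job names, memory sizes, and region size are invented; note that the scheduler skips BIGJOB rather than blocking the region behind it.

```python
# Sketch of an OS/360-style initiator: when a region frees up, walk
# the prioritized queue and start the first job that fits available
# memory. Job sizes and the region size are invented for illustration.

def pick_job(queue, region_size):
    """Return the first queued job that fits the region, or None."""
    for i, job in enumerate(queue):
        if job["memory"] <= region_size:
            return queue.pop(i)      # remove it from the queue and run it
    return None

queue = [{"name": "BIGJOB", "memory": 512},   # highest priority first
         {"name": "MEDJOB", "memory": 128},
         {"name": "TINY",   "memory": 32}]

job = pick_job(queue, region_size=200)   # BIGJOB doesn't fit, skip it
print(job["name"])                       # MEDJOB
print([j["name"] for j in queue])        # ['BIGJOB', 'TINY']
```

The skipped BIGJOB stays at the head of the queue, which is exactly how big jobs can queue for a long time while small regions sit free.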

24
Batch Scheduling Problems
  • Storage
  • With fixed-size partitions, you can have long
    queues for a big partition while one or more
    small ones are free
  • With variable-sizes (regions), you get
    fragmentation
  • Various steps may have different space
    requirements
  • Resource allocation
  • Need to collect together all resources needed by
    a step such as files and dedicated hardware
  • Can still waste run time (e.g. if you get a tape
    drive but haven't mounted the right tape on it)
  • Addressed by extensions to Job Control
    Language (even more chances to make mistakes with
    it!)

25
Risks of Deadlock
  • OS/360 enforced pre-allocation to avoid deadlock
  • Consider two jobs, A and B
  • Job A
  • is holding tape drive 1
  • can't complete until it can update file X.Y.Z
  • Job B
  • is writing file X.Y.Z
  • can't close the file until it's written a log to
    tape
  • BUT there are no tape drives available
  • Therefore both jobs will stall, tying up both
    regions
  • But you can't pre-allocate with on-line users
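The pre-allocation rule can be sketched in Python: a job starts only if every resource it declares is free, and the grant is all-or-nothing. The resource names follow the slide; the code itself is an invented illustration, not OS/360's actual allocator.

```python
# Sketch of pre-allocation to avoid deadlock: a job only starts when
# every resource it declares is free, granted atomically, so the A/B
# stall above cannot arise. Resource pool is invented for illustration.

free = {"tape1", "tape2", "file X.Y.Z"}

def try_start(job, needs):
    """Grant all of a job's declared resources, or none of them."""
    if needs <= free:                 # set containment: all available?
        free.difference_update(needs)
        return True
    return False                      # job waits on the queue, holding nothing

started_a = try_start("A", {"tape1", "file X.Y.Z"})
started_b = try_start("B", {"tape2", "file X.Y.Z"})  # file already held by A

print(started_a, started_b)           # True False
```

Because B never starts while holding a partial set of resources, it cannot tie up a region waiting on A, and the circular wait is impossible. The cost, as the slide notes, is that you must know all of a job's resources up front, which is exactly what you cannot do for on-line users.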

26
Scheduling Today
  • We're not usually concerned with resources on a
    home PC
  • Schedule an anti-virus upgrade and it'll run
    quickly (though running the virus checker might
    be much slower)
  • We may be concerned about sequence
  • Better to be sure the upgrade is complete before
    scanning
  • In business systems
  • Duration can be longer (that's why ITCS
    constrains disk space, otherwise the back-ups
    would take too long)
  • Sequence can be critical: don't pay out before
    taking in
  • Every transaction must run once and once only

27
Scheduling and Dispatching
  • Once we've scheduled jobs to start, we have to
    divide machine cycles between the concurrent
    processes
  • With a small degree of multiprogramming (few
    regions), that can be fairly simple
  • Whenever a region waits, we dispatch another one
  • Dispatcher is like a micro-level scheduler
  • Can have different dispatch and initiator
    priorities
  • Storage limitations addressed with Virtual Memory
  • Allowed increase in number of initiators
  • Increasing degree of multiprogramming ..
  • .. and need for more complex dispatching:
    there's no point in dispatching a process
    without (real) memory

28
Online Systems
  • Expands scale of allocation and dispatching
    problems
  • Jobs become very long-running
  • so we can't pre-assign all resources and hold
    until job end
  • Total number of jobs grows greatly
  • Jobs need to sleep when user isn't interacting
  • Each interaction is like a mini job step
  • Two basic approaches to managing this
  • Treat each user as a process and tune the
    dispatcher
  • Unix, VM/370, MVS/TSO and VMS work this way
  • Treat each user as a thread on a timesharing
    process
  • CICS and IMS work this way
  • High-performance web servers too

29
Time-sharing
  • Consider a system with 100 on-line users
  • Average think time 10 seconds (so overall arrival
    rate is 10 interactions per second)
  • Average CPU demand 50msec per interaction
  • Therefore load should be 50% plus overhead
  • If load is homogeneous, round-robin dispatching
    is OK
  • But what if a few users need 1 sec and the rest 10
    msec?
  • Queue will build up if long task is allowed to
    complete
  • OK, let's preempt a task on expiry of a time-slice:
    suspend the task (take the CPU away) if it takes too
    long, and make it wait until next time round
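As a check on the arithmetic in these bullets, in Python:

```python
# The load calculation from the slide: 100 users each thinking 10 s
# between interactions give 10 interactions/s; at 50 ms of CPU each,
# that is 0.5 s of CPU demanded per second, i.e. 50% utilization
# before any overhead.

users = 100
think_time = 10.0             # seconds between interactions, per user
cpu_per_interaction = 0.050   # seconds of CPU per interaction

arrival_rate = users / think_time            # interactions per second
utilization = arrival_rate * cpu_per_interaction

print(arrival_rate)     # 10.0
print(utilization)      # 0.5
```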

30
Life Cycle of an Interaction
[Slide diagram: life cycle states. User hits ENTER -> Task Created ->
Executable Task placed on Ready Queue -> Dispatched -> Running (task
runs until interrupted for some reason). From Running: Finishes ->
Terminated; Time expires -> back to Ready Queue; Enters WAIT ->
Blocked -> Requeued on Event]
31
Dispatching Tasks
  • Let's assume a Ready Queue of tasks waiting to
    run
  • OS adds tasks to queue according to a priority
    pattern (cheapest is FIFO, but we may want to
    improve on that)
  • Dispatch the task at the head of the queue
  • When it gives up control, dispatch new head of
    queue
  • May want to maintain queue of blocked tasks too
  • What if a task doesn't give up control?
  • Need to interrupt it at end of time slice to let
    others run (Windows 3.x didn't do this except for
    DOS tasks)
  • Can return it directly to Ready Queue (with risk
    that it'll consume too much CPU time)
  • Maybe we should favour short interactions over
    long
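The time-sliced Ready Queue described above can be sketched with a Python deque. Task lengths and the quantum are invented; the behaviour to note is that pre-emption lets the short interactions finish ahead of the long one.

```python
# Sketch of round-robin dispatching from a FIFO Ready Queue:
# pre-empt at the end of each time slice and requeue at the back,
# so a long task can't starve the short ones. Numbers are invented.

from collections import deque

def round_robin(tasks, quantum):
    """tasks: list of (name, cpu_needed). Returns completion order."""
    queue = deque(tasks)
    finished = []
    while queue:
        name, remaining = queue.popleft()    # dispatch head of queue
        remaining -= quantum                 # run for one time slice
        if remaining <= 0:
            finished.append(name)            # gave up control by finishing
        else:
            queue.append((name, remaining))  # time expired: requeue at back
    return finished

# One long task amid short ones: the short ones finish first.
order = round_robin([("long", 100), ("s1", 10), ("s2", 10)], quantum=10)
print(order)     # ['s1', 's2', 'long']
```

Without the quantum (if "long" ran to completion), s1 and s2 would each wait behind 100 units of work, which is exactly the queue build-up the previous slide warns about.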

32
Graphical Operating Systems
  • Development of Windows

33
Graphical Operating Systems
  • WIMP concepts
  • Windows, Icons, Menus, Pointer
  • invented late '70s at Xerox Palo Alto Research
    Center
  • First marketed in the Lisa machine (very expensive)
  • Later in the Xerox Daisy (still too costly to
    succeed)
  • Apple made a success of the idea in the Macintosh
    (1984)
  • Massive advertising campaign
  • Successful first with DTP and designers
  • IBM and Microsoft followed with 16-bit OS/2 V.1
    (1987)
  • Full WIMP approach in 1988
  • IBM went it alone for OS/2 V2: incompatible
    interface

34
Early Flavours of Windows
  • Windows 1 and 2
  • Never really made it: Lisa ideas done less well
    than Mac
  • OS/2 Version 1 (Microsoft/IBM collaboration)
  • Windows 386
  • Exploited 80386 virtual machine mode with the Win2
    interface
  • Windows 3.0 (1990): the breakthrough
  • Picked up OS/2 V1 user interface, with simpler
    API
  • Added Windows 386 process mgt (multiple DOS
    boxes)
  • Still no pre-emption of ill-behaved Windows
    processes
  • Windows 3.1
  • Enhanced Win3.0 without major architecture
    changes
  • Added some new GUI controls

35
Flavours of Windows since 1993
  • Windows for Workgroups (3.1 and 3.11)
  • Added networking capabilities
  • Introduced (some) 32-bit code, such as file I/O
  • You could bolt on IP network support (free but
    not trivial)
  • Windows 95: new user interface
  • Integrated IP networking
  • Much more 32-bit code
  • Long file-names (and file-types, unlike OS/2)
  • Still not fully pre-emptive multitasking, but
    improved capability to detect and abort
    ill-behaved processes
  • Windows 98: minor changes from W95
  • Adds Internet-like interface to Windows 95
  • Meant to be last of the line of Windows 3.x
    successors

36
Windows NT
  • NT V3.x
  • Microsoft's OS/2 successor, with full Windows GUI
  • Kernel based on experiences with VMS (at Digital)
  • Data permissions RACF-style, by user and group
  • 32-bit, with some limitations on use of 16-bit
    applications
  • NT V4.0
  • Enhanced from NT V3.51
  • Added Windows 95 user interface
  • Improved tolerance of 16-bit applications
  • providing they don't try to access the hardware
    directly
  • Full pre-emptive multitasking: Windows processes
    get time-sliced, so ill-behaved ones can't hog
    the processor

37
Current Windows Flavours
  • Windows 2000: intended to unify NT and 9x
    families
  • To avoid duplicate development effort
  • Replaced NT for professionals and large and small
    servers
  • But the Domestic version didn't run all W98
    software, so
  • Millennium Edition (of Windows 98)
  • Stopgap because 98/NT integration wasn't complete
  • Windows XP finally did unite NT and 9x families
  • Comes in versions for different purposes
  • XP Home edition,
  • Professional edition for corporate clients
  • Servers
  • Similar approach with Windows Vista (2007)

38
Summary of Week 2
  • Computer does simple things, in sequence
  • Instruction counter contains address of next
    instruction to run
  • Operating systems package these into useful
    facilities
  • Including ability to run programs concurrently
  • Processor is a resource that the Operating System
    uses
  • OS treats each program as a task
  • Gives it a bit of time on the processor
  • Then passes the processor to another task
  • OS gets control back through interrupts
  • Voluntary (supervisor call by user program)
  • External (such as disk I/O completion, timer
    interrupt)

39
Checkpoint Questions
  • Describe differences between B2C and B2B e-Commerce
  • 1.
  • 2.
  • What are the main functions of an Operating
    System?
  • 1. 2.
  • 3. 4.
  • What does the Instruction Counter point to?