What do we need to do automated testing?

Slides: 24
Provided by: alanwi8
Transcript and Presenter's Notes



1
Introduction
  • What do we need to do automated testing?
  • Test script
    • actions to send to the system under test (SUT)
    • responses expected from the SUT
    • how to determine whether a test was successful or not
  • Test execution system
    • mechanism to read the test script and connect the test case to the SUT
    • directed by a test controller
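The pieces above can be sketched in a few lines of Python. Everything here is an invented stand-in (the SUT function, the actions, the expected responses), not something from the slides; it only shows how a test script pairs actions with expected responses and how an execution loop decides pass/fail.

```python
# A minimal sketch of a test script plus test execution system.
def sut(action):
    # Toy stand-in for the system under test: echoes the action in upper case.
    return action.upper()

# Test script: (action to send, response expected from the SUT) pairs.
script = [("login", "LOGIN"), ("query", "QUERY"), ("logout", "LOGOUT")]

def run(script):
    # Test execution: send each action to the SUT, compare the response
    # with the expected one, and record pass/fail per step.
    results = []
    for action, expected in script:
        results.append((action, sut(action) == expected))
    return results

print(run(script))
```

A real test controller adds setup, teardown, and reporting around this same loop.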

2
Test Architecture (1)
  • Includes defining the set of Points of Control
    and Observation (PCOs)

  [Diagram: within the test execution system, a test script feeds a test
  controller, which drives the SUT through a PCO]
  • A PCO could be:
    • a particular method to call
    • a device interface
    • a network port
    • etc.
3
Test Architecture (2)
  • The test architecture will affect the test script, because which PCO is
    used for an action or response may be significant.

  [Diagram: two alternative architectures for the same message m: in one,
  the test controller reaches the SUT through PCO 1 or PCO 2; in the other,
  through a single PCO]
4
Potential PCOs
  • Determining the PCOs of an application can be a challenge.
  • Potential PCOs:
    • direct method call (e.g. JUnit)
    • user input / output
    • data file input / output
    • network ports / interfaces
    • Windows registry / configuration files
    • log files
    • temporary files or network ports
    • pipes / shared memory
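The first entry, a direct method call, is the simplest PCO: control and observation both happen at the call boundary. The slide names JUnit; the same idea sketched with Python's unittest, with `add` and `AddTest` as illustrative names:

```python
import unittest

def add(a, b):
    # Stand-in method under test; the method call itself is the PCO.
    return a + b

class AddTest(unittest.TestCase):
    def test_add(self):
        # Control: the call with chosen arguments.
        # Observation: the returned value.
        self.assertEqual(add(2, 3), 5)

# Run the tests without exiting the interpreter.
result = unittest.main(exit=False, argv=["ignored"]).result
print(result.wasSuccessful())
```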

5
Potential PCOs (2)
  • 3rd-party component interfaces
  • Lookup facilities
    • network: Domain Name Service (DNS), Lightweight Directory Access
      Protocol (LDAP), etc.
    • local / server: database lookup, Java Naming and Directory Interface
      (JNDI), etc.
  • Calls to
    • remote methods (e.g. RPC)
    • the operating system
  • For the purposes of security testing, all of these PCOs could be a
    point of attack.

6
Distributed Test Architecture (1)
  • May require several local test controllers and a
    master test controller

  [Diagram: a master test controller directs two local test controllers,
  each driving one SUT component through its own PCO]
7
Distributed Test Architecture (2)
  • Issues with distributed testing:
    • establishing connections at PCOs
    • synchronization
    • where are pass/fail decisions made?
    • communication among test controllers
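One way those issues play out: each local controller runs its part of the test at its PCO and reports a verdict, and the master controller makes the overall pass/fail decision. The sketch below uses threads and a queue as stand-ins for real controller-to-controller communication; all names are invented.

```python
import queue
import threading

verdicts = queue.Queue()

def local_controller(name, ok):
    # A local test controller: runs its part of the test against one
    # SUT component and reports its verdict to the master.
    verdicts.put((name, ok))

workers = [
    threading.Thread(target=local_controller, args=("component1", True)),
    threading.Thread(target=local_controller, args=("component2", True)),
]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Master test controller: the overall test passes only if every
# component's local verdict was a pass.
results = [verdicts.get() for _ in workers]
print(all(ok for _, ok in results))
```

Real distributed test systems also need timeouts and synchronization points, which this sketch omits.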

8
Choosing a test architecture
  User → (mouse clicks / keyboard) → Browser → (HTTP / HTML) → Web Server → (SQL) → Database
9
Choosing a Test Architecture
  • Testing from the user's point of view
    • Need a test tool to simulate mouse events or keyboard input.
    • Need to be able to recognize correct web pages.
    • Small web page changes might require large changes to test scripts.
  • Testing without the browser
    • The test script sends HTTP commands to the web server and checks the
      HTTP messages or HTML pages that are returned.
    • Easier to do, but not quite as realistic.
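The "testing without the browser" option can be sketched directly with the standard library: the script speaks HTTP to the web server and checks the status code and returned HTML. A throwaway local server and its page content stand in for the real web server here.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # Stand-in web server: serves one fixed HTML page.
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Welcome</body></html>")
    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The test script: send an HTTP command, observe the response.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
body = resp.read()

# Pass/fail: correct status and the expected content in the HTML.
print(resp.status == 200 and b"Welcome" in body)
server.shutdown()
```

Against a real application the checks would match expected fragments of the page rather than the whole document, which keeps scripts robust to small layout changes.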

10
Test Scripts
  • What should the format of a test script be?
    • tool dependent?
    • a standard test language?
    • a programming language?

11
Test Script Development
  • Creating test scripts follows a parallel development process, including
    • requirements
    • creation
    • debugging
    • configuration management
    • maintenance
    • documentation
  • Result: they are expensive to create and maintain.

12
When to automate? (1)
  • The benefits of test automation need to be greater than the
    (expensive!) costs of automation.
  • General rule of thumb: automate when it is expected that tests will
    have to be run many times, e.g. for
    • regression testing
    • configuration testing
    • conformance testing
    • a clean-room development process
    • capacity / stress testing
    • performance measurements

13
When to automate? (2)
  • Automated testing is especially beneficial if the tests need to be
    re-executed quickly:
    • frequent recompiles
    • a large number of tests
    • an agile development process
  • An automated test can be duplicated to create many instances for
    capacity / stress testing.
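The duplication idea in miniature: a thread pool runs many concurrent instances of the same automated test. The test function here is an invented stand-in for a full scripted run against the SUT.

```python
from concurrent.futures import ThreadPoolExecutor

def one_test(i):
    # Stand-in for one full automated test run against the SUT,
    # parameterized so each instance acts as a distinct simulated user.
    return ("user%d" % i).upper() == "USER%d" % i

# Duplicate the test into 100 concurrent instances for capacity testing.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(one_test, range(100)))

print(all(results), len(results))
```

For genuine stress testing the instances would target a shared server and the interesting observation is throughput and error rate under load, not just pass/fail.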

14
When NOT to automate
  • Initial functional testing
    • Automated testing is more likely to find bugs introduced by changes
      to the code or the execution environment than bugs in new
      functionality.
    • Automated test scripts may not be ready for the first software
      release.
  • Situations requiring human judgement to determine whether the system
    is functioning correctly.

15
Capture and Playback
  • For user interface testing, one approach to automating tests is, after
    the system is working, to record the input supplied by the user and
    capture the system responses.
  • When the next version of the software needs to be tested, play back
    the recorded user input and check whether the same responses are
    detected as are stored in the capture file.
  • Benefits: relatively simple approach, easy to do.
  • Drawbacks:
    • very difficult to maintain
    • specific to one environment
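Capture and playback reduced to its essentials, with an invented SUT and capture-file name: the capture phase records known-good responses, and the playback phase replays the same inputs and flags any response that differs from the recording.

```python
import json

def sut(event):
    # Toy stand-in for the system's response to a recorded user input.
    return "clicked:" + event

inputs = ["ok_button", "cancel_button"]

# Capture phase: record responses while the system is known to work.
capture = {e: sut(e) for e in inputs}
with open("capture.json", "w") as f:
    json.dump(capture, f)

# Playback phase: replay the same inputs against the (possibly new)
# version and diff each response against the capture file.
with open("capture.json") as f:
    recorded = json.load(f)
diffs = [e for e in inputs if sut(e) != recorded[e]]
print("PASS" if not diffs else "FAIL: " + ", ".join(diffs))
```

The maintenance drawback shows up exactly here: any legitimate change to the responses invalidates the capture file and forces re-recording.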

16
Making the automation decision (1)
  • Will the user interface of the application be
    stable or not?
  • To what extent are oracles available?
  • To what extent are you looking for delayed-fuse
    bugs (memory leaks, wild pointers, etc.)?
  • Does your management expect to recover its
    investment in automation within a certain period
    of time? How long is that period and how easily
    can you influence these expectations?
  • Are you testing your own company's code or the code of a client? Does
    the client want (is the client willing to pay for) reusable test cases,
    or will it be satisfied with bug reports and status reports?
  • Do you expect this product to sell through
    multiple versions?

17
Making the automation decision (2)
  • Do you anticipate that the product will be stable
    when released, or do you expect to have to test
    Release N.01, N.02, N.03 and other bug fix
    releases on an urgent basis after shipment?
  • Do you anticipate that the product will be
    translated to other languages? Will it be
    recompiled or re-linked after translation (do you
    need to do a full test of the program after
    translation)? How many translations and
    localizations?
  • Does your organization make several products that
    can be tested in similar ways? Is there an
    opportunity for amortizing the cost of tool
    development across several projects?

18
Making the automation decision (3)
  • How varied are the configurations (combinations
    of operating system version, hardware, and
    drivers) in your market? (To what extent do you
    need to test compatibility with them?)
  • What level of source control has been applied to
    the code under test? To what extent can old,
    defective code accidentally come back into a
    build?
  • How frequently do you receive new builds of the
    software?
  • Are new builds well tested (integration tests) by
    the developers before they get to the tester?

19
Making the automation decision (4)
  • To what extent have the programming staff used
    custom controls?
  • How likely is it that the next version of your
    testing tool will have changes in its command
    syntax and command set?
  • What are the logging/reporting capabilities of
    your tool? Do you have to build these in?

20
Making the automation decision (5)
  • To what extent does the tool make it easy for you to recover from
    errors (in the product under test), prepare the product for further
    testing, and re-synchronize the product and the test (get them
    operating at the same state in the same program)?
  • In general, what kind of functionality will you
    have to add to the tool to make it usable?
  • Is the quality of your product driven primarily
    by regulatory or liability considerations or by
    market forces (competition)?
  • Is your organization subject to a legal
    requirement that test cases be demonstrable?

21
Making the automation decision (6)
  • Will you have to be able to trace test cases back
    to customer requirements and to show that each
    requirement has associated test cases?
  • Is your company subject to audits or inspections
    by organizations that prefer to see extensive
    regression testing?
  • If you are doing custom programming, is there a
    contract that specifies the acceptance tests? Can
    you automate these and use them as regression
    tests?
  • What are the skills of your current staff?

22
Making the automation decision (7)
  • Do you have to make it possible for
    non-programmers to create automated test cases?
  • To what extent are cooperative programmers
    available within the programming team to provide
    automation support such as event logs, more
    unique or informative error messages, and hooks
    for making function calls below the UI level?
  • What kinds of tests are really hard in your
    application? How would automation make these
    tests easier to conduct?

23
Suggested reading
  • Henk Coetzee, "Best Practices in Software Test Automation" (2005).
    Online at http://www.testfocus.co.za/Feature%20articles/july2005.htm
  • C. Kaner, "Architectures of Test Automation" (2000). Online at
    http://www.kaner.com/testarch.html
  • C. Kaner, "Improving the maintainability of automated test suites",
    Software QA, Vol. 4, No. 4 (1997). Online at
    www.kaner.com/pdfs/autosqa.pdf
  • J. Bach, "Test automation snake oil", Proceedings of the 14th Int'l
    Conference on Testing Computer Software (revised 1999). Online at
    www.satisfice.com/articles/test_automation_snake_oil.pdf