Test Execution and Automation
1
Test Execution and Automation
  • Lecture 7
  • Reading Ch. 17

2
Generating Test Cases
[Diagram: Functional specification --identify--> Independently testable feature; the feature --identify--> Representative values and --derive--> Model; values and model --derive--> Test case specifications --generate--> Test case]
3
Generating Test Cases
  • Test design often yields test case
    specifications, rather than concrete data
  • Ex: "a large positive number", not 420023
  • Ex: "a sorted sequence, length > 2", not
    (Alpha, Beta, Chi, Omega)
  • Other details needed for execution may be omitted
  • Generation creates concrete, executable test
    cases from test case specifications
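The step from specification to concrete data can be sketched in Python (a minimal sketch; the generator functions and the fixed seed are illustrative assumptions, not from the slides):

```python
import random

def a_large_positive_number(rng):
    """Spec: 'a large positive number' -> one concrete value."""
    return rng.randint(10**5, 10**9)

def a_sorted_sequence(rng, min_len=3):
    """Spec: 'a sorted sequence, length > 2' -> one concrete sorted list."""
    length = rng.randint(min_len, min_len + 7)
    return sorted(rng.randint(0, 100) for _ in range(length))

# A fixed seed keeps the generated test cases reproducible across runs.
rng = random.Random(42)
n = a_large_positive_number(rng)
seq = a_sorted_sequence(rng)
assert n > 0
assert len(seq) > 2 and seq == sorted(seq)
```

Re-running with the same seed regenerates identical concrete cases, so a failing test can be reproduced.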

4
Issues in Generating and Executing Test Cases
  • Need to deal with incomplete code
  • Requires building scaffolding
  • Need to be able to control the input and observe
    relevant output
  • Another use for scaffolding
  • Want to automate this process as much as possible
  • test execution
  • checking pass/fail
  • other parts of the testing process

5
Test Harness (according to Wikipedia)
  • A test harness or automated test framework is a
    collection of software and test data configured
    to test a program unit by running it under
    varying conditions and monitoring its behavior
    and outputs.
  • A test harness has two main parts:
  • test execution engine
  • test script repository
  • Test harnesses allow for the automation of tests.
    They can call functions with supplied parameters
    and print out and compare the results to the
    desired value. The test harness is a hook to the
    developed code, which can be tested using an
    automation framework.
  • The typical objectives of a test harness are to:
  • Automate the testing process.
  • Execute test suites of test cases.
  • Generate associated test reports.

6
Controllability & Observability
Example: We want automated tests, but interactive
input provides limited control and graphical
output provides limited observability.

[Diagram: GUI input (MVC "Controller") --> Program Functionality --> Graphical output (MVC "View")]
7
Controllability & Observability

A design for automated testing provides interfaces
for control (an API driven by a test driver) and
for observation (a capture wrapper that logs the
behavior behind the graphical output).

[Diagram: Test driver --> API --> Program Functionality --> Capture wrapper --> Log behavior; the GUI input (MVC "Controller") and Graphical output (MVC "View") are bypassed]
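This design for testability can be sketched as follows (a minimal sketch; the `Counter` and `CaptureWrapper` names are illustrative assumptions, not from the slides):

```python
class Counter:
    """Stand-in for 'Program Functionality', exposed through an API."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

class CaptureWrapper:
    """Wraps the output channel so tests can observe behavior via a log."""
    def __init__(self):
        self.log = []

    def display(self, value):
        # The MVC 'View' would render this; the wrapper records it
        # instead, giving the test observability.
        self.log.append(value)

# Test driver: control the program through the API, observe via the log.
model, view = Counter(), CaptureWrapper()
for _ in range(3):
    view.display(model.increment())
assert view.log == [1, 2, 3]
```

The test never touches the GUI: control comes in through the API and observation comes out through the captured log.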
8
Generic or Specific Scaffolding?
  • How general should scaffolding be?
  • We could build a driver and stubs for each test
    case
  • ... or at least factor out some common code of
    the driver and test management
  • ... or further factor out some common support
    code, to drive a large number of test cases from
    data
  • ... or further generate the data automatically
    from a more abstract model
  • Need to balance costs and re-use
  • Just as for other kinds of software

9
DISCUSSION QUESTIONS
10
Oracles
  • Did this test case succeed or fail?
  • Manually checking the output is possible, but
    humans can make mistakes.
  • No use running 10,000 test cases automatically if
    the results must be checked by hand!
  • Need to use an automated test oracle.
  • The ideal approach is a comparison-based oracle
    that compares predicted with actual output values.
  • Can be difficult to implement.
  • Partial oracles may be used to test a particular
    property or part of the output.
  • Does not fully check the output.
  • Could use multiple partial oracles.

11
Oracle Tradeoffs
  • Completeness (how much output is compared?)
  • Accuracy (are the answers correct?)
  • Independence of the oracle from the subject under
    test
  • Performance
  • Usability of results
  • Maintaining correspondence of the oracle through
    changes in the subject under test

12
Comparison-based Oracles
  • With a comparison-based oracle, we need predicted
    output for each input
  • Oracle compares actual to predicted output, and
    reports failure if they differ
  • Fine for a small number of hand-generated test
    cases
  • hand-written unit test cases
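A comparison-based oracle can be sketched as a table of (input, predicted output) pairs (an illustrative sketch; `square` stands in for the subject under test):

```python
def square(x):
    """Subject under test (illustrative)."""
    return x * x

# Hand-written test cases: each input paired with its predicted output.
test_cases = [
    (0, 0),
    (3, 9),
    (-4, 16),
]

# The oracle compares actual to predicted and reports any differences.
failures = []
for given, predicted in test_cases:
    actual = square(given)
    if actual != predicted:
        failures.append((given, predicted, actual))

assert not failures, f"oracle reported failures: {failures}"
```

This works well for a small, hand-generated suite; the cost is writing a predicted output for every input.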

13
True Oracles
  • Reproduces all relevant results using an
    independent mechanism.
  • For mathematical functions, could use a different
    algorithm.
  • Key issues
  • Oracle could produce incorrect answers
  • Oracle could be as complex as the subject under
    test
  • Performance
  • Dependence on the subject under test
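A true oracle for a mathematical function can be sketched by reproducing the result with an independent algorithm (an illustrative sketch: a hand-rolled insertion sort as the subject, checked against Python's built-in `sorted`, which uses a different algorithm):

```python
import random

def insertion_sort(items):
    """Subject under test (illustrative): a simple insertion sort."""
    out = []
    for x in items:
        i = len(out)
        # Walk left past larger elements to find the insertion point.
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

# True oracle: an independent mechanism (the built-in sort) reproduces
# the full expected result for randomly generated inputs.
rng = random.Random(0)
for _ in range(100):
    data = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
    assert insertion_sort(data) == sorted(data)
```

Note the tradeoffs from the slide apply: the built-in sort could itself be wrong, and it is at least as complex as the subject.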

14
Checking Oracles / Partial Oracles
  • In some cases, it is possible to test something
    using a simple check.
  • For example, division can be checked using
    multiplication
  • Often these oracles are partial oracles because
    they only check a part of the output or a
    specific property.
  • Examples:
  • Finding an optimal bus route from A to B: check
    that the route is valid; does not check that the
    route is optimal.
  • Sorting an array: check that the array is
    nondecreasing; does not check that the array contains
    the same elements as the input.
  • Inserting into a list: check that the added
    element is in the list; does not check that the rest
    of the list is intact.
  • Can be implemented as self-checks in the code via
    assertions or other annotations.
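The sorting example above can be sketched as a partial oracle (an illustrative sketch; `buggy_sort` is a deliberately faulty subject, invented here to show what the check misses):

```python
def is_nondecreasing(seq):
    """Partial oracle: checks ordering only, not content."""
    return all(a <= b for a, b in zip(seq, seq[1:]))

def buggy_sort(items):
    """A faulty 'sort' that drops elements (illustrative)."""
    return sorted(items)[:1]

data = [3, 1, 2]

# A correct sort passes the partial check...
assert is_nondecreasing(sorted(data))

# ...but so does the buggy one: its output is ordered, even though
# it is not a permutation of the input. The partial oracle cannot
# catch this class of fault.
assert is_nondecreasing(buggy_sort(data))
assert buggy_sort(data) != sorted(data)
```

Combining this with a second partial oracle (e.g., comparing element multisets) would close the gap, which is why the slides suggest using multiple partial oracles.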

15
Consistent Oracles
  • Use the results from one run of a test as the
    expected results for future tests.
  • Results can come from:
  • Simulator
  • Equivalent product
  • Earlier version
  • Key issues
  • How do you check the initial version?
  • Response to changes in the system
  • The correspondence of the prior result to the new
    result.

16
Using Sampling in Oracles
  • Verify a selected sample of values.
  • Stochastic oracles randomly select the inputs.
  • Sampling oracles use selected interesting cases.
  • Need to compute the answers for these inputs
  • Manually (like a comparison-based oracle)
  • Separate algorithm (like a true oracle)
  • Can use a partial checking oracle
  • A heuristic oracle computes the remaining values
    (between the selected values) using a simpler
    algorithm

17
Automating Test Execution
  • Designing test cases and test suites is creative
  • Like any design activity: a demanding
    intellectual activity, requiring human judgment
  • Executing test cases should be automatic
  • Design once, execute many times
  • Test automation separates the creative human
    process from the mechanical process of test
    execution

18
Test Automation: SEARCH
  • From "How to Automate Testing: The Big Picture"
    by Keith Stobie and Mark Bergman
  • Setup: set up the software to bring it to a point
    where the test can be executed.
  • Execution: execute the test.
  • Analysis: analyze the execution to determine if
    the test passed or failed.
  • Reporting: display or store the analysis.
  • Cleanup: return the software to a known state so
    the next test can be run.
  • Help: a help system assists in documenting and
    maintaining the tests.

19
SEARCH: Automating Setup
  • Need to establish pre-conditions necessary for
    the test
  • Program needs to be in state X.
  • Variable Y needs to be initialized with Z.
  • Need to run with operating system X version V.
  • Server to communicate with must be in state X.
  • For unit testing, unit testing framework can help
    with setup.
  • Some setup can be done by the test itself.
  • Nontrivial for testing with different
    configurations.
  • Want to use imaging and virtual machines.
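For unit testing, the setup the slide mentions maps directly onto a framework's fixture hooks. A sketch with Python's `unittest` (the config-file fixture is an illustrative assumption):

```python
import os
import tempfile
import unittest

class ConfigFileTest(unittest.TestCase):
    def setUp(self):
        # Setup: establish the precondition -- a config file with
        # known contents must exist before the test runs.
        fd, self.path = tempfile.mkstemp(suffix=".cfg")
        with os.fdopen(fd, "w") as f:
            f.write("mode=test\n")

    def tearDown(self):
        # Cleanup: remove the fixture so the next test starts
        # from a known state.
        if os.path.exists(self.path):
            os.remove(self.path)

    def test_config_readable(self):
        with open(self.path) as f:
            self.assertIn("mode=test", f.read())

suite = unittest.TestLoader().loadTestsFromTestCase(ConfigFileTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

`setUp` runs before each test and `tearDown` after it, so every test sees the same precondition regardless of execution order.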

20
SEARCH: Automating Execution
  • Simple way to automate execution is via scripts.
  • Most useful for command-line driven programs.
  • For GUIs, can use capture / replay
  • Manually enter and record the inputs once.
  • Then replay the inputs again.
  • More on this later.
  • Better way is to use a test harness.
  • Good harnesses are configurable and extensible.
  • The harness will typically also Analyze the
    execution and Report the results.

21
Sample Test Harness Architecture
[Diagram: the Test Harness reads Configuration Data, loads the Test Cases from a Test Case Library (.dll) that contains them, and performs Logging]
22
SEARCH: Automating Analysis
  • Need to determine if the test passed or failed.
  • Oracles are typically used to determine whether
    the test passed or failed.
  • There may be more results than simply pass or
    fail:
  • Skip: the test was skipped because the
    preconditions were not met.
  • Abort: the test was aborted due to some external
    failure (e.g., input files not available).
  • Block: the test was blocked from running due to a
    known issue.
  • Warn: the test passed but emitted warnings that
    need further examination.
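The Skip outcome can be demonstrated with Python's `unittest` (a minimal sketch; the input-file precondition and its path are illustrative assumptions):

```python
import os
import unittest

class DataFileTest(unittest.TestCase):
    def test_with_external_input(self):
        # Precondition: an external input file must be available.
        # When it is not, report Skip rather than pass or fail.
        if not os.path.exists("/nonexistent/input.dat"):  # hypothetical path
            self.skipTest("input file not available")
        self.fail("the real check would run here")

suite = unittest.TestLoader().loadTestsFromTestCase(DataFileTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)

# The run is recorded as skipped: neither a pass nor a failure.
assert len(result.skipped) == 1
assert result.wasSuccessful()
```

Distinguishing Skip from Fail keeps environment problems from being misread as product defects in the test report.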

23
SEARCH: Automating Reporting
  • Basically, log the results of each test into a
    log file or database.
  • Can also log other information such as coverage.
  • For large test suites, it is desirable to have
    some post-processing that automates the parsing
    of the log files.
  • Statistics
  • List of failing tests along with relevant
    information to help debugging

24
SEARCH: Automating Cleanup
  • Cleanup resets the program back to the state it
    was in before the test.
  • One way is to start everything over from scratch.
  • Could be time consuming if setup is expensive.
  • Potential issues
  • Memory leaks
  • Corruption between subsequent tests

25
SEARCH: Help
  • Automated tests are often code too.
  • As such, they need to be treated as code
  • Documented (including limitations)
  • Maintainable
  • Well structured
  • According to the book "How We Test at Microsoft":
  • Automated tests for many Microsoft products have
    more lines of code than the products they test.
  • More than 100,000 computers at Microsoft are
    dedicated to automated testing.

26
When to use automation?
  • What factors do we need to consider when
    automating?

27
Manual Testing
  • Some tests are better left to humans to run and
    evaluate.
  • Graphics
  • Response time
  • Sound
  • Selecting inputs, especially in a GUI
  • Challenges:
  • Expensive to run multiple times.
  • Testers may miss steps, making it hard to trace
    the failure (logging could help).
  • Testers may miss deviations from normal execution.

28
Capture and Replay
  • Using capture and replay can help with testing
    GUIs.
  • Reduces the repetition of human testing
  • Capture a manually run test case, replay it
    automatically
  • Uses a consistent oracle: behavior is the same as
    previously accepted behavior
  • How do you capture events?
  • Record every keystroke and mouse activity.
  • Easily invalidated with small changes to the GUI.
  • Record a list of event handlers.
  • Does not depend on the layout of the GUI.
  • Could miss low-level input-related faults.
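Recording at the event-handler level can be sketched as follows (an illustrative sketch; the `TodoApp` handlers and the recording format are assumptions, not from the slides):

```python
class TodoApp:
    """Stand-in for the application behind a GUI."""
    def __init__(self):
        self.items = []

    # Event handlers the GUI widgets would normally invoke:
    def on_add(self, text):
        self.items.append(text)

    def on_delete(self, text):
        self.items.remove(text)

# Captured session: a list of (handler name, argument) pairs rather
# than raw keystrokes or mouse coordinates, so the recording survives
# layout changes to the GUI.
recording = [("on_add", "milk"), ("on_add", "eggs"), ("on_delete", "milk")]

def replay(app, events):
    """Replay the captured handler invocations against the app."""
    for handler, arg in events:
        getattr(app, handler)(arg)

app = TodoApp()
replay(app, recording)
# Consistent oracle: final state matches the previously accepted run.
assert app.items == ["eggs"]
```

Because the replay bypasses the widgets entirely, it cannot catch faults in the low-level input handling itself, which is the tradeoff the last bullet notes.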