Dimensions of Testing

Transcript and Presenter's Notes
1
Dimensions of Testing, Part 6
  • EE599 Software V&V
  • Winter 2006
  • Diane Kelly, Terry Shepard

2
Dimensions of Testing, Part 6
3
Test Automation
  • "Simply throwing a tool at a testing problem
    will not make it go away."
  • Dorothy Graham, The CAST Report

4
Test Automation Outline
  • Some automation facts
  • Manual Testing vs. Automation
  • Tool support for life-cycle testing types of
    tools
  • The promise of test automation
  • Common problems of test automation
  • Process Issues
  • Avoiding the Pitfalls of Automation
  • Building Maintainable Tests
  • Evaluating Testing Tools
  • Choosing a tool to automate testing
  • Testing Tool Market
  • References: [21], [22], [23]

5
Some automation facts
  • Automating one test typically costs about 5
    times as much as a single manual test execution
  • the range is roughly 2 to 10, and can be as much
    as 30
  • savings can be as high as 80% - eventually (see
    the break-even sketch after this list)
  • Testing is an interactive cognitive process
  • automation is best applied to a narrow spectrum
    of testing
  • it does not apply to the majority of the test
    process
  • All testing needs human interaction
  • tools have no imagination
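To make these cost figures concrete, the sketch below computes the break-even point under stated assumptions: automating a test costs about 5 manual executions up front, and each automated run still consumes 20% of a manual run (result analysis, upkeep). These ratios are the slide's illustrative figures, not measured data.

```python
# Break-even model for automating one test (illustrative figures only).
def break_even_runs(automation_cost=5.0, automated_run_cost=0.2,
                    manual_run_cost=1.0):
    """Return the number of runs after which automation becomes cheaper."""
    runs = 0
    manual_total, automated_total = 0.0, automation_cost
    while automated_total >= manual_total:
        runs += 1
        manual_total += manual_run_cost
        automated_total += automated_run_cost
    return runs

print(break_even_runs())  # 7 runs with the assumed figures
```

With these numbers, automation pays for itself only after the seventh run, which is why tests that will rarely be re-run are poor automation candidates.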

6
Manual Testing vs. Automation (1)
  • Testing and test automation are different skills
  • good testers have a nose for defects
  • good automators are skilled at developing test
    scripts
  • Tool support is most effective after test design
    is done
  • there is more payback in automating test
    execution and comparison of results than in
    automating test case generation, coverage
    measurement, and other test design activities
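A minimal sketch of what automating test execution and comparison means in practice: run the program under test and diff its output against a stored expected result. The command and file names are hypothetical placeholders.

```python
# Execute-and-compare harness: run the system under test, then compare
# its output to a stored "golden" file. All names here are placeholders.
import subprocess

def run_and_compare(command, input_path, expected_path):
    result = subprocess.run(command + [input_path],
                            capture_output=True, text=True)
    with open(expected_path) as f:
        expected = f.read()
    return "PASS" if result.stdout == expected else "FAIL"

print(run_and_compare(["./app_under_test"], "case1.in", "case1.expected"))
```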

7
Manual Testing vs. Automation (2)
  • Don't evaluate manual testing against automated
    testing for cost justification
  • manual testing and automated testing are two
    different processes
  • treat test automation as one part of a
    multifaceted test strategy
  • Don't decide on automation simply on the basis of
    saving money
  • testers typically don't end up with less work to
    do

8
Tool support for life-cycle testing: types of
tools
  • test case generators, test data generators
  • e.g. derive test inputs from a specification
  • e.g. extract random records from a database (see
    the sketch after this list)
  • test management: plans, tracking, tracing, ...
  • static analysis
  • coverage
  • configuration managers
  • complexity and size measurers
  • dynamic analysis
  • performance analyzers
  • capture-replay
  • debugging tools used as testing tools
  • network analyzers
  • simulators
  • capacity testing
  • test execution and comparison
  • compilers
  • ...
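As one worked example from this list, a test data generator that extracts random records from a database. This is a minimal sketch using Python's standard sqlite3 module; the database file, table name, and sample size are hypothetical.

```python
# Test data generator: pull a random sample of records to use as inputs.
# Database path and table name are hypothetical; ORDER BY RANDOM() is
# SQLite-specific and other DBMSs spell it differently.
import sqlite3

def random_records(db_path, table, n=10):
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute(
            f"SELECT * FROM {table} ORDER BY RANDOM() LIMIT ?", (n,))
        return cursor.fetchall()
    finally:
        conn.close()

sample = random_records("production_copy.db", "customers", n=5)
```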

9
The promise of test automation: What are the
potential benefits? (1)
  • run more tests more often
  • run tests that are difficult or impossible
    manually
  • e.g. simulate 200 users (see the sketch after
    this list)
  • e.g. check for events with no visible output in
    GUIs
  • better use of resources
  • testers are less bored, make fewer mistakes, and
    have more time
  • to design more test cases
  • to run those tests that must be run manually
  • use CPU cycles more effectively
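The "simulate 200 users" example can be sketched with ordinary threads; one_user_session below is a hypothetical stand-in for whatever a real user session would do against the system under test.

```python
# Simulate many concurrent users, each running one session.
# one_user_session() is a placeholder for real load-generating work.
from concurrent.futures import ThreadPoolExecutor

def one_user_session(user_id):
    # A real load test would issue requests to the system under test here.
    return f"user {user_id} done"

with ThreadPoolExecutor(max_workers=200) as pool:
    results = list(pool.map(one_user_session, range(200)))

print(len(results), "sessions completed")
```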

10
The promise of test automation: What are the
potential benefits? (2)
  • consistency and repeatability of tests
  • increased reuse of tests leads to better test
    design and better documentation
  • meeting quality targets in less time
  • increased confidence/quality/reliability
    estimation
  • reduce regression testing costs

11
Common problems of test automation
  • unrealistic expectations
  • poor testing practice
  • automating chaos just gives faster chaos
  • expectation that automation will increase defect
    finding
  • false sense of security
  • maintenance of automated tests (fragility
    issues)
  • technical problems
  • organizational issues
  • test automation is an infrastructure issue, not a
    project issue

12
Process Issues
  • Which tests to automate first?
  • do not automate too much too fast
  • Selecting which tests to run when (see the
    tag-based selection sketch after this list)
  • subset of test suites
  • Order of test execution
  • Test status
  • pass or fail
  • Designing software for automated testing
  • Synchronization
  • Monitoring progress of automated tests
  • Processing possibly large amounts of test output
  • Tailoring your regime around test tools
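One concrete way to select which tests to run when is to tag tests and filter by tag, e.g. a quick "smoke" subset versus the full regression suite. A minimal sketch; the registry, tags, and test names are invented for illustration.

```python
# Select a subset of the suite by tag (e.g. "smoke" vs. full regression).
# The tag names and tests are illustrative, not from any particular tool.
TESTS = {
    "test_login":    {"smoke", "regression"},
    "test_checkout": {"regression"},
    "test_search":   {"smoke"},
}

def select(tag):
    return [name for name, tags in TESTS.items() if tag in tags]

print(select("smoke"))  # ['test_login', 'test_search']
```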

13
Avoiding the Pitfalls of Automation (1)
  • Get your test strategy clear before
    contemplating automation
  • Tests have to be debugged
  • test automation is software development and must
    be done with same care
  • Test automation can encourage a proliferation of
    useless test cases
  • evaluate your test suites and clean them up
  • The hardest part of automation is interpreting
    results
  • human effort is required here

14
Avoiding the Pitfalls of Automation (2)
  • Test automation (esp. test case generation) can
    lead to
  • a set of weak shallow tests
  • tests that ignore interesting bugs
  • the tester spending a lot of time on extraneous
    activities related to the tools being used
  • Is it useful to repeat the same tests over and
    over?
  • in a study at Borland, over 80% of bugs were
    found manually

15
Building Maintainable Tests (1)
  • Don't let the test suite become too big
  • before adding any new test, ask what the test
    contributes to
  • defect finding capability
  • likely maintenance cost
  • Ensure that test designers and test builders
    limit their use of disk space
  • large amounts of test data have an adverse impact
    on test failure analysis and debugging effort
  • Keep functional test cases as short in time and
    as focused as possible

16
Building Maintainable Tests (2)
  • Design tests with debugging in mind (see the
    sketch after this list)
  • ask: "What would I like to know when this test
    fails?"
  • Start cautiously when designing tests that chain
    together
  • if possible, use "snapshots" to restart a chain
    of test cases after one fails
  • Adopt a naming convention for test elements
  • Document test cases
  • overview of test items
  • annotations in scripts
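The "what would I like to know when this test fails?" question translates directly into failure messages that carry context. A minimal sketch; the checked function is invented.

```python
# Design for debugging: a failing test should report its inputs and both
# values, not just "assertion failed". check_total() is an invented example.
def check_total(items, expected):
    actual = sum(items)
    assert actual == expected, (
        f"total mismatch: items={items}, expected={expected}, got={actual}")

check_total([1, 2, 3], 6)   # passes silently
# check_total([1, 2], 6)    # would fail with full context in the message
```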

17
Building Maintainable Tests (3)
  • Limit the number of complex test cases
  • they are difficult to understand, even for minor
    changes
  • the effort needed to automate and maintain them
    may wipe out any savings
  • Use flexible and portable formats for test data
  • the time taken to convert data is often more
    acceptable than the cost of maintaining large
    amounts of data in a specialized format
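For the portable-format point, a minimal sketch that keeps test cases in JSON, which any tool or script can read; the file name and field names are hypothetical.

```python
# Keep test data in a flexible, portable format (here JSON) so it can be
# inspected and converted without the original tool. Names are invented.
import json

with open("test_cases.json") as f:
    cases = json.load(f)  # e.g. [{"input": "...", "expected": "..."}, ...]

for case in cases:
    print(case["input"], "->", case["expected"])
```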

18
Evaluating Testing Tools (1)
  • Capability
  • having all the critical features needed
  • Reliability
  • working a long time without failures
  • Capacity
  • handling industrial environments
  • Learnability
  • having a reasonable learning curve or support for
    learning

19
Evaluating Testing Tools (2)
  • Operability
  • offering an easy-to-use interface
  • Performance
  • advantages in turn-around time versus manual
    testing
  • Compatibility
  • ease of integration with application environment
  • Non-intrusiveness
  • not altering the behaviour of the software under
    test

20
Choosing a tool to automate testing
  • Introduction to Chapters 10 and 11 of [23]
  • Where to start in selecting tools
  • YOUR requirements
  • NOT the tool market
  • The tool selection project
  • The tool selection team
  • Identifying your requirements
  • Identifying your constraints
  • Build or buy?
  • Identifying what is available on the market
  • Evaluating the shortlisted candidate tools
  • Making the decision

21
Testing Tool Market (Ovum - www.ovum.com)
  • In 1999, the market was $450M, growing at 30%
    per year
  • Dominant players (with 60% of the total market)
    were (1999)
  • Mercury Interactive
  • Rational
  • Compuware
  • Segue
  • Ovum currently provides evaluations of the
    following
  • Compuware: QACenter 2003
  • Empirix: Empirix testing tools
  • Mercury Interactive: Mercury Interactive testing
    tools
  • Rational Software: TestStudio 2003
  • SDT: Unified TestPro
  • Telelogic: Telelogic testing tools

22
References 1
  • [1] C.A.R. Hoare, "How did software get so
    reliable without proof?", Formal Methods Europe
    '96, keynote speech
  • [2] D. Hamlet and R. Taylor, "Partition Testing
    does not Inspire Confidence", IEEE Transactions
    on Software Engineering, Dec. 1990, pp. 1402-1411
  • [3] Edward Kit, Software Testing in the Real
    World: Improving the Process, Addison Wesley,
    1995
  • [4] Brian Marick, The Craft of Software Testing,
    Prentice Hall, 1995
  • [5] Boris Beizer, Software Testing Techniques,
    2nd ed., Van Nostrand Reinhold, 1990
  • [6] T.J. Ostrand and M.J. Balcer, "The
    Category-Partition Method for Specifying and
    Generating Functional Tests", Communications of
    the ACM 31, 6, June 1988, pp. 676-686
  • [7] Robert Binder, Object Magazine, 1995,
    http://www.rbsc.com/pages/myths.html
  • [8] Robert Poston, Specification-Based Software
    Testing, IEEE Computer Society, 1996

23
References 2
  • [9] James Bach, "General Functionality and
    Stability Test Procedure", http://www.testingcraft
    .com/bach-exploratory-procedure.pdf
  • [10] Bill Hetzel, The Complete Guide to Software
    Testing, 2nd ed., Wiley & Sons, 1988
  • [11] Cem Kaner, Jack Falk, Hung Quoc Nguyen,
    Testing Computer Software, 2nd ed., Van
    Nostrand Reinhold, 1993
  • [12] Andrew Rae, Philippe Robert, Hans-Ludwig
    Hausen, Software Evaluation for Certification,
    McGraw Hill, 1995
  • [13] William Perry, Effective Methods for
    Software Testing, John Wiley & Sons, 1995
  • [14] John McGregor and David Sykes, A Practical
    Guide to Testing Object-Oriented Software,
    Addison Wesley, 2001, ISBN 0-201-32564-0, 393 pp.
  • [15] G.J. Myers, The Art of Software Testing,
    Wiley, 1979
  • [16] Cem Kaner, Jack Falk, and Hung Quoc Nguyen,
    Testing Computer Software, 2nd ed., Van
    Nostrand Reinhold, 1993

24
References 3
  • [17] DO-178B, Software Considerations in Airborne
    Systems and Equipment Certification
  • [18] IEEE 829-1998, Standard for Software Test
    Documentation
  • [19] Mark Fewster and Dorothy Graham, Software
    Test Automation: Effective Use of Test Execution
    Tools, ACM Press, 1999
  • [20] www.sei.cmu.edu/cmm/cmms/cmms.html
  • [21] James Bach, "Test Automation Snake Oil",
    first published in Windows Tech Journal, 10/96
    (see articles at http://www.satisfice.com/)
  • [22] Robert Poston, "A Guided Tour of Software
    Testing Tools", http://www.soft.com/News/TTN-Online
    /ttnjan98.html
  • [23] Fewster and Graham, Software Test
    Automation: Effective Use of Test Execution Tools,
    ACM Press/Addison Wesley, 1999

25
References 4
  • [24] W.J. Gutjahr, "Partition Testing vs. Random
    Testing: The Influence of Uncertainty", IEEE TSE,
    Sep/Oct 1999, pp. 661-674, vol. 25, no. 5
  • [25] Robert V. Binder, Testing Object-Oriented
    Systems: Models, Patterns, and Tools,
    Addison-Wesley, 2000 (see http://www.rbsc.com)
  • [26] Capers Jones, Software Quality: Analysis and
    Guidelines for Success, International Thomson
    Computer Press, 1997
  • [27] Andrew Rae, Philippe Robert, Hans-Ludwig
    Hausen, Software Evaluation for Certification,
    McGraw Hill, 1995
  • [28] Shari Lawrence Pfleeger, Software
    Engineering: Theory and Practice, 2nd ed.,
    Prentice Hall, 2001
  • [29] Hong Zhu, Patrick Hall, John May, "Software
    Unit Test Coverage and Adequacy", ACM Computing
    Surveys, Vol. 29, No. 4, Dec. 1997
  • [30] IEEE Std 610.12-1990, IEEE Standard
    Glossary of Software Engineering Terminology

26
References 5
  • [31] James Whittaker, "What is Software Testing?
    And Why is it so Hard?", IEEE Software,
    Jan./Feb. 2000, pp. 70-79
  • [32] Jeffrey Voas, Gary McGraw, Software Fault
    Injection, Wiley Computer Publishing, 1998
  • [33] Ron Patton, Software Testing, Sams
    Publishing, 2001
  • [34] Dorothy Graham, "Requirements and Testing:
    Seven Missing-Link Myths", IEEE Software,
    Sept/Oct 2002, pp. 15-17