Title: Dimensions of Testing
1. Dimensions of Testing
- CFICSE
- October 2002
- Diane Kelly, Terry Shepard
2. Dimensions of Testing: Part 1
3. Why Dimensions of Testing?
- Testing tends to be viewed as an activity that testers do
- implicit is the thought that all testers have more or less the same skill set
- the body of knowledge is too big for this to be true
- Categorization helps to plan resource use for testing
- it is also a basis for training and education
- Helps make better-informed decisions
4. Outline: Part 1
- Definitions of testing
- Testing is hard!
- Dimensions of Testing
- Process
- Qualities
- Context/environment
- Purpose
- Strategy/techniques
- Automation
- Adequacy
- Cross cutting issues
5. Definitions of Testing (1)
- Myers [15]
- the process of executing a program with the intent of finding errors
- Hetzel [10]
- "Testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results."
6. Definitions of Testing (2)
- IEEE Standard 610.12-1990 [30]
- the process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component
- IEEE Standard 829-1998 [18]
- the process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item
7. Not Testing (IEEE 610.12 [30])
- Static analysis, e.g.
- proof of correctness
- style checking
- Dynamic analysis that does not require execution of the code, e.g.
- reachability analysis, dead code detection
- checks for memory leaks in C
8. Secondary Definitions
- finding
- an issue identified in a product by inspection
- defect
- a finding that needs corrective action
- fault
- a defect that may cause a failure
- failure
- a behaviour of executing code that a tester thinks is wrong (the fault/failure distinction is illustrated in the sketch below)
- Application of these terms requires judgment
- decisions vary, depending on the situation and on individual background
- disagreements are common!
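As a minimal sketch of the fault/failure distinction, with hypothetical code (sum_first_n is illustrative, not from the deck): the fault is a defect in the code, while a failure is the wrong behaviour observed only when the fault is actually exercised.

```python
# Hypothetical sketch: the fault is the off-by-one loop bound;
# the failure is the wrong output a tester observes for some inputs.

def sum_first_n(values, n):
    """Intended to sum the first n elements of values."""
    total = 0
    for i in range(n - 1):  # fault: should be range(n)
        total += values[i]
    return total

# No failure for n == 0: the faulty loop is empty either way.
assert sum_first_n([5, 2, 7], 0) == 0
# Failure: with n == 3 the call returns 7 instead of the expected 14.
print(sum_first_n([5, 2, 7], 3))  # expected 14, observed 7
```

Note that a test suite that never exercises the faulty bound (here, one that only tries n == 0) reports no failure even though the fault is present.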
9. Testing is Hard!
- The case is made by James Whittaker in [31]
- hits budget and schedule limits
- may include heavy debugging
- may even include product (re-)development
- bugs slip through to customers
- Testing evaluates more than just correctness
10. Dimensions of Testing
- Process
- Qualities
- Context/environment
- Purpose
- Strategy/techniques
- Automation
- Adequacy
- Cross cutting issues
11. Dimensions of Testing
- Process
- Qualities
- Context/environment
- Purpose
- Strategy/techniques
- Automation
- Adequacy
- Cross cutting issues
12. Testing Process Overview
- What are the Testing Activities?
- When are they performed?
- Stages of Testing
- Regression testing is part of every stage
- Parts to be Tested
- Effects of Development Process
- Testing process diversity
- Examples and Issues
- Place of Debugging in the Testing Process
13. Testing Activities
- Test planning and design
- Verification of test plans, designs and cases
- Test case selection, creation, and verification
- Planning expected results (oracles, ...); see the oracle sketch after this list
- Data preparation
- Execution and recording of results
- Analysis of results
- Coverage measurement
- Postprocessing (volume reduction, ...)
- Wrap-up
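Where exact expected outputs are hard to precompute, an oracle can instead check a property of the result. A minimal sketch with hypothetical names (oracle_sqrt is illustrative, not from the deck):

```python
import math

# Minimal sketch of a property-based oracle for a sqrt-like PUT:
# rather than a table of expected outputs, check an invariant of the result.

def oracle_sqrt(x, result, tol=1e-9):
    """Verdict: result squared should give back x within tolerance."""
    return abs(result * result - x) <= tol * max(1.0, abs(x))

# Execution and recording of results for a few test inputs.
for x in [0.0, 1.0, 2.0, 1e6]:
    actual = math.sqrt(x)  # the real PUT would be called here
    print(x, "PASS" if oracle_sqrt(x, actual) else "FAIL")
```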
14. Stages of Testing
- The testing process must organize the stages of testing, which are essentially determined by the parts to be tested, e.g.
- unit
- increment
- build
- integration
- system
- release
- installation
- user interface
15. Parts to be Tested
- Any part of a system can be tested
- the choice of part constrains choices in other dimensions
- Units are typically the smallest parts tested
- a unit can be a function, a class, or a work assignment
- Some parts of the software may not be tested at all
- e.g. exception handlers, debug code, commercial components, library functions
- For some testing, stubs and drivers may have to be written (and tested); see the sketch after this list
- PUT = Part Under Test
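A minimal sketch of a driver and a stub around a PUT, using hypothetical names (average_reading and SensorStub are illustrative, not from the deck):

```python
# The PUT depends on a sensor; a stub stands in for the real hardware,
# and a driver exercises the PUT and checks the result.

def average_reading(sensor, n):
    """PUT: averages n readings taken from a sensor dependency."""
    return sum(sensor.read() for _ in range(n)) / n

class SensorStub:
    """Stub: replaces the real sensor with canned values."""
    def __init__(self, values):
        self._values = iter(values)

    def read(self):
        return next(self._values)

def driver():
    """Driver: sets up the stub, runs the PUT, checks the output."""
    assert average_reading(SensorStub([10.0, 20.0, 30.0]), 3) == 20.0

driver()
```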
16. Effects of Development Process on Testing Process (1)
- Impact of the choice of development process, e.g.
- waterfall
- incremental
- XP
- safety critical
- Impact of development process implementation choices, e.g.
- daily builds
- metrics
- tool support (e.g. RUP, the Rational Unified Process)
- change and configuration management, ...
- Impact of design approach, e.g. OO vs. procedural
17. Effects of Development Process on Testing Process (2)
- The development process normally includes unit testing by developers
- developers often have limited time and expertise for unit testing
- the expected quality and actual quality of units have a major impact on the testing process
- Testers must protect themselves
- in normal cases, testers should talk to developers as early as possible in the process
- in the worst case, testers become developers
18. Effects of Development Process on Testing Process (3)
- Resource limitations can be severe during testing
- load, performance and platform testing during system testing are especially demanding
- Limitations include one or more of:
- Budget
- Schedule
- Hardware
- Testers
- Debuggers
- Fixers
- Users
19. Testing Process Diversity: Examples and Some Issues
- Examples of decisions that affect process
- Microsoft testing buddies
- test then code (XP)
- operational analysis (reliability testing)
- ...
- Some issues that must be considered in making testing process decisions
- defect tracking style
- inspection choices, e.g. test designs, plans, cases, ...
- geographic distribution of a project
20. Debugging in the Testing Process
- Debugging is a separate activity from testing
- It has a special place in the testing process
- testing reveals the need to do debugging
- fine grain testing is often used in debugging
- depends on code review to choose additional tests
- debugging and testing are usually interwoven
- e.g. in defect tracking systems, defects are logged as a result of testing, and debugging supplies more information about each defect
21. Dimensions of Testing
- Process
- Qualities
- Context/environment
- Purpose
- Strategy/techniques
- Automation
- Adequacy
- Cross cutting issues
22. Qualities (Ilities)
- All the ilities can affect testing, but some have a special relationship to testing, e.g.
- testability, especially observability and controllability (see the sketch after this list)
- failure rate (testing stops if it is too high)
- Assessing a property (ility) of a PUT may be a purpose of testing, e.g. correctness, reliability, usability, performance, ...
- may affect choices of strategies or techniques
- only appropriate at certain stages
- assessment of some ilities, such as reliability or performance, is difficult without testing automation
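A minimal sketch of a controllability improvement, with hypothetical code (is_business_hours is illustrative): injecting the clock lets a test set the time, so both outcomes are reachable deterministically, and the boolean return value keeps the behaviour observable.

```python
import datetime

# Without injection, the outcome depends on when the test happens to run;
# with the injected 'now' parameter, the time is fully controllable.

def is_business_hours(now=None):
    now = now or datetime.datetime.now()
    return now.weekday() < 5 and 9 <= now.hour < 17

# Both branches are reachable deterministically under test:
assert is_business_hours(datetime.datetime(2002, 10, 15, 10, 0))      # a Tuesday, 10:00
assert not is_business_hours(datetime.datetime(2002, 10, 13, 10, 0))  # a Sunday
```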
23. Dimensions of Testing
- Process
- Qualities
- Context/environment
- Purpose of testing
- Strategy/techniques
- Automation
- Adequacy
- Cross cutting issues
24. Context/environment (1)
- Circumstances in which a part is tested
- Testing in the development environment
- simulation
- Testing in the deployment environment(s)
- actual CPU, OS, devices, ...
- there may be version and release issues
- Languages of expression
- scripting languages: OS shell scripts, tcl, perl, ...
- of the program
- language version, compiler, ...
- of the test implementation
- can be a proprietary language, e.g. SQA Basic for Rational Robot
- Operating system/hardware
- MS Windows, RISC UNIX, Mac OS, VAX VMS
- will affect availability of tools
25. Context/environment (2)
- Application domain, e.g.
- military weapons systems
- household appliances
- military logistics
- financial
- scientific, ...
- Shrink-wrapped vs. single customer
- Distributed or not
- e.g. Internet or web based
- Part of a product family or not
- Organizational context
- e.g. software architecture may match the architecture of the organization
- Role of standards in testing
26. Dimensions of Testing
- Process
- Qualities
- Context/environment
- Purpose of testing
- Strategy/techniques
- Automation
- Adequacy
- Cross cutting issues
27. Purposes of testing ...
- Why is the testing being done?
- find failures (not bugs!)
- certification
- safety criticality
- usability
- performance
- acceptance
- load
- porting
- compatibility
- security
28. More purposes of testing ...
- Why is the testing being done?
- fault tolerance and recovery
- measure reliability
- estimate defects remaining
- decide when to release
- assess problems in software development process
- confirm that functionality is not broken following modification
- all the rest of the ilities!
29. Dimensions of Testing
- Process
- Qualities
- Context/environment
- Purpose of testing
- Strategy/techniques
- Automation
- Adequacy
- Cross cutting issues
30. Definitions: Strategy, Techniques, ...
- A strategy outlines in broad terms how to use testing to assess the extent to which one or more goals for a product have been met
- Techniques implement strategies, and provide the data needed to do the assessment
- Some people use other terminology
- test planning and design captures the choices of strategy and techniques
- tactics: a narrower version of strategy?
- approach: some mixture of the above?
31. Strategy/techniques
- Both strategy and techniques guide how test cases are chosen
- The distinction between strategy and technique is soft
- The next two slides contain some examples
- some of the examples could be on either slide
32. Examples of Strategies
- Exploratory testing
- Requirements-based test design
- Design-based testing
- Testing in the small
- Testing in the large
- More to come
33. Examples of Strategies that are Close to Techniques
- alpha/beta testing
- testing the most-used part of the code vs. the buggiest part (a tactic?)
- testing the GUI vs. testing functionality
- design testing (requires an executable design)
- laboratory testing (usability, ...)
- operational profiles
- specification-based testing
34. Examples of Techniques
- Ad hoc
- Partitions: boundaries, intervals, ...
- Data flow
- Combinational: decision tables, ...
- State based: mode transition, ...
- Test scenarios
- Guerilla testing
- smoke tests
- Fault injection, mutation testing
- White box, black box (interface testing), gray box
- more to come (a partition/boundary sketch follows this list)
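A minimal sketch of the partition/boundary technique, with a hypothetical grade-classification PUT (the function and its limits are illustrative): one case per partition, plus cases on each side of every boundary.

```python
# Partition/boundary testing: partitions [0,50), [50,80), [80,100];
# boundaries at 49/50 and 79/80.

def classify(score):
    """PUT: partitions scores into fail / pass / distinction."""
    if score < 50:
        return "fail"
    elif score < 80:
        return "pass"
    return "distinction"

# One representative per partition and both sides of each boundary.
cases = [(0, "fail"), (49, "fail"), (50, "pass"), (79, "pass"),
         (80, "distinction"), (100, "distinction")]
for score, expected in cases:
    assert classify(score) == expected
```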
35. Dimensions of Testing
- Process
- Qualities
- Context/environment
- Purpose of testing
- Strategy/techniques
- Automation
- Adequacy
- Cross cutting issues
36. Automation
- What aspects of testing can be automated? (a sketch combining several follows this list)
- Coverage tools
- Data generation
- Capture/playback
- Setup, clean-up, reinitialization
- Push-button execution of test cases
- Data filters
- Post analysis of output
- Configuration management
- Oracles
- Simulation of the environment
- Recovery and restart
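A minimal sketch combining several of these aspects (automated setup, clean-up, a simple oracle, and push-button execution), using Python's standard unittest framework; count_lines is a hypothetical PUT, not from the deck:

```python
import os
import tempfile
import unittest

def count_lines(path):
    """PUT: counts the lines in a text file."""
    with open(path) as f:
        return sum(1 for _ in f)

class CountLinesTest(unittest.TestCase):
    def setUp(self):
        # setup/reinitialization: create a fresh input file for each test
        fd, self.path = tempfile.mkstemp(text=True)
        with os.fdopen(fd, "w") as f:
            f.write("a\nb\nc\n")

    def tearDown(self):
        # clean-up: remove the temporary file
        os.remove(self.path)

    def test_count(self):
        # oracle: the expected result for the prepared data is 3
        self.assertEqual(count_lines(self.path), 3)

if __name__ == "__main__":
    unittest.main()  # push-button execution of all test cases
```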
37. Dimensions of Testing
- Process
- Qualities
- Context/environment
- Purpose of testing
- Strategy/techniques
- Automation
- Adequacy
- Cross cutting issues
38. Adequacy
- How is the decision to stop testing made?
- Coverage
- Limits set by management
- User acceptance, e.g. early adopters
- Contractual
- Reliability threshold
- Fault detection rate threshold
- Estimates of bugs left
- Meeting a standard, e.g. MC/DC (see the sketch after this list)
- Run out of time and money
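As a worked illustration of MC/DC (modified condition/decision coverage), a minimal sketch with a hypothetical decision: each condition must be shown to independently change the outcome, which the four vectors below achieve (the minimum for N conditions is N+1).

```python
# MC/DC for the hypothetical decision (A and (B or C)).

def decision(a, b, c):
    return a and (b or c)

# Independence pairs: each pair differs in exactly one condition and
# flips the outcome.  A: TTF vs FTF;  B: TTF vs TFF;  C: TFT vs TFF
mcdc_vectors = [(True, True, False), (False, True, False),
                (True, False, False), (True, False, True)]
for a, b, c in mcdc_vectors:
    print(a, b, c, "->", decision(a, b, c))
```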
39. Dimensions of Testing
- Process
- Qualities
- Context/environment
- Purpose
- Strategy/techniques
- Automation
- Adequacy
- Cross cutting issues
40. Cross Cutting Issues
- These are issues that have an impact on all the other dimensions of testing
- Only two are considered
- scale
- skills
- roadmap for software test engineering
- http://www.sdmagazine.com/articles/2001/0102/0102b/0102b.htm
41. Examples of Scale Issues
- Size of project staff complement
- communication issues
- Size of customer base
- determining customer usage patterns
- Size/complexity of code (KLOC/McCabe)
- adequacy of testing
- Size of data sets / database
- sufficient regression testing
- Length of runtime
- turnaround time for tests
- Extent of output data
- finding errors in output
42. Skills of a Tester (1)
- Not all testers are developers
- e.g. testers at Corel who are photographers (Corel Draw testing: end-user testing)
- e.g. testing buddies at Microsoft
- Testers may also be developers
- Testers must have
- the ability to select good test cases
- the ability to withstand the most severe deadline and resource pressures
- the ability to organize and analyze large volumes of data
- the ability to sort out complex version control issues
43. Skills of a Tester (2)
- Inexperienced testers often fail to report
- timing problems
- transient bugs
- what they can't quickly replicate
- what they think is attributable to their own misunderstanding
- what they think the reader of the report might consider minor
- lots of other problems
- Kaner et al. [16]
44. Roadmap for Software Test Engineering (sdmagazine, Feb 01)
- Software engineering skills
- understanding the rules of software engineering
- computer programming
- operating system level knowledge
- test development must be done with the same care as application development
- communication skills
- organizational skills
- hands-on experience
- attitude
- destructive creativity
- detective skills
- understanding the product as the sum of its parts
- appreciating the customer's perspective
- requirements change
- skeptical but not hostile attitude
- ability to be the bearer of bad news and remain objective
- an eagerness to embrace new technologies
45. More Details
- Strategies and Techniques
- Regression Testing
- Adequacy
- Automation