Title: Testing II
1. Testing II
EE491.10
Maj JW Paul
Modified from a series of lectures by Dr Terry Shepard and Diane Kelly
2. Marking Scheme (almost final)
- Critiques 20
  - critiques, participation, leading discussion
- Lab reports 15
  - end-of-class estimates, final reports
- Final part 1 (Oct 24) 15
- Final part 2 (Dec 11) 50
3. Skills of a Tester
4. Skills of a Tester
- Testers must have
  - the ability to select good test cases
  - the ability to withstand the most severe deadline and resource pressures
  - the ability to organize and analyze large volumes of data
  - the ability to sort out complex version control issues
5. Non-programmer testers
- Not all testers are developers
  - testers at Corel who are photographers
  - testing buddies at Microsoft
  - most beta-testing
- Why choose these individuals?
  - smart ignoramus
  - lack of resources
6. Caveat: inexperienced testers
- often fail to report
  - timing problems
  - transient bugs
  - what they can't quickly replicate
  - what they think can be attributed to their own misunderstanding
  - what they think the reader of the report might consider minor
- (Kaner et al., p. 245)
7. sdmagazine: Roadmap for Software Test Engineering (Feb 01)
Skills
- software engineering
  - understanding the rules of software engineering
  - computer programming
  - operating-system-level knowledge
  - test development must be done with the same care as application development
- communication
- organizational
- experience
Attitude
- destructive creativity
- detective
  - sum of small is big picture
- flexible
  - see the customer's perspective
  - accept requirements change
- diplomat
  - skeptical but not hostile
  - be the bearer of bad news and remain objective
- an eagerness to embrace new technologies
8. Skills of Inspectors vs. Skills of Testers
- Detective
- Proofreader
- Expert
9. Qualities
10. All ilities can affect testing
- Must be a goal of testing
- Some have a direct relationship to testing
  - testability, observability, controllability...
- May be the specific purpose of testing
  - correctness, reliability, usability, performance...
- May affect strategies or techniques
- Some require automated testing
  - reliability, performance...
11. More on testability
- Affects design and coding
- What is high testability?
  - good control and observation
  - being able to get to a particular state in the code, and then having some observable effects in that state (see the sketch below)
  - the ease with which it is possible to select an adequate set of test scenarios (no longer a widely used definition)
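As a rough sketch of what "good control and observation" can look like in code (all names below are hypothetical, not from the course material): the component takes its collaborators as parameters, so a test can drive it into a chosen state and observe the effect directly.

```python
class Uploader:
    """Hypothetical component written for testability: collaborators are
    injected (controllability) and outcomes are recorded (observability)."""

    def __init__(self, transport, clock):
        self.transport = transport   # injected: a test can substitute a fake
        self.clock = clock           # injected: a test controls "time"
        self.attempts = []           # observable record of what happened

    def send(self, payload, retries=2):
        for _ in range(retries + 1):
            ok = self.transport(payload)
            self.attempts.append((self.clock(), ok))
            if ok:
                return True
        return False


def test_send_retries_then_fails():
    # Control: a transport that always fails, a clock that ticks 0, 1, 2, ...
    ticks = iter(range(10))
    uploader = Uploader(transport=lambda p: False, clock=lambda: next(ticks))
    # Drive the object into the state of interest...
    assert uploader.send("data", retries=2) is False
    # ...and observe the effects in that state.
    assert uploader.attempts == [(0, False), (1, False), (2, False)]
```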
12. Context/Environment
13. When is testing done?
- language issues: version, compiler, tool availability/automation, OS
- simulation
  - vs. the actual CPU, OS, devices
- version and release issues
- what is the role of standards in testing?
  - in the development environment
  - in the deployment environment(s)
14. Type of software
- Application domain
  - military (logistics, weapons, C3I), financial, scientific, commercial (OAS, household appliances), ...
- Shrink-wrapped vs. single customer
- Distributed (internet/network)
- Part of a product family
- Manufacturer/organization
  - availability of testing skills
15. Strategies/Techniques
16. Why does testing work? (Hoare [1])
- the number of paths of interest is much less than the size of the state space
  - most of the state space is unreachable
  - we are only interested in a small fraction of what is reachable
- defects are bimodal
  - we find them quickly because they are on common paths, or we don't find them for a long time
- most software is reasonably predictable
  - aka trustworthiness; not the same as reliability
All strategies are based on the fact that testing works!!
17Definitions
outlines in broad terms how to use testing to
assess the extent to which one or more goals for
a product have been met
Strategy
Guides choice of test cases
dimensions of testing impacts choice(s)
implement strategies, provide the data needed to
do the assessment
Techniques
captures the above choices more than test cases,
choosing test purpose how to meet it, start
early, capture all dimensions,
Test Planning and Design
18. Examples of Strategies
- Exploratory testing
- Requirements-based test design
- Design-based testing
- Testing in the small
- Testing in the large
  - the balance between them is a strategic decision
- Is it a strategy if a manager chooses it?
19. Examples of Techniques
- Ad hoc
- Partitions: boundaries, intervals, ... (see the sketch below)
- Data flow
- Combinational: decision tables, ...
- State-based: mode transitions, ...
- Test scenarios
- Guerrilla testing
- Smoke tests
- Fault injection, mutation testing
- White box, black box, gray box
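As a concrete illustration of the partition/boundary technique, here is a small sketch; the graded-score function and its cut-off values are hypothetical, chosen only to show one test case per partition plus the boundary values of each interval:

```python
import unittest


def grade(score):
    """Hypothetical function under test: maps a 0-100 score to pass/fail."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"


class BoundaryTests(unittest.TestCase):
    # One representative per partition, plus each boundary and its neighbours.
    cases = [
        (-1, ValueError),   # below the valid range
        (0, "fail"),        # lower boundary of the valid range
        (49, "fail"),       # just below the pass/fail boundary
        (50, "pass"),       # on the pass/fail boundary
        (100, "pass"),      # upper boundary of the valid range
        (101, ValueError),  # above the valid range
    ]

    def test_partitions_and_boundaries(self):
        for score, expected in self.cases:
            with self.subTest(score=score):
                if expected is ValueError:
                    self.assertRaises(ValueError, grade, score)
                else:
                    self.assertEqual(grade(score), expected)


if __name__ == "__main__":
    unittest.main()
```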
20. Is it a Strategy or a Technique?
- alpha/beta testing
- testing the most used part of the code vs. the buggiest part (a tactic?)
- testing the GUI vs. testing functionality
- basing testing on requirements scenarios
- design testing (requires an executable design)
- laboratory usability testing
- operational profiles
- specification-based testing
21Four levels of testing Kit 3
starts with requirements or before (200
payback) testing part of resources/planning
starts with functional design (150 payback)
Partial testing
Endgame testing
validation oriented
audit of plans, procedures, products checking
after the fact
Audit-level testing
22. Strategies/Techniques: the impact of the Dimensions of Testing
23. Scale Issues, aka bigger means...
- bigger project/staff: increased communication issues
- bigger customer base: more variable usage patterns
- bigger code (complexity): adequacy of testing
- bigger input data/database: regression testing
- bigger output data: finding errors in the output
- longer runtime: turnaround time for tests
24. Effect of Process
- If development is
  - incremental/iterative
  - based on contract or acceptance testing
  - risk driven
  then testing will match it, and may be the focus of the process
- Three kinds of risk [14]
  - Project: lack of experience or of the right skill set
  - Business: time-to-market pressure, profitability
  - Technology: new language, new platform
25Effect of Context/Environment
safety critical (business/mission)
test all to some criteria
executable design
can test the design
hw/sw/os
availability of tool support
shrink wrap beta testing single user ??
nature of market
management
resource allocation, priorities
26. Effect of Purpose
- Focus on the ilities of the system
  - safety, performance, functionality, reliability...
- Focus on users' expectations (tolerance)
  - test for an acceptable error rate on common paths (see the sketch below)
  - test the customers' most important scenarios
- Focus on maintenance issues
  - portability, versioning, installability, support
- Focus on management issues
  - business risk, economics, testing policies
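One way to act on "test for an acceptable error rate on common paths" is an operational-profile style harness: scenarios are sampled in proportion to how often customers use them, and the observed failure rate is compared to a target. A rough sketch; the scenario names, probabilities, and target are all hypothetical:

```python
import random

# Hypothetical operational profile: scenario name -> fraction of real-world usage.
PROFILE = {
    "open_document": 0.50,
    "edit_text": 0.30,
    "export_pdf": 0.15,
    "change_preferences": 0.05,
}

# Hypothetical scenario drivers: each returns True on success, False on failure.
SCENARIOS = {
    "open_document": lambda: True,
    "edit_text": lambda: True,
    "export_pdf": lambda: random.random() > 0.02,  # stand-in for a slightly flaky path
    "change_preferences": lambda: True,
}


def observed_failure_rate(runs=1000, seed=1):
    """Sample scenarios in proportion to the profile and measure the failure rate."""
    rng = random.Random(seed)
    names, weights = zip(*PROFILE.items())
    failures = sum(
        0 if SCENARIOS[rng.choices(names, weights=weights)[0]]() else 1
        for _ in range(runs)
    )
    return failures / runs


if __name__ == "__main__":
    rate = observed_failure_rate()
    print(f"observed failure rate: {rate:.3%}")
    # Hypothetical acceptance target: at most 1% failures across profile-weighted usage.
    assert rate <= 0.01, "error rate on common paths exceeds the target"
```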
27. Effect of Automation
- some automation is always present
- choosing a high degree of automation is a strategic decision
  - it increases costs but gives more thorough testing (see the sketch below)
  - it can be high risk (automation does not guarantee better tests)
- success depends on
  - management support
    - money, resources, tools
  - staff with the right skill set
  - a shift to an automatable testing process
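As an illustration of the repetitive checking that automation pays for, here is a sketch of a golden-file regression test: every run re-executes the same inputs and diffs the result against a stored reference. The function under test, the inputs, and the file name are hypothetical.

```python
import json
import pathlib


def normalize(record):
    """Hypothetical function under test: cleans up a customer record."""
    return {"name": record["name"].strip().title(), "age": int(record["age"])}


GOLDEN = pathlib.Path("golden_outputs.json")   # hypothetical reference file
INPUTS = [
    {"name": "  ada lovelace ", "age": "36"},
    {"name": "alan turing", "age": "41"},
]


def test_against_golden_file():
    actual = [normalize(r) for r in INPUTS]
    if not GOLDEN.exists():                    # first run: record the baseline
        GOLDEN.write_text(json.dumps(actual, indent=2))
    expected = json.loads(GOLDEN.read_text())
    assert actual == expected                  # any behaviour change fails the run
```

Run under pytest (or call the test function directly); once the baseline is checked into version control, the same comparison is repeated on every build at negligible marginal cost.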
28. Effect of Adequacy
- Traversal can be breadth-first or depth-first
- Coverage may mean that all requirements have been touched once (checked mechanically in the sketch below)
  - is this a good strategy?
- A strategy based on reaching reliability targets may lead to infeasible amounts of testing
- might use coverage to cover checklists
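"All requirements have been touched once" can be checked mechanically if test cases are tagged with the requirements they exercise. A minimal sketch with hypothetical requirement IDs and test names:

```python
# Hypothetical requirement IDs and a test-to-requirement traceability map.
REQUIREMENTS = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

TEST_TRACE = {
    "test_login": {"REQ-1"},
    "test_logout": {"REQ-1", "REQ-2"},
    "test_export": {"REQ-3"},
}


def untouched_requirements():
    """Requirements with no test case tracing to them."""
    covered = set().union(*TEST_TRACE.values())
    return REQUIREMENTS - covered


if __name__ == "__main__":
    # With the data above this reports REQ-4: "touched once" coverage
    # is met only when this set is empty.
    print("requirements without tests:", sorted(untouched_requirements()))
```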
29. Why are the dimensions of testing important to the strategy?
Recall that a strategy outlines in broad terms how to use testing to assess the extent to which one or more goals for a product have been met.
30. Testing success
- To implement a successful testing program, you must know its limitations
- If a certain type or style of testing is required, you may have to change something else in order to get it to work