Transcript and Presenter's Notes

Title: RC's STAR96 Tutorial


1
Test Management Planning Fundamentals
Rick Craig, Software Quality Engineering
Central Ohio Quality Assurance Association
2
Test Management
  • What Every Test Manager Needs to Know
  • This course provides the essential framework for
    successful test management. It identifies the key
    management issues and explores alternative
    strategies and approaches. The emphasis is on
    tight integration of project and test and
    evaluation activities, a master overall test
    plan, and ongoing tracking of test assets and
    results.
  • Successful test management requires the same
    approach as successful project management
  • 1. Develop a sound strategy
  • 2. Keep in close touch with the situation
  • 3. Identify and aggressively manage the critical
    issues.
  • 4. Modify the strategy as needed, based on
    situational feedback.
  • The trick to good test management is knowing
    the components of effective test strategies and
    feedback mechanisms and recognizing the critical
    issues as they surface.
  • Who Should Attend?
  • The course is intended for anyone who must
    organize or guide project test and evaluation
    efforts. This includes project managers and team
    leaders as well as anyone responsible for a
    testing group or assigned to lead a test
    improvement effort.

3
Topical Outline
  • I. Introduction: Leadership, Support & Control
  • What is Software? Quality? Management?
  • What should management be doing?
  • What is modern test all about? What's different
    and changing?
  • II. Management Responsibilities
  • What is the test manager's responsibility?
  • How should you organize? What should be the
    developer's job?
  • Who makes good testers? How do you get them?
  • What kind of training and tools do you have to
    provide?
  • III. Test Strategies & Plans
  • What is a good test plan? What should be in
    master and level plans?
  • How are project plans and test plans different?
    The same?
  • What are the critical test strategy issues?
  • What deliverables should be required from each
    project phase and level?
  • What are the critical success factors driving
    testing in the large (system acceptance)?
  • What are the critical success factors driving
    integration and component testing?
  • What should you do about testing in the small?
  • What about maintaining and updating the test
    plans?
  • IV. Test Reporting & Testware Control

4
Introduction
5
Choose One
  • The overall quality of our software systems and
    products is
  • A. Outstanding - one of the best
  • B. Acceptable - but should be better
  • C. Must be improved
  • D. A mystery to me
  • The time, effort and money we spend trying to
    achieve high software quality is
  • A. Just about right
  • B. Too much, and needs to be reduced
  • C. Too little, and needs to be increased
  • D. A mystery to me

6
Software Testing Management Report Card
  • Quality LEADERSHIP
  • 1. Clear policies, vision and direction? _______
  • 2. Organization and team in place to get
    there? _______
  • 3. Motivated and properly rewarded staff? _______
  • Quality SUPPORT
  • 4. Modern testing methodologies in place? _______
  • 5. Integrated quality systems in place? _______
  • 6. Sufficient skilled resources &
    infrastructure? _______
  • 7. Established management & technical
    processes? _______
  • Quality CONTROL
  • 8. Modern testing methodologies in place? _______
  • 9. Integrated quality systems in place? _______
  • 10. Prevention & detection of quality
    problems? _______

7
Quality
  • The totality of features and characteristics of
    a product or service that bear on its ability to
    meet stated or implied needs.
  • International Standard Quality Vocabulary
    (ISO8402)

QUALITY IS
1. Meeting REQUIREMENTS (stated & implied)
2. SMILES
8
  • We define quality as conformance to
    requirements. Requirements must be clearly
    stated. Measurements determine conformance;
    nonconformance detected is the absence of
    quality.
  • Philip B. Crosby

9
Three Reasons Why Software Quality is Difficult
  • 1. Dependence on REQUIREMENTS

How should (must?) the software behave?
(Needs, requirements, specs, goals, desires)
Most failures relate (at least partly) to unclear
or fuzzy requirements. You can't validate or test
what you don't know!
10
2. Dependence On Design
  • ANALYZING COMPLEX AGGREGATES
  • Interfacing hardware, applications, personnel

Coexisting software
Communication Interfaces
You can't effectively test what you don't
understand
11
3. Sensitivity to Tiny Flaws
  • The Venus Probe Bug
  • What it should have been: DO 3 I = 1, 3
  • What it was: DO 3 I = 1. 3
  • (With the comma mistyped as a period, Fortran,
    which ignores blanks, reads the statement as an
    assignment to a variable named DO3I rather than
    as a loop header, so the intended loop body runs
    only once.)

Minor Defects Can Produce Major Failures
12
Managing for Quality
  • Means
  • Active participation of everyone involved
    (Managers, Developers, Technicians, Quality
    People, Customers, etc.)

(Balancing QUALITY with Budget and Schedule)
13
Setting a Vision
  • Important to have a direction

(Diagram: MEASUREMENT tracks the path from Where You
Are Today to Where You Want To Be Tomorrow.)
14
Quality Team
I've got it too, Omar... a strange feeling like
we've just been going in circles.
  • Accountability & Responsibility Must Be Clear

15
Provide a Partner
  • Test & Evaluation is a major specialty area
  • Test & Evaluation needs technical leadership
  • Many forms of leadership work
  • Should be specialized but NOT independent

16
What Are Your Quality Goals?
(Scale from highest to lowest: Perfect, World Class,
Competitive, Saleable)
17
Goal Status
Are your quality goals communicated?
Are your quality goals believed?
Are the goals measurable?
Are they being measured?
Are they being met?
18
Goals vs. Priorities
  • What Are Your PRIORITIES?

19
Quality Culture
  • What do customers, managers, and developers
    (not) say and (not) do?

Attitudes impact performance
20
Preventative Testing
  • 1. A new (REVOLUTIONARY??) way of thinking
  • 2. Using tests to INFLUENCE and CONTROL software
    designs and development

Preventative testing is built upon the
observation that one of the most effective ways
of specifying something is to describe (in
detail) how you would recognize (test) the
something if someone ever gave it to you. Bill
Hetzel
21
  • The defect that is prevented doesn't need repair,
    examination, or explanation. The first step is
    to examine and adopt the attitude of defect
    prevention. This attitude is called,
    symbolically, Zero Defects.
  • Philip Crosby
  • Quality is Free: The Art of Making Quality Certain

22
Key Elements of Preventative Testing
  • 1. Tests are used as requirements models
  • 2. Testware design leads software design
  • 3. Defects are detected earlier or prevented
    altogether
  • 4. Defects are systematically analyzed
  • 5. Testers and developers work together

23
Preventative Testing
  • The Systematic Test & Evaluation Process (STEP)
    Methodology
  • An Overview

24
Preventative Testing
  • STEP Architecture
(Diagram: the testing process is divided into
testing levels, and each level into phases.)
25
Preventative Testing
26
Changing Test Objectives
27
A Short Test?
  • Food and Drug Administration
  • You CAN'T test quality into your software
  • Software Quality Engineering
  • You MUST test quality into your software

28
Management Responsibility
29
  • Techniques of test management are very similar
    to those of software management and in fact,
    project management in general
  • Rick Craig

30
Management Responsibility
  • Actions Required
  • LEADERSHIP
  • Responsibilities
  • Goals and policies
  • Rewards and Motivation
  • SUPPORT
  • Methodology
  • Training
  • Tools
  • Resources
  • CONTROLS
  • Problem Tracking
  • Test Effectiveness Tracking
  • Feedback

31
Responsibilities from a Relationship Perspective
End User
System Development
System Designer
Quality Assurance
Outside Contractor
Auditors
Project Management
32
The Testing Organization
  • How SHOULD you organize for testing
    effectiveness?
  • Coordinator
  • Support Unit
  • Test Unit
  • Independent Test Unit
  • Quality Assurance
  • Independent V&V
  • No BEST Answers
  • A function of your ENVIRONMENT, CULTURE and
    INDUSTRY as well as the PROJECT or application

33
Are Testers Different?
Test Design: FAILURE / PESSIMISM
System Design: SUCCESS / OPTIMISM
MIND SETS
  • Imagining ways to go wrong, rather than right
  • Searching for weakness to balance the strengths

34
Providing Training
  • Training must be an integral strategy
  • Publishing a procedure WILL NOT be sufficient
  • Telling people what to do DOES NOT mean they will
    do it.
  • Must Sell and Train Thoroughly
  • Programmers and Analysts
  • Project leaders and managers
  • Testers
  • Need to show how methods apply to specific
    projects
  • Need to consider motivation

35
Test Tools
  • DEFINITION: Automates some aspect of the
    testing activity
  • Tools are proliferating rapidly
  • Test Manager expected to be knowledgeable
  • Proper selection is a critical success factor
  • Proper support is a critical success factor

36
Test Tool Realities
  • Testers are highly interested in tools but do not
    want to apply effort to use them properly.
  • Testers know nothing happens by magic but want to
    believe test tools will solve all testing
    problems.
  • Tool use must be taught on an ongoing basis.
    Benefits and requirements of each tool need to be
    understood by everyone who requires training.
  • Training must be followed up with assistance and
    support. Help should be available by phone.
  • Tools must be integrated into routine procedures
    and processes. This includes simplified job
    control, online interfaces, etc.

37
Tool Selection
  • Integration
  • How does the tool support your testing process?
  • Will it impact procedures and standards?
  • Relationship to other tools?
  • Understanding & Training
  • What training is required?
  • Is the tool easy to use?
  • What is the setup and initiation requirement?
  • Scope of Use
  • Organization-wide
  • Project-wide
  • Team or selected individuals
  • Total Costs
  • License or purchase cost; analyzing & selecting;
    installation; familiarization & training;
    operating cost; costs of misuse; maintenance &
    support; procedures modification

38
Test Strategies & Plans
39
Test Planning
  • 1. Planning Fundamentals
  • What is Planning?
  • Planning for Risk Management
  • Planning Dynamics
  • 2. Master Planning
  • What is a Test Level?
  • What is a Strategy?
  • Strategy Issues and Influences
  • What is a Master Test Plan?
  • 3. Detailed Planning Issues
  • Acceptance Level
  • System Level
  • Build Level
  • Unit Level

40
Fundamentals - Section 1
  • Objectives
  • Recognize the importance of distinguishing test
    planning from test design and specification
  • Appreciate the influence of risk and scope
    selection on the planning process

41
Understanding Planning
What is PLANNING? Developing an atmosphere and
process for achieving defined objectives.
What is a PLAN? A description of the understandings
and process.
Key elements: Resources, risks, strategy,
responsibilities, controls.
  • 1. Test planning CAN'T be separated from project
    planning.
  • All important test planning issues are also
    important project planning issues.
  • 2. Test planning SHOULD be separated from test
    design.
  • Software design is not considered part of
    software planning and test design should not be
    considered part of test planning.

42
Importance of Planning
(Graph: Criticality of Test Plan rises with
Complexity of Activity - number of groups and
individuals, communication and coordination history,
...)
43
Scope of T&E Plans
  • Product vs. Software
  • Evaluation vs. Test
  • All Levels vs. Some Level(s)
  • Single Plan vs. Multiple Plans

Clarity & communication of scope can be a
critical success factor
44
Planning Dynamics
  • Communication, coordination, and control are
    critical success factors on larger projects.
  • The plan should be an echo of previous oral
    agreements.
  • The process is more important than the document.
  • Effective plans influence behavior and are
    influenced by reality (i.e., have multiple
    revisions during the project).
  • Continually plan the work and work the plan.

45
Master Planning
  • Objectives
  • Understand what establishing a T&E strategy and
    master plan entails and how it relates to project
    planning
  • Understand the effective use of test levels and
    other critical planning issues

46
Master Test Plan
Document that guides & controls all testing
efforts.
  • What is it?
  • Why have it? Primary means for test manager to
    exert influence
  • Defines the overall test strategy and high-level
    plans
  • also known as Master Test & Evaluation Plan
    (TEMP)
  • Verification & Validation Plan (VVP)
  • Software Quality Assurance Plan (SQAP)

47
Master vs. Detail Plans
Test Methodology, Standards, Guidelines
Project Plan
MASTER Test Plan
System Test Requirements
Build Test Requirements
Acceptance Test Requirements
ACCEPTANCE Level Test Plan
SYSTEM Level Test Plan
BUILD Level Test Plan
Acceptance test objectives, designs, procedures,
and reports
Build test objectives, designs, procedures, and
reports
System test objectives, designs, procedures, and
reports
48
Test Plan Outline
  • 1. Test plan identifier
  • 2. Introduction
  • 3. Test items
  • 4. Software risk issues
  • 5. Features to be tested
  • 6. Features not to be tested
  • 7. Approach
  • 8. Item pass/fail criteria
  • 9. Suspension criteria & resumption requirements

10. Test deliverables
11. Remaining testing tasks
12. Environmental needs
13. Staffing & training needs
14. Responsibilities
15. Schedule
16. Planning risks & contingencies
17. Approvals
Derived from ANSI/IEEE Standard 829
For Master or Level-Specific plans
49
Test Strategy
What is a STRATEGY? The overall APPROACH chosen
to achieve a GOAL or AIM. A critical success factor.
  • Strategy: That set of decisions (i.e., choices)
    that each has significant potential to have a
    major impact on success

50
Strategic Thinking
  • 1) Knowing what matters
  • 2) Seeing best options
  • 3) Choosing best options

Basic strategic thinking is learned in the school
of hard knocks (hopefully other people's).
51
Potential Test Strategy Issues
  • Selection of test objectives & metrics
  • Understanding, prioritizing and managing risks
  • Defining and relating testing levels
  • Relationship of testing to other evaluation
    activities
  • Use of standards and specific techniques
  • Staffing choices
  • Relationship between organizations
  • Time, cost and quality tradeoffs
  • Allocation of responsibilities
  • Acquisition and allocation of resources
  • Use of tools and automation

52
Influences on Strategy
Influences feeding the STRATEGY:
TEST REQUIREMENTS: contractual provisions, V&V or
QA requirements, other evaluation activities
APPLICATION: status of software, nature of system,
acquisition strategy
TEST ENVIRONMENT: development lab, alpha or beta,
production, duplicate, actual or simulated
RESOURCES: money, time, people, skills
TEST OBJECTIVES: demonstrate usability, make it
break, check interfaces, prevent defects
TEST TOOLS: simulators, capture/playback, test beds
RISKS: quality requirements, impact of failure,
likelihood of defects
53
Potential Software Risks
  • Mission Criticality
  • Scope of Use
  • Environment Accessibility Criticality
  • Usable Requirements
  • Security Requirements
  • Interface Complexity
  • Technical Complexity
  • Component History

54
Potential Planning Risks
  • Delivery Dates
  • Staff Availability
  • Budget
  • Environment Options
  • Tool Inventory
  • Acquisition Schedule
  • Participant Buy-In
  • Training Needs

55
Measuring Effectiveness
  • Defects found and missed
  • Age of defects found
  • Number of rework cycles
  • Delivered quality or user satisfaction
  • Cost of failure and rework
  • Cost saved by failures avoided
  • Test rework (test defects and reruns)
  • Cost of testing (as a percentage)
  • Test documentation (scope and quality)
  • Test coverage
  • Defect finding rate
  • ????

Values Distributions Trends
56
Budgeting for Test
  • Resources Required Depend On
  • Quality of the requirements
  • Quality of design & implementation
  • Thoroughness of unit reviews and testing
  • Nature of the software application
  • Testing strategy and methodology
  • Testing tools and automation
  • Starting Guideline is 25% (a worked example
    follows this list)
  • Does not include debugging or software rework
  • Assumes a modest amount of test rework
  • Must increase for complex or critical systems

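A rough illustration of the 25% starting guideline (the arithmetic is mine, not from the slides): a project estimating 40 staff-months of development effort would start by budgeting roughly 0.25 x 40 = 10 staff-months for testing, then adjust upward for complexity, criticality, or heavier-than-modest test rework.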
57
Coordination & Control Tool
  • Defines tasks and activities
  • Should be used to track completion and status
  • Must be kept current

EXPECTATIONS ARE IMPORTANT
Recognize as a WORKING document
Responsibility to MAINTAIN
Responsibility to INFORM
58
Plan Review & Evaluation
Three Suggested Review Points
INITIAL REVIEW: Confirm test strategy and approach
LEVEL REVIEWS: Detailed review for each test level
(completed as the level plan is refined)
TEST COMPLETION: Evaluate effectiveness and lessons
learned
Issues: approach & strategy, organizations &
projects, responsibilities, tasks & people,
timeliness, feasible & realistic, completeness,
clear and effective, risks, interrelationships
59
Standardized Process
  • Master Test Plan provides STRUCTURE for a
    SPECIFIC project
  • Goal is an ONGOING Standard Test PROCESS

METHODOLOGY, TECHNIQUES, TOOLS, TRAINING, EXPERTISE
POLICY, STANDARDS, METRICS, PROCEDURES, GUIDES
60
Detailed Level Planning
  • Objectives
  • Understand specific distinctions between master
    and detail plans
  • Learn the STEP detail planning process and how to
    produce, evaluate and maintain effective test
    plans

61
Planning Phase Process
(Diagram: project info, overall resources, software
info, and level-specific resources feed the DEVELOP
Master Plan and DEVELOP Detailed Plans activities,
which exchange clarifications and produce the Master
Plan and the Level-Specific Plans.)
  • 1. Identify & prioritize product risk issues
  • 2. Determine resource constraints
  • 3. Develop overall testing strategy
  • 4. Determine resource, staffing, & training
    rqmts
  • 5. Identify responsibilities
  • 6. Specify an overall schedule
  • 7. Complete a draft master test plan
  • 8. Review & revise master plan

1. Identify & prioritize feature risks
2. Refine testing strategy
3. Specify special resource & tool rqmts
4. Specify a detailed schedule
5. Complete a draft detailed test plan
6. Review & revise detailed plan
62
Understanding Levels
  • What is a Test Level?
  • A division of the testing effort - specified
    responsibility for specified software
    component(s) or aggregate(s) in specified test
    environment(s)
  • What is a Test Environment?
  • Collection of hardware, communications, software,
    data bases, procedures and personnel used for the
    execution of a test.
  • Why Have Levels?
  • Reduce risk
  • Start execution earlier
  • Manageability and control

63
Typical Test Levels
(Diagram: test levels run from Acceptance, through
System/Product, Subsystem/Build, and
Function/Integration, down to Unit/Component. Toward
the top the software configuration under test is
complete, the environmental reality is real, and the
testers are customers, users, and the test team;
toward the bottom the configuration is incomplete,
the environment is artificial, and the testers are
the development team and programmers.)
64
Start from the Top Levels
65
Acceptance Test Planning
  • PURPOSE: Acceptance testing is that specialized
    testing performed to demonstrate a system's
    readiness for operational use and customer
    acceptance.
  • Focus is on user requirements
  • Follows systems testing and is expected to work
  • A final stage of confidence building
  • A protection to ensure production readiness
  • Helps to break into parts
  • User Acceptance: user readiness and buy-off
  • Production Acceptance: operational and corporate
    readiness and buy-off

66
Need to Map Requirements
Requirement spec sections (3.5.1.3.2, 3.5.1.4.7,
3.6.4.2.1, 3.8.2.7.1, ...) must be mapped to test
sections (Test Case 3, Test Case 5, Test Case 9,
...), as sketched below.
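One lightweight way to keep this requirements-to-test mapping current is a lookup table that can be checked automatically. A minimal sketch in Python; the section numbers follow the example above, while the test-case names and the deliberate gap in the last entry are purely illustrative:

    # Requirements-to-test traceability map (illustrative data).
    trace_matrix = {
        "3.5.1.3.2": ["Test Case 3"],
        "3.5.1.4.7": ["Test Case 5"],
        "3.6.4.2.1": ["Test Case 9"],
        "3.8.2.7.1": [],   # no covering test case yet: a gap to flag
    }

    # Report any requirement sections with no covering test case.
    uncovered = [req for req, cases in trace_matrix.items() if not cases]
    print("Requirements without tests:", uncovered)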
67
Acceptance Level Issues
  • Who are the accepting parties?
  • User/Customer, Marketing, Third Party
  • What is to be accepted?
  • Software and/or Hardware and/or Procedures
  • User's ability to use the system
  • What are the acceptance criteria?
  • What is the release strategy?
  • Pilot, Phased, Parallel
  • Who is responsible for test specification and
    conduct?
  • Accepting parties / External company / System Test / QA
  • What is a reasonable budget and duration?
  • How is this testing related to system test?
  • What retesting is needed when changes are made?

68
Typical Customer Responsibilities
  • 1. Requirements Definition
  • 2. Acceptance Test Plan
  • 3. Scenario-based Test Design
  • 4. Realistic Test Data

5. Test Execution
6. Documentation Review
7. Test Output Review
8. Stopping Decision
69
Acceptance Test The Glue
  • A set of tests that, when successfully executed,
    certifies a system meets the user's expectations.
  • Based on the requirements specifications, i.e.,
    High Level Tests
  • Built before a single line of code is developed
  • Approved by the user prior to development
  • Sample test cases serve as models of the
    requirements
  • The acceptance test set serves as a model of the
    system
  • Changes, if necessary, must be negotiated

70
System Test Planning Introduction
  • OBJECTIVE: Develop confidence that a system is
    ready for acceptance testing.
  • ENDS: When the system is turned over for
    acceptance or moved into production.
  • The bulk of testing in the large
  • Resources
  • Planning
  • Effort
  • Impact

71
System Test Planning Considerations
  • Changing Environment
  • Corrections to defects found
  • New code integration
  • Devices and supporting equipment
  • Files and data
  • Large Numbers of Test Cases
  • Hundreds and even thousands not uncommon
  • Starts with functional tests
  • Plus cases with INTENT to create failures
  • Plus cases to STRESS and break the system
  • Focus on Reliability Operations
  • Will the system support operational use?
  • Security, backup, recovery, etc.
  • Remember
  • Requirements may be wrong

72
System Test Planning
  • The Test Set Deliverables
  • A byproduct of every system test should be the
    test set
  • A Key Deliverable
  • TEST SET: Subset of the system's test cases to be
    saved for testing future modifications.
  • Easy to develop as a by-product
  • Difficult to justify later

73
System Level Issues
  • What are the goals of system test?
  • Breaking the system
  • Demonstrating readiness for use
  • Understanding performance limits
  • Testing user procedures
  • Which features are of greatest concern?
  • Are there any implementation-based test
    objectives?
  • What test data and tools are available?
  • How does test environment compare to operating
    environment?
  • What is a reasonable budget and duration?
  • How is system test organized and controlled?
  • What is a reasonable regression strategy?
  • How is system test coordinated with acceptance
    and integration?
  • How is software released to system test?

74
System Test Domains
  • Capacity Testing - Testing capacity limits on
    such variables as input volume or number of
    concurrent processes.
  • Concurrence Testing - Testing the simultaneous
    execution of multiple processes
  • Conversion Testing - Testing conversion
    procedures
  • Hardware Configuration Testing - Testing the
    system on its range of permissible HW
    configurations
  • Installation Testing - Testing installation
    procedures
  • Interface Testing - Testing interfaces to
    existing systems
  • Performance Testing - Testing of response time,
    throughput rates, and total job run times
75
System Test Domains (cont'd)
  • Recovery Testing - Testing restart/recovery
    procedure
  • Reliability Testing - Testing extended periods of
    normal processing
  • Resource Usage - Testing consumption of CPU and
    both main and secondary storage
  • Runability Testing - Testing operator procedures
  • Security Testing - Attempting to breach security
    mechanisms
  • Sensitivity Testing - Testing adaptation to
    changing loads
  • Software Configuration Testing - Testing the
    range of permissible software system
    configurations
  • Usability Testing - Analyzing the human-factors
    aspects of the user interfaces including the user
    procedures
76
Build Test
  • Build: A single component or group of
    interacting components
  • Build Scheme: A progression of builds that
    accumulate product functionality until the entire
    product has been assembled.
  • Build Test: Testing individual builds in a
    cost-effective build scheme. Testing of a build
    may focus on individual functions, functional
    interactions, or both.

77
Functional Integration
  • Functional subassembly is a set of components
    that accomplish a specific function
  • Start with a functional subassembly
  • Use incomplete components, stubs, and drivers as
    needed to conduct testing

78
Build Level Issues
  • What modules/objects should be assembled and
    tested as a group?
  • Functional subassemblies
  • Critical features first
  • How much testing is appropriate?
  • Interfaces only
  • All invoking functions and features
  • Stress and break conditions
  • Are there any implementation-based test
    objectives?
  • How much scaffolding code or test support code is
    required?
  • How will problems be isolated?
  • How is the testing coordinated with system and
    unit testing?

79
Planning for Testing in the Small
  • Objective: Determine that the module or code
    segment is ready or qualified to be integrated.
  • Key Considerations
  • Coordination with system or component level
    testing
  • Inspection vs. walkthroughs vs. tests
  • Documentation and reporting
  • Changes and modifications
  • Reuse of tests and test drivers
  • Black box vs. white box orientation
  • Ensuring the testing has been completed
    (independence)
  • Recommended General Guidelines
  • A programmer responsibility
  • Some independent checks and audits
  • Test design and results fully documented
  • Black box oriented with instrumented coverage
    checking
  • One generic module level test plan
  • Closely Aligned to Development
  • Strategy Selected is a Testing Issue

80
What is a Unit?
  • What factors drive the choice?
  • Physical Structure
  • Function
  • Risk
  • Cost

The software that is unit tested
(i.e. whatever you want it to be!)
81
Unit Test Definition
  • Unit Testing: Unit testing is the validation of
    a program module independently from any other
    portion of the system. The unit test is the
    initial test of a module. It demonstrates that
    the module is both functionally and technically
    sound and is ready to be used as a building block
    of application systems. It is often accomplished
    with the aid of stub and driver modules which
    simulate the activity of related modules.

82
Unit Testing Practices
  • Unit test plans and specifications are used - 24%
  • Unit tests and defects are tracked and
    analyzed - 21%
  • Unit test summary reports are produced - 14%
  • Code coverage is analyzed and tracked - 18%
  • Unit level test sets are saved and
    maintained - 23%
  • Data was collected from the 1995 STAR conference.

83
Why Is Unit Testing Not Done?
  • Hawthorne Effect
  • Attitude
  • Schedule
  • Training
  • Procedure/Tools/Training

84
What Do We Miss?
  • Programmers' Expertise
  • Knowledge of Problematic Areas
  • Re-use of Tests
  • Confidence in Unit Tests
  • Finding Bugs Early

85
What Can Be Done?
  • Education/Buy-in
  • Develop Standards and a Minimum Set of
    Requirements for Unit Testing
  • Tools
  • Identify and Describe Source Code Control
  • Metrics
  • Re-use of Unit Testware
  • Reviews, Walkthroughs and Inspections
  • Auditing
  • Buddy Testing

86
Testing in the Small
  • A programmer responsibility
  • BUT
  • You must provide training
  • Purpose of testing activity and why it is
    difficult
  • Analyzing programs to identify test cases
  • What is good and bad testing?
  • How to create test case specifications
  • How to define test execution and evaluation
    procedures
  • What records and documentation to retain
  • The importance of retesting and the concept of
    the test data set
  • AN ONGOING NEED

87
Test PlanningSummary
  • Master T&E Plan coordinates all testing
  • Defines levels and strategies
  • Establishes resources and responsibilities
  • Defines or points to procedures
  • Coordinates with the project plan
  • Test designs are not test plans
  • Tests are planned in reverse order of execution
  • The test plan is a control tool
  • Buy in is critical
  • Should be reviewed and updated throughout project
  • Strategy issues are diverse and complex
  • The right strategy is crucial
  • Planning begins and ends with RISK

88
Regression Modification Testing
  • Two Basic Questions to Answer
  • 1. Does the modification provide for all new
    features and desired requirements?
  • Has anything been left out?
  • Has the change been applied to every part of the
    system?
  • 2. Does the rest of the system still work?
  • Is anything else adversely affected?
  • How could this change create a problem?
  • Modification Testing Answers Question 1
  • Regression Testing Answers Question 2

89
Testing Changes - Strategies Available
  • 1. FULL REGRESSION (save test sets and rerun for
    each change) - resource requirements are large,
    may be prohibitive; requires ongoing test set
    maintenance
  • 2. PARTIAL REGRESSION (save test sets and rerun
    subsets) - still requires ongoing test set
    maintenance; added risk of lost functionality;
    problem of selecting what subset to run (a
    subset-selection sketch follows below)
  • 3. SELECTIVE RETEST (construct a special test set
    for each change) - resource requirements are
    prohibitive in most cases; high risk; may be
    appropriate when test design is data driven
  • 4. LIMITED RELEASE (don't test; release in
    controlled status) - testing performed by the
    user; must be able to move back to the old
    version quickly; high risk
  • The KEY to SUCCESS: BATCH or group changes
    together.
  • Makes FULL REGRESSION possible
  • But COMPLEXITY and difficulty to debug increases.
  • SIZE of the change is a KEY decision

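For the PARTIAL REGRESSION option above, the subset-selection problem is often reduced to a map from components to the tests that exercise them, taking the union over the components touched by a batched change. A minimal sketch, with made-up component and test-case names:

    # Illustrative map of components to the regression tests that exercise them.
    tests_by_component = {
        "billing":   ["TC-101", "TC-102", "TC-240"],
        "reporting": ["TC-310", "TC-311"],
        "login":     ["TC-010", "TC-011", "TC-012"],
    }

    def select_regression_subset(changed_components):
        """Union of the tests covering every component touched by a change batch."""
        selected = set()
        for component in changed_components:
            selected.update(tests_by_component.get(component, []))
        return sorted(selected)

    # One batched change touching billing and login:
    print(select_regression_subset(["billing", "login"]))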
90
9 Reasons Why Planning Isn't Successful
  • 1. Responsibility wrongly placed
  • 2. Functional isolation
  • 3. Lack of understanding
  • 4. Senior Management inflexibility
  • 5. Attempting too much, too soon
  • 6. Confusing financial results projection with
    planning
  • 7. Obsession with detail
  • 8. Failure to keep the plan current
  • 9. Refusal to operate the plan

91
Test Planning Summary
  • The cornerstone of the testing manager's
    influence is the Master Test Plan document.
  • Structure, Resources, Responsibilities, Tests,
    Procedures, Controls, Information, Tools
  • Crucial to plan
  • Crucial to control
  • Focus is on usability
  • A good test plan is an essential ingredient for
    success

92
Test Execution & Status Reporting
93
Test Execution & Evaluation
  • 1. Executing Tests
  • Execution issues and tools
  • Checking test adequacy & stopping criteria
  • 2. Tracking & Evaluating Results
  • Logging & reporting
  • Results analysis

94
Execution Issues
  • Are execution support tools known & understood?
  • How can execution logging be automated?
  • How will outcome be determined?
  • How should test incidents be handled?
  • How should defects be tracked?
  • How will code coverage be determined?
  • When should testing stop?
  • How should status be reported?
  • ??

95
Executing the Tests
  • Run Tests
  • Execute predefined procedures
  • Explore capabilities and behavior
  • Log execution activities
  • Record incidents, results and issues
  • Investigate incidents and issues
  • Review testware
  • Investigate Incidents
  • Rerun the test (repeatable problem?)
  • Determine if defect in tests or in the software
  • Look for related defects
  • Classify and record incident details
  • Review & Update Testware
  • Evaluate change risk/impact
  • Group changes together
  • Revise data and procedures as appropriate

(Diagram: the EXECUTE Tests activity uses the test
support environment, test procedures, test data, and
the software to be tested, and produces test
incidents, defect information, and run info to
check.)
96
Test Execution Flow
(Flow: ready testware, run tests, any failures?
If no, check adequacy. If yes, determine whether
they are testware defects (debug the testware) or
software defects (debug the software), then rerun.)
97
Execution Tools Aids
  • Simulations
  • Hardware, Network, Software (e.g. Drivers &
    Stubs)
  • Database/File Comparators
  • Capture/Replay Facilities
  • Data Manipulation Tools
  • Database/File Auditors & Profilers
  • Execution Tracers & Dynamic Analyzers
  • Debuggers

98
Test Management Tools
(Diagram: a test case/procedure bank feeds a
selector device and run preparer, which supply data
and procedures to the system to be tested; a
comparator and analyzer checks actual results
against required results and produces incidents and
a test results report.)
99
Checking Test Adequacy
  • Analyze Achieved Coverage
  • Measure code coverage dynamically as the tests
    are run
  • Tabulate coverage results (requirements, design
    and code)
  • Consider need for additional tests
  • Analyze Defects Found
  • Compare defect results to expectations
  • Reanalyze risk and test design assumptions
  • Consider need for additional tests
  • Add Additional Tests
  • Specify added tests if required
  • Implement added tests and execute
  • Reevaluate achieved coverage and defect results

(Diagram: the CHECK Adequacy activity takes test
plans & specifications, defect information,
execution results, and project information, and
produces check results.)
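Measuring code coverage dynamically while the tests run, as the checklist above suggests, is normally delegated to a tool. A minimal sketch using Python's coverage.py; the run_all_tests stand-in is illustrative, not part of STEP:

    import coverage

    def run_all_tests():
        # Stand-in for a real test suite; replace with your own runner.
        assert sum([1, 2, 3]) == 6

    cov = coverage.Coverage()        # collects line-coverage data
    cov.start()
    run_all_tests()
    cov.stop()
    cov.save()
    cov.report(show_missing=True)    # tabulate coverage and list uncovered lines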
100
Code Coverage Analyzer
Step 1 - Instrument: the program source is
instrumented, producing instrumented source and an
instrumented object.
Step 2 - Trace: the instrumented object is executed
with the test data, producing normal program output
plus trace data (prior trace data may optionally be
carried forward).
Step 3 - Analyze: the trace data is analyzed to
produce cumulative coverage and a coverage report.
101
Prediction?
(Two plots of Defects Found vs. Test Days:
SITUATION 1 and SITUATION 2.)
What do results so far predict?
102
Stopping Criteria - Revisited
  • Abnormal
  • Resource Exhaustion
  • Schedule
  • Budget
  • System Access
  • Patience
  • Project Redirection
  • Normal
  • Test Set Exit Criteria
  • Remaining Defects Estimation Criteria
  • Defect History of Past Software
  • Defect History of Current Item
  • Software Complexity
  • Combination of these
  • Diminishing Return Criteria
  • Cost to Detect Next Defect
  • Combined Criteria

There is no single, valid, rational criterion
for stopping. Furthermore, given any set of
applicable criteria, how each is weighed depends
very much on the product, the environment, the
culture, and the attitude to risk. Boris Beizer
103
Standard Reports
  • Test Log
  • A chronological record of relevant details about
    the execution of tests.
  • Test Incident (Anomaly) Report
  • The detailed description of a testing event that
    requires investigation and may require
    correction.
  • Status Reports
  • A record of the current status of testing
    activities and results used to evaluate progress
    and to understand the status of the testware and
    software
  • Summary Report
  • A document summarizing testing activities and
    results, evaluating the testing activity, and
    evaluating the tested software components and
    documents.

104
Logging
  • Test logs provide test visibility and support
    analysis & auditing
  • STEP requires logging for procedures (cases may
    also be logged)
  • Automation is essential

Test Procedure TP100: CALL LOGGER (Proc, Tester,
Test Env, etc.); Procedure Step 1; Procedure Step 2

Time  Proc   Tester  Test Env     Comment
1115  TP100  BH      System Test  Execution initiated
1131  TP100  BH      System Test  Normal termination
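The CALL LOGGER idea is easy to automate. A minimal sketch in Python rather than the slide's pseudocode; the field layout follows the example log above, while the file name and values are illustrative:

    import csv
    from datetime import datetime

    def log_event(path, proc, tester, env, comment):
        # Append one chronological test-log record: time, proc, tester, env, comment.
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.now().strftime("%H%M"), proc, tester, env, comment]
            )

    # Usage mirroring the TP100 example:
    log_event("test_log.csv", "TP100", "BH", "System Test", "Execution initiated")
    log_event("test_log.csv", "TP100", "BH", "System Test", "Normal termination")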
105
Test Incident Report Contents
  • 1. Identifier
  • 2. Incident Summary - references to: test
    procedure, test case specification, test log
  • 3. Impact
  • 4. Description - inputs, procedure step, required
    results, actual results, environment, anomalies,
    attempts to repeat, date and time, testers'
    names, observers' names
  • 5. Investigation - investigators' names, special
    isolation strategies, total isolation time
  • 6. Defects - software, testware, documentation
  • 7. Disposition

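Incident reports are easier to track and summarize when each one is captured as a structured record mirroring this outline. A minimal sketch; the field names paraphrase the outline and the sample values echo the incident summary on the next slide, but nothing here is prescribed by the tutorial:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class IncidentReport:
        identifier: str                    # 1. Identifier
        summary: str                       # 2. References to procedure, case spec, log
        impact: str                        # 3. Impact: MINOR / MAJOR / CRITICAL
        description: str                   # 4. Inputs, expected vs. actual results, environment
        investigation: str = ""            # 5. Investigators, isolation strategy and time
        defects: List[str] = field(default_factory=list)  # 6. Software / testware / documentation
        disposition: str = "OPEN"          # 7. Disposition

    ir = IncidentReport("0002", "PLAN subsystem, test F87", "MAJOR",
                        "Expected plan totals; actual report was empty")
    print(ir.identifier, ir.impact, ir.disposition)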
106
Incident Summary
IR      SUBSYSTEM  TEST  IMPACT    STATUS
  • 0001  IRM      C45   MINOR     CLOSED
  • 0002  PLAN     F87   MAJOR     OPEN
  • 0003  UNIT     B33   CRITICAL  CLOSED
  • 0521  CONTROL  F88   MINOR     SUSPENDED

107
Defect Density
  • is currently the most used industry quality
    measure (measuring the absence of quality by
    counting problems)

BUT What is a defect? How do we count them?
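Whatever counting rules are chosen, the arithmetic itself is simple; the comparability problems come from the definitions. A minimal worked example with illustrative numbers, taking defect density as defects found per thousand lines of code (KLOC):

    defects_found = 120   # counted under the team's agreed defect definition
    size_kloc = 40.0      # system size in thousands of lines of code
    print(defects_found / size_kloc, "defects/KLOC")   # -> 3.0 defects/KLOC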
108
What is a Defect Anyway?
  • Four Definitions
  • 1. A defect is a deviation from the specification
  • Bill Hetzel, Program Test Methods, 1972
  • 2. A defect occurs whenever the software does not
    behave the way the user reasonably expects it to
  • Glen Myers, Art of Software Testing, 1979
  • 3. A defect is something that, if not corrected,
    will result in an Authorized Program Analysis
    Report (APAR) or result in a defect condition in
    a later inspection or test stage or is in
    nonconformance to a documented specification or
    requirement.
  • IBM Rochester, MN
  • 4. A defect includes any deviation from the
    specification and also errors in the
    specification. (Once the spec is accepted any new
    features added or old features deleted are
    considered defects.)
  • Bob Grady, Software Metrics, 1989

109
Defect Counting Problems
  • When to start counting
  • Start of development
  • Completion of the work product
  • Formal configuration management
  • Start of test
  • Which activities are considered defect finding?
  • Inspections
  • Test Executions
  • Test Design and Development
  • Informal review
  • Special analyses
  • What about omissions?
  • Forgotten requirements
  • Omitted features
  • How to treat severity and impact
  • How to treat confusion and dissatisfaction

110
How to Categorize Defects
1. Severity of Impact
2. Component Involved
3. Defect Type
4. Phase Created
5. Author
6. Age
7. Degree of Isolation Difficulty
8.
111
Status Report Examples
CATEGORY      COUNT    %
PASSING        1464   74
FAILING         191   10
  CRITICAL        6
  MAJOR           9
  MINOR         176
HOLDING          37    2
INVALID           7    1
NOT TESTED      264   13
TOTAL          1963
112
Status Plot Example
(Plot: number of tests planned, specified,
implemented, and executed, versus time.)
113
Summary Report Contents
  • 1. Identifier
  • 2. References
  • Items (with revision numbers)
  • Environment
  • References
  • 3. Variances (deviations)
  • From test plans or specs
  • Reasons for deviations
  • 4. Summary of incidents
  • Resolved incident reports
  • Defect patterns
  • Unresolved incident reports

5. Adequacy assessment - evaluation of coverage,
identify uncovered attributes
6. Summary of activities - system/CPU usage, staff
time, elapsed time
7. Software evaluation - limitations, failure
likelihood
8. Approvals
114
Configuration Management
115
Configuration Change Control
  • Software Configuration Management (SCM)
  • The discipline of identifying, recording and
    reporting the software aggregates in a system at
    discrete points in time for the purpose of
    controlling changes to these aggregates and
    maintaining their integrity and traceability
    throughout the life cycle.
  • Mechanism to control and track software changes
    is a MUST
  • changes in the environment
  • changes in the system
  • changes in the tests
  • changes in the documents
  • BASELINE must be established and controlled
  • Generally accomplished through a CM system
  • Set of administrative procedures
  • Change controls
  • Automated version controls

116
Testing Configuration Management
(Items under configuration management: existing
documentation, testware (old), and software, plus)
  • Test Documentation
  • Testware (new)
  • Data
  • Procedures
  • Support Software

117
Software/Testware Configuration Mgmt--Summary
  • Good software CM is critical to the success of a
    project
  • Principles of software CM apply to testware
  • There are many testware components to manage

118
Summary