1 Introduction to Software Inspection
EE599 Software V&V, Winter 2006
- Diane Kelly and Terry Shepard
- Royal Military College of Canada
- Electrical and Computer Engineering
- kelly-d_at_rmc.ca
- ext. 6031
2 Outline
- Rationale for inspection
- Fagan style inspections
- What makes inspections hard
- New approaches
- Mechanics of inspections
- process
- techniques
- Experiments with TDI at RMC
3 Rationale for Inspections
- The original argument: payback on the cost of fixes (e.g. debugging), leading to improved productivity
- according to work done by Glen Russell at BNR (now Nortel Technologies) in 1991, 65 to 90% of operational (execution-time) defects are detected by inspection at 1/4 to 2/3 the cost of testing [13]
- Other arguments for inspection
- better control of process
- higher quality
- reduced defect rates
- reduced cost of finding defects
- payback on fixing soft issues (Votta [3])
4 Industry Standard Inspection
- First described by Michael Fagan, 1976 [30]
- subsequent paper in 1986 [31]
- Fagan defines inspection
- defines a process
- defines roles for the participants
- suggests feedback to improve the development process
- Fagan approach to inspections has become an industry standard
- Several variations
- e.g. Gilb and Graham - next slide
5 Inspection process from Gilb and Graham - variation on Fagan
6 Fagan-style inspections
- Entry and exit criteria
- Preparation
- Moderator
- Meeting: 3-6 people
- Controlled rate: typically 100 to 200 LOC/hr
- rates that are too fast or too slow reduce effectiveness
- Controlled meeting length: max 2 hours
- Controlled number of meetings per day: max 2
- Track and report defects found, time spent
- rate of finding defects per hour, rate per KLOC (see the sketch after this list)
- Classify defects
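- A minimal sketch of how the tracked data reduce to the reporting rates above; the numbers are invented for illustration, not taken from any cited study:

    /* toy illustration: inspection tracking metrics from assumed raw data */
    #include <stdio.h>

    int main(void)
    {
        double loc     = 400.0;  /* lines of code inspected (assumed) */
        double hours   = 2.0;    /* inspection time in hours (assumed) */
        double defects = 12.0;   /* defects recorded (assumed) */

        printf("inspection rate: %.0f LOC/hr\n", loc / hours);                   /* 200 LOC/hr */
        printf("detection rate : %.1f defects/hr\n", defects / hours);           /* 6.0 defects/hr */
        printf("defect density : %.0f defects/KLOC\n", defects / (loc / 1000.0)); /* 30 defects/KLOC */
        return 0;
    }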
7 What makes inspections hard?
- Mentally demanding
- Specific skills needed may be in short supply
- Role of moderator requires special people skills
- meeting dynamics
- possible tension between inspectors and product author
- Time pressures
- squeezed by tight schedules
- inspection interval problem
- Attitudes
- perceived as uninteresting work
- not glamorous, support role only
- Lack of ownership
- who owns the inspection?
- Unclear goals, multiple goals
8 New Approaches
- Research and industry practice suggest variations on
- goals
- meetings
- techniques
- size of teams
- entry criteria
- exit criteria
- when to inspect
- activities to include/exclude
- ...
- There are better approaches than the commonly used Fagan method [25]
9 Mechanics: Inspect for Specific Goals - Examples
- Set an inspection goal to identify usability issues
- Ask questions such as
- What are these identifiers used for?
- What is the default choice in these switch statements? (see the sketch after this list)
- Have a numerical techniques expert check one module that does a matrix inversion
- Have a junior programmer check the entire product for meaningful identifiers
- Have a meeting focused solely on consistency issues of the user interface
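- A hypothetical illustration of the switch-statement question above; the enum, function, and values are invented for this sketch:

    /* Fragment of the kind a goal-directed inspector might question:
       what is the default choice when 'm' is not one of the handled values? */
    #include <stdio.h>

    enum mode { MODE_READ, MODE_WRITE, MODE_APPEND };

    static void open_file(enum mode m)
    {
        switch (m) {
        case MODE_READ:  printf("opening for read\n");  break;
        case MODE_WRITE: printf("opening for write\n"); break;
        /* no default case: MODE_APPEND (or a corrupted value) is silently ignored */
        }
    }

    int main(void)
    {
        open_file(MODE_APPEND);  /* prints nothing -- a finding worth recording */
        return 0;
    }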
10 Mechanics: Inspect for Specific Goals - Particular Approaches [6, 7]
- Set objectives of the inspection ahead of time
- Inspector skills are matched to specific aspects of the inspection
- Each inspector focuses on a specific purpose
- inspectors actively work with the document
- questionnaires don't have passive questions requiring simple yes/no answers
- Inspectors may either
- examine specific modules -or-
- examine entire product for one property
- Small meetings focused on specific aspects
- meetings may not raise new findings
11 Mechanics: Types of Inspectors [6]
- Suggestions for finding good inspectors
- specialists
- application area
- hardware
- operating system
- ...
- potential users of system
- those familiar with design methodology used
- those who enjoy finding logical inconsistencies
and are skilled at doing so
12 Mechanics: what to inspect
- All software artifacts can be inspected
- In some cases, code inspections can be replaced by design, architecture or requirements inspections
- In other cases, code inspections can replace unit testing
- Inspection and testing are complementary activities
- Architecture and design inspections have high payback (XP challenges this - a current debate)
- Start small: even one page can contain 50 or more defects (e.g. early requirements inspection)
13 Mechanics: when to inspect (1)
- Inspection is possible at any time
- Based on your goals
- Fit inspections into your existing process
- even an immature organization can implement an effective inspection program (e.g. [24])
- Don't allow the inspection to become a bottleneck
- do what you can in parallel with other activities
- Look at what resources are available
14 Mechanics: when to inspect (2)
- Determine the criteria (a sketch of such rules follows this list)
- entry criteria
- code: after clean compile, before testing?
- documents: after spell checking, after syntax checking?
- anything: can I comprehend what you've given me?
- re-inspection criteria
- more than 5 errors per page?
- exit criteria
- all findings addressed?
- inspection tasks completed?
- metrics at acceptable levels?
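- A minimal sketch of how such criteria might be operationalized as simple checks; the thresholds and helper names are assumptions for illustration, not prescriptions:

    /* toy encoding of the example criteria above */
    #include <stdbool.h>
    #include <stdio.h>

    /* re-inspection rule: more than 5 errors per page */
    static bool needs_reinspection(int errors, int pages)
    {
        return errors > 5 * pages;
    }

    /* exit rule: all findings addressed and all inspection tasks completed */
    static bool can_exit(int open_findings, int open_tasks)
    {
        return open_findings == 0 && open_tasks == 0;
    }

    int main(void)
    {
        printf("re-inspect: %s\n", needs_reinspection(23, 4) ? "yes" : "no"); /* yes: 23 > 20 */
        printf("exit OK   : %s\n", can_exit(0, 0) ? "yes" : "no");            /* yes */
        return 0;
    }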
15 Mechanics: tools
- Comments about tools here are focused on code
- use tools to make inspection more productive
- reduce tedious trivia burden
- for code clean up, restructuring
- enforce code styles
- e.g. enforce comment style consistency
- some style errors slip past
- automatic generation of documentation
- e.g. call trees, flowcharts, UML diagrams, ...
- find some syntactic errors
- tools cannot find errors that result in correct syntax (see the fragment after this list)
- tools can get confused in complex code
- report false positives
- miss real issues
- still need someone to read the code
- asynchronous inspections: virtual meetings
- can reduce inspection intervals, but no synergy
- current research on Computer Supported
Collaborative Work (CSCW) may improve inspections
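- A hypothetical fragment showing the limit noted above: it compiles cleanly and passes style checks, yet a logic defect survives for a human reader to find:

    /* syntactically correct and tool-clean, but logically wrong */
    #include <stdio.h>

    static double average(int sum, int count)
    {
        return sum / count;  /* integer division truncates: 7 / 2 == 3, not 3.5 */
    }

    int main(void)
    {
        printf("average = %.2f\n", average(7, 2));  /* prints 3.00 */
        return 0;
    }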
16 Does Every Inspection Need a Fagan-style Meeting?
- Report on an experiment: Votta [5]
- investigates the need for large inspection meetings
- are two-person meetings enough?
- Report on an experiment: Porter, Votta [26]
- collection meetings produce no net improvement in terms of meeting losses and meeting gains
- Report on an experiment: Johnson, Tjahjono [17]
- shows that the number of defects found does not
increase as a result of meetings, but that
meetings help to detect false positives
17 Is there a case for more than one meeting?
- An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development (Porter, Siy, Toman, Votta [8])
- varied the number of reviewers, the number of teams inspecting the code unit, and the requirement of fixing defects between the first and second teams' inspections
- two teams per inspection with fix was dropped part way through the experiment as infeasible
- results on two teams in parallel are inconclusive, where the total number of people is the same (e.g. 2 teams of 1 vs. 1 team of 2)
- one of the conclusions
- structural changes to the inspection process do not always have the intended effect; significant improvements to the inspection process will depend on the development of defect detection techniques
18 Unstructured Techniques (1)
- Ad hoc is common
- often based on paraphrasing
- no focus on particular issues
- no guidance on how to consider the product
- no explicit use of inspectors' expertise
- not repeatable
- no record of inspectors' thought processes and actions
- does the beginning of the product get more attention than the end?
- are conceptually difficult parts ignored?
19 Unstructured Techniques (2)
- Checklists are popular but have several shortcomings [2, 9]
- checklists are based on past experience
- some categories will not have been found yet
- depends on inspectors using individual expertise to find issues in missing categories
- inspectors may not look for anything beyond what is in the checklist
- checklists may be lengthy and hard to use
- checklist categories may not be effective or productive
- categories need to be prioritized
- some categories may not provide enough guidance
- a checklist for one project may not work on another
- some checklists should be personalized
- checklist effectiveness needs to be evaluated
- constant improvement needed
20 Unstructured Techniques (3)
- Checklists are effective when [12, 33]
- well established history of the product exists
- product is predictable
- inspection goals are accompanied by a well documented method of operationalization
- e.g. style manual, standards
- providing a source of accumulated wisdom to a junior member of the team
- an inspector is working independently and
- the inspector is not experienced and
- the checklist provides a relatively complete tour of the product under inspection
21 Overview of Structured Inspection Techniques (University of Maryland [10])
- Systematic
- inspector knows how to inspect the document
- Specific roles
- inspector is responsible for a specific role with a specific focus
- Distinct
- each role is distinct in the sense of minimizing overlap
- coverage of different defect classes is achieved by having multiple roles
22 Scenario-Based Reading
- "A technique that is used individually in order to analyze a product or a set of products. Some concrete instructions are given to the reader on how to read or what to look for in a document." (University of Maryland, Notes on Perspective-based Scenarios [10])
- defect-based and perspective-based scenarios (Porter, Votta, Basili [11])
- focus on defect classes and use a questionnaire to guide inspectors
- focus on a role (perspective) and use a questionnaire to guide inspectors (may also produce a product associated with that role)
- can prepare a scenario for any inspection
23 Task-Directed Inspection (TDI)
- What is TDI?
- inspectors produce a usable product for future evolution of the software system
- inspectors' expertise is matched to the task
- used with a light-weight inspection process
- focus on the work of the individual rather than that of the team
- based on the observation that creating careful documentation often shows up defects in the product being documented
24 Experiments with TDI at RMC
- Experiments in 2000 to 2002
- Purpose of the experiments
- Concerns to be addressed in experimentation
- Elements of the experiments
- Design of the experiments
- Results and Conclusions
- Experiments in 2003/2004
25 Purpose of TDI Experiments at RMC (2000 to 2002)
- evaluate the effectiveness of TDI
- can it guide the inspector to identify subtle defects in the work product?
- different levels of understanding involved in finding defects
finding defects - low level style issues
- high level logic errors in a complex calculation
- compare use of TDI to an ad hoc technique
- observe the use of TDI in a situation very
different from where it was first developed
26 Concerns to be Addressed by Design of Experiment (Confounding factors)
- natural variation in participants' performance
- maturation of subjects as the experiment progresses
- one detection technique biasing the use of another detection technique
- variations in the instrument, i.e., the pieces of code selected for inspection
code selected for inspection - outside influences
27 Elements of the Experiment (2000 to 2002)
- inspection techniques
- paraphrasing
- TDI - method description
- TDI - test plan
- code to be inspected
- CAVMLC - civil engineering code
- three pieces
- equivalent length
- equivalent complexity
- equivalent application content
- participants
- experimental design
28 Design of Experiment (2000)
29 Results from Experiment 2000 - Report in CASCON Proceedings 2000
30 Results from Experiment 2000 - Report in CASCON Proceedings 2000
31 Analysis of Results for Experiments 2000 to 2002
- for the experienced students
- using TDI resulted in proportionally more findings in the Logical category
- using TDI didn't make any difference in the proportion of findings in the Comparative category
- for the inexperienced students
- using TDI may have helped increase the proportion of findings in the Comparative category
- using TDI seemed to hinder identification of findings in the Logical category
32 Conclusions from all three Experiments
- TDI provides positive results in the level of understanding of the code for experienced inspectors
- tasks must be appropriate for the inspectors' background and experience
- there is a strong interaction between inspection technique and the inspector's preferred style of working
33 Experiment 2003/2004
- Assumption
- TDI works!
- Previous focus
- work of individual
- no effects of group meetings considered
- Purpose for 2003/2004
- what effects do meetings have when using TDI?
34 Elements of the Experiment 2003/2004
- inspection technique
- TDI - high level design
- code to be inspected
- chemical engineering - radiation dose on air crew
- two pieces of code
- content of both is computation-based
- participants
- experimental design
- partial factorial
- repeated measures
- individual versus groups
- warm up in first week
35 References 1
- [1] Phillip M. Johnson, "Reengineering Inspection", Communications of the ACM, Feb. 1998, Vol. 41, No. 2, pp. 49-52
- [2] Oliver Laitenberger, Khaled El Emam, Thomas Harbich, "An Internally Replicated Quasi-Experimental Comparison of Checklist and Perspective-Based Reading of Code Documents", IEEE Transactions on Software Engineering, May 2001, Vol. 27, No. 5, pp. 387-421
- [3] Lawrence G. Votta, "Does the Modern Code Inspection Have Value?", presentation at the NRC Seminar on Measuring Success: Empirical Studies of Software Engineering, March 1999, http://www.cser.ca/seminar/ESSE/slides/ESSE_Votta.pdf
36 References 2
- [4] Tom Gilb and Dorothy Graham, Software Inspection, Addison-Wesley, 1993; see also http://www.result-planning.com/
- [5] Lawrence G. Votta, "Does Every Inspection Need a Meeting?", SIGSOFT '93: Proceedings of the 1st ACM SIGSOFT Symposium on Foundations of Software Engineering, ACM Press, New York, 1993, pp. 107-114
- [6] David L. Parnas, David M. Weiss, "Active Design Reviews: Principles and Practice", Proceedings of the 8th International Conference on Software Engineering, Aug. 1985
37 References 3
- [7] J.C. Knight, E.A. Myers, "An Improved Inspection Technique", Communications of the ACM, Nov. 1993, Vol. 36, No. 11, pp. 51-61
- [8] Adam A. Porter, Harvey P. Siy, Carol A. Toman, Lawrence G. Votta, "An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development", IEEE Transactions on Software Engineering, Vol. 23, No. 6, June 1997, pp. 329-346
- [9] Y. Chernak, "A Statistical Approach to the Inspection Checklist Formal Synthesis and Improvement", IEEE Transactions on Software Engineering, 22(12):866-874, Dec. 1996
38 References 4
- [10] University of Maryland, "Notes on Perspective-based Scenarios", online: http://www.cs.umd.edu/projects/SoftEng/ESEG/manual/pbr_package/node8.html, Nov. 1999
- [11] Adam A. Porter, Lawrence G. Votta, Victor R. Basili, "Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment", IEEE Transactions on Software Engineering, Vol. 21, No. 6, June 1995, pp. 563-575
- [12] Diane Kelly and Terry Shepard, "Task-Directed Software Inspection Technique: An Experiment and Case Study", Proceedings of IBM CASCON, Nov. 2000
- [13] Glen W. Russell, "Experience with Inspection in Ultralarge-Scale Developments", IEEE Software, Jan. 1991, pp. 25-31
39 References 5
- [14] R. Chillarege, et al., "Orthogonal Defect Classification - A Concept for In-Process Measurements", IEEE Transactions on Software Engineering, Vol. 18, No. 11, Nov. 1992, pp. 943-956
- [15] Kent Beck, Extreme Programming Explained: Embrace Change, Addison-Wesley, 1999
- [16] Robert B. Grady, Successful Software Process Improvement, Prentice Hall, 1997
- [17] Philip M. Johnson and Danu Tjahjono, "Does Every Inspection Really Need a Meeting?", Journal of Empirical Software Engineering, Vol. 4, No. 1, pp. 9-35, Jan. 1998
40 References 6
- [18] David A. Wheeler, Bill Brykczynski, and Reginald N. Meeson Jr., Software Inspection: An Industry Best Practice, IEEE CS Press, 1996
- [19] Gregory Abowd, Len Bass, Paul Clements, Rick Kazman, Linda Northrop, Amy Zaremski, "Recommended Best Industrial Practice for Software Architecture Evaluation", Technical Report CMU/SEI-96-TR-025, January 1997
- [20] Stan Rifkin, Lionel Deimel, "Program Comprehension Techniques Improve Software Inspections: A Case Study", Proceedings of IWPC '00, IEEE, 2000
- [21] Steve McConnell, "Quantifying Soft Factors", IEEE Software, Nov/Dec 2000, pp. 9-11
41 References 7
- [22] Terry Shepard, Margaret Lamb, and Diane Kelly, "More Testing Should be Taught", Communications of the ACM, June 2001, 44(6), pp. 103-108
- [23] Michael Cusumano, Richard Selby, Microsoft Secrets, Simon & Schuster Inc., 1995
- [24] Edward Kit, Software Testing in the Real World: Improving the Process, Addison-Wesley, 1995
- [25] Robert L. Glass, "Inspections - Some Surprising Findings", Communications of the ACM, April 1999, Vol. 42, No. 4, pp. 17-19
42 References 8
- [26] Adam Porter, Lawrence Votta, "Comparing Detection Methods for Software Requirements Inspections: A Replication Using Professional Subjects", Empirical Software Engineering Journal, 1997
- [27] Victor Basili, Scott Green, Oliver Laitenberger, Filippo Lanubile, Forrest Shull, Sivert Sorumgard, Marvin Zelkowitz, "The Empirical Investigation of Perspective-Based Reading", Empirical Software Engineering: An International Journal, 1(2), 1996, pp. 133-164
- [28] Oliver Laitenberger, Thomas Bell, "An Industrial Case Study to Examine a Non-Traditional Inspection Implementation for Requirements Specifications", Proceedings of the 8th IEEE Symposium on Software Metrics, June 2002, pp. 97-106
43 References 9
- [29] Michael Fredericks, Victor Basili, "Using Defect Tracking and Analysis to Improve Software Quality: A DACS State-of-the-Art Report", Rome, NY, 1998
- [30] M.E. Fagan, "Design and Code Inspections to Reduce Errors in Program Development", IBM Systems Journal, Vol. 15, No. 3, 1976, pp. 182-211
- [31] M.E. Fagan, "Advances in Software Inspections", IEEE Transactions on Software Engineering, Vol. 12, No. 7, July 1986, pp. 744-751
- [32] Watts S. Humphrey, A Discipline for Software Engineering, Addison-Wesley, 1995
44 References 10
- [33] Stefan Biffl, Michael Halling, "Investigating the Influence of Inspector Capability Factors with Four Inspection Techniques on Inspection Performance", Proceedings of the 8th IEEE Symposium on Software Metrics, 2002, pp. 107-117
- [34] Robert B. Grady, "Software Failure Analysis for High-Return Process Improvement Decisions", Hewlett-Packard Journal, 47(4), August 1996
- [35] Boris Beizer, Software Testing Techniques, 2nd ed., Van Nostrand Reinhold, NY, 1990
- [36] http://www.pmforum.org/library/glossary/PMG_D00.htm
45 References 11
- [37] Robert Grady, Deborah Caswell, Software Metrics: Establishing a Company-Wide Program, Prentice-Hall, 1986
- [38] IEEE Std 1044, Classification for Software Anomalies
- [39] David Card, "Learning from our Mistakes with Defect Causal Analysis", IEEE Software, Jan-Feb 1998, pp. 56-63
- [40] http://www.research.ibm.com/softeng/ODC/DETODC.HTM and FAQ.HTM#concepts
46 References 12
- [41] Barry Boehm, Victor Basili, "Software Defect Reduction Top 10 List", IEEE Computer, January 2001, pp. 135-137
- [42] Ram Chillarege, Inderpal Bhandari, Jarir Chaar, Michael Halliday, Diane Moebus, Bonnie Ray, Man-Yuen Wong, "Orthogonal Defect Classification - A Concept for In-Process Measurements", IEEE Transactions on Software Engineering, Vol. 18, No. 11, Nov. 1992, pp. 943-956
- [43] Jarir Chaar, Michael Halliday, Inderpal Bhandari, Ram Chillarege, "In-Process Evaluation for Software Inspection and Test", IEEE Transactions on Software Engineering, Vol. 19, No. 11, Nov. 1993, pp. 1055-1070