SE 468 Software Measurement/Project Estimation: Transcript and Presenter's Notes

1
SE 468 Software Measurement/Project Estimation
  • Dennis Mumaugh, Instructor
  • dmumaugh@cdm.depaul.edu
  • Office: Loop campus, Room CDM 430, x26770
  • Office Hours: Monday, 4:00-5:30

2
Administrivia
  • Comments and feedback
  • New references in reading list
  • General: Establishing a Measurement Program
  • Lecture 6: Implementing Phase Containment
    Effectiveness Metrics at Motorola

3
Midterm Exam
  • Midterm Exam results

4
SE 468 Class 7
  • Topics
  • Software Metrics
  • In-Process Metrics for Testing
  • The Fagan Inspection Process
  • Software Design Quality
  • Complexity Metrics and Models
  • Other metrics
  • Exploiting Measures Part I
  • Reading
  • Kan chapters 10-11, pp. 106-110
  • Articles on the Class Page

5
Assignment 4
  • Defect removal analysis
  • Consider the data listed in the following matrix
    for a product of size 45 KLOC
  • Calculate the defect removal rate, defect
    injection rate, and defect escape rate for each
    phase
  • Which phase is the most effective in removing
    defects?
  • Calculate the overall defect removal
    effectiveness.
  • Do you think reviews and inspections were
    effective? Explain.
  • Look at the references, especially the paper
    "Implementing Phase Containment Effectiveness
    Metrics at Motorola"
  • Due November 2, 2009.
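  • For reference, the phase-containment arithmetic
    works like this: a phase's removal effectiveness
    is the defects it removes divided by the defects
    at risk in that phase (defects escaping earlier
    phases plus defects injected in this one). A
    minimal sketch in Python; the phase names and
    counts below are placeholders, not the
    assignment's matrix:

    # Phase-containment arithmetic (after Kan): (injected, removed) per phase.
    # The numbers are placeholders; substitute the assignment's matrix.
    phases = [("Design", 120, 60), ("Code", 200, 150), ("Test", 10, 100)]

    escaped = 0  # defects carried in from earlier phases
    for name, injected, removed in phases:
        present = escaped + injected           # defects at risk in this phase
        effectiveness = 100 * removed / present
        escaped = present - removed            # defects escaping to the next phase
        print(f"{name}: {effectiveness:.0f}% effective, {escaped} escape")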

6
Thought for the Day
  • "... there is no particular reason why your
    friend and colleague cannot also be your sternest
    critic."
  • Jerry Weinberg
  • "One good test result is worth one thousand
    expert opinions."
  • Wernher von Braun

7
Static and dynamic verification
  • Software testing: concerned with exercising and
    observing product behavior (dynamic verification)
  • The system is executed with test data and its
    operational behavior is observed
  • Software inspections: concerned with analysis of
    the static system representation to discover
    problems (static verification)
  • May be supplemented by tool-based document and
    code analysis

8
In-process Metrics for Testing
9
In-Process Metrics for Software Testing
  • Test Progress (S Curve)
  • Planned progress over time (test cases completed
    successfully per unit time)
  • Number of test cases attempted
  • Number of test cases completed successfully
  • Product Size Over Time
  • Lines of code, etc.
  • Fully tested and functional (released)
  • Use to look for scope creep
  • CPU Utilization During Test
  • Measurement of system stress
  • Testing Defect Arrivals Over Time
  • Testing Defect Backlog Over Time: fix backlog
    and Backlog Management Index (BMI)
  • BMI = number of problems closed divided by number
    of problem arrivals (expressed as a percentage)
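  • For example (hypothetical numbers): closing 95
    problems in a week against 100 new arrivals gives
    BMI = 95/100 × 100 = 95; a sustained BMI below
    100 means the backlog is growing.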

10
In-Process Metrics for Software Testing
  • Fix response time and fix responsiveness
  • Mean time to fix
  • Percent delinquent fixes
  • Fixes exceeding the response-time criterion
    divided by total fixes delivered
  • Fix quality
  • Percentage of defective fixes
  • System Crashes and Hangs
  • Stress and stability measure
  • Mean Time to Unplanned IPL (or reboot): an MTTF
    measure
  • Software reliability measure
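  • Worked example (hypothetical numbers): if 6 of
    120 fixes delivered in a month exceeded their
    severity-based response-time criterion, percent
    delinquent fixes = 6/120 × 100 = 5%.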

11
In-Process Metrics for Software Testing
  • Critical Problems (Show Stoppers)
  • Number of critical problems over time
  • Types of problems and resolution
  • In-Process Metrics and Quality Management
  • Use calendar time
  • Use ship date as reference and week as unit of
    time
  • Indicate what is good and bad for each metric
  • Some metrics require immediate management
    attention
  • Metrics drive improvement

12
In-Process Metrics for Software Testing
  • Possible Metrics for Acceptance Testing to
    Evaluate Vendor-Developed Software
  • Percentage of test cases attempted
  • Number of defects per test case
  • Number of failing test cases without defect
    records
  • Success rate
  • Persistent failure rate
  • Defect injection rate
  • Code completeness
  • How Do You Know Your Product Is Good Enough to
    Ship?
  • System stability
  • Defect volume
  • Outstanding critical problems
  • Feedback from early customers (Beta testing and
    controlled release)
  • Other quality factors

13
Design and Code Reviews
  • "By doing design and code reviews, you will see
    more improvement in the quality and productivity
    of your work than you will see from any other
    single change you make to your personal software
    process." (Watts Humphrey)

14
Software inspections
  • The systematic study of a product artifact (e.g.,
    specification document, design document, source
    program) for the purpose of discovering flaws in
    it.
  • These involve people examining the source
    representation with the aim of discovering
    anomalies and defects.
  • Inspections do not require execution of a system,
    so they may be used before implementation.
  • They may be applied to any representation of the
    system (requirements, design, configuration data,
    test data, etc.).
  • They have been shown to be an effective technique
    for discovering program errors.

15
Inspections and Reviews
  • Benefits of Inspection
  • Inspection is more cost effective than testing
  • Types of Inspection
  • How to conduct an inspection
  • who to invite
  • how to structure it
  • Some tips and thoughts

16
Inspections
  • Many studies have shown that finding errors early
    in a project costs less than finding errors later
    in the project.
  • Typical cost factors: 1 time unit for a
    specification error, 10 for a design error, 100
    for a coding error, up to 1000 for an integration
    testing error.
  • These numbers have not been formally validated,
    but the basic principle that later errors are
    harder to fix seems to be sound.
  • All of the techniques are based on the principle
    that thinking about an artifact gives better
    results than simply random testing of the
    artifact.

17
Software Inspections (code reviews)
  • >60% of program errors can be detected in code
    review [Fagan86]
  • >90% if a more formal approach is used (e.g., the
    Cleanroom process) [Mills87]

18
Benefits of formal inspection
  • For applications programming
  • Most reviewed programs run correctly the first
    time
  • Compare: 10-50 attempts for the test/debug
    approach
  • Data from large projects
  • Data from Bell-Northern Research
  • Inspection cost: 1 hour per defect
  • Testing cost: 2-4 hours per defect
  • Post-release cost: 33 hours per defect
  • Error reduction by a factor of 5 (10 in some
    reported cases)
  • Improvement in productivity: 14% to 25%
  • Percentage of errors found by inspection: 58% to
    82%
  • Cost reduction of 50%-80% for V&V (even including
    the cost of inspection)

19
Benefits of formal inspection
  • Effects on staff competence
  • Increased morale, reduced turnover
  • Better estimation and scheduling (more knowledge
    about defect profiles)
  • Better management recognition of staff ability

20
Inspection success
  • Why are reviews more effective for finding
    defects in systems/subsystems (i.e., before
    acceptance testing)?
  • Many different defects may be discovered in a
    single inspection. In testing, one defect may
    mask another so several executions are required.
  • They leverage domain/programming knowledge (reuse
    domain and programming knowledge) so reviewers
    are likely to have seen the types of error that
    commonly arise.
  • Inspectors are skilled programmers
  • Preparing a product for peer review induces the
    author to carefully review the code. Many
    errors/defects are found before the inspection as
    a result.
  • Common practice: code reviews and then acceptance
    testing
  • Reviews can also help with development of tests

21
Reviews, Inspections, Walkthroughs
  • Note: these terms are not widely agreed upon
  • Formality
  • Informal: from meetings over coffee to regular
    team meetings
  • Formal: scheduled meetings, prepared
    participants, defined agenda, specific format,
    documented output
  • Management reviews
  • e.g., Preliminary Design Review (PDR), Critical
    Design Review (CDR)
  • Used to provide confidence that the design is
    sound
  • Attended by management and sponsors (customers)
  • Usually a "dog-and-pony show"
  • Walkthroughs
  • Developer technique (usually informal)
  • Used by development teams to improve quality of
    product
  • Focus is on finding defects

22
Reviews, Inspections, Walkthroughs
  • Note: these terms are not widely agreed upon
  • (Fagan) Inspections
  • A process management tool (always formal)
  • Used to improve quality of the development
    process
  • Collect defect data to analyze the quality of the
    process
  • Written output is important
  • Major role in training junior staff and
    transferring expertise
  • Personal reviews: where you examine your own
    products
  • Done by the engineer before a formal technical
    review (or inspection)
  • You examine your own products
  • Objective is to find and fix as many errors as
    possible before you implement, inspect, compile,
    or test the program
  • As PSP data will show, a little time spent
    carefully reviewing your code can save the much
    longer time that would otherwise be spent
    debugging and fixing the program during compile
    and test

23
Reading technologies
  • All of the following techniques are an
    improvement over simply testing a program;
    however, some are more effective than others.
    Several related concepts:
  • Walkthroughs: the developer of the artifact
    describes its structure at a meeting, and
    attendees look for flaws in the structure.
    Weakness: reviewers do not understand the deep
    structure, so error finding is weak.
  • Code reviews: an individual who is not the
    developer of the artifact reads its text,
    looking for errors and defects in structure.
    Quite effective, since the reader does not have
    the same preconceived notion of what the
    artifact does.
  • Review: a meeting to discuss an artifact, less
    formal than an inspection. A traditional
    checkpoint in software development.

24
(Fagan) Inspections
  • Developed by Michael Fagan at IBM in 1972.
  • Two approaches toward inspections
  • Part of development process used to identify
    problems
  • Part of quality assurance process used to find
    unresolved issues in a finished product
  • Fagan inspections are the former type. The goal
    is to find defects; a defect is an instance in
    which a requirement is not satisfied.

25
Inspection process
  • Planning: the author gives the moderator an
    artifact to be inspected. Materials, attendees,
    and a schedule for the inspection meeting must be
    set. High-level documentation is given to
    attendees in addition to the artifact.
  • Overview: the moderator assigns inspection roles
    to participants.
  • Preparation: the artifact is given to
    participants before the meeting to allow for
    study. Participants spend significant time
    studying the artifact.
  • Inspection: a defect-logging meeting for finding
    defects (don't fix them at the meeting; don't
    assign blame; don't have managers present).
  • Rework: the author reworks all defects.

26
Observations about inspections
  • Costly: participants need many hours to prepare
  • Intensive: limit inspections to no more than 2
    hours
  • No personnel evaluation: appraisal limits honesty
    in finding defects
  • Cannot inspect too much at once: perhaps 250
    non-commented source lines per hour. More than
    that causes the discovery rate to drop and the
    need for further inspections to increase.
  • Up to 80% of errors found during testing could
    have been found during an inspection.
  • AT&T/Lucent study: more defects were found in
    preparing for the inspection meeting than were
    found in the meeting.

27
Inspection pre-conditions
  • A precise specification must be available.
  • Team members must be familiar with the
    organization's standards.
  • Syntactically correct code or other system
    representations must be available.
  • An error checklist should be prepared.
  • Management must accept that inspection will
    increase costs early in the software process.
  • Management should not use inspections for staff
    appraisal, e.g., finding out who makes mistakes.

28
Constraints
  • Size
  • Enough people so that all the relevant expertise
    is available
  • Min: 3 (4 if the author is present)
  • Max: 7 (fewer if the leader is inexperienced)
  • Duration
  • Never more than 2 hours
  • Concentration will flag if longer
  • Outputs
  • All reviewers must agree on the result
  • Accept, re-work, or re-inspect
  • All findings should be documented
  • Summary report (for management)
  • Detailed list of issues

29
Constraints
  • Scope
  • Focus on a small part of a design,
  • not the whole thing
  • Fagan recommends rates of
  • 130-150 SLOC per hour
  • Timing
  • Examine a product once its author has finished it
  • Not too soon
  • Product not ready: you find problems the author
    is already aware of
  • Not too late
  • Product in use: errors are now very costly to fix

30
Choosing Reviewers
  • Possibilities
  • Specialists in reviewing (e.g. QA people)
  • People from the same team as the author
  • People invited for specialist expertise
  • People with an interest in the product
  • Visitors who have something to contribute
  • People from other parts of the organization
  • Exclude
  • Anyone responsible for reviewing the author
  • i.e., line manager, appraiser, etc.
  • Anyone with known personality clashes with
    other reviewers
  • Anyone who is not qualified to contribute
  • All management
  • Anyone whose presence creates a conflict of
    interest

31
Roles
  • Formal Walkthrough
  • Review leader
  • Chairs the meeting
  • Ensures preparation is done
  • Keeps review focused
  • Reports the results
  • Recorder
  • Keeps track of issues raised
  • Reader
  • Summarizes the product piece by piece during
    the review
  • Author
  • Should actively participate (may be the reader)
  • Other reviewers
  • Task is to find and report issues

32
Guidelines
  • Prior to the review
  • Schedule Formal Reviews into the project
    planning
  • Train all reviewers
  • Ensure all attendees prepare in advance
  • During the review
  • Review the product, not its author
  • Keep comments constructive, professional and
    task-focused
  • Stick to the agenda
  • Leader must prevent drift
  • Limit debate and rebuttal
  • Record issues for later discussion/resolution
  • Identify problems but don't try to solve them
  • Take written notes
  • After the review
  • Review the review process

33
Opening Moments
  • Don't start until everyone is present
  • Leader announces:
  • "We are here to review product X for purpose Y"
  • Leader introduces the reviewers and explains the
    recording technique
  • Leader briefly reviews the materials
  • check that everyone received them
  • check that everyone prepared
  • Leader explains the type of review
  • Note: the review should not go ahead if
  • some reviewers are missing
  • some reviewers didn't receive the materials
  • some reviewers didn't prepare

34
Structuring the inspection
  • Checklist
  • Uses a checklist of questions/issues
  • Review is structured by issue on the list
  • Walkthrough
  • One person presents the product step-by-step
  • Review is structured by the product
  • Round robin
  • Each reviewer in turn gets to raise an issue
  • Review is structured by the review team
  • Speed review
  • Each reviewer gets 3 minutes to review a chunk,
    then passes to the next person
  • Good for assessing comprehensibility!

35
Roles
  • Fagan Inspection
  • Moderator
  • Must be a competent programmer
  • Should be specially trained
  • Could be from another project
  • Designer
  • Programmer who produced the design being
    inspected
  • Coder/implementor
  • Programmer responsible for translating the design
    to code
  • Tester
  • Person responsible for writing/executing test
    cases

36
Fagan Inspection Process
  • Overview
  • Communicate and educate about the product
  • Circulate materials
  • Rate: 500 SLOC per hour
  • Preparation
  • All participants work individually
  • Review materials to detect defects
  • Rate: 100-125 SLOC per hour
  • Inspection
  • A reader paraphrases the design
  • Identify and note problems (don't solve them)
  • Rate: 130-150 SLOC per hour
  • Rework
  • All errors/problems are addressed by the author
  • Rate: 16-20 hours per 1000 SLOC
  • Follow-up
  • Moderator ensures all errors have been corrected
  • If more than 5% is reworked, the product is
    re-inspected by the original inspection team
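  • Taken together, these rates give a rough effort
    budget for an inspection. A minimal sketch
    (Python; the rates used are midpoints of the
    ranges above, and preparation is counted once,
    not per participant):

    # Rough Fagan-inspection effort budget from the rates quoted above.
    def inspection_effort_hours(sloc):
        overview    = sloc / 500        # overview: 500 SLOC/hour
        preparation = sloc / 112.5      # preparation: 100-125 SLOC/hour
        inspection  = sloc / 140        # meeting: 130-150 SLOC/hour
        rework      = sloc * 18 / 1000  # rework: 16-20 hours per KLOC
        return overview + preparation + inspection + rework

    print(round(inspection_effort_hours(1000), 1))  # ~36.0 hours per KLOC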

37
Tactics for problematic review meetings
  • Bebugging
  • Put some deliberate errors in before the review
  • With prizes for finding them!
  • Control meeting
  • Use a "talking stick"
  • Money bowl
  • If a reviewer speaks out of turn, he/she puts 25¢
    into the kitty
  • Alarm
  • Use a timer to limit speechifying
  • Issues blackboard
  • Appoint someone to keep an issues list, to be
    written up after the review
  • Stand-up review
  • No tables or chairs!

38
Phased Inspections
  • Phased inspections, proposed by Parnas & Weiss
    and by Knight & Myers, involve conducting several
    brief reviews rather than one large review
  • Depending on the product, these approaches could
    improve the front-end defect removal
    effectiveness
  • People like them because they give the
    subconscious a chance to work on the problem

39
Review Measures
  • Four explicit review measures are
  • Size of the program being reviewed: use LOC for
    code and number of pages for designs
  • Review time in minutes
  • Number of defects found
  • Number of defects that were found later
  • From these basic measures, you can derive several
    other useful ones

40
Derived Measures
  • Review yield: % of total defects found during
    the review
  • Defects found per KLOC or page
  • Review rate: LOC or pages per hour
  • Phase Containment Effectiveness: % of total
    defects found in phase
  • The middle two are instant measures that can be
    used to immediately judge the quality of the
    review
  • The other two are long-term effectiveness
    measures that can't be determined until later
    defects are known (see the sketch after this
    list)
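  • A minimal sketch of these derived measures
    (Python; the function names and example numbers
    are mine, purely for illustration):

    # Derived review measures from the four basic ones.
    def review_rate(loc, minutes):             # LOC per hour (instant)
        return loc / (minutes / 60)

    def defect_density(defects_found, loc):    # defects per KLOC (instant)
        return defects_found / (loc / 1000)

    def review_yield(found_now, found_later):  # % caught (needs later data)
        return 100 * found_now / (found_now + found_later)

    # Hypothetical review: 400 LOC in 120 minutes, 12 defects found,
    # 4 more escaped and surfaced later.
    print(review_rate(400, 120))    # 200.0 LOC/hour
    print(defect_density(12, 400))  # 30.0 defects/KLOC
    print(review_yield(12, 4))      # 75.0 percent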

41
Automated static analysis
  • Static analyzers are software tools for source
    text processing.
  • They parse the program text and try to discover
    potentially erroneous conditions and bring these
    to the attention of the team.
  • They are very effective as an aid to inspections
    - they are a supplement to but not a replacement
    for inspections.
  • They may show where to look, e.g. code with a
    high complexity measure.

42
Static analysis checks
  • Data faults: variables used before
    initialization; variables declared but never
    used; variables assigned twice but never used
    between assignments; possible array bound
    violations; undeclared variables
  • Control faults: unreachable code; unconditional
    branches into loops
  • Input/output faults: variables output twice with
    no intervening assignment
  • Interface faults: parameter type mismatches;
    parameter number mismatches; non-usage of the
    results of functions; uncalled functions and
    procedures
  • Storage management faults: unassigned pointers;
    pointer arithmetic

43
Stages of static analysis
  • Control flow analysis: checks for loops with
    multiple exit or entry points, finds unreachable
    code, etc.
  • Data use analysis: detects uninitialized
    variables, variables written twice without an
    intervening use, variables that are declared but
    never used, etc. (a toy sketch follows this list)
  • Interface analysis: checks the consistency of
    routine and procedure declarations and their use
  • Information flow analysis: identifies the
    dependencies of output variables. Does not detect
    anomalies itself but highlights information for
    code inspection or review
  • Path analysis: identifies paths through the
    program and sets out the statements executed in
    each path. Again, potentially useful in the
    review process
  • The last two stages generate vast amounts of
    information. They must be used with care.
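  • As a toy illustration of data use analysis (a
    sketch only; real analyzers track scopes,
    branches, and aliasing), Python's standard ast
    module can flag names that are assigned but never
    read:

    import ast

    # Toy data-use check: report names assigned but never read.
    def unused_assignments(source):
        assigned, read = set(), set()
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Name):
                if isinstance(node.ctx, ast.Store):
                    assigned.add(node.id)
                elif isinstance(node.ctx, ast.Load):
                    read.add(node.id)
        return assigned - read

    print(unused_assignments("x = 1\ny = x + 1\n"))  # {'y'}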

44
Use of static analysis
  • Particularly valuable when a language such as C
    or C++ is used that has weak typing, so that many
    errors go undetected by the compiler. That is why
    lint was invented.
  • Less cost-effective for languages like Java that
    have strong type checking and can therefore
    detect many errors during compilation.

45
Code Reviews Are More Efficient
  • In contrast, debugging is quite different
  • You start with some unexpected behavior and have
    to find the cause
  • In one operating system example, 3 experienced
    engineers worked for 3 months to find a subtle
    system defect that was causing persistent
    customer problems
  • At the same time they found this defect, as an
    experiment, the same code was being inspected by
    a different team of five engineers
  • Within two hours, this inspection team not only
    found this defect, but also 71 others

46
Personal Reviews
  • You examine your own products
  • Objective is to find and fix as many errors as
    possible before you implement, inspect, compile,
    or test the program
  • As PSP data will show, a little time spent
    carefully reviewing your code can save the much
    longer time that would otherwise be spent
    debugging and fixing the program during compile
    and test

47
Personal Reviews
  • Engineers who regularly produce programs that run
    correctly the first time take pride in the
    quality of their product
  • They carefully review their programs before they
    compile or test them because they want a quality
    product
  • Recognize that you must spend the time to
    personally review your program, and rework it
    until satisfied with its quality, BEFORE
    compiling and testing
  • Contrast this with the more common approach where
    once the program is coded, the engineer
    immediately tries to compile it.

48
Personal Reviews
  • Your PSP reviews will net you a greater return
    than any other single thing you can do
  • Learn to produce something you would be proud to
    publish
  • The three basic principles of personal reviews
    are to establish review goals, follow a
    disciplined process, and measure and improve the
    process
  • Develop a personal review checklist
  • Decide whether to review your code before or
    after compiling; Humphrey recommends before
  • It's essential to find current or instant
    measures that correlate with long-term yield, so
    you have data to improve the quality of your
    reviews
  • Measures like LOC per hour and defects found per
    hour are most useful

49
Key Points
  • Program inspections are very effective in
    discovering errors.
  • Program code in inspections is systematically
    checked by a small team to locate software
    faults.
  • The purpose of reviews and inspections is to
    ensure the programs you produce are of the
    highest quality
  • Principal reviews are inspections, walk-throughs,
    and personal reviews
  • Design and code reviews are much more efficient
    ways to find and fix defects than testing
  • Static analysis tools can discover program
    anomalies which may be an indication of faults in
    the code.
  • Software metrics can be used to indicate areas of
    the code that might need special attention during
    code inspection or review.

50
Software Design Quality
  • How to tell if a design is a good design.

51
Software Design Quality
  • What is software quality?
  • How can it be measured?
  • How can it be measured before the software is
    delivered?
  • Some key quality factors
  • Some measurable indicators of software quality

52
Measurable Predictors of Quality
  • Simplicity
  • The design meets its objectives and has no extra
    embellishments
  • Can be measured by looking for its converse,
    complexity
  • Control flow complexity (number of paths through
    the program)
  • Information flow complexity (number of data items
    shared)
  • Name space complexity (number of different
    identifiers and operators)
  • Modularity
  • Different concerns within the design have been
    separated
  • Can be measured by looking at
  • Cohesion (how well components of a module go
    together)
  • Coupling (how much different modules have to
    communicate)

53
Software Size
  • One of the basic complexity measures of a
    system is its size. This is usually used to
    estimate the build effort.
  • Measures of software size include length,
    functionality, and complexity.
  • Length is LOC or KLOC.
  • Problem: the variation in developers' code
    compactness, which can be around 5:1. Coding
    standards alleviate this problem.
  • Function Points measure the functionality of
    software. They were devised by Albrecht (1979)
    and are language-independent.
  • A full specification is needed to estimate
    function points, and critics have shown that FP
    counts can vary by up to 2000% depending on how
    one attributes weights.
  • Functionality and complexity measures include
  • Cyclomatic complexity
  • Fan-In/Fan-Out
  • Span (of control) and span (of variables)

54
Halstead's Software Science
  • Halstead (1977) attempted to understand the
    complexity of a system, and part of this measured
    system length.
  • n1 = number of unique operators
  • n2 = number of unique operands
  • N1 = total occurrences of operators
  • N2 = total occurrences of operands
  • Length: N = N1 + N2
  • Estimated length: N̂ = n1 log2 n1 + n2 log2 n2
    (see the sketch below)
  • The greater the length, the greater the
    maintenance effort.
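  • A minimal sketch of the two length formulas
    (Python; the token streams are hypothetical
    inputs, since a real tool must first classify
    operators and operands):

    from math import log2

    # Halstead length and estimated length from classified tokens.
    def halstead_lengths(operators, operands):
        n1, n2 = len(set(operators)), len(set(operands))  # unique counts
        N1, N2 = len(operators), len(operands)            # total counts
        length = N1 + N2                                  # N = N1 + N2
        estimate = n1 * log2(n1) + n2 * log2(n2)          # N-hat
        return length, estimate

    # Hypothetical token streams for the statement "X := X + 1":
    print(halstead_lengths([":=", "+"], ["X", "X", "1"]))  # (5, 4.0)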

55
Software Complexity
56
Cyclomatic Complexity
  • Based on the observation that the number of test
    cases depends on the number of linearly
    independent circuits in a program, McCabe
    suggested that the cyclomatic number might be a
    good approximation of the complexity of a program
    from the point of view of test effort.
  • The cyclomatic number, V(G), of a graph G with n
    vertices, e edges, and p connected components is
  • V(G) = e - n + 2p
  • For a program, this is equivalent to the number
    of predicates + 1 (see the sketch after this
    list)
  • In a strongly connected graph G, the cyclomatic
    number is equal to the maximum number of linearly
    independent circuits (paths). Thus if we view a
    program as a graph which represents its control
    structure, the cyclomatic complexity of a program
    is equal to the cyclomatic number of the graph
    representing its control flow.
  • McCabe states that the CC determines the number
    of test cases required to cover the code, or
    execute all code edges.
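  • A minimal sketch of the graph formula (Python;
    the edge list is a small hypothetical
    control-flow graph with two decisions, assuming
    one connected component):

    # Cyclomatic complexity V(G) = e - n + 2p from a control-flow graph.
    def cyclomatic_complexity(edges, p=1):
        nodes = {v for edge in edges for v in edge}
        return len(edges) - len(nodes) + 2 * p

    # Hypothetical CFG: two decisions, nodes a-d.
    cfg = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d")]
    print(cyclomatic_complexity(cfg))  # 5 - 4 + 2 = 3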

57
Cyclomatic Complexity
  • if ((A > 1) and (B = 0)) then X := X / A;
  • if ((A = 2) or (X > 1)) then X := X + 1;

[Control-flow graph for this fragment: nodes a, b,
c, d; predicate nodes Pr 1 and Pr 2; statement
nodes St 1 and St 2]
58
Cyclomatic Complexity
  • The code segment has 2 compound predicates, so
    CC = 3
  • Possible paths are
  • 1: abcd
  • 2: bcd
  • 3: abd
  • 4: bd
  • CC tells us that choosing 3 out of the 4 will
    give us code (statement) coverage, because
    1 = 2 + 3 - 4, or 1 + 4 = 2 + 3, etc.
  • That is, any three of the paths form a basis set
    of paths.

59
Problems with CC
  • It is based on graph theory! Basis sets do not
    equal path execution. You can't exercise some
    sub-paths and remove other sub-paths.
  • Problems with nesting:
  • repeat ... until x > 100
  • if (end) then abort
  • and
  • while (x > y)
  • while (x > z)
  • Each has a CC of 3.
  • If nesting is a problem, so is recursion. A call
    to a recursive function only counts as a CC of 1.

60
Problems with CC
  • The predicates themselves need to be expanded
    out:
  • if ((A > 1) and (B = 0)) then X := X / A;
  • if ((A = 2) or (X > 1)) then X := X + 1;
  • really has a CC of 5:
  • if (A > 1) then if (B = 0) then X := X / A;
  • if (A = 2) then X := X + 1;
  • if (X > 1) then X := X + 1;

61
Cyclomatic Complexity
  • How to use cyclomatic complexity?
  • The conjecture is that modules with high
    complexity have more defects and are harder to
    maintain.
  • Calculate the CC for each function or method.
  • Numbers greater than 10 are suspect; a modified
    scatter chart is useful for this.
  • Switches have large CC.
  • Examine the code of modules with high CC to
    determine why the number is so high.
  • Candidates for refactoring
  • May even need a new design

62
Module Tasking
  • Card and Agresti (1988) suggested that there are
    more factors in assessing the work of a piece of
    code.
  • They used a count of the module inputs and
    outputs to give a measure of the task of a
    module, as well as the number of called (server)
    modules.
  • S = size measure of a module
  • V = sum of I/O
  • F = number of modules called
  • S = V / (F + 1)
  • S is biased towards larger modules, and all I/O
    is considered equal.
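  • Worked example (hypothetical numbers): a module
    with V = 12 inputs and outputs that calls F = 3
    server modules scores S = 12 / (3 + 1) = 3; the
    same I/O handled with no helpers (F = 0) scores
    S = 12, reflecting that the module does all the
    work itself.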

63
Chidamber and Kemerer
  • Chidamber and Kemerer gave a set of six OO design
    metrics in the 1990s to compare system complexity
  • Depth of Inheritance Tree (DIT)
  • Number of Children (NOC)
  • Coupling Between Object classes (CBO)
  • Response for a Class (RFC)
  • Weighted Methods per Class (WMC)
  • Lack of Cohesion in Methods (LCOM)
  • These metrics have altered definitions depending
    on which paper you read!
  • We will discuss this more next time.

64
Span of Control
  • The number of modules invoked by a module defines
    its span of control.
  • Modules with large spans of control cost more to
    develop and maintain and generally give rise to
    more faults.
  • Coupling is an indication of the interconnections
    between modules, functions or classes. High
    coupling indicates higher interdependence (shared
    variables, calls)
  • Loose coupling is when components act
    independently of each other.

65
Fan-In and Fan-Out
  • Constantine and Yourdon (1979) measure the
    coupling of a system using fan-in and fan-out
    (FIFO). FIFO is effectively the number of lines
    into and out of a module at the design level.
  • Fan-in is a measure of module dependencies: the
    modules to which the one investigated is a
    server.
  • A high fan-in suggests re-use.
  • Fan-out suggests that the module investigated
    requires many servers and devolves its work
    elsewhere.
  • Complexity = Length × (Fan-in × Fan-out)²
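  • A minimal sketch of this structural complexity
    formula (Python; the length and call-graph
    figures are hypothetical):

    # Information-flow-style complexity: length * (fan_in * fan_out)^2
    def module_complexity(length, fan_in, fan_out):
        return length * (fan_in * fan_out) ** 2

    # Hypothetical module: 120 LOC, 3 callers (fan-in), 2 callees (fan-out)
    print(module_complexity(120, 3, 2))  # 120 * 36 = 4320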

66
FIFO problems
  • At what design level is FIFO investigated?
  • It is difficult to compare systems when some
    developers use FIFO at a higher level of
    granularity than others, that is, at high-level
    design versus low-level design (CRC level).

67
FIFO
  Module   FI   FO   FI×FO
  a         0    3      0
  b         1    3      3
  c         2    2      4
  d         1    0      0
  e         2    1      2
  f         3    2      6
  g         2    0      0
68
Ripple Analysis
  • FIFO should reveal poor or problematic design.
    Sinks and sources have zero FIFO.
  • If a module is altered, the effect of that change
    (its span of control) can be assessed through a
    cumulative FIFO.
  • The ripple effect is the name given to the effect
    of code changes on other parts of a system. In
    the previous diagram a change to module B may
    affect module C because B writes to module F
    which writes to C.
  • Measures of this data flow are Information Flow
    metrics.

69
Information Flow
  • A module can have many data flows into and out of
    it.
  • The greater the number of flows between modules,
    the more highly coupled they are. Flows may be
    through parameter passing or through use of
    globals or shared files.
  • Local information flows occur when a module
    passes a parameter to another module. Here A
    passes x to B:
  • <A, x, B>

[Diagram: module A passes parameter x to module B]
70
Information Flows
  • If module B returns a result y back to A, the
    local information flow is <B, y, A>.

[Diagram: modules A, B, C and a shared data
structure DS]

Global flows occur when a module writes to a
global data structure and another reads from it.
<A, DS, C> states that A writes to the data
structure, C reads from it.
71
Example
<A, x, B>   <A, y, C>
<A, z, D>   <B, xx, E>   <C, yy, B>
<C, yy, D>  <D, zz, A>   <E, File, B>
<E, File, A>
72
Information Flow Calculation
  • Fan-in and fan-out do not distinguish between
    local and global flows. The IF4 metric is a
    summation over all data flows.
  • To calculate the information flow, calculate FIFO
    as before and then compute IF4 = Σ(Fi × Fo)²
    (see the sketch after the table)

  Module   Fi   Fo   Fi×Fo   IF4
  A         2    3      6     36
  B         3    1      3      9
  C         1    2      2      4
  D         2    1      2      4
  E         1    1      1      1

  Total IF4 = 54
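  • A minimal sketch that recomputes this table from
    the flow triples on the previous slide (Python;
    counting E's two reads of the shared File as a
    single write in E's fan-out is my reading of how
    the table's numbers were obtained):

    from collections import Counter

    # IF4 = sum over modules of (fan_in * fan_out)^2, from <src, datum, dst>.
    flows = [("A","x","B"), ("A","y","C"), ("A","z","D"), ("B","xx","E"),
             ("C","yy","B"), ("C","yy","D"), ("D","zz","A"),
             ("E","File","B"), ("E","File","A")]
    GLOBALS = {"File"}  # shared data structures: written once, read by many

    fan_in, fan_out, global_writes = Counter(), Counter(), set()
    for src, datum, dst in flows:
        fan_in[dst] += 1
        if datum in GLOBALS:                    # one fan-out per write
            if (src, datum) not in global_writes:
                global_writes.add((src, datum))
                fan_out[src] += 1
        else:                                   # local flow: count each
            fan_out[src] += 1

    print(sum((fan_in[m] * fan_out[m]) ** 2 for m in fan_in))  # 54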

73
Variable Span
  • A (variable) SPAN is the number of statements
    between two textual references to the same
    identifier (variable)
  • X := Y
  • Z := Y
  • X := Y
  • The X-span runs from the first statement to the
    third; Y has two spans (see the sketch after this
    list)
  • SPAN(X) = count of statements between the first
    and last statements (assuming no intervening
    references to X)
  • For n appearances of an identifier in the source
    text, n - 1 spans are measured
  • All appearances are counted except those in
    declare statements
  • If SPAN > 100 statements, then one new item of
    information must be remembered for 100 statements
    until it is read again
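  • A minimal sketch of span measurement over a list
    of statements (Python; the tokenization is naive
    splitting, for illustration only):

    # Spans of an identifier: statement distances between successive references.
    def spans(statements, identifier):
        refs = [i for i, stmt in enumerate(statements)
                if identifier in stmt.replace(":=", " ").split()]
        return [b - a for a, b in zip(refs, refs[1:])]  # n refs -> n-1 spans

    program = ["X := Y", "Z := Y", "X := Y"]
    print(spans(program, "X"))  # [2]: one span
    print(spans(program, "Y"))  # [1, 1]: two spans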

74
Variable Span
  • COMPLEXITY
  • = number of SPANs crossing any point (take the
    max, average, or median)
  • OR
  • = number of statements a variable must be
    remembered for (on average): the average span
  • VARIATION
  • Do a live/dead variable analysis (a compiler
    technique)
  • Complexity is proportional to the variables alive
    at any statement
  • A complexity measure of module M is
    C(M) = (Σi ni × S(ni)) / N,
    where there are ni spans of size S(ni) and N is
    the number of statements

75
Variable Span
  • Variable span has been shown to be a reasonable
    measure of complexity.
  • In a study of commercial PL/1 programs, Elshoff
    showed that a programmer must remember
    approximately 16 items of information when
    reading a program. This argues for the difficulty
    of program understanding.

76
Key points
  • Program inspections are very effective in
    discovering errors.
  • Program code in inspections is systematically
    checked by a small team to locate software
    faults.
  • Static analysis tools can discover program
    anomalies which may be an indication of faults in
    the code.
  • Software metrics can be used to indicate areas of
    the code that might need special attention during
    code inspection.

77
Next Class
  • Topic
  • Object Oriented Metrics
  • OO Metrics
  • Lessons Learned for O-O Projects
  • Availability Metrics
  • Exploiting Measures Part II
  • Reading
  • Kan chapters 12-13, pp. 334, ...
  • Articles on the Class Page
  • Assignment 4: Defect Removal Effectiveness
  • Due Monday, November 2, 2009

78
Journal Exercises
  • A criticism of the cyclomatic complexity measure
    is that a nested loop has the same complexity as
    two sequential loops. Comment.
  • Can you think of any additional complexity
    measures?

79
Exploiting Measures
  • Exploiting Measures Part I