1
Software Engineering
Software Metrics
James Gain (jgain@cs.uct.ac.za)
http://people.cs.uct.ac.za/jgain/courses/SoftEng/
2
Objectives
  • Introduce the necessity for software metrics
  • Differentiate between process, project and
    product metrics
  • Compare and contrast Lines-Of-Code (LOC) and
    Function Point (FP) metrics
  • Consider how quality is measured in Software
    Engineering
  • Describe statistical process control for managing
    variation between projects

3
Measurement Metrics
Against: Collecting metrics is too hard ... it's
too time consuming ... it's too political ...
they can be used against individuals ... it won't
prove anything
For: In order to characterize, evaluate, predict
and improve the process and product, a metrics
baseline is essential. "Anything that you need to
quantify can be measured in some way that is
superior to not measuring it at all" - Tom Gilb
4
Terminology
  • Measure: Quantitative indication of the extent,
    amount, dimension, or size of some attribute of a
    product or process. A single data point
  • Metrics: The degree to which a system, component,
    or process possesses a given attribute. Relates
    several measures (e.g. average number of errors
    found per person-hour)
  • Indicators: A combination of metrics that
    provides insight into the software process,
    project or product
  • Direct Metrics: Immediately measurable attributes
    (e.g. lines of code, execution speed, defects
    reported)
  • Indirect Metrics: Aspects that are not
    immediately quantifiable (e.g. functionality,
    quality, reliability)
  • Faults
  • Errors: Faults found by the practitioners during
    software development
  • Defects: Faults found by the customers after
    release

5
A Good Manager Measures
(Diagram: a good manager measures the process with process and project
metrics, and the product with product metrics. What do we use as a
basis? size? function?)
"Not everything that can be counted counts, and
not everything that counts can be counted." - Einstein
6
Process Metrics
  • Focus on quality achieved as a consequence of a
    repeatable or managed process. Strategic and Long
    Term.
  • Statistical Software Process Improvement (SSPI).
    Error Categorization and Analysis
  • All errors and defects are categorized by origin
  • The cost to correct each error and defect is
    recorded
  • The number of errors and defects in each category
    is computed
  • Data is analyzed to find categories that result
    in the highest cost to the organization
  • Plans are developed to modify the process
  • Defect Removal Efficiency (DRE): Relationship
    between errors (E) found before release and defects
    (D) found after release, DRE = E / (E + D). The
    ideal is a DRE of 1 (see the sketch below)
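
A minimal sketch of how the SSPI categorization and DRE figures above
might be computed from fault records; the category names, costs and
values below are illustrative assumptions, not from the slides.

from collections import Counter

def defect_removal_efficiency(errors: int, defects: int) -> float:
    # DRE = E / (E + D): errors found before release vs. defects found after
    return errors / (errors + defects) if (errors + defects) else 1.0

# Illustrative fault records: (origin category, found before release?, cost to fix)
faults = [
    ("specification", True, 120.0),
    ("design", True, 80.0),
    ("coding", False, 400.0),
    ("specification", False, 650.0),
]

errors = sum(1 for _, before, _ in faults if before)       # E
defects = sum(1 for _, before, _ in faults if not before)  # D

cost_by_origin = Counter()
for origin, _, cost in faults:
    cost_by_origin[origin] += cost  # SSPI: cost to correct, per origin category

print("DRE =", round(defect_removal_efficiency(errors, defects), 2))
print("Highest-cost category:", cost_by_origin.most_common(1)[0])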

7
Project Metrics
  • Used by a project manager and software team to
    adapt project work flow and technical activities.
    Tactical and Short Term.
  • Purpose
  • Minimize the development schedule by making the
    necessary adjustments to avoid delays and
    mitigate problems
  • Assess product quality on an ongoing basis
  • Metrics
  • Effort or time per SE task
  • Errors uncovered per review hour
  • Scheduled vs. actual milestone dates
  • Number of changes and their characteristics
  • Distribution of effort on SE tasks

8
Product Metrics
  • Focus on the quality of deliverables
  • Product metrics are combined across several
    projects to produce process metrics
  • Metrics for the product
  • Measures of the Analysis Model
  • Complexity of the Design Model
  • Internal algorithmic complexity
  • Architectural complexity
  • Data flow complexity
  • Code metrics

9
Metrics Guidelines
  • Use common sense and organizational sensitivity
    when interpreting metrics data
  • Provide regular feedback to the individuals and
    teams who have worked to collect measures and
    metrics.
  • Don't use metrics to appraise individuals
  • Work with practitioners and teams to set clear
    goals and metrics that will be used to achieve
    them
  • Never use metrics to threaten individuals or
    teams
  • Metrics data that indicate a problem area should
    not be considered negative. These data are
    merely an indicator for process improvement
  • Don't obsess on a single metric to the exclusion
    of other important metrics

10
Normalization for Metrics
  • How does an organization combine metrics that
    come from different individuals or projects?
  • Metrics depend on the size and complexity of the
    project
  • Normalization compensates for complexity aspects
    particular to a product
  • Normalization approaches
  • Size oriented (lines of code approach)
  • Function oriented (function point approach)


11
Typical Normalized Metrics
  • Size-Oriented
  • errors per KLOC (thousand lines of code), defects
    per KLOC, R per LOC, pages of documentation per
    KLOC, errors per person-month, LOC per
    person-month, R per page of documentation
  • Function-Oriented
  • errors per FP, defects per FP, R per FP, pages of
    documentation per FP, FP per person-month

Project   LOC     FP    Effort (P/M)   Cost R(000)   Pages doc   Errors   Defects   People
alpha     12100   189        24            168           365        134        29        3
beta      27200   388        62            440          1224        321        86        5
gamma     20200   631        43            314          1050        256        64        6
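
A small sketch of how the size- and function-oriented metrics above can
be derived from that table; the dictionary simply transcribes the
alpha/beta/gamma rows.

projects = {
    # project: (LOC, FP, effort person-months, cost R(000), pages doc, errors, defects, people)
    "alpha": (12100, 189, 24, 168, 365, 134, 29, 3),
    "beta":  (27200, 388, 62, 440, 1224, 321, 86, 5),
    "gamma": (20200, 631, 43, 314, 1050, 256, 64, 6),
}

for name, (loc, fp, effort, cost, pages, errors, defects, people) in projects.items():
    kloc = loc / 1000.0
    print(f"{name}: "
          f"errors/KLOC = {errors / kloc:.1f}, "
          f"defects/KLOC = {defects / kloc:.1f}, "
          f"LOC/person-month = {loc / effort:.0f}, "
          f"errors/FP = {errors / fp:.2f}, "
          f"FP/person-month = {fp / effort:.1f}")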
12
Why Opt for FP Measures?
  • Independent of programming language. Some
    programming languages are more compact, e.g. C
    vs. Assembler
  • Use readily countable characteristics of the
    information domain of the problem
  • Does not penalize inventive implementations
    that require fewer LOC than others
  • Makes it easier to accommodate reuse and
    object-oriented approaches
  • Original FP approach good for typical Information
    Systems applications (interaction complexity)
  • Variants (Extended FP and 3D FP) more suitable
    for real-time and scientific software (algorithm
    and state transition complexity)

13
Computing Function Points
Analyze the information domain of the application and develop counts
(establish counts for the input domain and system interfaces)
Weight each count by assessing complexity
(assign a level of complexity - simple, average, complex - or weight to
each count)
Assess the influence of global factors that affect the application
(grade the significance of external factors, F_i, such as reuse,
concurrency, OS, ...)
Compute function points:
FP = SUM(count x weight) x C, where the complexity multiplier
C = 0.65 + 0.01 x N and the degree of influence N = SUM(F_i)
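
A minimal sketch of that calculation in Python; the counts and F_i
values are placeholders, and the weights are the average-complexity
values from the SafeHome weighting table later in the deck.

# Average-complexity weights per information-domain parameter
# (simple/average/complex weights: inputs 3/4/6, outputs 4/5/7,
#  inquiries 3/4/6, files 7/10/15, external interfaces 5/7/10)
AVG_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
               "files": 10, "interfaces": 7}

def function_points(counts: dict, f_values: list[int],
                    weights: dict = AVG_WEIGHTS) -> float:
    # FP = SUM(count x weight) x (0.65 + 0.01 x SUM(F_i))
    count_total = sum(counts[k] * weights[k] for k in counts)
    n = sum(f_values)            # degree of influence, each F_i rated 0..5
    c = 0.65 + 0.01 * n          # complexity multiplier
    return count_total * c

# Placeholder example: all 14 adjustment factors rated "average" (3)
print(function_points({"inputs": 10, "outputs": 8, "inquiries": 5,
                       "files": 4, "interfaces": 2}, [3] * 14))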
14
Analyzing the Information Domain
15
Taking Complexity into Account
  • Complexity Adjustment Values (F_i) are rated on a
    scale of 0 (not important) to 5 (very important)
  • Does the system require reliable backup and
    recovery?
  • Are data communications required?
  • Are there distributed processing functions?
  • Is performance critical?
  • System to be run in an existing, heavily utilized
    environment?
  • Does the system require on-line data entry?
  • On-line entry requires input over multiple
    screens or operations?
  • Are the master files updated on-line?
  • Are the inputs, outputs, files, or inquiries
    complex?
  • Is the internal processing complex?
  • Is the code designed to be reusable?
  • Are conversion and installation included in the
    design?
  • Multiple installations in different
    organizations?
  • Is the application designed to facilitate change
    and ease-of-use?

16
Exercise Function Points
  • Compute the function point value for a project
    with the following information domain
    characteristics:
  • Number of user inputs: 32
  • Number of user outputs: 60
  • Number of user enquiries: 24
  • Number of files: 8
  • Number of external interfaces: 2
  • Assume that weights are average and external
    complexity adjustment values are not important
    (rated 0)
  • Answer: see the worked calculation below
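
A worked calculation under those assumptions (average weights of 4, 5,
4, 10 and 7, as in the SafeHome weighting table, and all F_i = 0, so
C = 0.65):

count-total = 32 x 4 + 60 x 5 + 24 x 4 + 8 x 10 + 2 x 7
            = 128 + 300 + 96 + 80 + 14 = 618
C  = 0.65 + 0.01 x 0 = 0.65
FP = 618 x 0.65 = 401.7, i.e. roughly 402 function points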

17
Example SafeHome Functionality
(Context diagram of the SafeHome System: the User supplies password,
panic button and (de)activate inputs and makes zone and sensor
inquiries; the system returns messages and sensor status, stores system
configuration data, and exchanges test sensor, zone setting and
(de)activate signals with the Sensors; it sends an alarm alert plus
password, sensors and related data to the Monitor and Response System.)
18
Example SafeHome FP Calc
measurement parameter        count      weighting factor         total
                                        (simple / avg / complex)
number of user inputs           3   x   (3 / 4 / 6)          =       9
number of user outputs          2   x   (4 / 5 / 7)          =       8
number of user inquiries        2   x   (3 / 4 / 6)          =       6
number of files                 1   x   (7 / 10 / 15)        =       7
number of ext. interfaces       4   x   (5 / 7 / 10)         =      22
count-total                                                          52
complexity multiplier                                              1.11
function points                                                      58
19
Exercise Function Points
  • Compute the function point total for your
    project. Hint: the complexity adjustment values
    should be low ( )
  • Some appropriate complexity factors are (each
    scores 0-5)
  • Is performance critical?
  • Does the system require on-line data entry?
  • On-line entry requires input over multiple
    screens or operations?
  • Are the inputs, outputs, files, or inquiries
    complex?
  • Is the internal processing complex?
  • Is the code designed to be reusable?
  • Is the application designed to facilitate change
    and ease-of-use?

20
OO Metrics Distinguishing Characteristics
  • The following characteristics require that
    special OO metrics be developed
  • Encapsulation: Concentrate on classes rather
    than functions
  • Information hiding: An information hiding metric
    will provide an indication of quality
  • Inheritance: A pivotal indication of complexity
  • Abstraction: Metrics need to measure a class at
    different levels of abstraction and from
    different viewpoints
  • Conclusion: the class is the fundamental unit of
    measurement

21
OO Project Metrics
  • Number of Scenario Scripts (Use Cases)
  • Number of use-cases is directly proportional to the
    number of classes needed to meet requirements
  • A strong indicator of program size
  • Number of Key Classes (Class Diagram)
  • A key class focuses directly on the problem
    domain
  • NOT likely to be implemented via reuse
  • Typically 20-40% of all classes are key, the rest
    support infrastructure (e.g. GUI, communications,
    databases)
  • Number of Subsystems (Package Diagram)
  • Provides insight into resource allocation,
    scheduling for parallel development and overall
    integration effort

22
OO Analysis and Design Metrics
  • Related to Analysis and Design Principles
  • Complexity
  • Weighted Methods per Class (WMC): Assume that n
    methods with cyclomatic complexities c_1, ..., c_n
    are defined for a class C; WMC = SUM(c_i)
  • Depth of the Inheritance Tree (DIT): The maximum
    length from a leaf to the root of the tree. Large
    DIT leads to greater design complexity but
    promotes reuse
  • Number of Children (NOC): Total number of
    children for each class. Large NOC may dilute
    abstraction and increase testing
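
A small illustrative sketch, not from the slides, of computing WMC, DIT
and NOC over a toy class table; the class names and complexity figures
are invented for the example.

# Toy model: each class lists its parent (None for a root) and the
# cyclomatic complexity of each of its methods
classes = {
    "Sensor":       {"parent": None,     "methods": [2, 3, 1]},
    "DoorSensor":   {"parent": "Sensor", "methods": [1, 4]},
    "WindowSensor": {"parent": "Sensor", "methods": [2]},
}

def wmc(name):   # Weighted Methods per Class: sum of method complexities
    return sum(classes[name]["methods"])

def dit(name):   # Depth of Inheritance Tree: distance from class to root
    depth, parent = 0, classes[name]["parent"]
    while parent is not None:
        depth, parent = depth + 1, classes[parent]["parent"]
    return depth

def noc(name):   # Number of Children: count of direct subclasses
    return sum(1 for c in classes.values() if c["parent"] == name)

for name in classes:
    print(name, "WMC =", wmc(name), "DIT =", dit(name), "NOC =", noc(name))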

23
Further OOAD Metrics
  • Coupling
  • Coupling between Object Classes (COB): Total
    number of collaborations listed for each class in
    CRC cards. Keep COB low because high values
    complicate modification and testing
  • Response For a Class (RFC): Set of methods
    potentially executed in response to a message
    received by a class. High RFC implies test and
    design complexity
  • Cohesion
  • Lack of Cohesion in Methods (LCOM): Number of
    methods in a class that access one or more of the
    same attributes. High LCOM means tightly coupled
    methods

24
OO Testability Metrics
  • Encapsulation
  • Percent Public and Protected (PAP): Percentage of
    attributes that are public. Public attributes can
    be inherited and accessed externally. High PAP
    means more side effects
  • Public Access to Data members (PAD): Number of
    classes that access another class's attributes.
    Violates encapsulation
  • Inheritance
  • Number of Root Classes (NRC): Count of distinct
    class hierarchies. Must all be tested separately
  • Fan In (FIN): The number of superclasses
    associated with a class. FIN > 1 indicates
    multiple inheritance. Must be avoided
  • Number of Children (NOC) and Depth of Inheritance
    Tree (DIT): Superclasses need to be retested for
    each subclass

25
Quality Metrics
  • Measures conformance to explicit requirements,
    adherence to specified standards, and satisfaction
    of implicit requirements
  • Software quality can be difficult to measure and
    is often highly subjective
  • Correctness
  • The degree to which a program operates according
    to specification
  • Metric: Defects per FP
  • Maintainability
  • The degree to which a program is amenable to
    change
  • Metric: Mean Time to Change. Average time taken
    to analyze, design, implement and distribute a
    change

26
Quality Metrics Further Measures
  • Integrity
  • The degree to which a program is impervious to
    outside attack
  • Metric: integrity = SUM[ 1 - t_i x (1 - s_i) ],
    summed over all types of security attack, i,
    where t_i = threat (probability that an attack of
    type i will occur within a given time) and s_i =
    security (probability that an attack of type i
    will be repelled); see the sketch after this list
  • Usability
  • The degree to which a program is easy to use.
  • Metric: (1) the skill required to learn the
    system, (2) the time required to become
    moderately proficient, (3) the net increase in
    productivity, (4) assessment of the user's
    attitude to the system
  • Covered in HCI course
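
A minimal sketch of the integrity sum above; the threat/security
probabilities are illustrative assumptions, not from the slides.

def integrity(attack_types):
    # integrity = SUM over attack types of [1 - threat x (1 - security)]
    return sum(1.0 - t * (1.0 - s) for t, s in attack_types)

# (threat, security) pair for a single attack type - illustrative values
print(integrity([(0.25, 0.95)]))   # -> 0.9875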

27
Quality Metrics: McCall's Approach
McCall's Triangle of Quality arranges eleven quality factors:
Correctness, Usability, Efficiency, Reliability and Integrity
(product operation); Maintainability, Flexibility and Testability
(product revision); Portability, Reusability and Interoperability
(product transition)
28
Quality Metrics: Deriving McCall's Quality Metrics
  • Assess a set of quality factors on a scale of 0
    (low) to 10 (high)
  • Each of McCall's Quality Metrics is a weighted
    sum of different quality factors
  • Weighting is determined by product requirements
  • Example (see the sketch after this list)
  • Correctness = Completeness + Consistency +
    Traceability
  • Completeness is the degree to which full
    implementation of required function has been
    achieved
  • Consistency is the use of uniform design and
    documentation techniques
  • Traceability is the ability to trace program
    components back to analysis
  • This technique depends on good objective
    evaluators because quality factor scores can be
    subjective
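
A minimal sketch of such a weighted sum; the factor scores and weights
below are invented for illustration.

def quality_metric(scores, weights):
    # Weighted sum of quality-factor scores (each factor rated 0-10)
    return sum(weights[f] * scores[f] for f in weights)

# Illustrative scores and requirement-driven weights for "correctness"
scores  = {"completeness": 8, "consistency": 7, "traceability": 9}
weights = {"completeness": 0.5, "consistency": 0.3, "traceability": 0.2}
print(quality_metric(scores, weights))   # -> 7.9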

29
Managing Variation
  • How can we determine if metrics collected over a
    series of projects improve (or degrade) as a
    consequence of improvements in the process rather
    than noise?
  • Statistical Process Control
  • Analyzes the dispersion (variability) and
    location (moving average)
  • Determine if metrics are
  • (a) Stable (the process exhibits only natural
    or controlled changes) or
  • (b) Unstable (process exhibits out of control
    changes and metrics cannot be used to predict
    changes)

30
Control Chart
  • Compare sequences of metrics values against the
    mean and standard deviation, e.g. a metric is
    unstable if eight consecutive values lie on one
    side of the mean.
(Chart: metric values plotted over time against lines at the mean,
mean + std. dev. and mean - std. dev.)
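
A minimal sketch of that instability check (the eight-in-a-row rule
mentioned above); the metric values are made up for the example.

import statistics

def unstable(values, run_length=8):
    # One common control-chart rule: the metric is unstable if
    # run_length consecutive values fall on the same side of the mean
    mean = statistics.mean(values)
    run, side = 0, 0
    for v in values:
        s = 1 if v > mean else (-1 if v < mean else 0)
        if s != 0 and s == side:
            run += 1
        else:
            run = 1 if s != 0 else 0
        side = s
        if run >= run_length:
            return True
    return False

# Made-up "errors per review hour" values for a sequence of projects
print(unstable([1.3, 1.3, 1.4, 1.4, 1.2, 1.5, 1.3, 1.4, 0.5, 0.4]))  # True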