Software Quality Engineering: Software Metrics-II
1
Software Quality Engineering
  • Software Metrics-II

2
Software Metrics
  • Metrics are measures that provide feedback to project managers, developers, and programmers about the quality of their work, projects, and products.

3
QA Questions
  • During the development process we ask:
  • Will we produce a product that meets or exceeds the requirements and expectations captured in the customer's quality attribute set?
  • At the end of the process we ask:
  • Have we produced a product that meets or exceeds the quality attribute set requirements?

4
Role of QA Engineer
  • For each element of the customer quality
    attribute set,
  • you must select and possibly create specific
    measurements that can be applied repeatedly
    during the development process and
  • then again at its conclusion.
  • The results of such measurements can be used to determine progress toward the final attainment of quality goals.

5
Metrics
  • Measurements combined with desired results are referred to as metrics.
  • We also have checklists and appraisal methods/activities to ensure the health of the process.

6
Types of Software Metrics
  • Process Metrics can be used to improve the
    software development and maintenance process,
    e.g. patterns of defect removal, response time of
    a fix process, effectiveness of the defect
    removal process during development.
  • Product Metrics describe the characteristics of
    the product, such as its size, complexity,
    performance.
  • Project Metrics describe the characteristics of the project and its execution, such as the number of software developers, the staffing pattern over the lifecycle of the project, cost, and schedule.
  • Software Quality Metrics are the metrics that deal with the quality aspects of the software process, product, and project.
  • They include both in-process and end-product quality metrics.

7
Software Quality Engineering
  • The essence of software quality engineering is to
    investigate the relationship among in-process
    metrics, project characteristics, and end product
    quality and, based on the findings, to engineer
    improvements in both process and product quality.
  • In Customer-oriented SQA, the quality attribute
    set drives the metrics selection and development
    process.

8
Process Metrics
9
Defect Arrival Rate (DAR)
  • It is the number of defects found during testing, measured at regular intervals over some period of time.
  • Rather than a single value, a set of values is associated with this metric.
  • When plotted on a graph,
  • the data may rise, indicating a positive defect
    arrival rate
  • It may stay flat, indicating a constant defect
    arrival rate
  • Or decrease, indicating a negative defect arrival
    rate.

10
Defect Arrival Rate (DAR)
  • Interpretation of DAR can be difficult. A negative DAR
  • may indicate an improvement in the product.
  • To validate this interpretation, one must rule out other possibilities, such as a decline in test effectiveness.
  • New tests may need to be designed to improve test effectiveness.
  • It may also indicate understaffing of the test organization.
  • A plot of DAR over a span of time can be a useful indicator (see the sketch below).
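A minimal sketch of the DAR calculation, assuming defects are logged with their discovery dates and counted over fixed weekly intervals (the log format and sample dates are illustrative, not part of the metric's definition):

    from datetime import date, timedelta

    # Hypothetical defect log: one discovery date per defect found in test.
    defect_dates = [date(2024, 1, d) for d in (2, 3, 3, 5, 9, 10, 11, 11, 12, 16, 17, 24)]

    def defect_arrival_rate(dates, start, interval_days=7, periods=4):
        """Count defects arriving in each fixed-length interval (here: weekly)."""
        counts = []
        for i in range(periods):
            lo = start + timedelta(days=i * interval_days)
            hi = lo + timedelta(days=interval_days)
            counts.append(sum(1 for d in dates if lo <= d < hi))
        return counts

    print(defect_arrival_rate(defect_dates, start=date(2024, 1, 1)))  # [4, 5, 2, 1]
    # A falling series suggests a negative DAR; rule out reduced test
    # effectiveness before reading it as a product improvement.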

11
Test Effectiveness
  • Tests that always pass are considered ineffective.
  • Such tests form regression testing; if any of them fails, a regression in the quality of the product has occurred.
  • Test effectiveness (TE) is measured as
  • TE = Dn / Tn, where
  • Dn is the number of defects found by formal tests and
  • Tn is the total number of formal tests.
  • When TE is calculated at regular intervals and plotted:
  • if the graph rises over time, TE may be improving;
  • if the graph falls over time, TE may be waning.
  • The interpretation should be made in the context of the other metrics being used in the process (see the sketch below).
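A minimal sketch of the TE calculation over successive intervals (the interval counts are made-up sample data):

    def test_effectiveness(defects_found, tests_run):
        """TE = Dn / Tn: defects found by formal tests over total formal tests run."""
        if tests_run == 0:
            raise ValueError("no formal tests executed in this interval")
        return defects_found / tests_run

    # Hypothetical (Dn, Tn) measurements taken at three successive intervals.
    intervals = [(12, 200), (9, 220), (4, 240)]
    print([round(test_effectiveness(dn, tn), 3) for dn, tn in intervals])
    # [0.06, 0.041, 0.017] -> a falling trend: TE may be waning.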

12
Defects by Phase
  • Fixing a defect early in the process is cheaper and easier.
  • At the conclusion of each discrete phase of the development process, a count of new defects is taken and plotted to observe the trend.
  • Defects by phase is a variation of the DAR metric;
  • the domain of this metric is the development phase, rather than a regular time interval.
  • Interpretation
  • A rising graph might indicate that the methods
    used for defect detection in earlier phases were
    not effective.
  • A decreasing graph may indicate the effectiveness
    of defect removal in earlier phases

13
Defect Removal Effectiveness (DRE)
  • DRE = Dr / (Dr + Dt) x 100
  • Dr is the number of defects removed prior to
    release
  • Dt is the total number of defects that remain in
    the product at release
  • Interpretation
  • The accuracy of this metric depends on the thoroughness and diligence with which your staff reports defects.
  • The metric may be applied on a phase-by-phase basis to gauge the relative effectiveness of defect removal by phase.
  • Weak areas in the process may be identified for improvement.
  • The results may be plotted, and the trend observed and used to adjust the process (see the sketch below).
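A minimal sketch of the DRE formula (the counts are hypothetical):

    def defect_removal_effectiveness(removed_before_release, remaining_at_release):
        """DRE = Dr / (Dr + Dt) x 100, expressed as a percentage."""
        total = removed_before_release + remaining_at_release
        return 100.0 * removed_before_release / total if total else 0.0

    # Hypothetical release: 470 defects removed before release, 30 remain at release.
    print(round(defect_removal_effectiveness(470, 30), 1))  # 94.0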

14
Defect Backlog
  • It is a count of the number of open defects in the product following its release.
  • It is usually metered at regular intervals and plotted for trend analysis.
  • A more useful way to represent the defect backlog is defects by severity; e.g., a month after the release of your product, the backlog contains
  • 2 severity 1 defects
  • 8 severity 2 defects
  • 24 severity 3 defects
  • 90 severity 4 defects
  • Based on this information, the PM may decide to shift resources to resolve the severity 1 and 2 defects (see the sketch below).
  • Such a high volume of improvement requests may also indicate that the requirements gathering process needs review.
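A minimal sketch of a backlog-by-severity tally, assuming open defects are recorded as (id, severity) pairs (the records below are illustrative):

    from collections import Counter

    # Hypothetical open-defect records one month after release: (defect id, severity).
    open_defects = [(101, 1), (102, 1), (103, 2), (104, 2), (105, 3), (106, 4), (107, 4)]

    backlog_by_severity = Counter(sev for _, sev in open_defects)
    print(dict(sorted(backlog_by_severity.items())))  # {1: 2, 2: 2, 3: 1, 4: 2}
    # Resources would typically shift toward the severity 1 and 2 items first.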

15
Backlog Management Index (BMI)
  • Problems arise after product release
  • New problems arrive that affect the net result of your team's efforts to reduce the backlog.
  • If problems are closed faster than new ones are opened, the team is gaining ground; otherwise it is losing ground.
  • BMI = Dc / Dn, where
  • Dc is the number of defects closed during some period of time and
  • Dn is the number of new defects that arrive during the same period of time.
  • Interpretation: if BMI is greater than 1, your team is gaining ground; otherwise it is losing ground.
  • A trend observed in a plot may indicate the level of backlog management effort (see the sketch below).
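A minimal sketch of the BMI calculation over several periods (the monthly figures are hypothetical):

    def backlog_management_index(defects_closed, defects_opened):
        """BMI = Dc / Dn for one period; a value above 1 means the backlog is shrinking."""
        if defects_opened == 0:
            return float("inf")  # nothing new arrived; any closure reduces the backlog
        return defects_closed / defects_opened

    # Hypothetical monthly figures: (defects closed, defects newly opened).
    monthly = [(40, 55), (48, 46), (52, 39)]
    print([round(backlog_management_index(c, n), 2) for c, n in monthly])
    # [0.73, 1.04, 1.33] -> the team starts behind but gains ground over time.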

16
Fix Response Time
  • It is the average time it takes your team to fix
    a defect.
  • It may be measured as the elapsed time between the discovery of a defect and the development of a verified (or unverified) fix.
  • A better metric is fix response time by the severity of the defect (see the sketch below).
  • The percentage of defects fixed on time is used as a fix responsiveness measure; a high value indicates customer satisfaction.
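A minimal sketch of average fix response time broken out by severity (the fix records are made up):

    from statistics import mean

    # Hypothetical fix records: (severity, days from defect discovery to verified fix).
    fixes = [(1, 2.0), (1, 3.5), (2, 5.0), (2, 4.0), (3, 12.0), (4, 30.0)]

    def fix_response_time_by_severity(records):
        """Average elapsed days per severity level."""
        by_sev = {}
        for sev, days in records:
            by_sev.setdefault(sev, []).append(days)
        return {sev: mean(times) for sev, times in sorted(by_sev.items())}

    print(fix_response_time_by_severity(fixes))  # {1: 2.75, 2: 4.5, 3: 12.0, 4: 30.0}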

17
Percent Delinquent Fixes
  • A fix is delinquent if it exceeds your fix response time criteria.
  • PDF = (Fd / Fn) x 100, where
  • Fd is the number of delinquent fixes and
  • Fn is the total number of fixes delivered in the same period.
  • This metric is also more meaningful when calculated by severity (see the sketch below).
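A minimal sketch of the percent-delinquent-fixes calculation against severity-specific response criteria (the criteria and fix records below are assumptions for illustration):

    # Hypothetical fix-response criteria in days, by severity.
    criteria_days = {1: 3, 2: 7, 3: 30, 4: 90}

    # Fixes delivered in the period: (severity, actual days taken).
    delivered = [(1, 2), (1, 5), (2, 6), (2, 10), (3, 20), (4, 40)]

    delinquent = sum(1 for sev, days in delivered if days > criteria_days[sev])
    pdf = 100.0 * delinquent / len(delivered)
    print(f"{pdf:.1f}% delinquent fixes")  # 33.3% (2 of 6 exceeded their criteria)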

18
Defective Fixes
  • A fix that later turns out to be defective, or, worse, creates one or more additional problems, is called a defective fix.
  • The count of such defective fixes is the metric
  • The new defects introduced by defective fixes
    must be tracked

19
Product Metrics
20
Defect Density
  • The general concept of defect rate is the number
    of defects over the opportunities for error (OFE)
    during a specific time frame.
  • Defect density measures the number of defects discovered per unit of product size, e.g., per KLOC or function point (FP) (see the sketch below).
  • If a product has a large number of defects during formal testing, the customer will likely discover a similarly large number of defects while using the product, and the converse is true as well.
  • The answers to questions about customer defect tolerance may help in selecting an acceptable value for the metric.
  • Phase-wise application of the metric may also be helpful.
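A minimal sketch of the defect density calculation using KLOC as the size measure (the counts are hypothetical; FP could be substituted):

    def defect_density(defect_count, size_kloc):
        """Defects discovered per thousand lines of code."""
        return defect_count / size_kloc

    # Hypothetical release: 180 defects found in formal test, product size 60 KLOC.
    print(defect_density(180, 60.0))  # 3.0 defects per KLOC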

21
Defects by Severity
  • It is a simple count of unresolved defects by severity.
  • It is usually measured at regular intervals and
  • plotted to see any trend, showing progress toward an acceptable value for each severity.
  • Movement away from those values may indicate that the project is at risk of failing to satisfy the conditions of the metric.

22
Mean Time Between Failures (MTBF)
  • The MTBF metric is the simple average of the elapsed time between failures that occur during test.
  • This metric is defined in terms of the type of testing performed during the measurement period, e.g., moderate-stress testing or heavy-stress testing.
  • A minimum MTBF is often set as a ship criterion (see the sketch below).
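A minimal sketch of the MTBF calculation from cumulative failure times observed in test (the times are made-up sample data):

    def mean_time_between_failures(failure_times_hours):
        """Simple average of the elapsed time between consecutive failures."""
        gaps = [b - a for a, b in zip(failure_times_hours, failure_times_hours[1:])]
        return sum(gaps) / len(gaps)

    # Hypothetical cumulative failure times (hours of heavy-stress testing).
    failures = [4.0, 9.0, 19.0, 40.0, 70.0]
    print(mean_time_between_failures(failures))  # 16.5 hours between failures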

23
Customer Reported Problems
  • It is a simple count of the number of new (non-duplicate) problems reported by customers over some interval.
  • When it is measured at regular intervals and plotted, any trend identified should be investigated to find its causes.
  • If an increase in customer-reported problems is identified, and a correlation or cause-effect analysis indicates a relationship between the problem counts and the number of end users using the product, it may indicate that the product has serious scalability problems (see the sketch below).
  • A profiling implementation may help to determine the usage pattern of the end users for different features of the product.
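A minimal sketch of the correlation check mentioned above, assuming we have monthly counts of new problems and of active end users (the figures are illustrative):

    from statistics import correlation  # Pearson correlation, Python 3.10+

    # Hypothetical monthly data: new (non-duplicate) customer-reported problems
    # and the number of active end users in the same month.
    new_problems = [12, 15, 22, 30, 41]
    active_users = [1000, 1300, 1900, 2600, 3500]

    print(round(correlation(new_problems, active_users), 3))
    # A value close to 1.0 suggests problem reports are tracking the growing user
    # base, which may point at scalability issues rather than new defects.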

24
Customer Satisfaction
  • The customer satisfaction metric is typically measured through a customer satisfaction survey.
  • Questions are designed to be answered on a range of responses, typically 1-5.
  • Questions should be designed to assess both the respondent's subjective and objective perceptions of product quality (see the sketch below).
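One common way to summarize such a survey is the percentage of respondents choosing the top categories on the 1-5 scale; this sketch assumes that summary and uses made-up responses:

    # Hypothetical survey responses on a 1-5 scale (5 = very satisfied).
    responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

    # Count respondents answering 4 or 5 as "satisfied or very satisfied".
    percent_satisfied = 100.0 * sum(1 for r in responses if r >= 4) / len(responses)
    print(f"{percent_satisfied:.0f}% satisfied or very satisfied")  # 70%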

25
Beyond the Metrics
  • Does our metrics bucket suffice for our quality attribute set?
  • We might have to create or alter certain metrics.
  • Usability studies are conducted by independent labs that invite groups of end users to their test facility to evaluate the usability of the product.
  • Checklists are an effective means by which to determine whether a product possesses very specific non-measurable attributes or attribute elements.

26
Process for Metrics Definition
  • The attributes in the quality attribute set are considered one by one.
  • Each attribute statement is divided into individual attribute elements.
  • For each element, one has to ask: is the element measurable or not?
  • If not:
  • one has to choose among the various non-measurable QA options,
  • e.g., usability studies, checklists, etc.
  • If yes:
  • look in the metrics bucket to see whether any of the existing metrics can be used to measure the attribute element/feature.
  • If no suitable measure is available, one has to define a new metric.
  • Sometimes another metric already in use may suffice for the attribute element in question, and no new metric is required.

27
Ease of Use
  1. Software's customers prefer to purchase software products that don't require them to read the manual or use the on-line help facilities. They look for products with Graphical User Interfaces (GUIs) that look and feel like other products that they use regularly, such as their word processors and spreadsheet programs. Those programs have what they call intuitive user interfaces, which is another way of saying that they can learn the product by playing with it for a short period of time without consulting the manual.
  2. They also prefer products that have a GUI that is sparsely populated with buttons and pop-up (or pull-down) menus, leaving a large work area in which they can create their frequent masterpieces.

28
Metrics for ease of use
  • Attribute element 1 is not measurable.
  • Therefore, usability studies are used.
  • Specific questions may be designed for the users in the study groups,
  • e.g., NUTES.
  • Metric: the number of buttons, menus, etc. on the interface.
  • Other commonly used applications may be used to determine an acceptable threshold value.

29
Defect Tolerance
  • To Software's customers, defects such as typos in message strings and in help text, as well as minor disparities between documented and actual behavior or function, will be tolerated until the next release. On the other hand, they will not tolerate defects that alter or destroy their work in progress or that adversely affect their productivity; such defects will likely drive them to abandon the product in favor of one that may be less robust but more reliable. They consider defects such as general exceptions, hangs, data corruption, and long delays between operations to be intolerable defects.
  • Metric: number of defects by severity.

30
Defect Impact
  • Software's customers see themselves as highly productive people who prefer to work on several things at once. They often start several applications on their workstations simultaneously, jumping from one to another. Many of Software's customers have had an experience where they noticed that whenever they jumped from their word processor to a particular vendor's desktop publishing system, they had to wait several minutes for the view to redraw. The desktop publishing system's developers decided to optimize memory usage, sacrificing view redrawing performance. They assumed that most users would not switch from application to application while using their product; consequently, view redrawing would be infrequent. To save memory, they decided to save the current view on disk, retrieving it whenever they needed to perform a redraw. This design decision saved a large amount of memory but sacrificed redrawing performance. Though some users might appreciate the designers' effort to decrease memory usage, Software's customers view the resulting poor performance of view redrawing as a major defect, since it severely impacts their productivity.
  • No new metric may be required, as the existing metric number of defects by severity may be used.