A Generic Quality Assessment Environment for Component-Based Software Systems

1
A Generic Quality Assessment Environment for
Component-Based Software Systems
  • Cai Xia
  • March 13, 2001

2
Outline
  • Introduction
  • Objective and features
  • Metrics and Models
  • Prototype
  • Conclusion

3
Introduction
  • Component-based software development (CBSD) is
    to build software systems using a combination of
    components
  • The overall quality of the final system depends
    greatly on the quality of the selected
    components.
  • Software metrics are designed to measure
    different attributes of a software system and
    its development process, e.g., process metrics,
    static code metrics, and dynamic metrics.

4
Introduction
  • Several techniques have been developed to
    describe the predictive relationship between
    software metrics and the classification of
    software components into fault-prone and
    non-fault-prone categories:
  • discriminant analysis
  • classification tree
  • pattern recognition
  • Bayesian belief network (BBN)
  • case-based reasoning (CBR)
  • regression tree models

5
Introduction
  • There are also some prototypes and tools that
    use such techniques to automate the procedure of
    software quality prediction.
  • However, each of these tools addresses only one
    kind of metric and applies only one prediction
    technique to the overall software quality
    assessment.
  • We propose ComPARE (Component-based Program
    Analysis and Reliability Evaluation), an
    environment that combines different metrics and
    models to evaluate the quality of software
    systems in CBSD.

6
Architecture
7
Objective
  • To predict the overall quality by using process
    metrics, static code metrics as well as dynamic
    metrics.
  • To integrate several quality prediction models
    into one environment and compare the prediction
    result of different models
  • To define the quality prediction models
    interactively

8
Objective
  • To display quality of components by different
    categories
  • To validate reliability models defined by user
    against real failure data
  • To show the source code with potential problems
    at line-level granularity
  • To adopt commercial tools for accessing software
    data related to quality attributes

9
Process and Dynamic Metrics
 
 
10
Static Code Metrics
11
Models
  • Summation Model
  • Product Model

where each metric value m_i is normalized to be
close to 1
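Written out, the two models take the following common forms (a sketch: the per-metric weights α_i and the summation bounds are assumptions, not given on the slide):

```latex
% Sketch of the two quality models; \alpha_i are assumed per-metric weights.
Q_{\text{sum}}  = \sum_{i=1}^{n} \alpha_i\, m_i, \qquad
Q_{\text{prod}} = \prod_{i=1}^{n} m_i
```

Normalizing each m_i to be close to 1 keeps the product model from collapsing toward 0 as the number of metrics grows.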
12
Models
  • Classification Tree Model
  • classifies the candidate components into
    different quality categories by constructing a
    tree structure
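A classification tree walks a candidate component's metrics through a sequence of threshold tests until it reaches a quality category at a leaf. The sketch below illustrates the idea; the metric names and thresholds are hypothetical examples, not values from ComPARE:

```python
# Minimal sketch of a classification-tree quality predictor.
# Metric names and thresholds are made-up illustrations.

def classify_component(metrics: dict) -> str:
    """Walk a tiny hard-coded tree over static code metrics."""
    if metrics["cyclomatic_complexity"] > 20:
        return "fault-prone"
    if metrics["lines_of_code"] > 500 and metrics["comment_ratio"] < 0.1:
        return "fault-prone"
    return "not fault-prone"

print(classify_component(
    {"cyclomatic_complexity": 8, "lines_of_code": 120, "comment_ratio": 0.25}
))  # not fault-prone
```

In practice such a tree would be induced from labeled historical components rather than hand-written.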

13
Models
  • Case-Based Reasoning
  • A CBR classifier uses previous similar cases,
    stored in a case base, as the basis for
    prediction.
  • The candidate component that has a similar
    structure to the components in the case base will
    inherit a similar quality level.
  • Configuration: Euclidean distance, z-score
    standardization, no weighting scheme,
    nearest-neighbor selection.
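The configuration above can be sketched directly: standardize each metric by z-score, measure Euclidean distance with no per-metric weighting, and let the candidate inherit the quality level of its nearest neighbor. The case base and metric values below are made-up illustrations:

```python
import math

# Hypothetical case base: (metric vector, known quality level).
case_base = [
    ([10.0, 200.0, 0.30], "high"),
    ([45.0, 900.0, 0.05], "low"),
    ([25.0, 400.0, 0.15], "medium"),
]

def z_stats(vectors):
    """Per-metric mean and population standard deviation."""
    stats = []
    for col in zip(*vectors):
        mean = sum(col) / len(col)
        sd = math.sqrt(sum((x - mean) ** 2 for x in col) / len(col)) or 1.0
        stats.append((mean, sd))
    return stats

def predict(candidate):
    stats = z_stats([v for v, _ in case_base] + [candidate])
    def std(v):
        return [(x - m) / s for x, (m, s) in zip(v, stats)]
    c = std(candidate)
    # Nearest neighbor by unweighted Euclidean distance;
    # the candidate inherits that case's quality level.
    _, label = min(
        ((math.dist(std(v), c), lab) for v, lab in case_base),
        key=lambda t: t[0],
    )
    return label

print(predict([12.0, 250.0, 0.28]))  # nearest case is the "high" one
```

A real case base would hold many historical components; the nearest-neighbor step itself stays the same.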

14
Models
  • Bayesian Network
  • a graphical network that represents
    probabilistic relationships among variables
  • enables reasoning under uncertainty
  • The foundation of Bayesian networks is Bayes'
    theorem:

P(H | E, c) = P(H | c) P(E | H, c) / P(E | c)

where H is a hypothesis, E is the observed
evidence, c is the background context, and P is
the probability of an event under the given
circumstances
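As a numeric illustration of Bayes' theorem (all probabilities below are made-up for the example, where H is "the component is fault-prone" and E is "high complexity observed"):

```python
# Hypothetical probabilities, chosen only to illustrate the arithmetic.
p_h = 0.2            # prior P(H): component is fault-prone
p_e_given_h = 0.9    # likelihood P(E | H): high complexity if fault-prone
p_e_given_not_h = 0.25

# Total probability: P(E) = P(E|H) P(H) + P(E|~H) P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)
posterior = p_e_given_h * p_h / p_e
print(round(posterior, 3))  # 0.474
```

Observing the evidence more than doubles the belief that the component is fault-prone, from 0.2 to about 0.47.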
15
BBN Example: Reliability Prediction
16
Prototype
  • Selecting Metrics
  • Selecting Criteria
  • Model selection and definition
  • Model Validation
  • Display result (Display menu)

17
Prototype
  • GUI of ComPARE for metrics, criteria and tree
    model

18
Prototype
  • GUI of ComPARE for prediction display,
    risky source code and result statistics

19
Conclusion
  • The problem: certifying the quality of
    individual components, as well as of the overall
    system.
  • ComPARE is a generic quality assessment
    environment for software components.
  • It collects metrics, integrates different
    reliability assessment models, and validates
    them against failure data collected from real
    projects.

20
Q &amp; A