Quality Assessment for CBSD: Techniques and A Generic Environment - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Quality Assessment for CBSD: Techniques and A Generic Environment


1
Quality Assessment for CBSD Techniques and
A Generic Environment
  • Presented by: Cai Xia
  • Supervisor: Prof. Michael Lyu
  • Markers: Prof. Ada Fu, Prof. K.F. Wong
  • May 11, 2001

2
Outline
  • Introduction
  • Software Metrics and Quality Assessment Models
  • Experiment setup
  • Prototype
  • Conclusion

3
Introduction
  • Component-based software development (CBSD)
    builds software systems from a combination of
    components.
  • CBSD aims to encapsulate functionality in large
    components that have loose couplings.
  • A component is a unit of independent deployment
    in CBSD.
  • The overall quality of the final system greatly
    depends on the quality of the selected
    components.

4
Introduction
  • Existing Software Quality Assurance (SQA)
    techniques and methods have been explored to
    measure or control the quality of software
    systems and processes
  • Management/process control
  • Software testing
  • Software metrics
  • Quality prediction techniques

5
Introduction
  • Due to the special features of CBSD, it is
    uncertain whether conventional SQA techniques
    and methods can be applied to CBSD.
  • We propose a set of experiments to investigate
    the most efficient and effective SQA approaches
    suitable for CBSD.
  • Component-based Program Analysis and Reliability
    Evaluation (ComPARE)

6
Quality Assessment Techniques
 
  • Software metrics
  • Process metrics
  • Static code metrics
  • Dynamic metrics
  • Quality prediction models
  • Summation model
  • Product model
  • Classification tree model
  • Case-based reasoning method
  • Bayesian Belief Network

7
Process and Dynamic Metrics
 
8
Static Code Metrics
9
Software Quality Assessment Models
  • Summation Model
  • Product Model

The metric values m_i are normalized so that each is close to 1 (plausible forms of both models are sketched below).
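A hedged reconstruction of the two models (the original slide shows the formulas only as an image): Q denotes the predicted quality score, the m_i are the normalized metrics, and the alpha_i are assumed weights.

    Q_summation = \sum_{i=1}^{n} \alpha_i m_i

    Q_product = \prod_{i=1}^{n} m_i

Because each m_i is close to 1, the product score stays on a scale comparable to the weighted sum.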
10
Software Quality Assessment Models
  • Classification Tree Model
  • Classifies candidate components into different
    quality categories by constructing a tree
    structure (a minimal sketch follows below)
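A minimal Java sketch of applying such a tree. The metric names and thresholds (linesOfCode, cyclomaticComplexity, the cut-off values) are illustrative assumptions, not the values used in the experiments.

    // Hypothetical decision tree that assigns a quality category
    // to a component from two static code metrics.
    public class QualityTree {
        public enum Quality { HIGH, MEDIUM, LOW }

        // Thresholds are illustrative only.
        public static Quality classify(int linesOfCode, int cyclomaticComplexity) {
            if (cyclomaticComplexity <= 10) {
                return (linesOfCode <= 500) ? Quality.HIGH : Quality.MEDIUM;
            }
            return (linesOfCode <= 500) ? Quality.MEDIUM : Quality.LOW;
        }

        public static void main(String[] args) {
            System.out.println(classify(320, 7));    // HIGH
            System.out.println(classify(1200, 25));  // LOW
        }
    }

In a real tree model the structure and thresholds would be learned from the training data rather than fixed by hand.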

11
Software Quality Assessment Models
  • Case-Based Reasoning
  • A CBR classifier uses previous similar cases,
    stored in a case base, as the basis for
    prediction.
  • A candidate component with a structure similar
    to the components in the case base is expected
    to inherit a similar quality level.
  • Euclidean distance, z-score standardization, no
    weighting scheme, nearest neighbor (see the
    sketch below).
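A minimal Java sketch of this nearest-neighbor scheme, assuming the metric vectors have already been z-score standardized; the class and method names are illustrative, not part of ComPARE.

    import java.util.List;

    // Nearest-neighbor prediction over z-score standardized metric vectors.
    public class CbrClassifier {

        // One stored case: a standardized metric vector and its known quality level.
        record StoredCase(double[] metrics, String quality) {}

        // Unweighted Euclidean distance between two metric vectors.
        static double euclidean(double[] a, double[] b) {
            double sum = 0.0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                sum += d * d;
            }
            return Math.sqrt(sum);
        }

        // The candidate inherits the quality level of its nearest case.
        static String predict(List<StoredCase> caseBase, double[] candidate) {
            StoredCase nearest = null;
            double best = Double.MAX_VALUE;
            for (StoredCase c : caseBase) {
                double d = euclidean(c.metrics(), candidate);
                if (d < best) {
                    best = d;
                    nearest = c;
                }
            }
            return nearest == null ? "UNKNOWN" : nearest.quality();
        }
    }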

12
Software Quality Assessment Models
  • Bayesian Belief Network
  • A graphical network that represents probabilistic
    relationships among variables
  • Enables reasoning under uncertainty
  • The foundation of Bayesian networks is the
    following theorem, known as Bayes' theorem:

P(H | E, c) = P(E | H, c) P(H | c) / P(E | c)

where H is a hypothesis, E is the observed
evidence, c is the background context, and
P(X | Y) denotes the probability of X given Y
(an illustrative calculation follows below).
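As an illustration of this kind of reasoning under uncertainty, a minimal Java sketch of a single Bayes update; the prior and likelihood values are invented for the example and do not come from the experiments.

    // One-step Bayes update: posterior probability that a component is
    // high quality (H) given that it passed its test suite (E).
    public class BayesExample {
        public static void main(String[] args) {
            double priorH = 0.6;          // assumed P(H): prior belief in high quality
            double passGivenH = 0.95;     // assumed P(E | H)
            double passGivenNotH = 0.40;  // assumed P(E | not H)

            // Total probability of the evidence, P(E).
            double pE = passGivenH * priorH + passGivenNotH * (1 - priorH);

            // Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
            double posterior = passGivenH * priorH / pE;
            System.out.printf("P(high quality | tests passed) = %.3f%n", posterior);
        }
    }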
13
BBN Example: reliability prediction
14
Experiment Setup
  • Investigate applicability of existing QA
    techniques on CBSD
  • Adopt three different kinds of software metrics
  • Process metrics
  • Static code metrics
  • Dynamic metrics
  • Use several quality prediction models
  • Classification tree model
  • Case-based reasoning
  • Bayesian Belief Network

15
Experiment Setup
16
Experiment Setup
  • Data Set
  • Real-life project
  • --- Soccer Club Management System
  • Java language
  • CORBA platform
  • Same specification
  • 20-25 sets of programs developed by different
    teams

17
Experiment Setup
  • Experiment procedures
  • Collect metrics from all programs
  • Apply them to the different prediction models
  • Design test cases
  • Use test results as the indicator of quality
  • Validate the prediction results against the test
    results
  • Adopt cross evaluation between training data and
    test sets (a minimal sketch follows below)
  • Select the most effective prediction model(s)
  • Adjust the coefficients/weights of the different
    metrics in the final model(s)
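A minimal Java sketch of the cross-evaluation step, using leave-one-out evaluation over the team programs; the Program record, the Model interface, and the field names are illustrative assumptions rather than the actual ComPARE interfaces.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Supplier;

    // Leave-one-out cross evaluation: train on all programs but one,
    // predict the held-out program, and score against its test-based label.
    public class CrossEvaluation {

        // One team program: its metric vector and its test-based quality level.
        record Program(double[] metrics, String testQuality) {}

        // Placeholder for any of the prediction models (tree, CBR, BBN).
        interface Model {
            void fit(List<Program> trainingSet);
            String predict(double[] metrics);
        }

        public static double leaveOneOutAccuracy(List<Program> programs,
                                                 Supplier<Model> modelFactory) {
            int correct = 0;
            for (int i = 0; i < programs.size(); i++) {
                List<Program> training = new ArrayList<>(programs);
                Program heldOut = training.remove(i);
                Model model = modelFactory.get();
                model.fit(training);
                if (model.predict(heldOut.metrics()).equals(heldOut.testQuality())) {
                    correct++;
                }
            }
            return (double) correct / programs.size();
        }
    }

The same loop can be repeated for each candidate model in order to select the most effective one.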

18
An Experiment Example
 
19
An Experiment Example
 
20
Why ComPARE?
 
  • An environment that integrates these functions
  • Automates the experiment procedure
  • Examines both the whole system and the
    individual components

21
Why ComPARE?
 
(Diagram: ComPARE and the final system)
22
Architecture of ComPARE
23
Features of ComPARE
  • To predict the overall quality using process
    metrics, static code metrics and dynamic metrics
  • To integrate several quality prediction models
    into one environment and compare the prediction
    results of different models
  • To define the quality prediction models
    interactively

24
Features of ComPARE
  • To display quality of components by different
    categories
  • To validate reliability models defined by user
    against real failure data
  • To show the source code with potential problems
    at line-level granularity
  • To adopt commercial tools for accessing software
    data related to quality attributes

25
Prototype
  • GUI of ComPARE for metrics, criteria and tree
    model

26
Prototype
  • GUI of ComPARE for prediction display, risky
    source code and result statistics

27
Conclusion
  • Problem statement conventional SQA techniques
    are uncertain to apply to CBSD.
  • We propose an environment of a set of experiments
    to investigate most efficient and effective
    approach suitable to CBSD.
  • As future work, we will conclude whatever
    approaches applicable to CBSD through experiments.

28
Q & A