1
Software Testing
  • Overview and Definition

2
Software Testing Background
  • Infamous software error case studies
  • What is a bug?
  • Why do bugs occur?
  • The cost of bugs
  • What does a software tester do?
  • What makes a good software tester?

3
Case Studies
  • Disney's The Lion King (1994-1995)
  • CD-ROM game for children: The Lion King Animated
    Storybook.
  • Intel Pentium Floating-Point Division Bug (1994)
  • Try this: (4195835 / 3145727) x 3145727 - 4195835 = ?
  • The Y2K (Year 2000) Bug
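
The Pentium expression above can be checked in a few lines. On correct hardware the result is essentially zero; the flawed Pentium FDIV unit returned a visibly wrong quotient, making the expression come out far from zero.

```python
# The Pentium FDIV check: (4195835 / 3145727) * 3145727 - 4195835
# should be ~0, since dividing and multiplying back by the same
# number should recover the original value.
x = 4195835.0
y = 3145727.0
result = (x / y) * y - x
print(result)  # ~0.0 on correct hardware
```

On a flawed 1994 Pentium the division itself was wrong in the low bits, so the same expression produced a large nonzero value, which is how the bug was noticed.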

4
What is a bug?
  • Terms for software failures: defect, fault,
    problem, error, incident, anomaly, failure,
    inconsistency, feature, bug.
  • Software bug (formal definition)
  • The problem is defined relative to the product spec.
  • A software bug occurs when one or more of the
    following five rules is true:
  • The software doesn't do something that the
    product specification says it should do.
  • The software does something that the product
    specification says it shouldn't do.
  • The software does something that the product
    specification doesn't mention.
  • The software doesn't do something that the
    product specification doesn't mention but should.
  • The software is difficult to understand, hard to
    use, slow, or, in the software tester's eyes, just
    plain not right.

5
Why do bugs occur?
  • There are several reasons. Specifications are the
    largest bug producer: the spec isn't thorough
    enough, or it is constantly changing.
  • Design is rushed, changed, or not well communicated.
  • Coding errors.

6
The cost of bugs
7
What exactly does a software tester do?
  • Find bugs
  • Find them as early as possible
  • And make sure they get fixed

8
What makes a good software tester?
  • They are explorers
  • They are troubleshooters
  • They are relentless
  • They are creative
  • They are perfectionists
  • They exercise good judgment
  • They are tactful and diplomatic
  • They are persuasive

9
Software testing is fun!
  • A fundamental trait of software testers is that
    they simply like to break things. They live to
    find those elusive system crashes. They take
    great satisfaction in laying to waste the most
    complex programs. They're often seen jumping up
    and down in glee, giving each other high-fives,
    and doing a little dance when they bring a system
    to its knees. It's the simple joys of life that
    matter the most.

10
Definition of testing
  • Glen Myers:
  • "Testing is the process of executing a program
    with the intent of finding errors."

11
Objective explained
  • Paul Jorgensen:
  • Testing is obviously concerned with errors,
    faults, failures and incidents. A test is the act
    of exercising software with test cases with an
    objective of:
  • Finding failures
  • Demonstrating correct execution

12
ISO Definition
  • A technical operation that consists of the
    determination of one or more characteristics of a
    given product, process or service according to a
    specified procedure.

13
Software Testing
  • The process of measuring the quality of computer
    software in terms of:
  • Correctness, completeness, and security
  • Technical requirements:
  • Capability, reliability, efficiency
  • Portability, maintainability, compatibility
  • Usability

14
A Testing Life Cycle
  [Diagram: errors made during Requirement Specs,
  Design, and Coding introduce faults; Testing
  surfaces faults as incidents, followed by Fault
  Classification, Fault Isolation, Fault Resolution,
  and finally the Fix.]
15
Terminology
  • Error
  • Represents a mistake made by a person
  • Fault
  • Is the result of an error. May be categorized as:
  • Fault of commission: we enter something into the
    representation that is incorrect
  • Fault of omission: the designer makes an error of
    omission; the resulting fault is that something
    is missing that should have been present in the
    representation

16
Cont.
  • Failure
  • Occurs when a fault executes.
  • Incident
  • The behavior caused by a fault. An incident is the
    symptom(s) associated with a failure that alerts
    the user to the occurrence of a failure
  • Test case
  • Associated with program behavior. It carries a set
    of inputs and a list of expected outputs
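
A minimal sketch of the error-fault-failure chain, using a hypothetical `days_in_month` function: the programmer's mistake (error) leaves a wrong entry in a lookup table (a fault of commission), and that fault only produces a failure when the faulty entry actually executes.

```python
# Hypothetical example: an error (the programmer's mistake) left a
# fault of commission in this table: February (index 1) is listed
# with 29 days instead of 28 for a non-leap year.
DAYS = [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def days_in_month(month):
    """Return the number of days in the given month (1-12)."""
    return DAYS[month - 1]

# The fault only causes a failure when it executes:
print(days_in_month(1))  # 31 -- fault present but dormant, no failure
print(days_in_month(2))  # 29 -- failure: expected 28; the wrong value
                         # the user observes is the incident
```

This is why a fault can survive many test runs: until a test case drives execution through the faulty statement, no failure and no incident occur.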

17
Cont.
  • Verification
  • The process of determining whether the output of
    one phase of development conforms to that of its
    previous phase.
  • Validation
  • The process of determining whether a fully developed
    system conforms to its SRS document

18
Verification versus Validation
  • Verification is concerned with phase containment
    of errors
  • Validation is concerned with the final product
    being error-free

19
Classification of Test
  • There are two levels of classification
  • One distinguishes by granularity:
  • Unit level
  • System level
  • Integration level
  • The other classification (mostly for the unit
    level) is based on methodology:
  • Black box (Functional) Testing
  • White box (Structural) Testing

20
Test methodologies
  • Functional (Black box): inspects specified
    behavior
  • Structural (White box): inspects programmed
    behavior

21
Functional Test cases
22
Structural Test cases
23
When to use what
  • Few guidelines are available
  • A logical approach could be:
  • Prepare functional test cases as part of the
    specification. However, they can be used only
    after the unit and/or system is available.
  • Preparation of structural test cases can be
    part of the implementation/coding phase.
  • Unit, Integration and System testing are
    performed in order.

24
Unit testing essence
  • Applicable to modular design
  • Unit testing inspects individual modules
  • Localizes errors to a smaller region
  • In an integrated system, it may not be easy to
    determine which module caused a fault
  • Reduces debugging effort

25
Test cases and Test suites
  • A test case is a triplet (I, S, O) where
  • I is the input data
  • S is the state of the system at which the data
    will be input
  • O is the expected output
  • A test suite is the set of all test cases
  • Test cases are not randomly selected; they
    need to be designed.
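
The (I, S, O) triplet can be modeled directly in code. This is a sketch, not a standard API: the `withdraw` function and the account-balance "state" are hypothetical examples chosen to make the state component meaningful.

```python
from collections import namedtuple

# A test case as the triplet (I, S, O): input data, system state,
# and expected output.
TestCase = namedtuple("TestCase", ["inputs", "state", "expected"])

# Hypothetical system under test: a withdrawal from an account whose
# state is its current balance; overdrafts are refused.
def withdraw(balance, amount):
    if amount > balance:
        return balance          # refuse the overdraft, balance unchanged
    return balance - amount

suite = [                       # a test suite: a set of test cases
    TestCase(inputs=50, state=100, expected=50),
    TestCase(inputs=200, state=100, expected=100),  # overdraft refused
]

for tc in suite:
    assert withdraw(tc.state, tc.inputs) == tc.expected
print("all test cases passed")
```

Note that the second test case only makes sense because of its state: the same input of 200 would succeed against a larger balance, which is exactly why S belongs in the triplet.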

26
Black box testing
  • Equivalence class partitioning
  • Boundary value analysis
  • Comparison testing
  • Orthogonal array testing
  • Decision Table based testing
  • Cause Effect Graph
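
One of the listed techniques, boundary value analysis, can be sketched in a few lines. For a range input such as 1 to 5000, it selects test values at and immediately adjacent to each boundary, since off-by-one faults cluster there. The `in_range` validator below is a hypothetical program under test.

```python
def in_range(n):
    """Hypothetical validator for an input specified as 1..5000."""
    return 1 <= n <= 5000

# Boundary value analysis: test at, just below, and just above
# each boundary of the specified range.
boundary_cases = {0: False, 1: True, 2: True,
                  4999: True, 5000: True, 5001: False}

for value, expected in boundary_cases.items():
    assert in_range(value) == expected
print("all boundary cases passed")
```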

27
Why define equivalence classes?
  • Testing the code with just one representative
    value from each equivalence class is
  • as good as testing with any other value from
    the same equivalence class.

28
Equivalence Class Partitioning
  • How do you determine the equivalence classes?
  • Examine the input data.
  • A few general guidelines for determining the
    equivalence classes can be given

29
Equivalence Class Partitioning
  • If the input data to the program is specified by
    a range of values
  • e.g. numbers between 1 and 5000,
  • one valid and two invalid equivalence classes are
    defined.

30
Example (cont.)
  • There are three equivalence classes:
  • the set of negative integers,
  • the set of integers in the range 1 to 5000,
  • the set of integers larger than 5000.

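
The three classes above translate into a three-value test suite: one representative per class stands in for the whole class. The `in_range` function is a hypothetical program under test for the 1..5000 specification.

```python
def in_range(n):
    """Hypothetical program under test: accepts integers in 1..5000."""
    return 1 <= n <= 5000

# One representative per equivalence class:
#   negatives (invalid), 1..5000 (valid), >5000 (invalid).
representatives = {-7: False, 2500: True, 9000: False}

for value, expected in representatives.items():
    assert in_range(value) == expected
print("one representative per class was enough")
```

Three well-chosen values exercise the same decision logic as thousands of arbitrary ones, which is the entire payoff of partitioning.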
31
White-Box Testing
  • Statement coverage
  • Branch coverage
  • Path coverage
  • Condition coverage
  • Mutation testing
  • Data flow-based testing

32
Statement Coverage
  • Statement coverage methodology:
  • design test cases so that every statement in the
    program is executed at least once.
  • The principal idea:
  • unless a statement is executed, we have no way of
    knowing whether an error exists in that statement
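
A sketch of the idea with a toy absolute-value function: a positive input alone never executes the negation statement, so any bug hiding there could not be observed; adding one negative input achieves full statement coverage. The `executed` set is hand instrumentation for illustration, not a real coverage tool.

```python
executed = set()  # records which statements ran, for illustration

def absolute(x):
    executed.add("check")
    if x < 0:
        executed.add("negate")   # only runs for negative input
        x = -x
    executed.add("return")
    return x

absolute(5)
assert executed == {"check", "return"}   # "negate" never executed

absolute(-3)                             # second test case reaches it
assert executed == {"check", "negate", "return"}  # full coverage
print("statement coverage achieved with two test cases")
```

Real tools (e.g. coverage measurement built on tracing) automate exactly this bookkeeping over every line of the program.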

33
Program Analysis Tools
  • An automated tool that
  • takes program source code as input
  • produces reports on several important
    characteristics of the program,
  • such as size, complexity, adequacy of commenting,
    adherence to programming standards, etc.

34
Program Analysis Tools
  • Some program analysis tools
  • produce reports regarding the adequacy of the
    test cases.
  • There are essentially two categories of program
    analysis tools
  • Static analysis tools
  • Dynamic analysis tools

35
Static Analysis Tools
  • Static analysis tools
  • assess properties of a program without executing
    it.
  • They analyze the source code and
  • provide analytical conclusions.
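
As a minimal illustration of the static category, Python's standard `ast` module can derive properties of source code without ever running it. The checks here (function count, missing docstrings) are toy stand-ins for the size, complexity, and commenting-adequacy metrics a real tool would report.

```python
import ast

source = """
def add(a, b):
    return a + b

def unused_helper():
    pass
"""

# A toy static analysis: parse the source into a syntax tree and
# inspect it -- the code above is never executed.
tree = ast.parse(source)
functions = [node for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]

print("functions defined:", len(functions))
for fn in functions:
    if ast.get_docstring(fn) is None:
        print(f"{fn.name}: missing docstring")
```

Because nothing is executed, this kind of tool can safely analyze code with side effects, infinite loops, or missing dependencies, which is the defining advantage of the static approach.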

36
Dynamic Analysis Tools
  • Dynamic program analysis tools require the
    program to be executed and
  • its behaviour recorded.
  • They produce reports such as the adequacy of test
    cases.