Requirements Engineering and Management INFO 627
Transcript and Presenter's Notes

Title: Requirements Engineering and Management INFO 627


1
Requirements Engineering and Management INFO 627
  • System Validation and Change Management
    Glenn Booker

2
Validation
  • Even with meticulous verification, you can still
    lack a warm fuzzy feeling that the customer
    will be happy with the product
  • User or customer involvement in testing can help
    provide that warm fuzzy feeling
  • Other tests will support verification

3
Customer-involved Testing
  • Several kinds of testing may involve the
    customer, and help discover whether they like
    the product's approach
  • Prototype testing
  • Alpha testing
  • Beta testing
  • Acceptance testing

4
Prototype Testing
  • Prototype testing may only use a small part of
    the product's capabilities
  • Looking for overall look and feel of the
    application, scope of features, and other basic
    features
  • Useful to avoid major disconnects in the approach
    used for the product

5
Alpha and Beta Testing
  • Commercial vendors may use alpha and beta
    testing to validate the product
  • Alpha and beta testing are done in that order,
    and toward the end of system development
  • Also are a major means used to find additional
    bugs (free testers - yippee!)

6
Alpha Testing
  • Alpha testing is very limited in scope
  • Only the most active and adventuresome users are
    allowed to get alpha test copies
  • Product is somewhat to mostly functional
  • Feature limitations expected

7
Beta Testing
  • Beta testing involves more users than alpha
    testing; it may even be open to the public
  • Beta products are nearly complete
  • Might have noticeable bugs, incompatibilities, or
    other issues unresolved
  • Some may use many rounds of beta testing, even if
    ill-named (epsilon testing, anyone?)

8
Acceptance Testing
  • Acceptance testing is done when the system is
    complete (or so the developers think) and the
    customer is ready to accept the system
  • Often acceptance tests are formal tests of every
    requirement, to prove that the delivered
    system really does what it is supposed to do
  • Often the basis for final contract payment

9
Verification Testing
  • Most other forms of testing support verification
    that the system meets its requirements
  • IEEE 829-1998 is the main industry standard for
    software testing
  • Testing involves its own little life cycle

10
Testing Life Cycle
  • Plan testing activities
  • Design the tests
  • Decide what the scope of testing will be
  • Prepare test plans
  • Determine test cases
  • Conduct testing
  • Allow time to fix problems found
  • Retest

11
Testing Scope
  • Testing is generally based on different levels of
    the system to be tested
  • Unit tests for a module or small group
  • Integration tests for components or subsystems
  • System tests for the whole system
  • Might have external interfaces available or not

12
Tracing Tests
  • Hence each test should be traceable to whatever
    parts of the system it's verifying
  • Want to look for redundant, missing, or
    extraneous tests through that tracing

13
Requirements-based Testing
  • Great buzzword-of-the-month
  • Just means that testing (presumably at the system
    level) is directly mapped or traced to the system
    requirements
  • Often looking for overall system response to
    intended activities
  • May be based on scenarios or use cases

14
Positive and Negative Testing
  • Positive test cases look for proper response
    when a correct or valid input is given to the
    system
  • Test the main success scenario of a use case
  • Negative test cases look for proper error
    handling when an incorrect, missing, or invalid
    input is provided
  • Test some extensions or variations

15
Negative Test Conditions
  • Negative testing might include
  • What happens if the data server goes down?
  • Illogical or inconsistent input combinations
  • Misplaced inputs (city = NJ)
  • Fields too long or too short
  • Numbers in alphabetic fields
  • What if an external system isn't available?
  • Redundant or repeated inputs
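
A minimal sketch of the positive/negative split using Python's unittest, against a hypothetical validate_city_field function (the function and its rules are invented for illustration):

import unittest

def validate_city_field(value):
    # Hypothetical rule: a city name is non-empty, at most 50
    # characters, and contains only letters and spaces
    if not value or len(value) > 50:
        return False
    return all(ch.isalpha() or ch.isspace() for ch in value)

class CityFieldTests(unittest.TestCase):
    def test_valid_city_accepted(self):
        # Positive test: valid input gives the proper response
        self.assertTrue(validate_city_field("Philadelphia"))

    def test_numbers_rejected(self):
        # Negative test: numbers in an alphabetic field
        self.assertFalse(validate_city_field("08540"))

    def test_empty_field_rejected(self):
        # Negative test: missing input
        self.assertFalse(validate_city_field(""))

    def test_too_long_rejected(self):
        # Negative test: field too long
        self.assertFalse(validate_city_field("x" * 51))

if __name__ == "__main__":
    unittest.main()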

16
Black Box Testing
  • Black box testing looks only for overall response
    of the system, without regard for its internal
    structure, characteristics, or limitations
  • "Black" since you can't see what's inside the
    system
  • Often like requirements-based testing
  • Often used for system level tests

17
White Box Testing
  • White box testing is when we are allowed to
    construct the test based on internal structure of
    the system
  • "White" since we presumably turned on a light to
    see what's inside the system
  • Allows test cases for extreme conditions
  • Better for low-level testing; impractical at the
    system level

18
White Box Test Conditions
  • White box testing might look for extremes such as
  • Start and end conditions (new customer)
  • Odd combinations of conditions
  • Initial conditions (the first blah event ever)
  • Critical values or numbers (0, 1, 2, etc.)
  • Rare events (leap day, holidays)
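
A sketch of how internal structure drives white box case selection, using a hypothetical leap-year helper; each assert targets one internal branch or critical value:

def is_leap_year(year):
    # Three internal branches; white box testing wants each exercised
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

assert is_leap_year(2000) is True    # divisible by 400 (rare event)
assert is_leap_year(1900) is False   # divisible by 100 but not 400
assert is_leap_year(2024) is True    # divisible by 4 only
assert is_leap_year(2023) is False   # ordinary year
assert is_leap_year(0) is True       # critical value 0 hits the first branch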

19
Development Testing Overview
20
Tracing Test Cases
  • Integration or system testing might map directly
    to use cases or to requirements
  • Just like any other verification, we can analyze
    the traceability matrix the same way discussed
    last week
  • Look for omitted and extra relationships, then
    check completeness and correctness manually
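
A minimal sketch of that traceability analysis, assuming the matrix is stored as a mapping from test case IDs to the requirements each one verifies (all IDs invented):

# Hypothetical traceability matrix: test case -> requirements verified
trace = {
    "TC-01": {"REQ-1", "REQ-2"},
    "TC-02": {"REQ-2"},
    "TC-03": {"REQ-9"},   # REQ-9 is not in the baseline
}
baseline = {"REQ-1", "REQ-2", "REQ-3"}

covered = set().union(*trace.values())
print("Omitted (untested) requirements:", baseline - covered)   # {'REQ-3'}
print("Extraneous traces:", covered - baseline)                 # {'REQ-9'}
# Redundant tests (identical trace sets) and the correctness of each
# link still have to be checked manually, as noted above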

21
Testing Design Constraints
  • Many design constraints need special types of
    testing
  • Inspection can be a verification method
  • For example, to prove that the right programming
    language was used
  • Just look at it!
  • Then record the observation somehow

22
Defect Identification
  • The purpose of testing is often to find defects
    in the system
  • Need a consistent method for reporting those
    defects, so they can be removed
  • Example which follows is from the Team Software
    Process

23
Defect Identification
  • For every defect, we need to identify
  • Date, and who identified the defect
  • Test case executed and conditions
  • Some identifier number (defect 264)
  • Description
  • What phase or activity the defect was found in
  • What phase the defect was created in (harder!)
  • Type of defect (see next slide)

24
Defect Types per TSP
  • Documentation or comments
  • Syntax, typos
  • Build or packaging
  • Assignment, declaration
  • Interface, procedure calls
  • Checking, errors, or validation
  • Data structure or content
  • Function logic, loops
  • System configuration, memory
  • Environment or support system
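
A sketch of a defect record carrying those fields, with the TSP-style type list as an enumeration (the field names are illustrative, not a TSP-mandated schema):

from dataclasses import dataclass
from datetime import date
from enum import Enum

class DefectType(Enum):
    DOCUMENTATION = "documentation or comments"
    SYNTAX = "syntax, typos"
    BUILD = "build or packaging"
    ASSIGNMENT = "assignment, declaration"
    INTERFACE = "interface, procedure calls"
    CHECKING = "checking, errors, or validation"
    DATA = "data structure or content"
    FUNCTION = "function logic, loops"
    SYSTEM = "system configuration, memory"
    ENVIRONMENT = "environment or support system"

@dataclass
class DefectReport:
    defect_id: int          # some identifier number (defect 264)
    found_on: date          # date identified
    found_by: str           # who identified the defect
    test_case: str          # test case executed and conditions
    description: str
    phase_found: str        # phase or activity where it was found
    phase_injected: str     # phase where it was created (harder!)
    defect_type: DefectType

report = DefectReport(264, date(2024, 3, 1), "G. Tester",
                      "TC-07, nominal load", "Total overflows at 1e6",
                      "system test", "design", DefectType.CHECKING)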

25
How Much Verification Effort?
  • The depth and coverage of verification for a
    system component may be done at several levels,
    depending on the importance and complexity of the
    component
  • Need to assess the impact of failure by the
    component in question

26
Depth of Verification
  • Possible verification methods, in increasing
    depth and detail of coverage, are
  • Examination: a review by one or two people to
    inspect an item briefly
  • Walkthrough: a more structured review by a group
    of peers to look for defects
  • Independent review: a walkthrough by someone not
    from the developer's organization

27
Depth of Verification
  • Black box testing is often done with automated
    testing tools to simulate system operation (e.g.
    WinRunner)
  • White box testing: verification that every
    logical path has been exercised is a possible
    test goal, in addition to deliberately extreme
    input conditions

28
Coverage of Verification
  • The coverage of verification and validation (V&V)
    could include
  • V&V everything
  • Use hazard or risk analysis to determine needed
    V&V scope

29
V&V Everything
  • In cases of extreme quality needs, a clear but
    expensive option is to check everything in
    detail
  • Often people will find parts which need only
    minimal V&V; this might be allowed on a
    case-by-case basis
  • Hence the amount of V&V needed might vary from
    one component to another

30
Hazard Analysis
  • Sometimes hazard or risk analysis can help define
    what portions of the system need extensive V&V
  • Often mandated for safety reasons
  • Main idea is to determine the impact of a
    components failure on the overall functionality
    of the system

31
Hazard Analysis
  • Much effort is needed to determine how the system
    would respond if part of it fails
  • In such cases, degraded modes of operation could
    be considered
  • Even if system failure isn't life threatening, it
    could have severe financial, public relations,
    marketing, or liability issues

32
Return on Investment (ROI)
  • Another approach to deciding V&V scope is basing
    it on cost/benefit analysis (CBA)
  • Main concept is to determine the cost of failure
    of a component, and compare that to the cost of
    V&V activities
  • See lecture 8 of INFO 503 for additional details
    on ROI

33
Return on Investment
  • ROI is defined as ROI = (benefits - costs) / costs
  • ROI is usually expressed as a total percent, or
    percent per year
  • Here "benefits" is the dollar value of things we
    avoided by doing good V&V, and "costs" is the
    cost of doing the V&V activities

34
Return on Investment
  • The benefits could be estimated by risk impact
    analysis (benefit = avoided risk)
  • What is the chance or risk of the component
    failing, in percent, during the life of the
    system?
  • What is the dollar value of a failure?
  • Multiply those two things to get the benefits
    value for a given component: benefits = risk ×
    value

35
Return on Investment
  • The cost of V&V activities is based on the labor
    cost of people who plan, design, conduct, and
    report on those activities
  • For the costs, need to know the cost/hour rate
    of those labor categories, and multiply it by
    the number of hours needed to perform V&V
    activities for a component: costs = rate × hours
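
Putting the three formulas together, a worked sketch with invented numbers (a 20% failure risk on a $50,000 failure, checked by 40 hours of V&V at $75/hour):

risk = 0.20              # chance the component fails over the system's life
failure_value = 50_000   # dollar value of a failure
benefits = risk * failure_value        # benefits = risk x value = 10,000

rate = 75.0              # labor cost/hour to plan, design, run, and report
hours = 40               # hours of V&V work for this component
costs = rate * hours                   # costs = rate x hours = 3,000

roi = (benefits - costs) / costs       # (10,000 - 3,000) / 3,000
print(f"ROI = {roi:.0%}")              # ROI = 233%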

36
VV Summary
  • Hence the scope of V&V activities can be
    customized for each component (or package, or
    subsystem, etc.) depending on its value to the
    overall system (and hence, its value to the
    stakeholders)

37
Change Management
  • Requirements need to be managed, since we know
    they will change and evolve over time
  • Hence a change request (CR) system and change
    control process are usually needed as part of
    configuration management

38
Change Management
  • Change management must do many things
  • Plan for change
  • Baseline the requirements
  • Establish one channel to request change
  • Use a system to process proposed changes
  • Manage change throughout the hierarchy (system,
    requirements, use cases, SRS, Vision)

39
Plan for Change
  • System needs and requirements change because of
    many factors
  • External influences, such as changes to the
    problem being solved, the external environment,
    or even the existence of the new system
  • Internal factors, like requirements clarification
  • Unofficial changes, such as through developer
    ideas, marketing ideas, response to competition

40
Baseline the Requirements
  • After the requirements get fairly well defined,
    they need to start being formally controlled
  • When they are first approved, they have become
    baselined
  • Baseline establishes a line between existing
    requirements and new ones
  • Typical change is 1-4% of requirements per month

41
Establish One Channel
  • There must be only one way to propose a change
  • Might be called a change request system, which
    feeds the change control process

42
Process Proposed Changes
  • A change control process must be used to process
    (analyze and guide implementation of) changes to
    the system
  • The change control system must determine the
    impact of the change, estimate the effort to
    implement it, then decide where and when to do
    so (e.g. in what release or build)
  • May use a Configuration Control Board (CCB)
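
A minimal sketch of a change request record carrying the outputs that process must produce (the field names and statuses are invented for illustration):

from dataclasses import dataclass
from enum import Enum

class CRStatus(Enum):
    SUBMITTED = "submitted through the single channel"
    ANALYZED = "impact and effort estimated"
    APPROVED = "approved by the CCB"
    REJECTED = "rejected by the CCB"
    IMPLEMENTED = "added to a build"

@dataclass
class ChangeRequest:
    cr_id: int
    description: str
    impact: str = ""            # what parts of the system are affected
    effort_hours: float = 0.0   # estimated effort to implement
    target_build: str = ""      # where and when (what release or build)
    status: CRStatus = CRStatus.SUBMITTED

cr = ChangeRequest(264, "Support leap-day billing")
cr.impact, cr.effort_hours, cr.target_build = "billing module", 16.0, "R2.1"
cr.status = CRStatus.ANALYZED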

43
Manage Change
  • What is the impact of this change?
  • Need to determine if a change affects other parts
    of the system, or the design or requirements of
    the system
  • Part of the CCB's job may be to help look for
    such impacts, including training, installation,
    documentation, etc.
  • Traceability matrices help look for impact

44
FAA Example
  • See here for an FAA example of a change control
    process
  • Added elements include
  • Checking for interaction with external systems
  • Checking for possible commercial solutions
    instead of custom code
  • Getting external funding or technical expertise

45
Requirements Tools
  • Tools specifically for requirements management
    may be used
  • Custom databases may be used
  • Advantage is that your change control process can
    be built into the tool

46
Implementing Changes
  • Once a baseline has been defined for a system
    (often through a particular release), then
    changes to that baseline are controlled using the
    change control process
  • Changes (fixes or improvements) are developed,
    and at some point added to the draft release build

47
Implementing Changes
  • Then the system is built, and goes through
    integration, regression, and system testing
  • Regression testing proves we didn't break the
    baseline system
  • Then the contents of the build are fixed, or bad
    parts removed
  • Testing is repeated, until the final build
    contents are agreed upon

48
Change Control Feeds Releases
49
Implementing Changes
  • After the build passes testing, the new baseline
    is defined
  • New baseline = old baseline + specific CRs
  • Then the process starts over again for changing
    the new baseline
  • This is the system maintenance life cycle

(CR = Change Request)
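
The baseline arithmetic reads as simple set addition; a toy sketch with invented requirement versions, where one CR replaces a requirement and adds another:

old_baseline = {"REQ-1 v1", "REQ-2 v1"}
cr_264 = {"REQ-2 v2", "REQ-7 v1"}     # one changed, one new requirement

# New baseline = old baseline + specific CRs (superseded versions dropped)
new_baseline = (old_baseline - {"REQ-2 v1"}) | cr_264
print(sorted(new_baseline))   # ['REQ-1 v1', 'REQ-2 v2', 'REQ-7 v1']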