Advancing the state of the art in automated testing - PowerPoint PPT Presentation
1
Advancing the state of the art in automated testing
Dr. Wolfgang Grieskamp, Microsoft Research, Redmond, USA
With Colin Campbell, Nico Kicillof, Wolfram Schulte, Thorsten Schuett, Nikolai Tillmann, Margus Veanes
2
Agenda
  • Introduction
  • Model-based testing
  • Parameterized unit testing
  • Method sequence generation
  • Conclusions

3
Introduction
4
MSR Mission Statement
  • Expand the state of the art in each of the areas
    in which we do research
  • Rapidly transfer innovative technologies into
    Microsoft products
  • Ensure that Microsoft products have a future

5
MSR's Strategy
  • Significant investment
  • Investing over $6B in R&D (MSR and product dev.)
  • Hire the smartest people
  • Staff of over 700 in 10 broad areas
  • International research lab locations
  • Redmond, Washington (1991)
  • San Francisco, California (1995)
  • Cambridge, United Kingdom (July, 1997)
  • Beijing, People's Republic of China (Nov, 1998)
  • Mountain View, California (July, 2001)
  • India (Jan 2005)

6
MSR Names and Notables
  • Jim Gray, 1998 ACM Turing Award
  • Butler Lampson, 1992 ACM Turing Award
  • C.A.R. Hoare, 1980 ACM Turing Award
  • Gary Starkweather, Inventor of the laser printer
  • Jim Blinn, Graphics pioneer
  • Michael Freedman, Fields Medal winner
  • Gordon Bell, Father of the VAX
  • Rick Rashid, creator of the MACH kernel

7
More information at http://research.microsoft.com/
8
My Research Area: Foundations of Programming
Systems (FPS)
  • Major topics
  • Modeling
  • Testing
  • Verification
  • Languages
  • About 15 heads located in Redmond
  • Manager: Wolfram Schulte (schulte@microsoft.com)
  • This talk focuses on testing

9
What is software testing?
  • Software testing is the execution of code in
    order
  • To reveal bugs (fault directed)
  • To demonstrate conformance
  • Why is it hard?
  • The selection problem
  • Where do the inputs come from?
  • The oracle problem
  • Are the outputs correct?
  • The assessment problem
  • When have we tested enough?
  • The test management problem
  • How do we set up a test environment? How do we
    log? Etc.

Focus of this talk
10
Mapping testing technologies
11
Model-Based Testing
12
What is a model?
  • A model
  • Is an abstraction of a system from a particular
    perspective
  • Supports investigation, construction or
    prediction
  • Is not necessarily comprehensive
  • Can be expressed as a table, diagram, program,
    etc.

13
Behavioral models
  • Represents (aspects of) the dynamic behavior of a
    system
  • Mathematically: a labeled state transition system
  • Constructed using one of two general flavors
  • Interaction based
  • Activity (Flow) charts, sequence charts, etc.
  • State based
  • State charts, model programs, etc.

14
What can we do with behavioral models?
  • Simulate and animate
  • Check for conditions (model-checking)
  • Derive and run tests (model-based testing)

15
What is Model-Based Testing?
Definition: generation of test cases with oracles
from a behavior model.
(Diagram: the Model generates Test Cases and, together
with user info, provides expected results for the Test
Oracle; running the Test Cases against the
Implementation provides actual results for the Test
Oracle, which delivers the Pass/Fail verdict.)
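In code terms, the definition above can be sketched as follows. This is a toy illustration (not Spec Explorer, and in Python rather than C# for brevity): a tiny behavior model of a counter predicts the expected result after each action and thereby serves as the test oracle for the implementation under test. All names are hypothetical.

```python
class CounterImpl:
    """System under test (hypothetical implementation)."""
    def __init__(self):
        self.value = 0
    def inc(self):
        self.value += 1
    def reset(self):
        self.value = 0

def model_step(state, action):
    """Behavior model: predicts the next state for each action."""
    return state + 1 if action == "inc" else 0

def run_test(actions):
    """A generated test case with oracle: the model provides expected
    results, the implementation provides actual results."""
    sut, state = CounterImpl(), 0
    for a in actions:
        getattr(sut, a)()             # drive the implementation
        state = model_step(state, a)  # drive the model in lockstep
        assert sut.value == state, f"fail after {a}"  # oracle verdict
    return "pass"

run_test(["inc", "inc", "reset", "inc"])
```

Any divergence between model and implementation fails the assertion, which is the Pass/Fail verdict from the diagram.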
16
Model-Based Testing in the Industry
  • Applying internal tools
  • IBM, Microsoft, BMW, various embedded-system
    companies, NASA, the US Navy, ...
  • Offering MBT tools
  • Telcordia, Conformiq, LEIRIOS, All4Tec, T-VEC,
    I-Logix, Telelogic, ...

17
Model-Based Testing @ MS
  • Applied since around 1999
  • Reaches about 10% of the test teams
  • Various internal tools
  • MSR is one tool provider (Spec Explorer)

18
Spec Explorer Workflow: Model with Charts
(Pipeline: Model → Explore → Traverse → Test)
19
Spec Explorer Workflow: Model with Programs
20
Spec Explorer Workflow: Explore State Space
21
Spec Explorer Workflow: Check Properties (Model-Checking)
22
Spec Explorer Workflow: Generate Test Suites
23
Spec Explorer Workflow: Run Test Suite
24
How does it work?
  • Central paradigm is (model) state space
    exploration
  • States can be just control or data or both
  • Transitions are labeled with actions of the
    system-under-test
  • State space exploration finds states and
    transitions of a model
  • Both state and actions can be symbolic
  • Models can be freely composed
  • Uniform representation of different model types
  • State exploration works on the composed model
  • Model and system-under-test are bound at the
    action level
  • Controlled actions must be accepted by the SUT
  • Observed actions must be accepted by the model
  • Example
  • Control: call a method M with given parameters
  • Observation: method M returns a given value
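The central paradigm, exploration of the model's state space, can be sketched as a breadth-first search in which enabled actions label the transitions. This is a simplified sketch of the idea, not the Spec Explorer algorithm; the counter model and its action names are made up for illustration.

```python
from collections import deque

def explore(initial, actions, step, max_states=100):
    """BFS over a model's state space: actions(s) yields the actions
    enabled in state s; step(s, a) returns the successor state.
    Returns the discovered states and labeled transitions."""
    states, transitions = {initial}, []
    queue = deque([initial])
    while queue and len(states) < max_states:
        s = queue.popleft()
        for a in actions(s):
            t = step(s, a)
            transitions.append((s, a, t))   # transition labeled by action
            if t not in states:
                states.add(t)
                queue.append(t)
    return states, transitions

# Example: a counter bounded at 2, with actions "inc" and "reset".
states, trans = explore(
    0,
    lambda s: (["inc"] if s < 2 else []) + (["reset"] if s > 0 else []),
    lambda s, a: s + 1 if a == "inc" else 0,
)
# Discovers states {0, 1, 2} and the four transitions between them.
```

The `max_states` bound stands in for the pruning a real exploration engine needs, since data-carrying models can have unbounded state spaces.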

25
  • Spec Explorer 2007

26
Parameterized Unit-Testing
27
What is a unit test?
  • A unit test is a sequence of method calls with
    assertions

    [TestMethod]
    void HashtableDefineTest()
    {
        ArrayList a = new ArrayList(1);  // 1. make an array list
        object o = new object();
        a.Add(o);                        // 2. push a value onto it
        Assert.IsTrue(a[0] == o);        // 3. check that the value was added
    }
  • Unit tests are commonly written today
  • Unit tests are often thought of as specifications

28
Parameterized unit tests (PUTs)
  • Adding parameters turns unit tests into general
    specifications

    [TestMethod]
    void AddParameterizedTest(ArrayList a, object o)
    {
        Assume.IsTrue(a != null);
        int len = a.Count;
        a.Add(o);
        Assert.IsTrue(a[len] == o);
    }

  • Read as: for all a and o, the given assertion holds
  • We can often choose argument values that cover
    all implementation paths

    // 1. case: existing storage used
    AddParameterizedTest(new ArrayList(1), new object());
    // 2. case: new storage allocated
    AddParameterizedTest(new ArrayList(0), new object());
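The same pattern can be expressed in Python terms as a sketch (illustrative names, using a plain list as the analog of ArrayList): an assumption filters inadmissible inputs, and the assertion must hold for every admissible argument pair.

```python
def add_parameterized_test(a, o):
    """Parameterized unit test for list append (Python analog of the
    slide's ArrayList example; names are illustrative)."""
    # Assume: skip inputs that violate the precondition
    if a is None:
        return
    length = len(a)
    a.append(o)
    # Assert: must hold for every admissible (a, o)
    assert a[length] is o

# Concrete instantiations chosen to cover both growth paths of a
# dynamic array (pre-sized vs. empty backing storage):
add_parameterized_test([object()], object())  # existing storage used
add_parameterized_test([], object())          # new storage allocated
```

The point of the parameterized form is that the two concrete calls at the bottom become outputs of a tool rather than hand-picked values, which is what the next slides automate.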

29
Finding Parameters
  • Finding parameters that reach full coverage can
    be very tricky
  • It may require deep white-box knowledge
  • What about finding them automatically?
  • We use a combination of concrete execution and
    symbolic reasoning to do so (concolic execution)

30
Concolic execution: how it works

    class List
    {
        int head;
        List tail;

        static bool Find(List xs, int x)
        {
            while (xs != null)
            {
                if (xs.head == x) return true;
                xs = xs.tail;
            }
            return false;
        }
    }

  • 1. Choose an arbitrary value for x, choose null for xs
       Concrete: x == 0, xs == null
       Path constraint: xs == null
  • 2. Negate xs == null → choose a new list with a new arbitrary head
       Concrete: x == 0, xs.head == 1, xs.tail == null
       Path constraint: xs != null, xs.head != x, xs.tail == null
  • 3. Negate xs.head != x → choose head == 0
       Concrete: x == 0, xs.head == 0, xs.tail == null
  → Full coverage!
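The effect of the three rounds can be checked concretely with a small sketch. This is not a concolic engine: the solver step is replaced by the three inputs the slide derives by hand, the linked list is modeled as a Python list, and we only verify that the inputs drive Find down three distinct paths.

```python
def find(xs, x, trace):
    """Concrete execution of the slide's Find, with branch outcomes
    recorded in trace (a list of (branch, taken) pairs)."""
    i = 0
    while True:
        in_list = xs is not None and i < len(xs)
        trace.append(("xs != null", in_list))
        if not in_list:
            return False
        eq = xs[i] == x
        trace.append(("xs.head == x", eq))
        if eq:
            return True
        i += 1

# The three concolic rounds, with the inputs a solver would produce
# by negating one branch condition at a time:
inputs = [
    (None, 0),  # round 1: arbitrary x, null list -> loop not entered
    ([1], 0),   # round 2: negate xs == null      -> head != x, tail == null
    ([0], 0),   # round 3: negate xs.head != x    -> head == x, found
]
paths = set()
for xs, x in inputs:
    trace = []
    find(xs, x, trace)
    paths.add(tuple(trace))
# Three distinct execution paths: full branch coverage of Find.
```

A real engine records the symbolic path constraint during execution and asks a constraint solver for the negated-branch inputs; here that step is hard-coded.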
31
Challenge: exploration of real .NET code
(Diagram layers: calls to the external world; unmanaged
x86 code; unsafe managed .NET code (with pointers); safe
managed .NET code. Techniques shown: combining concrete
execution and symbolic reasoning; program model checker,
e.g. JPF/XRT.)
  • Our framework monitors all safe managed .NET code
  • It rewrites .NET byte code on-the-fly, inserting
    monitoring callbacks
  • However, most .NET programs use unsafe/unmanaged
    code for legacy and performance reasons
  • Combining concrete execution and symbolic
    reasoning still works: all conditions that can be
    monitored will be systematically explored

32
  • Parameterized unit testing

33
Method Sequence Generation
34
Method sequence generation
  • Objective
  • Automatically find interesting sequences of
    method invocations of a given API
  • Approach
  • Start with sequences of length 1 and
    incrementally extend them
  • For object parameters, reuse objects created
    previously in a sequence
  • For value parameters, use concolic execution
    (PUT)
  • Apply various heuristics for sequence extension
  • What kinds of errors are found?
  • Unwanted exceptions
  • Violations of user-written assertions
  • Non-termination

35
Sequence generation: how it works

    class IntHashtable { Hashtable(); Add(int key, object value); int Count { get; } }

Round 1 (sequences of length 1):
  null.Count        → exception: do not continue
  new Hashtable()/h
  null.Add(0, null) → exception: do not continue

Round 2 (extend new Hashtable()/h):
  new Hashtable()/h; h.Add(0, null)
  new Hashtable()/h; h.Add(0, h)
  new Hashtable()/h; h.Count/0      → no change: do not continue

Round 3 (extend the surviving sequences):
  new Hashtable()/h; h.Add(0, null); h.Count/1
  new Hashtable()/h; h.Add(0, h);    h.Count/1
  new Hashtable()/h; h.Add(0, h);    h.Add(0, h)
  new Hashtable()/h; h.Add(0, h);    h.Add(1, h)

Round 4: ...
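The round-by-round scheme above can be sketched in a few lines. This is a simplified sketch (Python, with a toy hashtable and only the Add action extended), not the production algorithm: each round extends every surviving sequence by one call and prunes extensions that raise an exception or leave the observable state unchanged.

```python
class Hashtable:
    """Toy stand-in for the slide's IntHashtable (hypothetical)."""
    def __init__(self):
        self._d = {}
    def add(self, key, value):
        self._d[key] = value
    @property
    def count(self):
        return len(self._d)

def run(seq):
    """Execute a sequence of (method, key) calls on a fresh Hashtable
    and return the resulting Count (may raise)."""
    h = Hashtable()
    for _method, key in seq:
        h.add(key, h)   # only Add is extended in this sketch
    return h.count

def extend_sequences(seqs, keys=(0, 1)):
    """One generation round: extend every live sequence by one Add
    call, pruning extensions that raise or cause no state change."""
    survivors = []
    for seq in seqs:
        for key in keys:
            candidate = seq + [("add", key)]
            try:
                after = run(candidate)
            except Exception:
                continue            # prune: unwanted exception
            if after == run(seq):
                continue            # prune: no observable change
            survivors.append(candidate)
    return survivors

seqs = [[]]                 # round 0: just the constructor
for _round in range(2):     # rounds 1 and 2
    seqs = extend_sequences(seqs)
# Survivors: the two sequences that add both keys, in either order.
```

Unwanted exceptions, which here are silently pruned, are exactly the bugs the real tool reports when the called method's contract says they should not occur.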
36
  • Method Sequence Generation

37
Conclusion
  • Testing is one of the most cost-intensive
    activities in the software process
  • We have seen examples of how the state of the
    art can be improved
  • Model-based testing (relatively mature)
  • Parameterized unit-testing (still experimental)
  • Sequence generation (still experimental)
  • and there is more
  • Watch out for these technologies in products
  • But still some way to go until productization

38
Thank you for your attention