Chapter 3: Writing Testbenches
Functional Verification of HDL Models
1
Chapter 3: Writing Testbenches
Functional Verification of HDL Models
  • sklu@ee.fju.edu.tw

2
Verification
  • Verification
  • A process used to demonstrate the functional
    correctness of a design.
  • Verification consumes about 70% of the design
    effort.
  • The methodologies to reduce the verification time
  • Parallelism
  • Abstraction
  • Automation
  • What is a Testbench?
  • Create a pre-determined input sequence to a
    design, then optionally observe the response.

3
Reconvergence Model
(Figure: generic structure of a testbench and design
under test)
(Figure: reconvergent paths in verification)
  • Do you know what you are actually verifying?
  • The purpose of verification is to ensure that the
    result of some transformation is as intended or
    as expected.
  • Transformations: RTL coding from a specification,
    insertion of a scan chain, synthesis of RTL code
    into a gate-level netlist, and layout of a
    gate-level netlist.

4
The Human Factor
  • A designer verifying his or her own design
    verifies against his or her own interpretation,
    not against the specification.
  • If that interpretation is wrong in any way, then
    this verification activity will never highlight
    it.

(Figure: the same individual interprets the
specification, writes the RTL code, and performs the
verification.)
5
Redundancy
  • Redundancy, a second independent interpretation of
    the specification, is what enables verification to
    catch interpretation errors.
  • What is being verified?
  • Formal verification
  • Model checking
  • Functional verification
  • Testbench generators

6
Formal Verification
  • Formal verification falls under two broad
    categories
  • Equivalence checking
  • Model checking
  • Equivalence checking
  • Compare two models
  • Mathematically prove that the original design and
    the transformed output are logically equivalent,
    i.e., that the transformation preserved the
    functionality
  • It can compare two netlists to ensure that some
    netlist post-processing, such as scan-chain
    insertion, clock-tree synthesis, or manual
    modification, did not change the functionality of
    the circuit.

7
Equivalence Checking
  • It can detect bugs in the synthesis software
  • Equivalence checking found a bug in an arithmetic
    operator.

8
Model Checking
  • Look for generic problems or violation of
    user-defined rules about the behavior of the
    design.
  • Assertions or characteristics of a design are
    formally proven or disproved.
  • All state machines in a design could be checked
    for unreachable or isolated states.
  • To determine if deadlock conditions can occur.

9
Functional Verification
  • Ensure that a design implements intended
    functionality
  • It can show that a design meets the intent of its
    specification, but it cannot prove it.
  • You can prove the presence of bugs, but you
    cannot prove their absence.

10
Testbench Generation
  • Generate testbenches to either increase code
    coverage or to exercise the design to violate a
    property.

11
Functional Verification Approaches
  • Black-box
  • Without any knowledge of the actual
    implementation of a design
  • All verification must be accomplished through the
    available interfaces, without direct access to
    the internal state of the design.
  • White-box
  • Has full visibility and controllability of the
    internal structure and implementation of the
    design being verified
  • Grey-box
  • Controls and observes a design through its
    top-level interfaces, but uses knowledge of the
    internal implementation to target specific
    features

12
Testing vs Verification
  • Testing
  • Verify that the design was manufactured correctly
  • Testing is accomplished through test vectors. The
    objective of these test vectors is not to
    exercise functions, but to detect physical
    manufacturing defects
  • Verification
  • Ensure that a design meets its functional intent

13
Testing vs Verification
  • Scan-Based testing
  • All registers are hooked-up in a long serial
    chain.
  • Design for Verification
  • Additional design effort to simplify verification
  • Providing additional software-accessible
    registers to control and observe internal
    locations
  • Providing programmable multiplexers to isolate or
    bypass functional units, as sketched below
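
A minimal sketch of such a bypass multiplexer; the signal
names (bypass_mode, tb_data, core_data) are hypothetical:

    // Hypothetical design-for-verification bypass mux: when
    // bypass_mode is set from a software-accessible register,
    // the functional unit is fed a testbench-controlled value
    // instead of the normal upstream logic.
    module bypass_mux (
       input  wire       bypass_mode, // from a config register
       input  wire [7:0] core_data,   // normal datapath input
       input  wire [7:0] tb_data,     // software-accessible value
       output wire [7:0] unit_in      // input of the isolated unit
    );
       assign unit_in = bypass_mode ? tb_data : core_data;
    endmodule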

14
Verification and Design reuse
  • Today, design reuse is considered the best way to
    close the gap between the number of transistors
    that can be manufactured on a single chip and the
    number that can be designed in a reasonable time.
  • Engineers do not trust that a third-party design
    is as good or as reliable as one designed by
    themselves.
  • Proper functional verification demonstrates
    trustworthiness of a design.
  • Verification for reuse
  • Reusable designs must be verified to a greater
    degree of confidence.
  • All claims, possible configurations and uses must
    be verified.

15
The cost of verification
  • Verification is a necessary evil. It always takes
    too long and costs too much.
  • Is my design functionally correct?
  • How much is enough?
  • When will I be done?

16
Verification Tools
17
Verification Tools
  • Linting Tools
  • The term lint comes from a name of a UNIX utility
    that parses a C program and reports questionable
    uses and potential problems.
  • Identify common mistakes programmers make, such as
    syntax errors
  • Similar to spell checkers
  • Only find problems that can be statically deduced
    by looking at the code structure, not problems in
    the algorithm or data flow.

18
Simulators
  • Simulate your design before implementing it.
  • An approximation of reality
  • Not static tools (simulation requires stimulus)
  • Linting tools are static tools
  • The simulation outputs are validated externally,
    against the design intent.
  • Co-simulators
  • Two simulators run together, cooperating to
    simulate the entire design.
  • For example, the synchronous portion of the design
    is simulated using the cycle-based algorithm,
    while the remainder of the design is simulated
    using a conventional event-driven simulator.

19
Simulators
  • Event-driven simulation
  • Outputs change only when an input changes
  • Changes in values, called events, drive the
    simulation process (see the sketch below)
  • Cycle-based simulation
  • Has no timing information
  • Can only handle synchronous circuits
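
A minimal sketch of what drives an event-driven simulator,
using a hypothetical gate:

    // Event-driven evaluation: this block is re-evaluated only
    // when an event (a change in value) occurs on a or b. If
    // neither input changes, the simulator does no work here.
    module and_gate (input wire a, b, output reg y);
       always @ (a or b)
          y = a & b;
    endmodule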

20
Third-Party Models
  • It is cheaper to buy models than write them
    yourself.
  • A model you write yourself is not as reliable as
    one you buy, since purchased models have been
    exercised by many other users.
  • Hardware Modeler
  • A real physical chip that needs to be simulated is
    plugged into it.

21
Waveform Viewers
  • Display the changes in signal values over time
  • Used to debug simulations
  • Recording trace information significantly reduces
    the performance of the simulator
  • Do not use a waveform viewer to determine if a
    design passes or fails.
  • Some viewers can compare sets of waveforms
  • How do you define a set of waveforms as golden?
  • Are the differences really significant?

22
Code Coverage
  • Did you forget to verify some function in your
    code?
  • Code must first be instrumented.

23
Statement Coverage
  • How many of the total lines of code were executed
  • Example (✓ marks the statements executed by the
    testbench):

    ✓  if (parity == ODD || parity == EVEN) begin
          tx <= compute_parity(data, parity);
          #(tx_time);
       end
    ✓  tx <= 1'b0;
    ✓  #(tx_time);
    ✓  if (stop_bits == 2) begin
    ✓     tx <= 1'b0;
    ✓     #(tx_time);
       end

  • Statement coverage = 6/8 = 75%

24
Path Coverage
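The figures for this slide are not in the transcript.
Reusing the statement-coverage example above as an
illustration: the two if statements create 2 x 2 = 4
possible execution paths, so 100% statement coverage
(reachable in as few as two runs) does not imply 100%
path coverage (which needs four).

    if (parity == ODD || parity == EVEN) begin  // taken or not
       tx <= compute_parity(data, parity);
       #(tx_time);
    end
    tx <= 1'b0;
    #(tx_time);
    if (stop_bits == 2) begin                   // taken or not
       tx <= 1'b0;
       #(tx_time);
    end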
25
Expression Coverage
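This slide's example is also missing from the transcript.
As an illustration based on the same code: expression
coverage measures whether each term of a condition has
independently determined the outcome.

    // For full expression coverage, the testbench must make
    // this condition true via (parity == ODD) alone, true via
    // (parity == EVEN) alone, and also make it false.
    if (parity == ODD || parity == EVEN)
       tx <= compute_parity(data, parity);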
26
What Does 100 Percent Coverage Mean?
  • Completeness does not imply correctness.
  • Code coverage lets you know if you are not done.
  • Some tools can help you reach 100% coverage.

27
Verification Language
  • Verification languages can raise the level of
    abstraction.
  • VHDL and Verilog are simulation languages, not
    verification languages.
  • Specman from Verisity
  • VERA from Synopsys
  • Rave from Chronology

28
Stimulus and Response
  • Simple Stimulus
  • Verifying the Output
  • Self-Checking Testbenches
  • Complex Stimulus
  • Complex Response
  • Predicting the Output
  • Summary

29
Generating a Simple Waveform

    reg clk;
    parameter cycle = 10; // 100 MHz clock

    always
    begin
       #(cycle/2);
       clk = 1'b0;
       #(cycle/2);        // 50% duty-cycle clock
       clk = 1'b1;
    end

30
Generating a Synchronized Waveform

    always
    begin
       #50 clk = 1'b0;
       #50 clk = 1'b1;
    end

    initial
    begin
       rst = 1'b0;
       #150 rst = 1'b1;  // There is a race condition between
       #200 rst = 1'b0;  // the clk and rst signals. How can
    end                  // it be solved?

31
Solve the Race Condition
    always
    begin
       #50 clk <= 1'b0;
       #50 clk <= 1'b1;
    end

    initial
    begin
       rst = 1'b0;
       #150 rst <= 1'b1;  // use non-blocking assignments
       #200 rst <= 1'b0;
    end

32
Non-Zero Delay Generation of Synchronous Data

    initial
    begin
       rst = 1'b0;
       #50 clk <= 1'b0;
       repeat (2) #50 clk <= ~clk;
       rst <= #1 1'b1;
       repeat (4) #50 clk <= ~clk;
       rst <= #1 1'b0;
    end

    // What if it were necessary to reset the device under
    // verification multiple times during the execution of a
    // testbench?

33
Encapsulating the Generation of a Synchronized
Waveform
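
The code for this slide is not in the transcript; a minimal
sketch of the idea, assuming the clk and rst signals from the
previous slides:

    // The reset sequence is encapsulated in a task, so the
    // device under verification can be reset any number of
    // times with a single, consistent sequence.
    task hw_reset;
    begin
       rst = 1'b0;
       @ (negedge clk);             // synchronize to the clock
       rst <= 1'b1;                 // assert reset
       repeat (2) @ (negedge clk);
       rst <= 1'b0;                 // release reset
    end
    endtask

    initial
    begin
       hw_reset;   // first reset
       // ... first testcase segment ...
       hw_reset;   // reset again with the same sequence
    end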
34
Abstracting Waveform Generation
  • Using synchronous test vectors to verify a design
    is rather cumbersome.
  • They are hard to interpret and difficult to
    correctly specify.
  • Try to apply the worst possible combinations of
    inputs.
  • Pass input values as arguments to the subprogram.
  • Stimulus generated with abstracted operations is
    easier to write and maintain (a sketch follows).
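
A minimal sketch of such an abstracted operation; the signal
names (d0, d1, sel) and the clocking are hypothetical:

    // The input values are passed as arguments; the task hides
    // the cycle-level details of applying them.
    task apply_inputs(input in0, input in1, input sel_val);
    begin
       @ (negedge clk);  // keep stimulus away from the active edge
       d0  <= in0;
       d1  <= in1;
       sel <= sel_val;
    end
    endtask

    initial
    begin
       apply_inputs(1'b0, 1'b1, 1'b0);  // one worst-case combination
       apply_inputs(1'b1, 1'b0, 1'b1);  // another
    end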

35
Sampling Using the $monitor Task

    initial
    begin
       $monitor("%t: rst=%b d0=%b d1=%b sel=%b q=%b qb=%b",
                $time, rst, d0, d1, sel, q, qb);
    end

    // A change in the value of any of the signals rst, d0,
    // d1, sel, q, or qb causes the simulation results to be
    // displayed.

36
Visual Inspection of Waveforms
  • However, waveform displays usually provide a
  • more intuitive visual representation of
    simulation results
  • It is a tool-dependent process that is different
    for each language and each tool

37
Self-Checking Testbenches
  • A reliable and reproducible technique for output
    verification: testbenches that verify themselves
  • We must automate the process of comparing the
    simulation results against the expected output

38
Automating Output Verification
  • Step 1: include the expected output with the
    input stimulus for every clock cycle
  • Step 2: golden vectors (a set of reference
    simulation results)
  • If the simulation results are kept in ASCII files,
    the simplest comparison process involves using
    the UNIX diff utility.
  • Golden vectors must still be visually inspected
  • They do not adapt to change
  • They require a significant maintenance effort

39
Run-Time Result Verification
  • Using a reference model (an extension of golden
    vectors)
  • However, in reality, a reference model rarely
    exists.

40
Model the Expected Response
  • Include the verification of the operation's
    output as part of the subprogram.
  • Integrate both the stimulus and response checking
    into complete operations (a sketch follows).
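
A minimal sketch of such a complete operation; the signals
(d0, q), the clocking, and the task name are hypothetical:

    // The operation applies the stimulus and then verifies the
    // response itself, so every testcase that uses it is
    // automatically self-checking.
    task load_and_check(input value);
    begin
       @ (negedge clk);
       d0 <= value;        // apply the stimulus
       @ (negedge clk);    // wait for the design to respond
       if (q !== value)    // check the response in the same task
          $display("FAILED at %t: q = %b, expected %b",
                   $time, q, value);
    end
    endtask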

41
Complex Stimulus
  • More complex stimulus generation scenarios
    through the use of bus-functional models.
  • Applying stimulus to a clock or reset input is
    straightforward.
  • If the interface being driven contains
    handshaking or flow-control signals, the
    generation of the stimulus requires cooperation
    with the design under verification.

42
Feedback between stimulus and design
  • Without feedback, verification can be
    under-constrained.
  • Stimulus generation can wait for feedback before
    proceeding

43
Wait for Feedback
What happens if the grt signal is never asserted?
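The code for this slide is missing from the transcript; a
minimal sketch of stimulus that waits for feedback, assuming
a hypothetical req/grt bus-arbitration handshake:

    // The stimulus requests the bus, then waits for the grant
    // before proceeding. If grt is never asserted, this
    // testbench hangs forever.
    initial
    begin
       req <= 1'b1;          // request the bus
       wait (grt === 1'b1);  // feedback: wait for the grant
       // ... drive the bus cycle ...
       req <= 1'b0;
    end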
44
Wait for Feedback: Avoid Deadlock
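A minimal sketch of avoiding that deadlock with a watchdog;
the signal names and the 1000-time-unit limit are
hypothetical:

    // A parallel timeout branch keeps the testbench from
    // hanging if grt is never asserted.
    initial
    begin
       req <= 1'b1;
       fork: wait_for_grt
          @ (posedge grt) disable wait_for_grt;  // normal path
          #1000 begin                            // watchdog path
             $display("ERROR: grt was never asserted");
             disable wait_for_grt;
          end
       join
    end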
45
Asynchronous Interface
46
Complex Response
  • Definition: a response that cannot be verified in
    the same process that generates the stimulus.
  • Definitely not verifiable using visual inspection
    of waveforms.
  • Latency and output protocols create complex
    responses.

47
Complex Response
  • Universal Asynchronous Receiver Transmitter
    (UART)
  • Because the RS-232 protocol is slow, waiting for
    the output corresponding to the last CPU write
    cycle would introduce huge gaps in the input
    stimulus.

48
Handling Unknown or Variable Latency
  • Stimulus and response could be implemented in
    different execution threads

    event sync;

    initial
    begin: stimulus
       // ... generate the stimulus ...
       -> sync;     // signal the event
    end

    initial
    begin: response
       // ... verify the response ...
       @ (sync);    // wait for the event
    end

49
Abstracting Output Operation
  • The output operations, encapsulated using tasks in
    Verilog, take as argument the value expected to be
    produced by the design.
  • The most flexible implementation for an output
    operation monitor is to simply return to the
    caller whatever output value was just received.
  • Separate monitoring from value verification (a
    sketch follows)
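
A minimal sketch of such a monitor; the valid and dout
signals are hypothetical. The task returns whatever was
received and leaves the comparison to the caller:

    // The monitor only samples the output; the caller decides
    // what value to expect.
    task recv_value(output [7:0] data);
    begin
       @ (posedge valid);  // wait for the design to produce output
       data = dout;        // return whatever was received
    end
    endtask

    initial
    begin: check
       reg [7:0] actual;
       recv_value(actual);        // monitoring...
       if (actual !== 8'h5A)      // ...separate from verification
          $display("FAILED: got %h, expected 5A", actual);
    end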

50
Monitoring Multiple Possible Operations
      load A, R0
      load B, R1
      add  R0, R1, R2
      sto  R2, X
      load C, R3
      add  R0, R3, R4
      sto  R4, Y

  • This code has many possible execution orders.
  • How do you write an encapsulated output monitor?
  • Write an operation dispatcher task or procedure
    (a sketch follows).
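
A minimal sketch of such a dispatcher; the valid/op_code
signals, the opcode encodings, and the per-operation monitor
tasks are hypothetical and assumed defined elsewhere in the
harness:

    // Instead of expecting operations in a fixed order, the
    // dispatcher decodes whichever operation appears next and
    // calls the matching monitor task.
    task op_dispatcher;
    begin
       @ (posedge valid);
       case (op_code)
          2'b00:   monitor_load;
          2'b01:   monitor_add;
          2'b10:   monitor_store;
          default: $display("ERROR: unknown opcode %b", op_code);
       endcase
    end
    endtask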

51
Predicting the Output
  • When implementing self-checking testbenches, we
    must have detailed knowledge of the output to be
    expected.
  • Knowing exactly which output to expect, and how it
    can be verified to determine functional
    correctness, is the most crucial step in
    verification.

52
Predicting the Output
  • There is a class of designs where the input
    information is not transformed, but simply
    reformatted

53
Predicting the Output
  • If the input sequence is short and predetermined,
    using a global data sequence table is the
    simplest approach (a sketch follows).
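
A minimal sketch of a global data sequence table; the
values, the depth, and the send_value/recv_value tasks are
hypothetical:

    // Both the generator and the monitor index into the same
    // predetermined sequence.
    reg [7:0] seq_table [0:3];

    initial
    begin: generator
       integer i;
       seq_table[0] = 8'h01;   // the global data sequence
       seq_table[1] = 8'h3C;
       seq_table[2] = 8'hA5;
       seq_table[3] = 8'hFF;
       for (i = 0; i < 4; i = i + 1)
          send_value(seq_table[i]);   // hypothetical BFM task
    end

    initial
    begin: monitor
       integer j;
       reg [7:0] actual;
       for (j = 0; j < 4; j = j + 1) begin
          recv_value(actual);         // hypothetical monitor task
          if (actual !== seq_table[j])
             $display("FAILED: item %0d is %h, expected %h",
                      j, actual, seq_table[j]);
       end
    end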

54
Predicting the Output
  • Long data sequences can use a FIFO between the
    generator and the monitor (a sketch follows)
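
A minimal sketch of such a FIFO; the depth, data width, and
task names are hypothetical:

    // The generator pushes every value it sends into a FIFO of
    // expected values; the monitor pops and compares, so any
    // latency between input and output is tolerated.
    reg [7:0] expect_fifo [0:255];
    integer   wr_ptr = 0;
    integer   rd_ptr = 0;

    // Generator side: remember what was sent.
    task put_expected(input [7:0] value);
    begin
       expect_fifo[wr_ptr] = value;
       wr_ptr = wr_ptr + 1;
    end
    endtask

    // Monitor side: compare against the oldest outstanding value.
    task check_received(input [7:0] actual);
    begin
       if (actual !== expect_fifo[rd_ptr])
          $display("FAILED at %t: got %h, expected %h",
                   $time, actual, expect_fifo[rd_ptr]);
       rd_ptr = rd_ptr + 1;
    end
    endtask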

55
Predicting the Output
  • Some designs process and transform the input
    data completely and thoroughly.

56
Summary
  • Use bus-functional models to generate stimulus
    and monitor response.
  • Abstract the interface operations and remove the
    test-cases from the detailed implementation of
    each physical interface.
  • Make each individual testbench completely
    self-checking.
  • The expected response must be embedded in the
    testbench at the same time as the stimulus.

57
Architecting Testbenches
  • Focuses on the structure of the testbench
  • Shows how good stimulus generators and response
    monitors minimize maintenance, facilitate
    implementing a large number of testbenches, and
    promote the reusability of verification
    components.

58
Outline
  • Reusable verification components
  • Verilog Implementation
  • Autonomous Generation and Monitoring
  • Input and Output Paths
  • Verifying Configurable Designs
  • Summary

59
Reusable verification components
  • Goal Maximize the amount of verification code
    reused across testbenches.
  • Minimize the development efforts.
  • Structure of a testbench
  • Two major components of a testbench
  • Reusable test harness
  • Testcase-specific code

60
What is a Test Harness?
  • Low-level layer common to all testbenches for the
    design under verification.

61
Reusable Utility Routine
  • Many testbenches share some common functionality.
  • Once the low-level features are verified, the
    repetitive nature of communicating with the
    device under verification can be abstracted into
    high-level utility routines

62
Example
  • Structure of a testbench with reusable utility
    routines

63
Procedural Interface
  • To be reusable by many testcases, components must
    define a procedural interface that is independent
    of their detailed implementation.
  • All components are accessed through procedures or
    tasks (see the sketch below).
  • Never through global variables or signals.
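
A minimal sketch of such a procedural interface for a
hypothetical CPU bus-functional model; the signal names,
widths, and protocol are assumptions:

    // Testcases call these tasks; the bus signals they drive
    // stay hidden inside the test harness.
    task cpu_write(input [15:0] addr, input [7:0] data);
    begin
       @ (negedge clk);
       bus_addr <= addr;
       bus_data <= data;
       bus_we   <= 1'b1;
       @ (negedge clk);
       bus_we   <= 1'b0;
    end
    endtask

    task cpu_read(input [15:0] addr, output [7:0] data);
    begin
       @ (negedge clk);
       bus_addr <= addr;
       bus_rd   <= 1'b1;
       @ (negedge clk);
       data   = bus_data_in;
       bus_rd <= 1'b0;
    end
    endtask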

64
Flexibility through layers
  • Verification components must be flexible enough to
    provide the functionality needed by all
    testbenches.
  • Layer utility routines on top of general-purpose
    lower-level routines (see the sketch below).
  • The low-level layer provides detailed control
  • The high-level layer provides greater abstraction
  • Don't implement all functionality in a single
    level.
  • It complicates the implementation of the
    bus-functional models.
  • It increases the risk of introducing a functional
    failure
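
A minimal sketch of layering, reusing the hypothetical
cpu_write task from the sketch above as the low-level layer;
CFG_BASE and the register layout are assumptions:

    parameter CFG_BASE = 16'h8000;

    // High-level layer: one call configures the device, built
    // on the low-level, general-purpose cpu_write task.
    task configure_device(input [7:0] mode, input [7:0] divisor);
    begin
       cpu_write(CFG_BASE + 16'h0000, mode);
       cpu_write(CFG_BASE + 16'h0001, divisor);
    end
    endtask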

65
Procedural Interface
  • Procedural interfaces shield the testcase from
    the low-level details of the physical interfaces
    on the design.
  • With a well-designed procedural interface
  • The physical interface of the design can be
    modified without having to modify any testbench
  • Example: a processor interface is changed from a
    VME bus to an X86 bus.
  • All that needs to be modified is the
    implementation of the CPU bus-functional model
  • The same holds for a data transmission protocol
    change from parallel to serial

66
Development Process
  • Don't write the ultimate verification component
    that includes every configuration option.
  • Use the verification plan to determine the
    required functionality.
  • Start with the basic functions required by basic
    testbenches.
  • Add configurability to the bus-functional models
    or create utility routines as needed.
  • The procedural interfaces are maintained to avoid
    breaking existing testbenches.

67
Development Process
  • The incremental approach minimizes development
    effort
  • You won't develop functionality that is never
    used
  • It minimizes your debugging effort
  • It allows the development of the verification
    infrastructure to proceed in parallel with the
    development of the testbenches.