Title: Hardware Functional Verification Class
1- Hardware Functional Verification Class
Non-Confidential Version
2- Introduction
- Verification "Theory"
- Secret of Verification
- Verification Environment
- Verification Methodology
- Tools
- Future Outlook
4- What is functional verification?
- Act of ensuring correctness of the logic design
- Also called
- Simulation
- logic verification
5- What is Verification?
6- How do we know that a design is correct?
- How do we know that the design behaves as expected?
- How do we know we have checked everything?
- How do we deal with design sizes increasing faster than tool performance?
- How do we get correct hardware on the first RIT?
7- Answer: Functional Verification
- Also called
- Simulation
- logic verification
- Verification is based on
- Testpattern Generation
- Reference Model Development
- Result Checking
8- Why do functional verification?
- Product time-to-market
- hardware turn-around time
- volume of "bugs"
- Development costs
- "Early User Hardware" (EUH)
9- Facilities: a general term for named wires (or signals) and latches. Facilities feed gates (and/or/nand/nor/invert, etc.), which feed other facilities.
- EDA: Electronic Design Automation -- tool vendors. IBM has an internal EDA organization that supplies tools. We also procure tools from external companies.
10- Behavioral: code written to perform the function of logic on the interface of the design-under-test
- Macro: (1) a behavioral; (2) a piece of logic
- Driver: code written to manipulate the inputs of the design-under-test. The driver understands the interface protocols.
- Checker: code written to verify the outputs of the design-under-test. A checker may have some knowledge of what the driver has done. A checker must also verify interface protocol compliance.
11- Snoop/Monitor: code that watches interfaces or internal signals to help the checkers perform correctly. Also used to help drivers be more devious.
- Architecture: design criteria as seen by the customer. The design's architecture is specified in documents (e.g. POPS, Book 4, InfiniBand, etc.), and the design must be compliant with this specification.
- Microarchitecture: the design's implementation. Microarchitecture refers to the constructs that are used in the design, such as pipelines, caches, etc.
13- (Process flow diagram; stages: Create Testplan, Develop environment, Debug hardware, Regression, Fabrication, Hardware debug, Escape Analysis)
14- Team leaders work with design leaders to create a verification testplan. The testplan includes
- Schedule
- Specific tests and methods by simulation level
- Required tools
- Input criteria
- Completion criteria
- What is expected to be found with each test/level
- What's not covered by each test/level
15- Allows the design team to break the system down into logical and comprehensible components. Also allows for repeatable components.
16- Only the lowest-level macros contain latches and combinatorial logic (gates)
- Work gets done at these levels
- All upper layers contain wiring connections only
- Off chip connections are C4 pins
17- Current Practices for Verifying a System
- Designer Level sim
- Verification of a macro (or a few small macros)
- Unit Level sim
- Verification of a group of macros
- Element Level sim
- Verification of an entire logical function such as a processor, storage controller, or I/O control
- Currently synonymous with a chip
- System Level sim
- Multiple chip verification
- Often utilizes a mini operating system
18- The black box has inputs, outputs, and performs some function.
- The function may be well documented... or not.
- To verify a black box, you need to understand the function and be able to predict the outputs based on the inputs.
- The black box can be a full system, a chip, a unit of a chip, or a single macro.
19- White box verification means that the internal facilities are visible and utilized by the testcase driver.
- Example: 0-in (vendor) methods
- Grey box verification means that a limited number of facilities are utilized in a mostly black-box environment.
- Example: most environments! Prediction of correct results on the interface is occasionally impossible without viewing an internal signal.
20- To fully verify a black box, you must show that the logic works correctly for all combinations of inputs.
- This entails
- Driving all permutations on the input lines
- Checking for proper results in all cases (a literal example for a toy design is sketched below)
- Full verification is not practical on large pieces of designs... but the principles are valid across all verification.
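For a small enough block, the exhaustive principle can be applied literally. A minimal sketch, assuming a hypothetical 4-bit combinational DUT exposed to the testcase as a C++ function (dut_eval and ref_model are illustrative names, not from the course environment):

    #include <cstdio>
    #include <cstdint>

    // Hypothetical 4-bit incrementer with a deliberately planted bug.
    uint8_t dut_eval(uint8_t in) { return (in == 9) ? 0 : ((in + 1) & 0xF); }

    // Reference model: the function the design is supposed to implement.
    uint8_t ref_model(uint8_t in) { return (in + 1) & 0xF; }

    int main() {
        // Drive all 16 permutations of the input lines and check every result.
        for (uint8_t in = 0; in < 16; ++in) {
            uint8_t got = dut_eval(in);
            uint8_t exp = ref_model(in);
            if (got != exp)
                printf("FAIL: in=%x got=%x expected=%x\n", in, got, exp);
        }
        return 0;
    }

Sixteen input values make the sweep trivial here; the same sweep on a 64-bit bus would need 2^64 patterns, which is why exhaustive verification is reserved for small chunks of the design.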
21- Every macro would have perfect verification performed
- All permutations would be verified based on legal inputs
- All outputs checked on the small chunks of the design
- Unit, chip, and system level would then only need to verify interconnections
- Ensure that designers used correct input/output assumptions and protocols
22- Macro verification across an entire system is not feasible for the business
- There may be over 400 macros on a chip, which would require about 200 verification engineers!
- That number of skilled verification engineers does not exist
- The business can't support the development expense
- Verification leaders must make reasonable trade-offs
- Concentrate on unit level
- Designer level on the riskiest macros
23- Typical bug rates per level (chart)
24- Checklist of items that must be completed before RIT
- Verification items, along with physical/circuit design criteria, etc.
- Verification criteria are based on
- Function tested
- Bug rates
- Coverage data
- Clean regression
25- Escape analysis is a critical part of the verification process
- Important data
- Fully understand the bug! Reproduce in sim if possible
- Lack of a repro means the fix cannot be verified
- Could misunderstand the bug
- Why did the bug escape simulation?
- Process update to avoid similar escapes in the future (plug the hole!)
26- Escape Analysis Classification
- We currently classify all escapes under two views
- Verification view
- What are the complexities that allowed the escape?
- Cache set-up, cycle dependency, configuration dependency, sequence complexity, and expected results
- Design view
- What was wrong with the logic?
- Logic hole, data/logic out of sync, bad control reset, wrong spec, bad logic
27- The longer a bug goes undetected, the more expensive the fix
- A bug found early (designer sim) has little cost
- Finding a bug at chip or system sim has moderate cost
- Requires more debug time and problem isolation
- Could require a new algorithm, which could affect schedule and cause rework of the physical design
- Finding a bug in System Test (testfloor) requires new hardware and a RIT
- Finding a bug in the customer's environment can cost hundreds of millions in hardware and brand image
28- Secret of Verification
- (Verification Mindset)
29- Two simple questions
- Am I driving all possible input scenarios?
- How will I know when it fails?
30- Three Simulation Commandments
- Thou shalt not move onto a higher platform until the bug rate has dropped off
- Thou shalt stress thine logic harder than it will ever be stressed again
- Thou shalt place checking upon all things
31- Need for Independent Verification
- The verification engineer should not be an individual who participated in the logic design of the DUT
- Blinders: if a designer didn't think of a failing scenario when creating the logic, how will he/she create a test for that case?
- However, a designer should do some verification on his/her design before exposing it to the verification team
- An independent verification engineer needs to understand the intended function and the interface protocols, but not necessarily the implementation
32- Verification Do's and Don'ts
- DO
- Talk to designers about the function and understand the design first, but then
- Try to think of situations the designer might have missed
- Focus on exotic scenarios and situations
- e.g. try to fill all queues when the design is done in a way to avoid any buffer-full conditions
- Focus on multiple events at the same time
33- Verification Do's and Don'ts (continued)
- Try everything that is not explicitly forbidden
- Spend time thinking about all the pieces that you need to verify
- Talk to "other" designers about the signals that interface to your design-under-test
- DON'T
- Rely on the designer's word for the input/output specification
- Allow RIT criteria to bend for the sake of schedule
34- Typical verification diagram (labels: stimulus, coverage data, device types, FSMs, latency conditions, address transactions, sequences, transitions)
35- Escape: a problem that is found on the test floor and has therefore escaped the verification process
- The Line Delete escape was a problem on the H2 machine
- S/390 Bipolar, 1991
- The escape shows an example of how a verification engineer needs to think
36- The Line Delete Escape (pg 2)
- Line Delete is a method of circumventing bad cells of a large memory array or cache array
- An array mapping allows for removal of defective cells from the usable space
37- The Line Delete Escape (pg 3)
- If a line in an array has multiple bad bits (a single bad bit usually goes unnoticed due to ECC, error correction codes), the line can be taken "out of service". In the array pictured, row 05 has a bad congruence class entry.
38- The Line Delete Escape (pg 4)
- Data enters ECC creation logic prior to storage into the array. When read out, the ECC logic corrects single-bit errors, tags uncorrectable errors (UEs), and increments a counter corresponding to the row and congruence class.
39- The Line Delete Escape (pg 5)
- When a preset threshold of UEs is detected from an array cell, the service controller is informed that a line delete operation is needed.
(Diagram: data in, ECC creation, array, ECC check/correct, data out)
40- The Line Delete Escape (pg 6)
- The service controller can update the configuration registers, ordering a line delete to occur. When the configuration registers are written, the line delete controls are engaged and writes to row 05, congruence class 'C', cease. However, because three other cells remain good in this congruence class, the sole repercussion of the line delete is a slight decline in performance. (The mechanism is sketched below.)
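The mechanism described on the last three pages condenses into a few lines. A minimal sketch, assuming illustrative names, array dimensions, and threshold (none of which are from the real H2 design):

    #include <cstdio>

    // Illustrative UE bookkeeping: one counter per row/congruence class.
    const int ROWS = 64, CLASSES = 4, UE_THRESHOLD = 3;
    int  ue_count[ROWS][CLASSES]     = {};
    bool line_deleted[ROWS][CLASSES] = {};

    // Called by the ECC check logic whenever a fetch returns an
    // uncorrectable error from a given row and congruence class.
    void report_ue(int row, int cc) {
        if (++ue_count[row][cc] >= UE_THRESHOLD && !line_deleted[row][cc]) {
            // Inform the service controller; it writes the configuration
            // registers, and writes to this line cease.
            line_deleted[row][cc] = true;
            printf("line delete engaged: row %02d, class %d\n", row, cc);
        }
    }

    int main() {
        for (int i = 0; i < 3; ++i) report_ue(5, 2);  // row 05, class 'C'
        return 0;
    }

A testcase for this logic has to create the UEs in the first place, which already hints at the verification questions on the next page.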
41- The Line Delete Escape (pg 7)
- How would we test this logic? What must occur in the testcase? What checking must we implement?
43- General Simulation Environment
- Testcase sources: C/C++, HDL testbenches, Specman e, Synopsys VERA
- Design sources: VHDL, Verilog
- Compiler (not always required): event simulation compiler, cycle simulation compiler, ..., emulator compiler
- Engines: event simulator, cycle simulator, emulator
- Run inputs: initialization, run-time requirements
- Output: testcase results
44- Roles: Logic Designer, Environment Developer, Verification Engineer, Model Builder, Project Manager
45- Event Simulators
- Model Technology's (MTI) VSIM is most common
- Capable of simulating analog logic and delays
- Cycle Simulators
- For clocked, digital designs only
- The model is compiled and signals are "ordered". Infinite loops are flagged during compile as "signal ordering deadlocks". Each signal is evaluated once per cycle, and latches are set for the next cycle based on the final signal value.
46- Types of Simulators (cont'd)
- Simulation Farm
- Multiple computers are used in parallel for simulation
- Acceleration Engines/Emulators
- Quickturn, IKOS, AXIS, ...
- Custom designed for simulation speed (parallelized)
- Acceleration vs. Emulation
- True emulation connects to some real, in-line hardware
- Real software eliminates the need for a special testcase
47- Influencing Factors
- Hardware platform
- Frequency, memory, ...
- Model content
- Size, activity, ...
- Interaction with environment
- Model load time
- Testpattern
- Network utilization
- Relative speed of different simulators
- Event simulator: 1
- Cycle simulator: 20
- Event-driven cycle simulator: 50
- Acceleration: 1,000
- Emulation: 100,000
48- Cycle sim for one processor chip
- 1 sec realtime ≈ 6 months
- Sim farm with a few hundred computers
- 1 sec realtime ≈ 1 day
- Accelerator/Emulator
- 1 sec realtime ≈ 1 hour
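A rough sanity check on these ratios (assuming, for illustration, a 1 GHz design, i.e. 10^9 cycles per second of real time): six months is about 1.6 x 10^7 seconds, so the cycle simulator evaluates the full-chip model at roughly 60 cycles per second; one day is 8.64 x 10^4 seconds, about 10^4 cycles per second aggregate for the farm; one hour corresponds to roughly 3 x 10^5 cycles per second on the accelerator.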
49- Basic Testcase/Model Interface: Clocking
- Clocking cycles
- A simulator has the concept of time
- Event sim uses the smallest increment of time in the target technology
- All other sim environments use a single cycle
- A testcase controls the clocking of cycles (movement of time)
- All APIs include a clock statement
- Example: "Clock(n)", where n is the increment to clock (usually '1')
50- Basic Testcase/Model Interface: Setfac/Putfac
- Setting facilities
- A simulator API allows you to alter the value of facilities
- Used most often for driving inputs
- Can be used to alter internal latches or signals
- Can set a single-bit or multi-bit facility
- Values can be 0, 1, or possibly X, high impedance, etc.
- Example syntax: "Setfac facility_name value"
51- Basic Testcase/Model Interface: Getfac
- Reading facility values
- A simulator API allows you to read the value of a facility
- Used most often for checking outputs
- Can be used to read internal latches or signals
- Example syntax: "Getfac facility_name varname"
- Example: Getfac adder_sum checksum
52- Basic Testcase/Model Interface: Putting it together
- Clocking, setfacs, and putfacs occur at set times during a cycle
- Setting of facilities must be done at the beginning of the cycle
- Getfacs must occur at the end of a cycle
- In between, control goes to the simulation engine, where the logic under test is "run" (evaluated)
- Example:
  Setfac address_bus(0:31) "0F3D7249"x
  Getfac adder_sum checksum
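Putting these calls into one compilable skeleton, with stub implementations standing in for the engine (a sketch only: the real API bindings and signatures vary by simulation engine):

    #include <cstdio>
    #include <cstring>

    // Stubs standing in for the simulation engine's API library;
    // in a real environment these are supplied by the engine.
    void Setfac(const char* fac, const char* val) { printf("setfac %s <= %s\n", fac, val); }
    void Getfac(const char* fac, char* val)       { strcpy(val, "00000000"); }
    void Clock(int n)                             { printf("clock(%d)\n", n); }

    int main() {
        char checksum[16];

        Setfac("address_bus(0:31)", "0F3D7249"); // beginning of cycle: drive inputs
        Clock(1);                                // hand control to the engine
        Getfac("adder_sum", checksum);           // end of cycle: read outputs
        printf("adder_sum = %s\n", checksum);
        return 0;
    }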
53- Basic steps
- Create a testcase
- Build a model
- Different model build programs for different simulation engines
- Run the simulation engine
- Check results. If the testcase fails
- Do preliminary debug (create an AET, view scans)
- Get a fix from the designer and repeat from step 2
54- Calculator has 4 functions
- Add
- Subtract
- Shift left
- Shift right
- Calculator can handle 4 requests in parallel
- All 4 requestors use separate input signals
- All requestors have equal priority
55- calc_top port diagram
- c_clk, reset<0:7>
- req1_cmd_in<0:3>, req1_data_in<0:31>, out_resp1<0:1>, out_data1<0:31>
- req2_cmd_in<0:3>, req2_data_in<0:31>, out_resp2<0:1>, out_data2<0:31>
- req3_cmd_in<0:3>, req3_data_in<0:31>, out_resp3<0:1>, out_data3<0:31>
- req4_cmd_in<0:3>, req4_data_in<0:31>, out_resp4<0:1>, out_data4<0:31>
56- I/O Description
- Input commands
- 0 - No-op
- 1 - Add operand1 and operand2
- 2 - Subtract operand2 from operand1
- 5 - Shift left operand1 by operand2 places
- 6 - Shift right operand1 by operand2 places
- Input Data
- Operand1 data arrives with command
- Operand2 data arrives on the following cycle
57- Outputs
- Response line definition
- 0 - no response
- 1 - successful operation completion
- 2 - invalid command or overflow/underflow error
- 3 - Internal error
- Data
- Valid result data on output lines accompanies
response (same cycle)
58- Other information
- Clocking
- When using a cycle simulator, the clock should be held high (c_clk in the calculator model)
- The clock should be toggled when using an event simulator
- Calculator priority logic
- Priority logic works on a first-come, first-served algorithm
- Priority logic allows for one add or subtract at a time and one shift operation at a time
(A directed testcase against this interface is sketched below.)
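Tying the calculator spec to the Setfac/Getfac/Clock interface from the previous section, a minimal directed add test for port 1 might look like this (link against the stubs from the earlier sketch; values are hex strings, and the response-polling loop is an assumption about latency, not part of the spec):

    #include <cstdio>
    #include <cstring>

    // Bindings from the earlier interface sketch.
    void Setfac(const char* fac, const char* val);
    void Getfac(const char* fac, char* val);
    void Clock(int n);

    int main() {
        char resp[16], data[16];

        Setfac("req1_cmd_in(0:3)", "1");          // command 1 = Add
        Setfac("req1_data_in(0:31)", "00000002"); // operand1 arrives with the command
        Clock(1);
        Setfac("req1_cmd_in(0:3)", "0");          // back to no-op
        Setfac("req1_data_in(0:31)", "00000003"); // operand2 on the following cycle
        Clock(1);

        for (int i = 0; i < 20; ++i) {            // poll for a response
            Getfac("out_resp1(0:1)", resp);
            if (resp[0] != '0') break;
            Clock(1);
        }
        Getfac("out_data1(0:31)", data);          // result accompanies the response
        if (resp[0] != '1' || strcmp(data, "00000005") != 0)
            printf("FAIL: resp=%s data=%s (expected resp=1, data=00000005)\n", resp, data);
        return 0;
    }

Varying the command, the operands, and the relative timing across the four ports is exactly the space the exercise on the following pages asks you to explore.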
59- Calculator Design (block diagram; port 1 signals shown: req1_cmd_in<0:3>, req1_data_in<0:31>, out_resp1<0:1>, out_data1<0:31>)
60- Calculator Exercise part 1
- Build the model
- make a directory
- mkdir calc_test
- cd calc_test
- ../calc_build
- Run the model
- calc_run
- Check the AET
- scope tool
- use calc4.wave for input/output facility names
61- Calculator Exercise Part 2
- There are 5 bugs in the design!
- How many can you find by altering the simple
testcase?
63- Verification Methodology Evolution (timeline: more stress per cycle over time)
- Hand-generated, hand-checked, hardcoded test patterns
- Hand-generated, self-checking, hardcoded testcases (AVPs, IVPs)
- Tool-generated, self-checking, hardcoded testcases from testcase generators (AVPGEN, GENIE/GENESYS, SAK)
- Testcase drivers: interactive on-the-fly generation and on-the-fly checking (random SMP, C/C++)
- Formal Verification
64- Abstraction of design implementation
- Could be a
- Complete behavioral description of the design using a standard programming language
- Formal specification using mathematical languages
- Complete state transition graph
- Detailed testplan in English for handwritten test patterns
- Part of a random driver or checker
- ...
65- One of the most difficult concepts for new verification engineers is that your behavioral can "cheat"
- The behavioral only needs to make the design-under-test think that the real logic is hanging off its interface
- The behavioral can
- Predetermine answers
- Return random data
- Look ahead in time
66- Cheating examples
- Return random data in memory modeling (see the sketch below)
- A memory controller does not know what data was stored into the memory cards (behavioral). Therefore, upon fetching the data back, the memory behavioral can return random data.
- Branch prediction
- A behavioral can look ahead in the instruction stream and know which way a branch will be resolved. This can halve the required work of the behavioral!
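A minimal sketch of the memory-cheating trick (class and method names are illustrative): the behavioral invents random data the first time an untouched line is fetched, then remembers the invention so later fetches stay consistent.

    #include <cstdint>
    #include <cstdlib>
    #include <map>

    class MemoryBehavioral {
        std::map<uint64_t, uint64_t> contents;   // only the lines ever touched
    public:
        void store(uint64_t addr, uint64_t data) { contents[addr] = data; }

        uint64_t fetch(uint64_t addr) {
            auto it = contents.find(addr);
            if (it == contents.end()) {
                // Never stored: invent random data, but remember it so the
                // design-under-test sees a consistent memory from now on.
                uint64_t invented = ((uint64_t)rand() << 32) | (uint64_t)rand();
                contents[addr] = invented;
                return invented;
            }
            return it->second;
        }
    };

    int main() {
        MemoryBehavioral mem;
        uint64_t first = mem.fetch(0x1000);        // invented on the spot
        return first == mem.fetch(0x1000) ? 0 : 1; // consistent thereafter
    }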
67- Hardcoded Testcases and IVPs
- IVP (Implementation Verification Program)
- A testcase that is written to verify a specific scenario
- Appropriate usage
- During initial verification
- As specified by the designer/verification engineer to ensure that important or hard-to-reach scenarios are verified
- Other hardcoded testcases are done for simple designs
- Hardcoded indicates a single scenario
68- Testbench is a generic term that is used differently across locations/teams/industry
- It always refers to a testcase
- Most commonly (and appropriately), a testbench refers to code written in the design language (e.g. VHDL) at the top level of the hierarchy. The testbench is often simple, but may have some elements of randomness.
69- Software that creates multiple testcases
- Parameters control the generator in order to focus the testcases on specific architectural/microarchitectural components
- Ex: if branch-intensive testcases are desired, the parameters would be set to increase the probability of creating branch instructions (see the sketch below)
- Can create "tons" of testcases which have the desired level of randomness
- The broad-brush approach complements the IVP plan
- Randomness can be in data or control
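A skeleton of the parameter idea (the struct and weights are illustrative): the generator draws each instruction from a weighted distribution, so raising one weight biases the testcase toward that instruction class.

    #include <cstdio>
    #include <cstdlib>

    struct Params { int add_weight, load_weight, branch_weight; };

    // Pick one instruction according to the parameter weights.
    const char* pick(const Params& p) {
        int total = p.add_weight + p.load_weight + p.branch_weight;
        int r = rand() % total;
        if (r < p.add_weight)                 return "add";
        if (r < p.add_weight + p.load_weight) return "load";
        return "branch";
    }

    int main() {
        Params branchy = {10, 10, 80};        // branch-intensive mix: 80% branches
        for (int i = 0; i < 20; ++i)
            printf("%s\n", pick(branchy));    // one generated instruction stream
        return 0;
    }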
70- "Random" is used to describe many environments
- Some teams call testcase generators "random"
(they have randomness in the generation process) - The two major differentiators are
- Pre-determined vs. on-the-fly generation
- Post processing vs. on-the-fly checking
71- The most robust random environments use on-the-fly drivers and on-the-fly checking
- On-the-fly drivers give more flexibility and more control, along with the capability to stress the logic to the microarchitecture's limit
- On-the-fly checkers flag interim errors. The testcase is stopped upon hitting an error.
- However, the overall quality is determined by how good the verification engineer is! If scenarios aren't driven or checks are missing, the environment is incomplete!
72- Costs of an optimal random environment
- Code intensive
- Need an experienced verification engineer to oversee the effort to ensure quality
- Benefits of an optimal random environment
- More stress on the logic than any other environment, including the real hardware
- It will find nearly all of the most devious bugs and all of the easy ones
73- Sometimes too much randomness will prevent drivers from uncovering design flaws
- "Un-randomizing the random drivers" needs to be built into the environment, depending upon the design
- Hangs due to looping
- Low-activity scenarios
- "Micro-modes" can be built into the drivers
- Allows the user to drive very specific scenarios
74- Random Example: Cache model
- Cache coherency is a problem for multiprocessor designs
- The cache must keep track of ownership and data on a predetermined boundary (quad-word, line, double-line, etc.)
75- A high-stress environment requires limiting the size of data used in the testcase
- A limited number of congruence classes are chosen at the start of the testcase to ensure stress. Only these addresses will be used by the drivers to generate requests (see the sketch below).
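A sketch of the address-limiting trick (field widths and the address layout are assumptions for illustration): a handful of congruence classes are picked once, and every driver builds its addresses from only those classes, so the testcase keeps hammering the same cache sets.

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    const int NUM_CLASSES_USED = 4;
    uint32_t chosen_class[NUM_CLASSES_USED];

    // Once, at the start of the testcase: pick the classes to fight over.
    void pick_classes() {
        for (int i = 0; i < NUM_CLASSES_USED; ++i)
            chosen_class[i] = rand() & 0x3FF;     // assumed 10-bit class field
    }

    // Used by all drivers: random tag, but always one of the chosen classes.
    uint32_t gen_address() {
        uint32_t tag = rand() & 0xFFF;            // many different lines...
        uint32_t cc  = chosen_class[rand() % NUM_CLASSES_USED];
        return (tag << 16) | (cc << 6);           // ...landing in few sets
    }

    int main() {
        pick_classes();
        for (int i = 0; i < 8; ++i)
            printf("%08x\n", gen_address());
        return 0;
    }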
76- Multiprocessor Environment (diagram)
77- (flowchart)
78- This environment drives more stress than the real processors would in a system environment
- Microarchitectural level on the interfaces vs. an architectural instruction stream
- Real processors and I/O will add delays based on their own microarchitecture
79- The testcase seed is randomly chosen at the start of simulation
- The initial seed is used to seed the decision-making driver logic
- Watch out for seed synchronization across drivers (see the sketch below)
80- Formal Verification employs mathematical algorithms to prove correctness or compliance
- Formal applications fall under the following
- Model Checking (used for logic verification)
- Equivalence Checking (e.g. VHDL vs. synthesis output)
- Theorem Proving
- Symbolic Trajectory Evaluation (STE)
81- Simulation vs. Model Checking
- If the overall state space of a design is the universe, then Model Checking is like a light bulb and Simulation is like a laser beam
82- Formal Verification: Model Checking
- IBM's "RuleBase" is used for Model Checking
- Checks properties against the logic
- Uses EDL and Sugar to express the environment and properties
- Limit of about 300 latches after reduction
- State-space explosion is the biggest challenge in FV
83- Formal Verification: Model Checking (diagram)
84- Coverage techniques give feedback on how much the testcase or driver is exercising the logic
- Coverage makes no claim about proper checking
- All coverage techniques monitor the design during simulation and collect information about desired facilities or relationships between facilities
85- Coverage Goals
- Measure the "quality" of a set of tests
- Supplement test specifications by pointing to untested areas
- Help create regression suites
- Provide a stopping criterion for unit testing
- Better understanding of the design
86- People use coverage for multiple reasons
- A designer wants to know how much of his/her macro is exercised
- A unit/chip leader wants to know if relationships between state machines/microarchitectural components have been exercised
- The sim team wants to know if areas of past escapes are being tested
- The program manager wants feedback on the overall quality of the verification effort
- The sim team can use coverage to tune regression buckets
87- Coverage methods include
- Line-by-line coverage
- Has each line of VHDL been exercised? (if/then/else, cases, states, etc.)
- Microarchitectural cross products
- Allow for multiple-cycle relationships
- Coverage models can be large or small
88- Functional Coverage
- Coverage is based on the functionality of the design
- Coverage models are specific to a given design
- Models cover
- The inputs and the outputs
- Internal states
- Scenarios
- Parallel properties
- Bug models
89- Interdependency: Architectural Level
- The model
- We want to test all dependency types of a resource (register) relating to all instructions
- The attributes
- I - Instruction: add, add., sub, sub., ...
- R - Register (resource): G1, G2, ...
- DT - Dependency Type: WW, WR, RW, RR, and None
- The coverage task semantics
- A coverage instance is a quadruplet <Ij, Ik, Rl, DT>, where instruction Ik follows instruction Ij, and both share resource Rl with dependency type DT
90- Interdependency: Architectural Level (2)
- Additional semantics
- The distance between the instructions is no more than 5
- The first instruction is at least the 6th
- Restrictions
- Not all combinations are valid
- Fixed-point instructions cannot share FP registers
91- Interdependency: Architectural Level (3)
- Size and grouping
- Original size: 400 x 400 x 100 x 5 = 8 x 10^7
- Let the instructions be divided into disjoint groups I1 ... In
- Let the resources be divided into disjoint groups R1 ... Rk
- After grouping: 60 x 60 x 10 x 5 = 180,000
(Collecting these tasks is sketched below.)
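A sketch of how such quadruplet tasks might be collected during simulation (names and the trace hook are illustrative): a monitor records each observed <Ij, Ik, Rl, DT> into a set, and duplicates collapse so the set size is the number of distinct tasks hit.

    #include <cstdio>
    #include <set>
    #include <string>
    #include <tuple>

    enum DepType { WW, WR, RW, RR, None };

    // Distinct coverage tasks observed so far.
    std::set<std::tuple<std::string, std::string, std::string, DepType>> hits;

    // Called by the trace monitor for every instruction pair that shares
    // a resource within the allowed distance.
    void record(const std::string& ij, const std::string& ik,
                const std::string& rl, DepType dt) {
        hits.insert({ij, ik, rl, dt});
    }

    int main() {
        record("add", "sub", "G1", WR);  // sub reads G1 after add wrote it
        record("add", "sub", "G1", WR);  // duplicate: still one distinct task
        printf("coverage tasks hit: %zu of 180000\n", hits.size());
        return 0;
    }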
92- The Coverage Process
- Defining the domains of coverage
- Where do we want to measure coverage
- What attributes (variables) to put in the trace
- Defining models
- Defining tuples and semantics on the tuples
- Restrictions on legal tasks
- Collecting data
- Inserting traces into the database
- Processing the traces to measure coverage
- Coverage analysis and feedback
- Monitoring progress and detecting holes
- Refining the coverage models
- Generating regression suites
93- Coverage Model Hints
- Look for the most complex, error-prone part of the application
- Create the coverage models at high-level design
- Improves the understanding of the design
- Automates some of the test plan
- Create the coverage model hierarchically
- Start with small, simple models
- Combine the models to create larger models
- Before you measure coverage, check that your rules are correct on some sample tests
- Use the database to "fish" for hard-to-create conditions
- Try to generalize as much as possible from the data
- "X was never 3" is much more useful than "the task (3,5,1,2,2,2,4,5) was never covered"
94- One area of research is automated coverage-directed feedback
- If testcases/drivers can be automatically tuned to go after more diverse scenarios based on knowledge about what has been covered, then bugs can be encountered much sooner in the design cycle
- The difficulty lies in the expert system knowing how to alter the inputs to raise the level of coverage
95- How do I pick a methodology?
- Components to help guide you are in the design
- The amount of work required to verify is often proportional to the complexity of the design-under-test
- A simple macro may need only IVPs
- Is the design dataflow or control?
- FV works well on control macros
- Random works on dataflow-intensive macros
96- How do I pick a methodology?
- Experience!
- Each design-under-test has a best-fit methodology
- It is human nature to use the techniques with which you're familiar
- Gaining experience with multiple techniques will increase your ability to properly choose a methodology
97- How would you test a Branch History Table?
- The BHT looks ahead in the instruction stream in order to prefetch branch target addresses
- Large performance benefit
- The BHT array keeps track of previous branch target addresses
- The BHT uses the current instruction address to look forward for known branch addresses
- The BHT uses "taken" or "not-taken" branch execution results to update the array
(One possible starting point for a checker is sketched below.)
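One way to begin on the exercise: a scoreboard-style shadow of what the BHT array should contain, fed by a snoop on branch resolution and consulted by the checker on every lookup. Everything here (names, the direct-mapped organization, the update policy) is an assumption for illustration, not the real design.

    #include <cstdint>
    #include <cstdio>
    #include <map>

    struct Entry { uint32_t branch_addr, target; bool valid; };
    std::map<uint32_t, Entry> shadow_bht;              // index -> expected entry

    uint32_t index_of(uint32_t addr) { return (addr >> 2) & 0xFFF; }

    // Snoop: called when execution resolves a branch.
    void on_branch_resolved(uint32_t addr, uint32_t target, bool taken) {
        if (taken) shadow_bht[index_of(addr)] = {addr, target, true};
        else       shadow_bht[index_of(addr)].valid = false;   // assumed policy
    }

    // Checker: called for every BHT lookup the design performs.
    void check_lookup(uint32_t addr, bool dut_hit, uint32_t dut_target) {
        auto it = shadow_bht.find(index_of(addr));
        bool exp_hit = it != shadow_bht.end() && it->second.valid
                       && it->second.branch_addr == addr;
        if (dut_hit != exp_hit || (exp_hit && dut_target != it->second.target))
            printf("FAIL: lookup %08x hit=%d target=%08x\n", addr, dut_hit, dut_target);
    }

    int main() {
        on_branch_resolved(0x100, 0x200, true);
        check_lookup(0x100, true, 0x200);   // passes silently if consistent
        return 0;
    }

The testcase side then has to create the interesting scenarios: two branches aliasing to one index, taken/not-taken flips, and lookups racing updates.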
99- Tools are targeted for specific levels
- Most testcase drivers/checkers are targeted for a specific level
- There may be some usage by related levels
100- Examples of tools targeted for specific levels
- Formal Verification
- Designer sim level
- Cannot handle large pieces of design
- Architectural Testcase Generators
- AVPGEN, GENIE/GENESYS-PRO, SAK, TnK
- Intended for Microprocessor or System levels
- Some usage at neighboring levels
- There are no drivers/checkers that are used at
all levels
101- Mainline vs. Pervasive definitions
- Mainline function refers to testing of the logic under normal running conditions. For example, the processor is running instruction streams, the storage controller is accessing memory, and the I/O is processing transactions.
- Pervasive function refers to testing of logic that is used for non-mainline functions, such as power-on-reset (POR), hardware debug, error injection/recovery, scanning, BIST, or instrumentation.
- Pervasive functions are more difficult to test!
102- Mainline testing examples
- Architectural testcase generators (processor)
- Random drivers
- Storage control verification
- Data moving devices
- System level testcase generators
103- Some Pervasive Testing targets
- Trace arrays
- Scan Rings
- Power-on-reset
- Recovery and bad machine paths
- BIST (Built-in Self Test)
- Instrumentation
104- At the end, the verification engineer understands the design better than anybody else!
106- Increasing complexity
- Increasing model size
- Exploding state spaces
- Increasing number of functions... but...
- Reduced timeframe
- Reduced development budget
107- Evolution of Problem Debug
- Analysis of simulation results (no tool support)
- Interactive observation of model facilities
- Tracing of certain model facilities
- Trace postprocessing to reduce the amount of data
- On-the-fly checking by writing programs
- Intelligent agents, knowledge-based systems
108-118- Evolution of Functional Verification (series of diagram build slides showing the RTL-level model scope moving between unit and chip)
119- New ways / New developments
- Combination of formal methods and simulation
- First tools available today
- New algorithms in formal methods to solve size problems
- Verification of the specification, and formal proof that the implementation is logically correct
- Requires a formal specification language
- Coverage-directed testcase generation
- HW/SW co-verification