An Efficient Relaxation-based Test Width Compression Technique for Multiple Scan Chain Testing - PowerPoint PPT Presentation

Transcript and Presenter's Notes



1
An Efficient Relaxation-based Test Width
Compression Technique for Multiple Scan Chain
Testing
  • MS Thesis Defense Presentation
  • by Mustafa Imran Ali
  • COE Department
  • Advisor: Dr. Aimane H. El-Maleh

2
Presentation Outline
  • Motivation
  • Compression Approaches
  • Proposed Approach
  • Experiments
  • Comparison
  • Future Work

3
The Issue: Test Data Volume
  • Exhaustive IC testing is critical to ensure
    product quality
  • Full-scan IC testing using Automatic Test
    Equipment (ATE) is the most widely used approach
  • A typical SoC ASIC may require 2.5 Gbits of test
    data
  • ATE memory capacity and test application time
    dictate cost
  • test data volume ∝ IC complexity
  • manufacturing costs ∝ test data volume

4
Test data volume problem
  • Test data volume can be calculated as
  • Test data volume = scan cells × scan patterns
  • scan cells and scan patterns are related to
    design complexity
  • 10M gates, 1 scan cell/20 gates → 0.5M scan cells
  • Complex designs require a large number of
    patterns, e.g. 10,000 patterns
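As a sanity check, the slide's own figures multiply out as follows (a rough Python sketch counting stimulus bits only, ignoring response data):

```python
# Back-of-the-envelope test data volume for the slide's example:
# 10M gates, 1 scan cell per 20 gates, 10,000 scan patterns.
gates = 10_000_000
scan_cells = gates // 20          # 500,000 scan cells
patterns = 10_000
bits = scan_cells * patterns      # stimulus bits = cells x patterns
print(bits)                       # 5000000000 bits = 5 Gbits of stimulus
```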

5
Presentation Outline
  • Motivation
  • Compression Approaches
  • Proposed Approach
  • Experiments
  • Comparison
  • Future Work

6
Solutions!
  • Eliminate the costly ATE! Use BIST
  • Built-In Self-Test: generate test patterns
    on chip
  • Use Test Resource Partitioning (TRP) solutions
    to ease the burden on the ATE
  • some hardware added on-chip, working in
    conjunction with the external tester
  • helps reduce the test data volume and/or
  • the test application time

7
Built-In Self-Test (BIST)
  • Uses on-chip test pattern generation
  • Linear Feedback Shift Registers (LFSRs)
  • Limitation: random-pattern-resistant faults
  • Fault coverage is less than 100%
  • Very long test sequences required
  • Cores have to be BIST-ready
  • Solution: mixed-mode BIST
  • Combines external testing with BIST

8
TRP Using on-chip decompression
  • Test sets contain a large number of don't-care
    values
  • Up to 98% of bits can be don't cares in
    industrial circuits
  • Different classes of techniques exist
  • Code-based Schemes
  • Linear Decompressor Based Schemes
  • Combinational Linear Decompressors
  • Sequential Linear Decompressors
  • Broadcast-Scan Based Schemes
  • Reconfigurable Broadcast Schemes
  • Static Reconfiguration
  • Dynamic Reconfiguration

9
Another Classification
  • Uses structural information
    • uses fault simulation
    • requires custom ATPG
  • Decompression hardware
    • input-dependent
    • input-independent
  • Number of scan inputs/chains
    • single
    • multiple

10
Presentation Outline
  • Motivation
  • Compression Approaches
  • Proposed Approach
  • Experiments
  • Comparison
  • Future Work

11
(Scan) Inputs Width Compression
  • Idea: drive multiple scan chains with the same
    values
  • only the common data need to be stored
  • Such chains form a compatible class
  • Types of compatibility
  • Direct
  • Inverse
  • Combinational
  • Compression Ratio =
  • (internal chains - external chains) /
    internal chains
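Direct compatibility and the compression ratio can be sketched in Python. This is a hedged illustration, not the thesis implementation: the toy chain columns are invented, and a greedy first-fit grouping stands in for the graph-coloring step.

```python
def compatible(a, b):
    # Directly compatible: no position where both bits are specified and differ
    return all(x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def merge(a, b):
    # Combine two compatible columns, keeping every specified bit
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def color_chains(chains):
    """Greedy first-fit grouping of scan-chain columns into
    direct-compatibility classes (one external channel per class)."""
    classes, reps = [], []
    for i, ch in enumerate(chains):
        for k, rep in enumerate(reps):
            if compatible(ch, rep):
                classes[k].append(i)
                reps[k] = merge(rep, ch)   # class representative gains bits
                break
        else:
            classes.append([i])            # conflict with every class: new one
            reps.append(ch)
    return classes, reps

# Hypothetical test vector sliced into 8 scan-chain columns of length 3
chains = ["1X0", "1X0", "X10", "0XX", "110", "0X1", "XX1", "100"]
classes, reps = color_chains(chains)
ratio = (len(chains) - len(classes)) / len(chains)
print(len(classes), ratio)  # 3 0.625 -> 8 chains map to 3 external inputs
```

With 8 internal chains grouped into 3 classes, only 3 external channels are needed, matching the slide's (internal - external) / internal ratio of 62.5%.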

12
Example Using Direct Compatibility
[Figure: a test set over 8 scan chains; columns 1-8 of
test vectors 1-3 are analyzed for direct compatibility]
13
Continued
[Figure: a decompressor drives the internal chains from
3 representative scan chains (external inputs 1-3)]
14
Key Observations
  • Extent of compatibility/width compression
    depends upon
  • length of chains
  • the longer the chains, the greater the number of
    conflicting bits
  • percentage of don't-care bits
  • more Xs give fewer conflicts, resulting in
    greater compatibility
  • relative positions of don't-care bits

15
Key Observations
  • Some vectors need more colors than others
  • limiting the reduction if multiple vectors are
    analyzed together
  • Two or more vectors achieving the same number of
    colors can still have different compatibility
    groups
  • Compatibility analysis per vector gives a lower
    bound on achievable reduction

16
An Example: 50% Reduction
17
Test Set Partitioning
  • Identifying acceptable and bottleneck vectors
  • A desired coloring threshold is targeted
  • Vectors satisfying the threshold are acceptable
  • Put non-conflicting vectors in a partition
  • Members satisfy the threshold when colored
    together
  • Bottleneck vectors are decomposed (relaxed) to
    derive new acceptable vectors having more don't
    cares

18
Partitioning Algorithm
  • sort acceptable vectors by colors in descending
    order
  • create a (default) partition with the first
    vector
  • for each vector in the list:
  • run compatibility analysis with the vectors in
    each existing partition; if the threshold is not
    exceeded, include it in that partition
  • if no such partition exists, create a new
    partition for the current vector
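The steps above can be sketched as follows. This is a hedged approximation: each test vector is represented as its per-chain columns (toy data), and a greedy first-fit coloring stands in for the thesis's graph-coloring routine.

```python
def compatible(a, b):
    return all(x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def colors_needed(cols):
    # Greedy count of direct-compatibility classes among chain columns
    reps = []
    for c in cols:
        for i, r in enumerate(reps):
            if compatible(c, r):
                reps[i] = ''.join(b if a == 'X' else a for a, b in zip(r, c))
                break
        else:
            reps.append(c)
    return len(reps)

def partition(vectors, threshold):
    """Greedy partitioning: sort acceptable vectors by colors needed
    (descending), then place each vector in the first partition whose joint
    coloring stays within the threshold; otherwise open a new partition."""
    order = sorted(vectors, key=colors_needed, reverse=True)
    partitions = [[order[0]]]
    for v in order[1:]:
        for p in partitions:
            # Chains are colored jointly across a partition: concatenate
            # each chain's column over all member vectors plus the candidate
            joint = [''.join(cols) for cols in zip(*(p + [v]))]
            if colors_needed(joint) <= threshold:
                p.append(v)
                break
        else:
            partitions.append([v])
    return partitions

# Two hypothetical vectors over 3 chains: each needs 2 colors alone,
# but coloring them together needs 3, so they land in separate partitions
va, vb = ["1", "1", "0"], ["1", "0", "1"]
print(len(partition([va, vb], threshold=2)))  # 2
```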

19
Decomposition based on Relaxation
  • Compatibility can increase as don't-care bits
    per vector increase
  • Each vector can be decomposed into multiple
    vectors
  • each having more Xs than the original vector
  • each detecting a subset of the original
    vector's faults
  • bottleneck vectors are decomposed until the
    resulting vectors satisfy the threshold
  • these new relaxed vectors are then partitioned

20
An Example of Decomposition
21
Algorithm's Input and Output
N ≫ M (many internal scan chains driven from few
external inputs)
nV′ > nV (decomposition may increase the vector count)
22
Compression Algorithm
  • Objectives
  • Minimize decomposition required
  • To maximize compression
  • To minimize partitions
  • To minimize test application time
  • Minimize Partitions
  • To minimize hardware cost
  • Constraint
  • Maintain the fault coverage of the original test
    set

23
Approach Used
  • Decomposition can be minimized if the number of
    faults per bottleneck vector is minimized
  • A representative vector obtained after coloring
    is more specified than the original vector
  • (O)1 X X 0 X 0 X 0 0 X 0 X 1 X 1 X X 0 X X X X 0
    1 1 0 X X 0 X X 1
  • (R)1 X X 0 1 0 1 X 0 X 0 1 1 0 1 X 1 0 1 X 0 X 0
    1 1 0 1 X 0 X 0 1
  • Fault-simulate each representative vector
    obtained to drop detected faults
  • Decompose each bottleneck vector to target only
    the remaining faults

24
Decomposition Algorithm
  1. Create a new vector for the first undetected
    fault in the fault list of a bottleneck vector
  2. Select the next fault and check if it is
    covered. If it is, skip to the next fault
  3. If it is not, get its atomic component and
    merge. Recompute the coloring; if the threshold
    is not exceeded, go to step 2
  4. If the threshold is exceeded, revert to the
    previous state and partition the new vector
  5. Get its representative vector, fault-simulate,
    and drop newly covered faults
  6. If the current bottleneck vector has remaining
    faults, go to step 1
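The loop above can be approximated in Python. This is a hedged sketch, not the thesis code: it assumes each fault comes with a precomputed atomic component (a minimally specified vector detecting it, all names hypothetical), and replaces fault simulation with a plain compatibility check.

```python
def compatible(a, b):
    return all(x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def merge(a, b):
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def colors_needed(vec, chain_len):
    # Slice the vector into chain columns and greedily count classes
    cols = [vec[i:i + chain_len] for i in range(0, len(vec), chain_len)]
    reps = []
    for c in cols:
        for i, r in enumerate(reps):
            if compatible(c, r):
                reps[i] = merge(r, c)
                break
        else:
            reps.append(c)
    return len(reps)

def decompose(atomic, chain_len, threshold):
    """Split a bottleneck vector's fault list into new relaxed vectors:
    keep merging atomic components into the current vector while the
    coloring threshold holds; start a fresh vector when it would break."""
    new_vectors, remaining = [], dict(atomic)
    while remaining:
        fid, current = next(iter(remaining.items()))   # step 1
        covered = [fid]
        for f, comp in list(remaining.items())[1:]:    # steps 2-3
            if not compatible(current, comp):
                continue                               # would conflict: skip
            trial = merge(current, comp)
            if colors_needed(trial, chain_len) <= threshold:
                current, covered = trial, covered + [f]
            # else step 4: revert (trial discarded), try next fault
        new_vectors.append(current)                    # step 5 (partition it)
        for f in covered:                              # drop covered faults
            del remaining[f]
    return new_vectors                                 # step 6 loop ends

# Hypothetical atomic components; 4-bit vectors, chains of length 2
atomic = {"f1": "1XXX", "f2": "X0XX", "f3": "XX11"}
print(decompose(atomic, chain_len=2, threshold=1))  # ['10XX', 'XX11']
```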

25
Missing Faults Problem
  • Fault coverage is linked to the representative
    vectors
  • Representative vectors are linked to a
    partition's coloring configuration
  • Partitions can change to accommodate a new
    vector through re-coloring
  • A representative vector changes if its
    partition's coloring configuration changes

26
Changing Fault Detection Problem
[Figure: a partition's compatibility before and after a
change; the representative vector's specified values
change from 1X011X011010X010 to 1X101X010010X010,
altering which faults are covered]
27
End Result
  • A dropped fault may become undetected when its
    vector is modified later on
  • If such a fault is not covered by any
    subsequently created vector, it is left
    undetected
  • This is more likely for essential faults

28
Solution Approaches
  • Do not allow any essential fault's detection to
    change while partitioning
  • Try without re-coloring
  • Select a different partition accordingly, or
    create a new one
  • Alternatively, allow any fault to be disturbed
    but address all undetected faults at the end
  • e.g. by creating new vectors

29
Solution Outcomes
  • Different results based on the approach used
  • The first approach tends to create more
    partitions
  • The second approach creates more vectors but may
    lead to fewer partitions
  • The vectors created will have many don't cares,
    so they are likely to fit in existing partitions

30
Proposed Variations
  • Three variations proposed for partitioning newly
    derived subvectors
  • Do not disturb any previously covered fault
  • Disturb a fault only if it can be covered by the
    current bottleneck vector
  • Allow any faults to be disturbed, but go for
    minimum disturbance

31
Minimizing Additional Vectors
  • To minimize any additional vectors created:
  • Attempt to incrementally modify existing
    partitioned vectors
  • Successful only if it doesn't disturb the
    partition
  • Create new vectors only as a last resort
  • Bunch as many faults as possible into a single
    vector

32
Complete Algorithm
[Flowchart with steps: mark essential and
non-essential faults; compatibility-analyze all
vectors; partition acceptable vectors;
fault-simulate representative vectors; decompose
and partition bottleneck vectors; perform merging;
create a partition of additional vectors]
33
Prioritizing Essential Faults
  • A fourth variation attempted
  • Begin compression with relaxed vectors targeting
    essential faults
  • Has the potential to give fewer partitions
  • Non-essential faults are left to fortuitous
    detection by the representative vectors
  • Any undetected faults are covered at the end by
    incremental merging or additional vectors

34
Decompression H/W
[Figure: decompression hardware; control inputs on
the order of log2(max partition size) and
log2(number of partitions)]
35
MUXs for Partitioning
N MUXs for N chains
Number of MUX inputs = number of partitions
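A toy software model of such a MUX-based decompressor (the chain-to-channel mapping, chain count, and partition count below are hypothetical):

```python
import math

def decompress(external_bits, mapping):
    """One shift cycle: each internal chain's MUX selects the external
    channel assigned to its compatibility class in the active partition."""
    return [external_bits[src] for src in mapping]

# Hypothetical configuration: 5 internal chains fed from 2 external channels.
# In partition 0, chains {0, 1, 3} share channel 0 and chains {2, 4} share
# channel 1, so every chain's MUX select is set per partition.
mapping_p0 = [0, 0, 1, 0, 1]
print(decompress(['1', '0'], mapping_p0))  # ['1', '1', '0', '1', '0']

# Control cost: one MUX-select setting per partition, so the number of
# partition-select lines grows as ceil(log2(number of partitions))
num_partitions = 3
select_lines = math.ceil(math.log2(num_partitions))  # 2 select lines
```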
36
Presentation Outline
  • Motivation
  • Compression Approaches
  • Proposed Approach
  • Experiments
  • Comparison
  • Future Work

37
Experimental Setup
  • Algorithms implemented in C under Linux
  • HOPE simulator used for fault simulation
  • Publicly available graph-coloring algorithms
    used
  • Algorithm by El-Maleh and Al-Suwaiyan used for
    test relaxation
  • Full-scan versions of the 7 largest ISCAS-89
    benchmarks used
  • Test sets generated by the MINTEST ATPG used

38
Details of Test Sets
  • Statically compacted test sets used for all
    detailed results
  • Dynamically compacted test sets used for
    comparison with other works

39
Methodology
  • Algorithm input parameters:
  • selecting a scan chain length for each benchmark
  • selecting a desired number of ATE channels to
    target
  • Parameter setup used:
  • scan chain lengths giving near 64 scan chains
    for the two smallest cases, and 100 and 200
    chains for the largest five
  • desired ATE channels varied over a range to
    observe the effect on achieved compression, test
    vector counts, and number of partitions

40
Test Set Characteristics
41
Color Histogram for Chain Length 7
42
Color Histogram for Chain Length 4
43
Compression without Partitioning
44
Compression w/o decomposition
45
Compression w/o decomposition
46
Results: 88 scan chains
47
Results: 153 scan chains
48
Compression vs. Scan Inputs
49
Vectors vs. Scan Inputs
50
Partitions vs. Scan Inputs
51
Case with a Large Number of Bottlenecks
52
s38417: 98 scan chains
53
s38417: 185 scan chains
54
Compression vs. Scan Inputs
55
Vectors vs. Scan Inputs
56
Presentation Outline
  • Motivation
  • Compression Approaches
  • Proposed Approach
  • Experiments
  • Comparison
  • Future Work

57
Compression Levels
58
Compression Level
59
Hardware Costs
60
Presentation Outline
  • Motivation
  • Compression Approaches
  • Proposed Approach
  • Experiments
  • Comparison
  • Future Work

61
Future Work
  • Incorporating selective don't-care
    identification
  • Greater reduction by eliminating conflicting bit
    values
  • Partitioning can be improved as well
  • The existing relaxation technique can be
    modified

62
Thank you.
  • Questions?