Title: Chapter 15 Data Analysis
1- Chapter 15 - Data Analysis
2- Introduction to Data Analysis
- The Role of Data Analysis in Test and Product Engineering
- The process by which we examine test results and draw conclusions from them:
- evaluate DUT design weaknesses,
- identify DIB and tester repeatability and correlation problems,
- improve test efficiency,
- expose test program bugs
3- Introduction to Data Analysis
- The Role of Data Analysis in Test and Product Engineering
- Many data analysis tools are designed to help improve the silicon fabrication process itself.
- The fabrication process can be improved through statistical data analysis of production test results.
- A methodology called statistical process control (SPC) formalizes the steps by which this improvement is achieved.
- In this chapter we will examine various data visualization tools, study the statistics that describe repeatability and process variations, and introduce the topic of statistical process control.
4- Data Visualization Tools
- Datalogs (Data Lists)
- A datalog, or data list, is a concise listing of test results generated by a test program.
- Datalogs are the primary means by which test engineers evaluate the quality of a tested device.
- The format of a datalog typically includes
- test category
- test description
- minimum and maximum test limits
- measured result
5- Data Visualization Tools
- Datalogs (Data Lists)
Sequencer: S_continuity
 1000  Neg PPMU Cont                      Failing Pins: 0

Sequencer: S_VDAC_SNR
 5000  DAC Gain Error  T_VDAC_SNR    -1.00 dB <  -0.13 dB <   1.00 dB
 5001  DAC S/2nd       T_VDAC_SNR    60.0 dB  <  63.4 dB
 5002  DAC S/3rd       T_VDAC_SNR    60.0 dB  <  63.6 dB
 5003  DAC S/THD       T_VDAC_SNR    60.00 dB <  60.48 dB
 5004  DAC S/N         T_VDAC_SNR    55.0 dB  <  70.8 dB
 5005  DAC S/NTHD      T_VDAC_SNR    55.0 dB  <  60.1 dB

Sequencer: S_UDAC_SNR
 6000  DAC Gain Error  T_UDAC_SNR    -1.00 dB <  -0.10 dB <   1.00 dB
 6001  DAC S/2nd       T_UDAC_SNR    60.0 dB  <  86.2 dB
 6002  DAC S/3rd       T_UDAC_SNR    60.0 dB  <  63.5 dB
 6003  DAC S/THD       T_UDAC_SNR    60.00 dB <  63.43 dB
 6004  DAC S/N         T_UDAC_SNR    55.0 dB  <  61.3 dB
 6005  DAC S/NTHD      T_UDAC_SNR    55.0 dB  <  59.2 dB

Sequencer: S_UDAC_Linearity
 7000  DAC POS ERR     T_UDAC_Lin   -100.0 mV <    7.2 mV  <  100.0 mV
 7001  DAC NEG ERR     T_UDAC_Lin   -100.0 mV <    3.4 mV  <  100.0 mV
 7002  DAC POS INL     T_UDAC_Lin   -0.90 lsb <  0.84 lsb  <  0.90 lsb
 7003  DAC NEG INL     T_UDAC_Lin   -0.90 lsb < -0.84 lsb  <  0.90 lsb
 7004  DAC POS DNL     T_UDAC_Lin   -0.90 lsb <  1.23 lsb (F) <  0.90 lsb
 7005  DAC NEG DNL     T_UDAC_Lin   -0.90 lsb < -0.83 lsb  <  0.90 lsb
 7006  DAC LSB SIZE    T_UDAC_Lin    0.00 mV  <  1.95 mV   < 100.00 mV
 7007  DAC Offset V    T_UDAC_Lin   -100.0 mV <   0.0 mV   <  100.0 mV
 7008  Max Code Width  T_UDAC_Lin    0.00 lsb <  1.23 lsb  <  1.50 lsb
 7009  Min Code Width  T_UDAC_Lin    0.00 lsb <  0.17 lsb  <  1.50 lsb

Bin: 10
6- Data Visualization Tools
- Lot Summaries
- Lot summaries are generated after all devices in a given production lot have been tested. They typically list:
- lot number
- product number
- operator number, etc.
- yield loss and cumulative yield associated with each of the specified test bins
- The overall lot yield is defined as the ratio of the total number of good devices to the total number of devices tested.
7- Data Visualization Tools
- Lot Summaries
- Lot summaries also list test categories and the percentage of devices that failed each category. A simplified lot summary includes yields for a variety of test categories:
Lot Number:       122336
Device Number:    TLC1701FN
Operator Number:  42
Test Program:     F779302.load
Devices Tested:   10233
Passing Devices:  9392
Test Yield:       91.78%

Bin  Test Category      Devices Tested  Failures  Yield Loss  Cum. Yield
------------------------------------------------------------------------
 7   Continuity              10233         176       1.72%      98.28%
 2   Supply Currents         10057          82       0.80%      97.48%
 3   Digital Patterns         9975         107       1.05%      96.43%
 4   RECV Channel AC          9868         445       4.35%      92.08%
 5   XMIT Channel AC          9423          31       0.30%      91.78%
8- Data Visualization Tools
- Lot Summaries
- Since the test program halts after the first DUT failure, the earlier tests will tend to cause more yield loss than later ones, simply because fewer DUTs proceed to the later tests. The earlier failures mask any failures that would have occurred in later tests.
- We can improve our overall production throughput by moving the more commonly failed tests toward the beginning of the test program. Average test time is reduced because we don't waste time performing tests that seldom fail only to lose yield to tests that often fail.
- When rearranging test programs based on yield loss, we also have to consider the test time that each test consumes. For example, the RECV channel tests may take 800 milliseconds, while the digital pattern tests take only 50 milliseconds. The digital pattern test is more efficient at identifying failing DUTs since it takes so little test time.
9- Data Visualization Tools
- Wafer Maps
- A wafer map displays the location of failing die on each probed wafer in a production lot. Unlike lot summaries, which only show the number of devices that fail each test category, wafer maps show the physical distribution of each failure category.
- This is very useful in locating areas of the wafer where a particular problem is most prevalent.
- Continuity failures are most severe at the upper edge of the wafer. Therefore, we might examine the bond pad quality along the upper edge of the wafer to see if we can find out why the continuity test fails most often in this area.
- RECV channel failures are most severe near the center of the wafer. This kind of ring-like pattern often indicates a processing problem, such as uneven diffusion of dopants into the semiconductor surface.
10- Data Visualization Tools
- Wafer Maps
11- Data Visualization Tools
- Shmoo Plots
- Functional Shmoo Plot
- Displays passing / non-passing results at each combination of test conditions
- Parametric Shmoo Plot
- Displays an analog measurement at each combination of test conditions
- Three-Dimensional Shmoo Plot
- Displays an analog measurement versus two test conditions
12- Data Visualization Tools
- Functional Shmoo Plot
13- Data Visualization Tools
- Parametric Shmoo Plot
14- Statistical Analysis
- Mean (Average) and Standard Deviation (Variance)
- One of the most useful items listed in a histogram is the population statistics. In statistics, the term population refers to a set of measured or calculated values of x(n). The mean μ and standard deviation σ are the most important of the population statistics. The mean represents the most probable value of a measured variable. It corresponds to the average value of the population.
- In many texts, the terms x̄ (x-bar) and s are used to denote the mean and standard deviation calculated from a finite population of values, while μ and σ are used to denote the theoretical limits of the mean and standard deviation as the population size extends to infinity. For small populations, the values of x̄ and s only approximate μ and σ.
15- Statistical Analysis
- Mean (Average) and Standard Deviation (Variance)
- The standard deviation σ, on the other hand, is a measure of the dispersion or uncertainty of the measured quantity about the mean value, μ.
- If the values tend to be concentrated near the mean, the standard deviation is small.
- If the values tend to be distributed far from the mean, the standard deviation is large.
16- Statistical Analysis
- Mean (Average) and Standard Deviation (Variance)
- There is an interesting relationship between a sampled signal's DC offset and RMS voltage and the population statistics of its samples. Assuming all frequency components of the sample set are coherent, the mean of the signal samples is equal to the signal's DC offset.
- Less obvious is the fact that the standard deviation of the samples is equal to the signal's RMS value, excluding the DC offset. The RMS of a sample set is calculated as the square root of the mean of the squares of the samples, as summarized below.
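In equation form, for a coherent sample set x(n) of N samples:

$$ \mu = \frac{1}{N}\sum_{n=0}^{N-1} x(n) = V_{DC}, \qquad \sigma = \sqrt{\frac{1}{N}\sum_{n=0}^{N-1}\bigl[x(n)-\mu\bigr]^{2}} = \mathrm{RMS}_{AC}, \qquad \mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{n=0}^{N-1} x^{2}(n)} $$

so that the total RMS, DC offset, and AC RMS are related by RMS² = μ² + σ².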
17- Statistical Analysis
- Probability Density Functions (PDF)
- According to the central limit theorem, the distribution of a set of random variables, each of which is equal to a summation of a large number (N > 30) of statistically independent random values, tends toward a Gaussian distribution.
- As N becomes very large, the distribution of the random variables becomes Gaussian, whether or not the individual random values themselves exhibit a Gaussian distribution.
- The variations in a typical mixed-signal measurement are caused by a summation of many different random sources of noise and crosstalk in both the device and the tester instruments.
- As a result, many mixed-signal measurements exhibit the common Gaussian distribution.
18- Statistical Analysis
- Probability Density Functions (PDF)
19- Statistical Analysis
- Probability Density Functions (PDF)
- The probability P that a randomly selected value X will fall between a and b is given by the equation below.
- This equation cannot be solved in closed form, so we must switch to applied statistics or tables to obtain values for our probability distributions.
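For a Gaussian distribution with mean μ and standard deviation σ, the referenced equation is the integral of the Gaussian PDF:

$$ P(a \le X \le b) = \int_{a}^{b} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^{2}/2\sigma^{2}}\, dx $$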
20- Statistical Analysis
- Cumulative Distribution Functions
- The cumulative distribution function (CDF) gives the probability that a randomly selected value in a population will be less than a particular value b.
- For a Gaussian distribution, the CDF again has no closed-form solution (see the expression below).
[Figure: Gaussian CDF. P(X < b) rises from 0 toward 1 as b sweeps across the distribution, passing through 0.5 at b = μ; markers shown at μ - 1.0σ and μ + 1.0σ.]
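Written out from the PDF above:

$$ P(X < b) = \int_{-\infty}^{b} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^{2}/2\sigma^{2}}\, dx = \Phi\!\left(\frac{b-\mu}{\sigma}\right) $$

where Φ is the standard normal CDF, whose values are tabulated in statistics texts.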
21- Statistical Analysis
- Non-Gaussian Distributions
- Uniform Distribution
- Seen in the output of C's rand() function
- Also seen in the quantization error of ADCs
[Figure: Uniform Distribution PDF]
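For reference, a value uniformly distributed over [a, b] has the PDF and moments

$$ f(x) = \frac{1}{b-a} \;\;(a \le x \le b), \qquad \mu = \frac{a+b}{2}, \qquad \sigma = \frac{b-a}{\sqrt{12}} $$

so ADC quantization error, which is uniform over one LSB of width q, has the familiar RMS value of q/√12.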
22- Statistical Analysis
- Guardbanding and Gaussian Statistics
- Guardbanding is an important technique for dealing with the uncertainty of each individual measurement in a test program.
- If a particular measurement is known to be accurate and repeatable with a worst-case uncertainty of ε, then the final test limits should be tightened by ε to make sure no bad devices are shipped to the customer.
- In other words:
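$$ UTL = USL - \epsilon, \qquad LTL = LSL + \epsilon $$

where USL and LSL are the upper and lower specification limits, and UTL and LTL are the tightened upper and lower test limits.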
23- Statistical Analysis
- Guardbanding and Gaussian Statistics
- In practice, we need to set ε equal to 3 to 6 times the standard deviation of the measurement to account for measurement variability. This diagram shows a marginal device with an average (true) reading equal to the upper specification limit. The upper and lower specification limits (USL and LSL) have each been tightened by ε = 3σ. The tightened upper and lower test limits (UTL and LTL) reject marginal devices such as this, regardless of the magnitude of the measurement error.
24- Statistical Analysis
- Guardbanding and Gaussian Statistics
- If a device is well-designed and a particular measurement is sufficiently repeatable, then there will be few failures resulting from that measurement. But if the distribution of measurements from a production lot is skewed so that the average measurement is close to one of the test limits, then production yields are likely to fall. In other words, more good devices will fall within the guardband region and be disqualified.
- The only way the test engineer can minimize the required guardbands is to improve the repeatability and accuracy of the test, but this requires longer test times. At some point, the test time cost of a more repeatable measurement outweighs the cost of throwing away a few good devices.
25- Statistical Analysis
- Guardbanding and Gaussian Statistics
- The standard deviation of a test result calculated as the average of N values from a statistical population is given by the equation below.
- So, for example, if we want to reduce the value of a measurement's standard deviation σ by a factor of two, we have to average the measurement four times. This gives rise to an unfortunate square-law tradeoff between test time and repeatability.
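The referenced averaging relationship is:

$$ \sigma_{\text{avg}} = \frac{\sigma}{\sqrt{N}} $$

so halving the standard deviation requires N = 4 averages, and a tenfold reduction requires N = 100.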
26- Problem
- How many times would we have to average a DC measurement with a 27 mV standard deviation to achieve 6σ guardbands of 10 mV? If each measurement takes 5 ms, what would be the total test time for the averaged measurement?
- Solution
- The value of σ_avg must be equal to 10 mV divided by 6 to achieve 6σ guardbands. N is then computed below.
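From σ_avg = σ/√N:

$$ \sigma_{\text{avg}} = \frac{10\ \text{mV}}{6} \approx 1.67\ \text{mV}, \qquad N = \left(\frac{\sigma}{\sigma_{\text{avg}}}\right)^{2} = \left(\frac{27\ \text{mV}}{1.67\ \text{mV}}\right)^{2} \approx 262 $$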
ms, or 1.31 seconds. This is clearly
unacceptable for production testing of a DC
offset. The 27 mV standard deviation must be
reduced through an improvement in the DIB
hardware or the DUT design.
27- Statistical Analysis
- Effects of Measurement Variability on Test Yield
28- Statistical Analysis
- Effects of Measurement Variability on Test Yield
29- Statistical Analysis
- Effects of Measurement Variability on Test Yield
30- Statistical Analysis
- Effects of Measurement Variability on Test Yield
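These slides depict how measurement variability pushes readings across the test limits, causing good devices to fail (yield loss) and marginal bad devices to pass (escapes). A minimal Monte Carlo sketch of that effect, using hypothetical numbers not taken from the slides (Gaussian DUT variation and Gaussian measurement noise are assumed):

import numpy as np

rng = np.random.default_rng(0)
N = 100_000                            # number of simulated DUTs
true_vals = rng.normal(0.0, 33.0, N)   # true DUT parameter values, in mV
LSL, USL = -100.0, 100.0               # hypothetical specification limits, in mV

truly_good = (true_vals > LSL) & (true_vals < USL)

for sigma_meas in [0.0, 5.0, 15.0, 30.0]:
    # Each DUT is measured once, with additive Gaussian measurement noise
    measured = true_vals + rng.normal(0.0, sigma_meas, N)
    passed = (measured > LSL) & (measured < USL)
    # Good devices rejected because noise pushed the reading past a limit
    false_fail = np.mean(truly_good & ~passed) * 100
    # Bad devices accepted because noise pulled the reading inside a limit
    escapes = np.mean(~truly_good & passed) * 100
    yield_pct = np.mean(passed) * 100
    print(f"sigma_meas={sigma_meas:5.1f} mV  yield={yield_pct:6.2f}%  "
          f"false fails={false_fail:.3f}%  escapes={escapes:.3f}%")

As sigma_meas grows, the observed yield drops below the true yield and the escape rate rises, which is exactly why guardbands are needed.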
31- Statistical Analysis
- Effects of Reproducibility and Process Variation on Yield
- Factors affecting DUT parameter variation include measurement repeatability, measurement reproducibility, and the stability of the process used to manufacture the DUT.
- So far we have examined only the effects of measurement repeatability on yield, but the equations describing yield loss due to measurement variability are equally applicable to the total variability of DUT parameters.
- Inaccuracies due to poor tester-to-tester correlation, day-to-day correlation, or DIB-to-DIB correlation appear as reproducibility errors.
32- Statistical Analysis
- Effects of Reproducibility and Process Variation on Yield
- Reproducibility errors add to the yield loss caused by repeatability errors. To accurately predict yield loss caused by tester inaccuracy, we have to include both repeatability errors and reproducibility errors. If we collect averaged measurements using multiple testers and multiple DIBs, and repeat the measurements over multiple days, we can calculate the mean and standard deviation of the reproducibility errors for each test. We can then combine the standard deviations due to repeatability and reproducibility using the equation below.
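Assuming the repeatability and reproducibility errors are statistically independent, the standard deviations combine in root-sum-square fashion:

$$ \sigma_{\text{repeat+reproduce}} = \sqrt{\sigma_{\text{repeatability}}^{2} + \sigma_{\text{reproducibility}}^{2}} $$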
33- Statistical Analysis
- Effects of Reproducibility and Process Variation on Yield
- The variability of the actual DUT performance from DUT to DUT and from lot to lot also contributes to yield loss. Thus the overall variability can be described using an overall standard deviation, calculated using an equation incorporating all sources of variation (see below).
- Since σ_total ultimately determines our overall production yield, it should be made as small as possible to minimize yield loss. The test engineer must try to minimize the first two standard deviations (repeatability and reproducibility). The design engineer and process engineer should try to reduce the third (process variation).
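Extending the root-sum-square combination to include process variation:

$$ \sigma_{\text{total}} = \sqrt{\sigma_{\text{repeatability}}^{2} + \sigma_{\text{reproducibility}}^{2} + \sigma_{\text{process}}^{2}} $$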
34- Problem
- A six-month yield study finds that the total standard deviation of a particular DC offset measurement is 37 mV across multiple lots, multiple testers, multiple DIB boards, etc. The standard deviation of the measurement repeatability is found to be 15 mV, while the standard deviation of the reproducibility is found to be 7 mV. What is the standard deviation of the actual DUT-to-DUT offset variability, excluding tester repeatability errors and reproducibility errors? If we could test this device using perfectly accurate, repeatable test equipment, what would be the total yield loss due to this parameter, assuming an average value of 2.430 V and test limits of 2.5 V ± 100 mV?
35- Solution
- Rearranging the equation for σ_total gives σ_process = √(37² − 15² − 7²) mV ≈ 33 mV. Thus, even if we could test every device with perfect accuracy and no repeatability errors, we would see a DUT-to-DUT variability of σ ≈ 33 mV. The value of μ is equal to 2.430 V, so our overall yield loss for this measurement is computed below.
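Using the Gaussian CDF at the lower test limit of 2.400 V (the upper limit at 2.600 V is more than 5σ above the mean, so it contributes negligible additional loss):

$$ P(X < 2.400\ \text{V}) = \Phi\!\left(\frac{2.400 - 2.430}{0.033}\right) = \Phi(-0.91) \approx 0.18 $$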
to this one parameter, due to the fact that the
DUT-to-DUT variability is too high to tolerate an
average value that is only 30 mV from the lower
test limit.
36- Statistical Analysis
- Effects of Reproducibility and Process Variation on Yield
- The probability that a particular device will pass all tests in a test program is equal to the product of the passing probabilities of each individual test. In other words, if the values P1, P2, P3, ..., Pn represent the probabilities that a particular DUT will pass each of the n individual tests in a test program, then the probability that the DUT will pass all tests is equal to the product shown below.
- For example, if each of 200 tests has a 2% chance of failure, then each test has only a 98% chance of passing. The yield will therefore be (0.98)^200, or about 1.8%.
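In equation form:

$$ P_{\text{pass all}} = \prod_{i=1}^{n} P_i $$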
37- Problem
- A particular test program performs 857 tests, most of which cause little or no yield loss. Five measurements account for most of the yield loss. Using a lot summary and a continue-on-fail test process, the yield loss due to each measurement is found to be
- Test 1: 1%, Test 2: 5%, Test 3: 2.3%, Test 4: 7%, Test 5: 1.5%, all other tests combined: 0.5%
- What is the overall yield of this lot of material?
38- Solution
- The probability of passing each test is equal to 1 minus the yield loss produced by that test. The values of P1, P2, P3, ..., P5 are therefore
- P1 = 99%, P2 = 95%, P3 = 97.7%, P4 = 93%, P5 = 98.5%
- If we consider all other tests to be a sixth test having a yield loss of 0.5%, we get a sixth probability
- P6 = 99.5%
- Thus, we expect an overall test yield of 83.75%.
39- Statistical Process Control (SPC)
- Goals of SPC
- SPC provides a means of identifying device
parameters that exhibit excessive variations over
time. It does not identify the root cause of the
variations, but it tells us when to look for
problems. Once an unstable parameter has been
identified using SPC, the engineering and
manufacturing team searches for the root cause of
the instability. Hopefully, the excessive
variations can be reduced or eliminated through a
design modification or through an improvement in
one of the many manufacturing steps. By
improving the stability of each tested parameter,
the manufacturing process is brought under
control, enhancing the inherent quality of the
product.
40- Statistical Process Control (SPC)
- Goals of SPC
- Once the stability of the distributions has been
verified, the parameter might only be measured
for every tenth device or every hundredth device
in production. If the mean and standard
deviation of the limited sample set stays within
tolerable limits, then we can be confident that
the manufacturing process itself is stable. SPC
thus allows statistical sampling of highly stable
parameters, dramatically reducing testing costs.
41- Statistical Process Control (SPC)
- Goals of SPC
42- Statistical Process Control (SPC)
- Six Sigma Quality
- If successful, the SPC process results in an extremely small percentage of parametric test failures. The ultimate goal of SPC is to achieve six-sigma quality standards for each specified device parameter.
- A parameter is said to meet six-sigma quality standards if the center of its statistical distribution is at least 6σ away from the upper and lower test limits.
- Six-sigma quality standards result in a failure rate of only 3.4 parts per million (ppm). Therefore, the chance of an untested device failing a six-sigma parameter is extremely low.
- This is the reason we can often eliminate DUT-by-DUT testing of six-sigma parameters.
43- Statistical Process Control (SPC)
- Six Sigma Quality
44- Statistical Process Control (SPC)
- Process Capability Cp and Cpk
- Process capability is the inherent variation of the process used to manufacture a product. It is defined as the ±3σ variation of a parameter around its mean value. For example, if a given parameter exhibits a 10 mV standard deviation from DUT to DUT over a period of time, then the process capability for this parameter is defined as 60 mV.
45- Statistical Process Control (SPC)
- Process Capability Cp and Cpk
- The centering and variation of a parameter are defined using two process stability metrics, Cp and Cpk. The process potential index, Cp, is the ratio between the range of passing values and the process capability (see the equation below).
- Cp indicates how tightly the statistical distribution of measurements is packed, relative to the range of passing values. A very large Cp value indicates a process that is stable enough to give high yield and high quality, while a Cp less than 2 indicates a process stability problem. It is impossible to achieve six-sigma quality with a Cp less than 2, even if the parameter is perfectly centered. For this reason, six-sigma quality standards dictate that all measured parameters must maintain a Cp of 2 or greater in production.
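The referenced ratio is:

$$ C_p = \frac{UTL - LTL}{6\sigma} $$

where UTL − LTL is the range of passing values and 6σ is the ±3σ process capability.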
46- Statistical Process Control (SPC)
- Process Capability Cp and Cpk
- The process capability index, Cpk, measures the process capability with respect to centering between the test limits (see the equations below), where
- T = specification target (ideal measured value)
- μ = average measured value
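A common formulation, consistent with the T and μ definitions above (the slide's original equations were not preserved, so this is a reconstruction):

$$ C_{pk} = C_p\,(1 - k), \qquad k = \frac{|T - \mu|}{(UTL - LTL)/2} $$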
47- Problem
- The values of an AC gain measurement are
collected from a large sample of the DUTs in a
production lot. The ideal measured value is 1V/V
while the average reading is 0.991 V/V and the
upper and lower test limits are 1.050 V/V and
0.950 V/V respectively. The standard deviation
is found to be 0.0023 V/V. What is the process
capability and the values of Cp and Cpk for this
lot? Does this lot meet six-sigma quality
standards?
48- Solution
- The process capability is equal to 6σ, or 0.0138 V/V. The values are Cp = (1.050 − 0.950)/0.0138 ≈ 7.2 and, with k = |1.000 − 0.991|/0.050 = 0.18, Cpk = Cp(1 − k) ≈ 5.9.
- This parameter meets six-sigma quality requirements, since the values of Cp and Cpk are both greater than 2.
49- Statistical Process Control (SPC)
- Gauge Repeatability and Reproducibility
- As mentioned previously in this chapter, a measured parameter's variation is partially due to variations in the materials and the process used to fabricate the device, and partially due to the tester's repeatability errors and reproducibility errors. In the language of SPC, the tester is known as a gauge. Before we can apply SPC to a manufacturing process, we first need to verify the accuracy, repeatability, and reproducibility of the gauge. Once the quality of the testing process has been established, the test data collected during production can be continuously monitored to verify a stable manufacturing process.
50- Statistical Process Control (SPC)
- Gauge Repeatability and Reproducibility
- Gauge repeatability and reproducibility (GRR) is evaluated using a metric called measurement Cp. We collect repeatability data from a single DUT using multiple testers and different DIBs over a period of days or weeks. The composite sample set represents the combination of tester repeatability errors and reproducibility errors.
- Using the composite mean and standard deviation, we calculate the measurement Cp.
- The gauge repeatability and reproducibility percentage (precision-to-tolerance ratio) is defined below.
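Written to be consistent with the table on the next slide, where each GRR percentage is the reciprocal of the measurement Cp:

$$ \text{measurement } C_p = \frac{UTL - LTL}{6\sigma_{\text{measurement}}}, \qquad GRR = \frac{100\%}{\text{measurement } C_p} $$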
51- Statistical Process Control (SPC)
- Gauge Repeatability and Reproducibility

Measurement Cp    GRR    Rating
      1          100%    Unacceptable
      3           33%    Unacceptable
      5           20%    Marginal
     10           10%    Acceptable
     50            2%    Good
    100            1%    Excellent
52- Statistical Process Control (SPC)
- Pareto Charts
- A Pareto chart is a graph of values in ascending
or descending order of importance. Pareto charts
help us identify the most significant factors in
a sea of data. For example, we may wish to
concentrate our process improvement efforts on
the ten parameters that have the lowest Cpk
values. We can plot the value of Cpk for every
parameter in a test program, starting with the
lowest and progressing toward the highest. If
we have hundreds of tests, this technique allows
us to quickly isolate the tests having the worst
centering and variability.
53- Statistical Process Control (SPC)
- Pareto Charts
54- Statistical Process Control (SPC)
- Scatter Plots
- Once it has been determined that a problem
exists, it is often useful to investigate
suspected cause-and-effect relationships. The
scatter plot is a very useful tool for this
purpose.
55- Statistical Process Control (SPC)
- Scatter Plots
- If all the points in a scatter plot form a line,
then there is a strong correlation between the
factors. If they are randomly placed throughout
the chart, then there is no correlation. As the
example scatter plot shows, the threshold voltage
and distortion exhibit a fairly strong
correlation. The engineering team would then
know that the distortion parameter might be
stabilized by stabilizing the transistor
threshold voltage.
56- Statistical Process Control (SPC)
- Control Charts
- In addition to monitoring the Cp and Cpk of
critical parameters, we can also monitor the
stability of a process using control charts. A
control chart is a graph of parameter stability
over time. An effective SPC implementation
depends in large part on selecting the
appropriate critical parameters to monitor and
then choosing an appropriate set of control
charts. Control charts are the mechanism by which
we determine when the quality metric of interest
is drifting out of control.
57- Statistical Process Control (SPC)
- Control Charts
- For example, we may choose to monitor the mean and range (range = maximum reading − minimum reading) of a particular parameter for each production lot. We can track the fluctuations in these mean and range values over time, creating an X-Bar control chart and a range control chart. We then define upper and lower control limits for each chart.
59- Summary
- There are literally hundreds if not thousands of ways to view and process data gathered during the production testing process. In this chapter, we have examined only a few of the more common data displays, such as the datalog, wafer map, scatter plot, and histogram. Using statistical analysis, we can predict the effects of a parameter's variation on the overall test yield of a product. We can also use statistical analysis to evaluate the repeatability and reproducibility of the measurement equipment itself.
60- Summary
- Statistical process control not only allows us to evaluate the quality of the process, including the test and measurement equipment, but also tells us when the manufacturing process is not stable. We can then work to fix or improve the manufacturing process to bring it back under control.