PLANNING MEASUREMENTS: Precision and Accuracy
1
PLANNING MEASUREMENTS: Precision and Accuracy
  • Patrick S. Romano, MD, MPH
  • Professor of Medicine and Pediatrics

2
Types of Variables
  • Continuous (e.g., blood pressure, weight): maximum
    information and statistical power.
  • Discrete/quasi-continuous (e.g., age, cigarettes): similar
    properties to continuous variables, but may require special
    count-data methods.
  • Ordinal (e.g., pain score): may not have ratio/interval
    characteristics.
  • Nominal/categorical (e.g., sex, race, death): limits options
    for statistical analysis. Includes dichotomous variables
    (continuous or ordinal variables can be dichotomized).
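Dichotomizing a continuous variable discards information. A small simulation can illustrate the cost (the effect size, sample size, and seed below are arbitrary assumptions, not from the slides): splitting a continuous predictor at its median attenuates its observed correlation with the outcome.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated continuous predictor and outcome (effect size 0.5 is arbitrary)
x = rng.normal(size=5_000)
y = 0.5 * x + rng.normal(size=5_000)

# Median split: all within-half variation in x is thrown away
x_dich = (x > np.median(x)).astype(float)

r_cont = np.corrcoef(x, y)[0, 1]
r_dich = np.corrcoef(x_dich, y)[0, 1]
print(round(r_cont, 2), round(r_dich, 2))  # dichotomized correlation is smaller
```

The attenuated correlation translates directly into lost statistical power, which is why the slides advise avoiding dichotomization where possible.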

3
Deciding What to Measure
  • 1. Research hypothesis or question
  • Will a variable help you answer your research
    question?
  • Is it the predictor of interest, a potential
    confounder, or an effect modifier?
  • 2. Availability of data
  • 3. Cost of collection (resource efficiency):
    when in doubt, cut it out.

4
Deciding What to Measure (cont)
  • 4. Type of variable (avoid dichotomizing if
    possible)
  • 5. Adequate variability in the distribution
    (variance)
  • 6. Ability to detect relevant differences in the
    phenomenon of interest (sensitivity)
  • 7. Precision or reliability (random error)
  • 8. Accuracy or validity (systematic error)

5
Reliability / Precision
Reliability reflects the consistency or
dependability of a set of measurements. Do my
measurements repeatedly and consistently yield
the same results under similar conditions?
A precise measurement has less random error than
an imprecise measurement. Greater precision
enhances statistical power.
6
 Reliability / Precision (cont)
  • There are three sources of imprecision, or random error:
  • 1. Observer variability
  • 2. Biological (subject) variability
  • 3. Instrument variability

7
Measuring Reliability/Precision
1. Internal consistency: the concordance among
different variables designed to measure the same
basic characteristic. This concept applies
especially to composite measures intended to
summarize various components of a complex,
multi-dimensional phenomenon (e.g., functional
status, attitudes, knowledge).
8
Measuring Reliability/Precision (cont)
Split-half reliability: correlation coefficient
between summary scores on the two halves of a
randomly split instrument.
Item-total correlations: correlation coefficient
between each item and the composite score. Items
with low item-total correlations should be
separated or deleted.
Cronbach's alpha (Kuder-Richardson): a summary
measure of correlation among all possible split
halves examined simultaneously (ideally >0.7, but
>0.95 suggests redundancy). Recalculate n times,
each time omitting one item, to identify any that
compromise reliability.
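As a sketch, Cronbach's alpha can be computed directly from an n-subjects-by-k-items score matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total score). The function name and the example scores below are hypothetical.

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha from a list of subjects, each a list of k item scores."""
    k = len(scores[0])
    # Sample variance of each item across subjects
    item_vars = [variance(col) for col in zip(*scores)]
    # Sample variance of the total (composite) score
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical scores: 4 subjects answering 3 items on the same scale
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 5]]
print(round(cronbach_alpha(scores), 2))  # items move together, so alpha is high
```

Re-running the function k times, each time with one item's column removed, implements the leave-one-out check the slide describes.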
11
Measuring Reliability/Precision (cont)
2. Test-retest consistency: concordance among
repeated measures on the same patients collected
at two or more times. The interval must be
carefully selected to minimize reactivity and
recall on the one hand, and true change on the
other. Compute the Pearson correlation
coefficient for continuous variables; the kappa
score for nominal variables.
12
Measuring Reliability/Precision (cont)
3. Interrater consistency: concordance among two
or more measures on the same patients collected
by different observers. For continuous/quasi-continuous
variables, compute the Pearson correlation or the
Intraclass Correlation Coefficient (ICC),
indicating the proportion of variance
attributable to subjects (vs. observers vs.
random error). For nominal variables, compute the
kappa score:

    kappa = (% observed agreement - % expected agreement) / (100% - % expected agreement)
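A minimal sketch of the kappa score for two raters, following the formula (observed agreement minus chance-expected agreement) over (1 minus chance-expected agreement). The function name and the four example ratings are hypothetical.

```python
def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' nominal ratings."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    expected = sum((r1.count(c) / n) * (r2.count(c) / n)
                   for c in set(r1) | set(r2))
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of 4 patients by two observers
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))
```

Here the raters agree on 3 of 4 patients (75%), but half of that agreement is expected by chance given their marginal rates, so kappa is well below the raw agreement.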
14
Enhancing Reliability/Precision
1. Develop an operations manual. All
measurement methods should be standardized and
explicitly defined and described in writing. For
self-administered surveys, instructions should be
short, clear, and easily understood.
2. Train, support, and monitor data collectors.
All data collectors should undergo the same
training, have access to the same resources for
answering questions, and be equally subject to
monitoring and disciplinary action.
15
Enhancing Reliability/Precision (cont)
3. Minimize the number of data collectors and the
number of different environments in which data
are collected. A small number of full-time data
collectors is better than a large number of
part-time data collectors. One carefully
controlled setting for data collection is better
than many.
16
Enhancing Reliability/Precision (cont)
4. Pretest all survey instruments and measurement
devices. You may need to refine measurement
systems to improve precision. Correct
ambiguities, improve vocabulary, etc. Increase
true variance by eliminating ceiling and floor
effects.
5. Automate data collection. Automated systems
may improve reliability.
17
Enhancing Reliability/Precision (cont)
6. Collect repeated measures. Three blood
pressure measurements are more reliable than one.
Three questions on satisfaction are more
reliable than one. Different devices/survey
instruments may be used to measure the same
concept.
7. Pay attention to the respondent's condition
and circumstances. If a respondent is emotionally
distressed or hurried, you may need to reschedule.
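The blood pressure example above can be illustrated with a small simulation (the true value, noise level, sample size, and seed are arbitrary assumptions): averaging three readings shrinks random measurement error by roughly the square root of 3.

```python
import numpy as np

rng = np.random.default_rng(0)
true_bp = 120.0   # hypothetical true systolic pressure (mmHg)
sigma = 8.0       # assumed random measurement error (mmHg)
n = 10_000        # simulated patients

# One reading per patient vs. the mean of three readings per patient
single = true_bp + rng.normal(0, sigma, size=n)
triple = true_bp + rng.normal(0, sigma, size=(n, 3)).mean(axis=1)

# Random error shrinks by about sqrt(3) when three readings are averaged
print(round(single.std() / triple.std(), 1))
```

The same square-root-of-k logic is why three satisfaction questions summarized into one score are more reliable than a single question.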
18
Accuracy or Validity
Validity is the degree to which the results of
your measurement correspond to the true state of
the phenomenon being measured. Do my
measurements actually measure what I think they
do? An invalid measurement has systematic error,
which may bias your entire analysis.
19
Accuracy or Validity (cont)
There are three sources of inaccuracy, or
systematic error:
1. Observer bias: a consistent distortion that
results from the observer's methods (e.g., desire
to show an effect, altered perception).
2. Subject bias: a consistent distortion that
results from the subject's actions (e.g., social
desirability, fear of consequences, biased
recall, desire to improve, acquiescence).
3. Instrument bias
20
Measuring Accuracy/Validity
  • 1. Content validity
  • The degree to which a measure makes intuitive
    sense, or adequately samples all domains of the
    concept of interest.
  • Face validity (investigator)
  • Consensual validity (panel of experts)

21
Measuring Accuracy/Validity (cont)
  • 2. Criterion (or concurrent) validity
    (sensitivity, specificity)
  • The degree to which one measure agrees with
    another approach to measuring the same
    characteristic.
  • Implies the availability of an accepted gold
    standard (serum vs. fingerstick glucose).
  • Note risk of criterion contamination if the
    test measure is used to select individuals for
    the gold standard measurement.
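Criterion validity for a dichotomous test is usually summarized as sensitivity and specificity against the gold standard. A minimal sketch (function name and the six example results are hypothetical, in the spirit of the fingerstick-vs-serum glucose example):

```python
def sensitivity_specificity(test, gold):
    """Agreement of a dichotomous test with a gold standard (1 = positive)."""
    pairs = list(zip(test, gold))
    tp = sum(1 for t, g in pairs if t == 1 and g == 1)  # true positives
    fn = sum(1 for t, g in pairs if t == 0 and g == 1)  # false negatives
    tn = sum(1 for t, g in pairs if t == 0 and g == 0)  # true negatives
    fp = sum(1 for t, g in pairs if t == 1 and g == 0)  # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical test results vs. gold standard for 6 patients
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1, 0], [1, 1, 1, 0, 0, 0])
print(sens, spec)
```

Note that criterion contamination, as warned above, would arise if these test results had been used to decide which patients received the gold standard measurement in the first place.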

23
Measuring Accuracy/Validity (cont)
3. Predictive validity: the degree to which a
measure successfully predicts an outcome of
interest. Does viral load predict outcomes of
HIV infection? Do MCAT scores predict graduation
from medical school? Note risk of sample
truncation if the initial measurement is used to
determine eligibility for follow-up.
24
Measuring Accuracy/Validity (cont)
4. Construct validity: the degree to which a
variable relates to other variables according to
a construct that is based on your
conceptualization of the phenomenon (or prior
literature). Are the predicted relationships
supported by empirical analyses? Be careful:
your construct may be wrong, or your sample may
be restricted to subjects in whom the
relationship is weak!
28
Measuring Accuracy/Validity (cont)
4. Construct validity (cont)
Convergent validity (correlation with similar
concepts). Discriminant validity (lack of
correlation with dissimilar concepts).
Multitrait-multimethod matrix: different measures
of the same trait (monotrait-heteromethod) should
be better correlated than same-method measures of
different traits (heterotrait-monomethod).
29
Multitrait-Multimethod Matrix: Example from Streiner and Norman (1989, 1995)
(SDL = self-directed learning; each trait measured by a rater and by an exam)

                     SDL/Rater   SDL/Exam   Knowledge/Rater   Knowledge/Exam
SDL, Rater             0.53
SDL, Exam              0.42        0.79
Knowledge, Rater       0.18        0.17          0.58
Knowledge, Exam        0.15        0.23          0.49              0.88
30
Enhancing Accuracy/Validity
Note that lack of reliability may obscure
validity by increasing random error.
  • 1. Develop an operations manual
  • 2. Train, support, and monitor data collectors
  • 3. Automate/systematize data collection to reduce
    observer bias
  • 4. Take unobtrusive measurements to reduce subject
    bias

31
Enhancing Accuracy/Validity (cont)
  • 5. Single, double, or triple blinding
  • Blinding the subject (single) may avoid biased
    reporting
  • Blinding the observer (double) may avoid biased
    ascertainment
  • Blinding the analyst (triple) may avoid biased
    analysis.
  • 6. Check accuracy of instruments (including surveys)
    and recalibrate as needed.

32
Thank you!