Title: Measurement Issues
Measurement Issues
- General steps
- Determine concept
- Decide best way to measure
- What indicators are available
- Select intermediate, alternate, or indirect measures
Measurement Issues
- General steps
- Consider limitations of measures selected
- Collect or secure info/data
- Summarize findings in writing
- What is the relation between concepts, variables, instruments, and measures?
Concepts
- A program is built on a conceptual basis of why people behave the way they do
- Why do you think people behave the way they do?
- Think of food and nutrition issues
Variables
- A theory has variables
- Variables define concepts
- Theory states how the variables interact or are related
Variables
- Variables of the theory are what you measure
- Variables are the verbal or written abstractions of the ideas that exist in the mind
- Why should an intervention be based on a theory?
Why use theory?
- Know what you are to address in the intervention
- Makes evaluation easier
- Know what to measure to evaluate
- Figure 6.1 A simple social learning theory model for reducing salt in the diet
- Comes next
Fig. 6.1 Social learning theory
- Need measurements and instruments to assess changes in the variables of interest
Instruments
- Something that produces a measure of an object
- A series of questions to measure the variable or concept
- Includes instructions
Measures
- The numbers that come from the person answering questions on the instrument
- Figure 6.2 Relation among models, variables, measures, and an instrument
- Comes next
Fig. 6.2 Relation among models, variables, measures, and an instrument
- Based on why you think people behave the way they do, list possible variables to consider measuring
- What might be variables of the social learning theory?
- What about variables that would verify whether a change has taken place?
- Figure 6.1 A simple social learning theory model for reducing salt in the diet
- Comes next
- See how the program links with the theory and what to measure
Fig. 6.1 Social learning theory
Reliability
- The extent to which an instrument will produce the same result (measure or score) if applied two or more different times
Reliability
- X = T + E
- X is the measure
- T is the true value
- E is random error
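As a rough illustration of X = T + E (a Python/NumPy sketch; the true score of 80 and the error SD of 5 are invented values, not from the text), repeated measures scatter around the true value because of random error, but their average stays close to T:

```python
import numpy as np

rng = np.random.default_rng(42)

T = 80.0                         # true score (illustrative value)
E = rng.normal(0, 5, size=1000)  # random error: mean 0, SD 5
X = T + E                        # observed measures: X = T + E

print(f"mean of X: {X.mean():.2f}")       # close to 80 -> random error does not bias the result
print(f"SD of X:   {X.std(ddof=1):.2f}")  # about 5 -> random error adds spread (lowers reliability)
```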
Reliability
- Measurement error reduces the ability to have reliable and valid results
Reliability
- Random error is all chance factors that confound the measurement
- Always present
- Affects reliability but doesn't bias results
Reliability
- Figure 6.5 Distribution of scores of multiple applications of a test with random error
- A is true score
- a is measure
Fig. 6.5 Distribution of scores of multiple applications of a test with random error
Distribution
- Can have the same mean with two different distributions
- Figure 6.6 next
Fig. 6.6 Two distributions of scores around the true mean
- Which distribution has less variability?
- Which distribution has less random error?
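A small numeric illustration (Python/NumPy sketch with made-up means and SDs) of two score distributions that share the same mean but differ in variability, i.e., in random error:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 100

narrow = rng.normal(true_mean, 3, size=500)   # less variability, less random error
wide   = rng.normal(true_mean, 12, size=500)  # more variability, more random error

for name, scores in [("narrow", narrow), ("wide", wide)]:
    print(f"{name}: mean={scores.mean():.1f}, SD={scores.std(ddof=1):.1f}")
```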
Sources of Random Error
- Day-to-day variability
- Confusing instructions
- Unclear instrument
- Sloppy data collector
Sources of Random Error
- Distracting environment
- Respondents
- Data-management error
- What can you do to reduce random error and increase reliability?
Variability in the Subject
- What you want to measure will vary from day to day and within the person
Variability in the Subject
- Intraindividual variability
- variability among the true scores within a person over time
- Figure 6.7 True activity scores (A, B, C) for 3 days with three measures (a, b, c) per day
- Comes next
Fig. 6.7 True activity (A, B, C) for 3 days with three measures (a, b, c) per day
Variability in the Subject
- Interindividual variability
- variability between each person in the sample
- Figure 6.8 Interindividual (A, X) and intraindividual (A1, A2, A3) variability for two people (A, X) in level of physical activity
- Comes next
Fig. 6.8 Interindividual (A, X) and intraindividual (A1, A2, A3) variability for two people (A, X) in level of physical activity
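The two kinds of variability can be seen in a short sketch (Python/NumPy; the activity scores for persons A and X are invented, loosely mirroring the labels in Fig. 6.8):

```python
import numpy as np

# Made-up daily activity scores (e.g., minutes of activity) for two people,
# three measures each -- loosely mirroring persons A and X in Fig. 6.8
person_A = np.array([55.0, 62.0, 58.0])   # A1, A2, A3
person_X = np.array([30.0, 35.0, 28.0])

# Intraindividual variability: spread of each person's own scores over time
print("within-person SD, A:", person_A.std(ddof=1).round(1))
print("within-person SD, X:", person_X.std(ddof=1).round(1))

# Interindividual variability: difference between the two people's average levels
print("between-person difference in means:",
      round(person_A.mean() - person_X.mean(), 1))
```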
Assessing Reliability
- Need to know the reliability of your instruments
- Reliability coefficient of 1 is highest, no error
- Reliability coefficient of 0 is lowest, all error
Factors of Reliability
- Type of instrument
- observer
- self-report
- Times instrument applied
- same time
- different time
- Figure 6.9 Types of reliability
- Comes next
Fig. 6.9 Types of reliability
Assessing Reliability
- Interobserver reliability
- have 2 different observers rate the same action at the same time
- reproducibility
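One simple way to quantify interobserver reliability is percent agreement between the two observers' ratings; here is a sketch with hypothetical 0/1 ratings (Python/NumPy; other statistics, such as Cohen's kappa or a correlation, are also used):

```python
import numpy as np

# Hypothetical ratings of the same 10 actions, scored at the same time
# by two observers (1 = behavior observed, 0 = not observed)
observer_1 = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
observer_2 = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])

agreement = (observer_1 == observer_2).mean()
print(f"percent agreement: {agreement:.0%}")  # 90% -> high interobserver reliability
```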
Assessing Reliability
- Intraobserver reliability
- 1 observer assesses the same person at two different times
- videotape the action and practice
Assessing Reliability
- Repeat method
- self-report or survey
- repeat the same item/question at 2 points in the survey
Assessing Reliability
- Internal consistency
- average inter-item correlation among items in an instrument that are cognitively related
Assessing Reliability
- Internal consistency
- Cronbach's alpha
- 0.70 or above is a good score
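A minimal sketch of how Cronbach's alpha can be computed (Python/NumPy; the cronbach_alpha helper and the 5 x 4 response matrix are illustrative, not from the text):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = cognitively related items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses (5 respondents x 4 related items, scored 1-5)
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")  # 0.70 or above is usually considered good
```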
Assessing Reliability
- Test-retest reliability (internal consistency method)
- same survey/test at 2 different times to the same person
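Test-retest reliability is often summarized as the correlation between the two administrations; a small sketch with hypothetical scores (Python/NumPy):

```python
import numpy as np

# Hypothetical scores from giving the same survey to the same 8 people twice
time_1 = np.array([10, 14, 9, 20, 15, 12, 18, 11])
time_2 = np.array([11, 13, 10, 19, 16, 12, 17, 12])

r = np.corrcoef(time_1, time_2)[0, 1]        # test-retest reliability coefficient
print(f"test-retest reliability: {r:.2f}")   # values near 1 indicate little random error
```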
Validity
- Degree to which an instrument measures what the evaluator wants it to measure
Bias
- Systematic error that produces a systematic difference between an obtained score and the true score
- Bias threatens validity
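Continuing the earlier X = T + E sketch (Python/NumPy; the bias of -10 and error SD of 5 are invented values), systematic error shifts the whole distribution away from the true score, which is what threatens validity:

```python
import numpy as np

rng = np.random.default_rng(1)

T = 80.0                                    # true score (illustrative)
random_error = rng.normal(0, 5, size=1000)  # mean 0: no bias
bias = -10.0                                # systematic error, e.g. a scale that always reads low

X_unbiased = T + random_error
X_biased = T + bias + random_error

print(f"mean without bias: {X_unbiased.mean():.1f}")  # ~80: scattered around the true score
print(f"mean with bias:    {X_biased.mean():.1f}")    # ~70: the whole distribution shifts away from T
```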
Bias
- Figure 6.10 Distribution of scores of multiple applications of a test with systematic error
- Comes next
Fig. 6.10 Distribution of scores of multiple applications of a test with systematic error
- What will bias do to your ability to make conclusions about your subjects?
- Figure 6.11 Effect of bias on conclusions
- Comes next
Fig. 6.11 Effect of bias on conclusions
Types of Validity
Face Validity
- Describes the extent to which an instrument appears to measure what it is supposed to measure
- How many vegetables did you eat yesterday?
Content Validity
- Extent to which an instrument is expected to cover several domains of the content
- Consult a group of experts
Criterion Validity
- How accurate is a less costly way to measure the variable compared to the valid and more expensive instrument?
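Criterion validity is commonly reported as the agreement (e.g., correlation) between the cheaper instrument and the criterion measure; a sketch with invented sodium-intake data (Python/NumPy; the questionnaire and 24-hour-urine values are hypothetical):

```python
import numpy as np

# Hypothetical sodium intake (mg/day) for 6 people:
# a low-cost food-frequency questionnaire vs. a more expensive criterion measure (24-hr urine)
questionnaire = np.array([2300, 3100, 1800, 2700, 3500, 2000])
criterion     = np.array([2500, 3300, 1900, 2600, 3700, 2100])

r = np.corrcoef(questionnaire, criterion)[0, 1]
print(f"criterion validity (correlation): {r:.2f}")  # closer to 1 -> cheaper measure tracks the criterion well
```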
What can lower validity?
- Guinea pig effect
- awareness of being tested
- Role selection
- awareness of being measured may make people feel they have to play a role
What can lower validity?
- Measurement as a change agent
- act of measurement could change future behavior
What can lower validity?
- Response sets
- respond in a predictable way that has nothing to do with the questions
What can lower validity?
- Interviewer effects
- characteristics of the interviewer affect the receptivity and answers of the respondent
What can lower validity?
- Population restrictions
- if people can't use the method of data collection, you can't generalize to others
- End of reliability and validity
- Questions
- Look at CNEP Survey