Title: Measurement, Reliability and Validity
Chapter 5
- Measurement, Reliability and Validity
2006 Prentice Hall, Salkind.
CHAPTER OVERVIEW
- The Measurement Process
- Levels of Measurement
- Reliability and Validity: Why They Are Very, Very Important
- Validity
- The Relationship Between Reliability and Validity
- A Closing (Very Important) Thought
THE MEASUREMENT PROCESS
- Two definitions
- Stevens: the assignment of numerals to objects or events according to rules
- The assignment of values to outcomes
- Chapter foci
- Levels of measurement
- Reliability and validity
LEVELS OF MEASUREMENT
- Variables are measured at one of these four levels
- Qualities of one level are characteristic of the next level up
- The more precise (higher) the level of measurement, the more accurate the measurement process
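To make the four levels concrete, here is a minimal sketch; the example variables and the "typical statistic" entries are illustrative assumptions, not from the slides.

```python
# Illustrative sketch: the four levels of measurement, an assumed example
# variable for each, and a typical summary statistic that level supports.
LEVELS_OF_MEASUREMENT = {
    "nominal":  {"example": "eye color",          "typical_statistic": "mode"},
    "ordinal":  {"example": "race finish rank",   "typical_statistic": "median"},
    "interval": {"example": "temperature (Celsius)", "typical_statistic": "mean"},
    "ratio":    {"example": "height (cm)",        "typical_statistic": "mean (ratios meaningful)"},
}

for level, info in LEVELS_OF_MEASUREMENT.items():
    print(f"{level:>8}: e.g. {info['example']:<22} -> {info['typical_statistic']}")
```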
NOMINAL SCALE
ORDINAL SCALE
INTERVAL SCALE
RATIO SCALE
CONTINUOUS VERSUS DISCRETE VARIABLES
- Continuous variables
- Values can range along a continuum
- E.g., height
- Discrete variables (categorical)
- Values are defined by category boundaries
- E.g., gender
WHAT IS ALL THE FUSS?
- Measurement should be as precise as possible
- In psychology, most variables are probably measured at the nominal or ordinal level
- But how a variable is measured can determine the level of precision
RELIABILITY AND VALIDITY
- Reliability: the tool is consistent
- Validity: the tool measures what it should
- Good assessment tools allow
- Rejection of null hypotheses
- OR
- Acceptance of research hypotheses
A CONCEPTUAL DEFINITION OF RELIABILITY
- Observed score
- Score actually observed
- Consists of two components
- True Score
- Error Score
A CONCEPTUAL DEFINITION OF RELIABILITY
- True score
- Perfect reflection of true value for individual
- Theoretical score
A CONCEPTUAL DEFINITION OF RELIABILITY
- Error score
- Difference between observed and true score
A CONCEPTUAL DEFINITION OF RELIABILITY
- Method error is due to characteristics of the test or testing situation
- Trait error is due to individual characteristics
- Conceptually, reliability = True Score / (True Score + Error Score)
- Reliability of the observed score becomes higher if error is reduced!!
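The formula can be checked with a small simulation. This is only a sketch under assumed values (normally distributed true scores with SD 10, independent error): reliability is the share of observed-score variance due to true scores, and it rises as error shrinks.

```python
import numpy as np

# Sketch: observed score = true score + error score (classical test theory).
# Reliability = true-score variance / observed-score variance,
# i.e. True / (True + Error) in variance terms.
rng = np.random.default_rng(0)
true_scores = rng.normal(loc=50, scale=10, size=5_000)   # assumed true-score spread

for error_sd in (10, 5, 1):                               # progressively less error
    observed = true_scores + rng.normal(scale=error_sd, size=true_scores.size)
    reliability = true_scores.var() / observed.var()
    print(f"error SD = {error_sd:>2}  ->  reliability = {reliability:.2f}")
# Output shows reliability approaching 1.0 as error is reduced.
```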
INCREASING RELIABILITY BY DECREASING ERROR
- Increase sample size
- Eliminate unclear questions
- Standardize testing conditions
- Use both easy and difficult questions
- Minimize the effects of external events
- Standardize instructions
- Maintain consistent scoring procedures
HOW RELIABILITY IS MEASURED
- Reliability is measured using a correlation coefficient
- r_test1,test2
- Reliability coefficients
- Indicate how scores on one test change relative to scores on a second test
- Can range from -1.00 to +1.00
- +1.00 = perfect reliability
- 0.00 = no reliability
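A minimal sketch of a test-retest reliability coefficient, using made-up scores (both score arrays are hypothetical):

```python
import numpy as np

# Hypothetical scores for the same 8 people on two administrations of a test.
test1 = np.array([12, 15, 11, 18, 14, 20, 16, 13])
test2 = np.array([13, 14, 12, 17, 15, 19, 17, 12])

# r_test1,test2: the reliability coefficient as a Pearson correlation.
r = np.corrcoef(test1, test2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")   # close to +1.0 -> a consistent tool
```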
TYPES OF RELIABILITY
VALIDITY
- A valid test does what it was designed to do
- A valid test measures what it was designed to measure
A CONCEPTUAL DEFINITION OF VALIDITY
- Validity refers to the test's results, not to the test itself
- Validity ranges from low to high; it is not either/or
- Validity must be interpreted within the testing context
TYPES OF VALIDITY
HOW TO ESTABLISH CONSTRUCT VALIDITY OF A NEW TEST
- Correlate the new test with an established test
- Show that people with and without certain traits score differently
- Determine whether the tasks required on the test are consistent with the theory guiding test development
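A sketch of the first two steps with invented data (all scores, group labels, and sample sizes are assumptions, not from the slides): correlate the new test with an established one, then check that groups known to differ on the trait score differently.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Step 1: correlate the new test with an established test (hypothetical scores).
established = rng.normal(100, 15, size=60)
new_test = established * 0.8 + rng.normal(0, 10, size=60)   # assumed relationship
r = np.corrcoef(established, new_test)[0, 1]
print(f"correlation with established test: r = {r:.2f}")

# Step 2: people with and without the trait should score differently (known groups).
with_trait = rng.normal(110, 12, size=30)
without_trait = rng.normal(95, 12, size=30)
t, p = stats.ttest_ind(with_trait, without_trait)
print(f"known-groups comparison: t = {t:.2f}, p = {p:.4f}")
```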
MULTITRAIT-MULTIMETHOD MATRIX
- Convergent validity: different methods of measuring the same trait yield similar results
- Discriminant validity: methods measuring different traits yield different results
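A hedged sketch of what convergent and discriminant evidence look like in such a matrix; the traits (anxiety, sociability), methods (survey, interview), and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Two hypothetical traits, each measured by two methods.
anxiety = rng.normal(size=n)
sociability = rng.normal(size=n)

anxiety_survey    = anxiety + rng.normal(0, 0.5, n)        # trait A, method 1
anxiety_interview = anxiety + rng.normal(0, 0.5, n)        # trait A, method 2
sociab_survey     = sociability + rng.normal(0, 0.5, n)    # trait B, method 1
sociab_interview  = sociability + rng.normal(0, 0.5, n)    # trait B, method 2

# Rows are measures; np.corrcoef gives the multitrait-multimethod correlation matrix.
measures = np.vstack([anxiety_survey, anxiety_interview, sociab_survey, sociab_interview])
mtmm = np.corrcoef(measures)

print(f"convergent   (anxiety survey vs anxiety interview): r = {mtmm[0, 1]:.2f}")  # high
print(f"discriminant (anxiety survey vs sociability survey): r = {mtmm[0, 2]:.2f}")  # near 0
```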
THE RELATIONSHIP BETWEEN RELIABILITY AND VALIDITY
- A valid test must be reliable
- But a reliable test need not be valid