More Reliability & Validity - PowerPoint PPT Presentation
1
  • More Reliability & Validity

2
Reliability
  • Means the same thing in methods as in everyday
    life: can be depended on; you get the same
    result again and again
  • Inter-rater reliability: same result no matter
    who conducts the observation
  • Measured by percent agreement or correlation
  • Internal reliability: same result no matter how
    you ask the question
  • Measured by Cronbach's alpha
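The two statistics above can be sketched in Python with made-up data (the ratings and scale items are hypothetical, chosen only to illustrate the formulas; Cronbach's alpha here uses the standard form k/(k-1) × (1 − Σ item variances / variance of total scores)):

```python
import numpy as np

# Hypothetical data: two raters each coding 10 observations (1 = behavior present)
rater_a = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])

# Inter-rater reliability as percent agreement (raters match on 9 of 10)
percent_agreement = np.mean(rater_a == rater_b) * 100
print(f"Percent agreement: {percent_agreement:.0f}%")  # 90%

# Internal reliability: Cronbach's alpha for a k-item scale
# (rows = respondents, columns = items; hypothetical 5-point ratings)
items = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
])
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # 0.93
```

A correlation between the two raters' codes would work as an alternative to percent agreement when the codes are on a continuous scale.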

3
One more type of reliability
  • Test-retest reliability: do you get the same
    result each time you give the measure?
  • Sometimes you should
  • Sometimes you should not
  • So test-retest reliability is not always a good
    thing; keep the context in mind!

4
Which type do you care about?
  • You want to measure driving behavior
  • You use a questionnaire
  • You use observation
  • You want to measure public displays of affection
  • You use a questionnaire
  • You use observation

5
Validity
  • If you observe people and record what color shirt
    they're wearing, you'll probably have high levels
    of inter-rater reliability
  • But is this a good measure of people's
    relationship satisfaction?

6
Reliability vs. validity
  • Reliability refers to getting the same result
    across observers, across questions, across time
  • Validity refers to measuring what you think
    youre measuring
  • Can have reliability without validity, but not
    the reverse

7
How do we assess validity?
  • Face
  • No quantitative measure
  • Convergent
  • Measured by correlation between two measures of
    the same variable (should be high and positive)
  • Discriminant
  • Measured by correlation between two measures of
    different variables (should be small, and not
    significant)
  • Predictive
  • Measured by correlation between measure and
    future, related, outcome (should be high and
    positive)
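The convergent and discriminant checks above are both just correlations; the only difference is which pair of measures you correlate and what size of r you hope to see. A sketch with invented scores (the three scales are hypothetical stand-ins for the class exercise described later):

```python
import numpy as np

# Hypothetical scores for 8 participants
my_scale        = np.array([12, 18, 9, 15, 20, 7, 14, 16])   # my measure of the concept
partner_scale   = np.array([11, 19, 10, 14, 21, 8, 13, 17])  # different measure, same concept
unrelated_scale = np.array([4, 5, 3, 6, 4, 5, 6, 3])         # measure of a separate variable

def r(x, y):
    """Pearson correlation between two sets of scores."""
    return np.corrcoef(x, y)[0, 1]

# Convergent validity: two measures of the SAME variable should correlate
# high and positive
convergent = r(my_scale, partner_scale)
print(f"Convergent r = {convergent:.2f}")

# Discriminant validity: measures of DIFFERENT variables should correlate
# weakly (small, non-significant)
discriminant = r(my_scale, unrelated_scale)
print(f"Discriminant r = {discriminant:.2f}")
```

Predictive validity is the same computation, with the second variable replaced by a future, related outcome (e.g., later relationship status) — again hoping for a high positive r.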

8
Thinking about past research
  • How did McNulty & Karney assess inter-rater
    reliability?
  • How did they assess internal reliability?

9
Applying it all
  • Team up with
  • The partner you had, who measured the same
    concept that you did
  • Another pair of students who are measuring
    something different than you measured
  • Exchange scales
  • Give one copy of your scale to your partner
  • Give the other copy to someone from the other
    pair of students

10
Applying it all
  • You now have three scales
  • Yours
  • Your original partner's (which is a different way
    of measuring the same variable)
  • Someone else's (which is a measure of a separate
    variable)
  • Find one classmate (not your partner, and not
    from the new team of partners you paired up with)
  • Have this classmate fill out the three scales you
    have; fill out the three scales this classmate has

11
Applying it all
  • Which types of reliability are relevant?
  • How could we test for them?
  • Which types of reliability are not relevant? Why
    not?
  • Which types of validity are relevant?
  • How could we test for them?
  • Which types of validity are not relevant? Why not?