Psychological Measurement (PowerPoint transcript)
1
Psychological Measurement
  • Dr. John T. Drea
  • Professor of Marketing
  • Western Illinois University

2
Basic Measurement Concepts
  • Always remember that what you collect is a
    measurement score - it is not the true level of a
    construct.
  • Actual score (measure) = true score (construct)
    + error
  • Error refers to any type of variation in the
    actual score that is not attributable to
    variation in the true score.
  • The best measurement technique is the one which
    satisfies the research question while closely
    approximating the true score.
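The actual score = true score + error relationship above can be sketched numerically. A minimal illustration in Python, assuming a zero-mean Gaussian error term (the numbers are hypothetical, not from the slides):

```python
import random
from statistics import mean

random.seed(1)

true_score = 7.0  # the construct's true level (unobservable in practice)

# Each actual (measured) score = true score + error, where error is all
# variation not attributable to variation in the true score.
observed = [true_score + random.gauss(0, 1.0) for _ in range(1000)]

print(mean(observed))  # close to 7.0 when the error is unbiased
```

Individual scores scatter around the true score; only when the error is unbiased does the average of many measurements approximate it.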

3
Basic Measurement Concepts
  • Outside sources of variation are commonly
    referred to as bias.
  • To reduce bias, remember to:
  • Clearly define what it is you want to measure
    (e.g., there's a difference between attitude
    toward a brand and satisfaction with using the
    brand)

4
Basic Measurement Concepts (cont.)
  • Reducing bias
  • Research subjects must have the ability to answer
    the stated question.
  • Research subjects must perceive the data
    collection instrument as intended.
  • Research subjects must be motivated to provide
    an actual score in a way that parallels their
    true score.
  • The response categories provided must closely
    approximate the true score of the individual.

5
Should You Use Single or Multi-Item Measures?
  • It depends on the complexity of what is being
    measured.

[Diagram: a single-item measure takes one Actual
score of the TRUE score; a multi-item measure takes
several actual scores (Actual-1, Actual-2, Actual-3)
that surround the TRUE score.]
Think of trying to represent an object by
photographing it from one angle vs. several
different angles.
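The photograph analogy can be illustrated with a short simulation: averaging several noisy "angles" on the same true score yields actual scores that cluster more tightly around it than any single-item score does. A minimal sketch, with an assumed error model and hypothetical numbers:

```python
import random
from statistics import pstdev

random.seed(2)
TRUE = 5.0  # the construct's true level (unobservable in practice)

def measure(n_items):
    # Average of n_items independent noisy "angles" on the same TRUE score.
    return sum(TRUE + random.gauss(0, 1.0) for _ in range(n_items)) / n_items

single = [measure(1) for _ in range(2000)]  # single-item measure
multi = [measure(4) for _ in range(2000)]   # 4-item measure

print(pstdev(single))  # spread of single-item actual scores
print(pstdev(multi))   # roughly half as large: closer to the TRUE score
```

With independent item errors, the spread of a k-item average shrinks by a factor of the square root of k, which is why multi-item measures suit complex constructs.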
6
Reliability
  • Refers to the similarity of results provided by
    independent but comparable measures of the same
    object, trait, or construct.
  • Reliability is based on the use of maximally
    similar methods.
  • If we measure X using scale Y and we do this
    three times, will we get effectively the same
    result every time?
  • Can be assessed by measuring an individual at two
    points in time (test-retest).
  • The most common means of assessing reliability is
    Cronbach's alpha.

7
Reliability (continued)
  • Cronbach's (coefficient) alpha
  • A measure of the internal homogeneity of a set of
    scale items.
  • Assesses the degree to which a group of scale
    items correlates with the total scale score.
  • Acceptable levels are 0.70 or higher for
    exploratory research and 0.80 or higher for
    applied research (Nunnally).
  • Examine item-to-total correlations to identify
    items whose removal would improve reliability.
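Cronbach's alpha can be computed directly from the item variances and the variance of the total score: alpha = k/(k-1) x (1 - sum of item variances / variance of totals). A minimal sketch in Python; the 4-item, 5-respondent data are hypothetical:

```python
from statistics import variance

# Hypothetical 1-5 Likert responses: each inner list is one item's
# scores across the same five respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [3, 5, 4, 4, 1],
    [5, 4, 3, 4, 2],
]

k = len(items)
totals = [sum(scores) for scores in zip(*items)]  # each respondent's total

sum_item_vars = sum(variance(scores) for scores in items)
alpha = (k / (k - 1)) * (1 - sum_item_vars / variance(totals))
print(round(alpha, 3))  # 0.914 with these illustrative data
```

An alpha this high clears both the 0.70 exploratory and 0.80 applied thresholds; with weaker data, dropping the item with the lowest item-to-total correlation is the usual first step.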

8
Validity
  • Refers to the extent to which differences in
    scores on (a measure) reflect true differences
    among individuals on the characteristic we seek
    to measure, rather than constant or random
    errors. (Selltiz, Wrightsman, and Cook 1976)
  • The burden is on the researcher to establish that
    a measure captures the characteristic of
    interest.
  • The relationship between a measured score and a
    true score is inferred but never established
    unequivocally. The basis for this inference:
  • Direct assessment of validity
  • Indirect assessment of reliability

9
Types of Validity
  • Pragmatic Validity: How well does the measure
    predict?
  • Content (Face) Validity: Do the construct and the
    measure match up in a superficial manner?
  • Construct Validity: What is the instrument
    actually measuring?
  • Convergent validity: two different measures of
    the same construct are highly correlated with one
    another.
  • Discriminant validity: two measures of different
    constructs do not correlate highly.
  • Nomological validity: the measure behaves in a
    way consistent with theory/prior experience.
  • The assessment of validity should involve the use
    of maximally different methods.
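Convergent and discriminant validity can both be checked with simple correlations. A minimal sketch using hypothetical scores: two measures of the same construct (brand attitude) and one measure of a different construct (price sensitivity):

```python
from statistics import mean, stdev

def pearson(x, y):
    # Sample Pearson correlation between two score lists.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical 1-5 scores for seven respondents.
attitude_a = [2, 4, 5, 3, 1, 5, 4]   # measure 1 of brand attitude
attitude_b = [1, 4, 5, 3, 2, 4, 4]   # measure 2 of brand attitude
price_sens = [5, 2, 4, 1, 3, 2, 5]   # measure of a different construct

print(pearson(attitude_a, attitude_b))  # convergent: should be high
print(pearson(attitude_a, price_sens))  # discriminant: should be low
```

High correlation between the two attitude measures supports convergent validity; near-zero correlation with the price-sensitivity measure supports discriminant validity.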