1
Scale Development
  • Seth M. Noar, Ph.D.
  • CJT665
  • Department of Communication
  • University of Kentucky

2
Scale Development
  • Survey research relies heavily on self-report
    scales
  • Why?
  • Many concepts can only be measured with such
    scales
  • Huge correlational literature of theory testing,
    etc., which relies heavily on data from these
    scales
  • Researchers need strong methods for measuring
    concepts

3
Scale Development
  • There is a systematic method for developing
    self-report scales
  • Here we discuss a broad overview of the method
  • In addition, we discuss two very important
    aspects of scales
  • Reliability
  • Validity

4
Scale Development (cont'd)
  • There are several important steps in scale
    development, including
  • Theoretical framework
  • Scale purpose and length
  • Developing an initial item pool
  • Sampling
  • Conducting scale analyses
  • Demonstrating reliability and validity

5
Latent Variable
  • Often we are trying to measure a latent
    construct / variable
  • Construct: "some postulated attribute of people,
    assumed to be reflected in test performance"
    (Cronbach & Meehl, 1955)
  • We attempt to assess latent variables using
    multiple-item scales (rather than single items)
  • Why?

6
Latent Variable (cont'd)
  • It is believed that a set of good items can
    assess a construct accurately (content validity)
  • Note: There will always be measurement error

7
Theoretical Framework
  • Theory can guide the development of a scale
  • If one has an explicit theory, scale development
    will be more straightforward
  • If not, it will be more difficult
  • In this case, literature review and focus groups
    become even more important
  • Unidimensional or multi-dimensional?

8
Scale Purpose and Length
  • What is the purpose of the scale? How will it be
    used?
  • These questions should guide length, response
    format.
  • Scales of many different lengths have been
    developed (no clear rule of thumb)
  • Response format (Likert, semantic differential)
    should be carefully chosen

9
Developing Items
  • Developing an initial item pool is harder than
    one may think (writing good items is not easy)
  • There are a number of suggestions for writing
    good items (see Noar, 2003, for a summary)
  • Need to write 2-3 times as many items as will end
    up in final scale
  • Items should have face validity
  • Have experts review items / sort procedures

10
Sampling
  • Sample should be as close to the target
    population as possible
  • Sample size will dictate what types of analyses
    one can conduct
  • A good situation might be this: N = 500
  • N = 200 for exploratory analyses
  • N = 300 for confirmatory analyses
  • We will talk more about sampling

11
Conducting Scale Analyses
  • Analyses should be carefully undertaken
  • One should first examine basic descriptive
    information on items (mean, SD, skewness,
    kurtosis, etc.)
  • Bad items can be discarded early on
  • A smaller set of strong items can then be
    analyzed
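As a sketch of the screening step above, the basic per-item descriptives (mean, SD, skewness, kurtosis) can be computed with NumPy/SciPy. The data here are simulated Likert responses; the sample size, item count, and cutoffs are purely illustrative:

```python
import numpy as np
from scipy import stats

# Hypothetical responses: 100 participants x 6 Likert items (1-5).
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(100, 6)).astype(float)

# Per-item descriptives used to screen out weak items early on.
for j in range(items.shape[1]):
    col = items[:, j]
    print(f"item {j}: mean={col.mean():.2f}  SD={col.std(ddof=1):.2f}  "
          f"skew={stats.skew(col):.2f}  kurtosis={stats.kurtosis(col):.2f}")
```

Items with extreme skewness or near-zero variance would be candidates for removal before factor analysis.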

12
EFA
  • Exploratory factor analysis (and principal
    components analysis) is most often used for
    scale development
  • One should have a hypothesized structure for the
    scale
  • Final solution should be a balance of the
    theoretical and empirical
  • EFA (and PCA) helps one determine the number of
    factors as well as which items are better or
    worse

13
CFA
  • Most scales have been developed without the use
    of confirmatory factor analysis
  • However, it is increasingly being used
  • A hypothesized structure (e.g., what was found
    in the EFA) is specified and tested
  • The procedure provides the following:
  • Confirms what was found in the EFA
  • Tells us more about how the factors relate to one
    another
  • CFA greatly strengthens scale development

14
Reliability and Validity
  • With a new scale, one should attempt to
    demonstrate both of these
  • One study should provide some initial evidence of
    both of these, but it's just a start
  • One needs to think ahead
  • Collect data at 1 or more timepoints?
  • Need to include other measures in the survey to
    examine validity
  • Might include other theoretical constructs,
    social desirability, etc.

15
Reliability
  • Refers to the consistency of responses
  • Methods of assessing reliability tend to be
    correlational techniques
  • Reliability can be assessed with data at 1 or 2
    timepoints
  • 4 types of reliability (Anastasi & Urbina, 1997)

16
Reliability
  • Test-retest reliability: Give out the scale at
    two timepoints, and examine the correlation
    between the two
  • Alternate-form reliability: Give out 2 different
    forms of the test / scale, and examine the
    correlation between the two (e.g., SAT, GRE)
  • Split-half reliability: The test / scale is split
    into equal halves, and the correlation between
    them is examined
  • Coefficient (Cronbach's) alpha: an index of the
    internal consistency of a scale, 0-1 (> .70 is
    reasonable)
  • Note: Kuder-Richardson is used for tests
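The correlational reliability indices above can be sketched in a few lines of NumPy on simulated two-wave data. The sample sizes, noise levels, and the odd/even split are illustrative assumptions, not part of the original slides:

```python
import numpy as np

# Hypothetical 5-item scale, 150 respondents, measured at two timepoints.
rng = np.random.default_rng(2)
latent = rng.normal(size=(150, 1))
time1 = latent + rng.normal(scale=0.6, size=(150, 5))
time2 = latent + rng.normal(scale=0.6, size=(150, 5))

def cronbach_alpha(x):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Test-retest: correlate total scores across the two timepoints.
r_tt = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]

# Split-half: correlate odd- and even-item halves, then apply the
# Spearman-Brown correction for the halved test length.
half_r = np.corrcoef(time1[:, ::2].sum(axis=1),
                     time1[:, 1::2].sum(axis=1))[0, 1]
split_half = 2 * half_r / (1 + half_r)

print(f"alpha={cronbach_alpha(time1):.2f}  "
      f"test-retest={r_tt:.2f}  split-half={split_half:.2f}")
```

All three indices are correlations (or functions of correlations), which is the slide's point that reliability assessment is largely correlational.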

17
Validity
  • Refers to what a scale measures and how well it
    does so
  • Is the scale measuring what we think it is
    measuring?
  • Validity is not easy to demonstrate, and is
    typically not demonstrated in 1 study
  • Anastasi and Urbina (1997) describe three classes
    of validity

18
Validity
  • 1. Content-description procedures
  • Face validity: items look right on the face of
    it
  • Content validity: covers the appropriate content
  • 2. Criterion-prediction procedures
  • Predictive validity: scale predicts a criterion
  • Concurrent validity: scale correlates with a
    criterion

19
Validity
  • 3. Construct-identification procedures
  • Construct validity: the measure assesses the
    appropriate construct
  • Convergent validity: correlates with other
    similar measures
  • Divergent validity: doesn't correlate with
    measures that it should be unrelated to
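A simple sketch of the convergent/divergent check: correlate the new scale's total score with an established similar measure and with an unrelated measure. The measures and effect sizes here are simulated and purely illustrative:

```python
import numpy as np

# Hypothetical validity data: our new scale alongside a similar
# established measure (convergent) and an unrelated one (divergent).
rng = np.random.default_rng(3)
trait = rng.normal(size=300)
new_scale = trait + rng.normal(scale=0.5, size=300)
similar_measure = trait + rng.normal(scale=0.5, size=300)
unrelated_measure = rng.normal(size=300)

r_conv = np.corrcoef(new_scale, similar_measure)[0, 1]
r_div = np.corrcoef(new_scale, unrelated_measure)[0, 1]
print(f"convergent r={r_conv:.2f}  divergent r={r_div:.2f}")
```

Evidence for construct validity would be a substantial convergent correlation alongside a near-zero divergent one, accumulated over multiple studies rather than a single dataset.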

20
Implications
  • Scale development is a sequential, thoughtful
    process
  • The more care you put into the process, the
    better result you will get
  • Following these steps will better ensure that
    one's scale will accurately assess the construct
    of interest
