1
Scale Development
  • Seth M. Noar, Ph.D.
  • CJT665
  • Department of Communication
  • University of Kentucky

2
Scale Development
  • In Communication we often use self-report scales
  • There is a systematic method for developing such
    scales
  • Here we discuss a broad overview of the method,
    as well as two very important aspects of scales
  • Reliability
  • Validity

3
Scale Development (cont'd)
  • There are several important steps in scale
    development, including
  • Theoretical framework
  • Scale purpose and length
  • Developing an initial item pool
  • Sampling
  • Conducting scale analyses
  • Demonstrating reliability and validity

4
Latent Variable
  • Often we are trying to measure a latent construct
    / variable
  • Construct: "some postulated attribute of people,
    assumed to be reflected in test performance"
    (Cronbach & Meehl, 1955)
  • We attempt to assess latent variables using
    multiple-item scales (rather than single items)

5
Latent Variable (cont'd)
  • It is believed that a set of good items can
    assess a construct accurately (content validity)
  • Note: There will always be measurement error

6
Theoretical Framework
  • Theory can guide the development of a scale
  • If one has an explicit theory, scale development
    will be straightforward
  • If not, it will be more difficult
  • In this case, literature review and focus groups
    become even more important
  • Unidimensional or multi-dimensional?

7
Scale Purpose and Length
  • What is the purpose of the scale? How will it be
    used?
  • These questions should guide length and response
    format.
  • Scales of many different lengths have been
    developed (no clear rule of thumb)
  • Response format (Likert, semantic differential)
    should be carefully chosen.

8
Developing Items
  • Developing an initial item pool is harder than
    one may think (writing good items is not easy)
  • There are a number of suggestions for writing
    good items (see Noar, 2003, for a summary of
    these)
  • Need to write 2-3 times as many items as will end
    up in final scale
  • Items should have face validity
  • Have experts review items / sort procedures

9
Sampling
  • Sample should be as close to the target
    population as possible
  • Sample size will dictate what types of analyses
    one can conduct
  • A good situation might be this: N = 500
  • N = 200 for exploratory analyses
  • N = 300 for confirmatory analyses

10
Conducting Scale Analyses
  • One should examine basic descriptive information
    on items (mean, SD, skewness, kurtosis, etc.).
    Bad items can be discarded.
  • An ideal situation would allow for
  • Exploratory analyses (factor analysis or PCA) on
    one portion of the data
  • Then, confirmatory factor analysis (SEM) on the
    second portion of the data
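The analyses above can be sketched in Python. This is an illustrative sketch, not the author's procedure: the data are simulated Likert responses, and the item names and sample sizes are assumptions. It computes the basic item descriptives (mean, SD, skewness, kurtosis) and randomly splits the sample into exploratory and confirmatory halves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated responses: 500 respondents x 8 Likert items (1-5).
# A real analysis would load actual survey data here.
data = rng.integers(1, 6, size=(500, 8)).astype(float)

# Basic descriptives per item: mean, SD, skewness, excess kurtosis.
mean = data.mean(axis=0)
sd = data.std(axis=0, ddof=1)
z = (data - mean) / sd
skewness = (z ** 3).mean(axis=0)
kurtosis = (z ** 4).mean(axis=0) - 3

# Random split: one half for exploratory analyses (factor
# analysis or PCA), the other for confirmatory factor analysis.
idx = rng.permutation(len(data))
exploratory, confirmatory = data[idx[:250]], data[idx[250:]]

print(mean.round(2), sd.round(2))
print(exploratory.shape, confirmatory.shape)
```

Items with extreme skewness or near-zero variance would be candidates for removal before the factor analyses.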

11
Reliability and Validity
  • With a new scale, one should attempt to
    demonstrate both of these
  • One study should provide some initial evidence of
    both of these, but it's just a start.
  • One needs to include other measures in one's
    survey to examine validity.
  • One may also want to include measures of social
    desirability, reported honesty, etc.

12
Reliability
  • Refers to the consistency of responses
  • Methods of assessing reliability tend to be
    correlational techniques
  • Reliability can be assessed with data at 1 or 2
    timepoints
  • 4 types of reliability (Anastasi & Urbina, 1997)

13
Reliability
  • Test-retest reliability: Give out the scale at two
    timepoints, and examine the correlation between
    the two
  • Alternate-form reliability: Give out 2 different
    forms of the test / scale, and examine the
    correlation between the two (e.g., SAT, GRE)
  • Split-half reliability: Test / scale is split
    into equal halves, and the correlation is examined
  • Coefficient (Cronbach's) alpha: an index of the
    internal consistency of a scale, 0-1 (> .70 is
    reasonable)
  • Note: Kuder-Richardson is used for tests
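Coefficient alpha is straightforward to compute from the item variances and the variance of the total score. A minimal sketch, using simulated data (a common factor plus item noise) rather than real survey responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, k_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Illustrative data: one common factor plus item-level noise,
# so the six items are internally consistent by construction.
factor = rng.normal(size=(300, 1))
items = factor + 0.8 * rng.normal(size=(300, 6))

alpha = cronbach_alpha(items)
print(round(alpha, 2))  # typically well above the .70 rule of thumb here
```

The same array could be split column-wise into halves and the half-totals correlated to get a split-half estimate.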

14
Validity
  • Refers to what a scale measures and how well it
    does so
  • Is the scale measuring what we think it is
    measuring?
  • Validity is not easy to demonstrate, and is
    typically not demonstrated in 1 study.
  • Anastasi and Urbina (1997) describe three classes
    of validity.

15
Validity
  • 1. Content-description procedures
  • Face validity: items look right on the face of
    it
  • Content validity: covered appropriate content
  • 2. Criterion-prediction procedures
  • Predictive validity: scale predicts a criterion
  • Concurrent validity: scale correlates with a
    criterion

16
Validity
  • 3. Construct-identification procedures
  • Construct validity: measure assesses the
    appropriate construct
  • Convergent validity: correlates with other,
    similar measures
  • Divergent validity: does not correlate with
    measures that it should not
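Convergent and divergent validity both reduce to inspecting correlations between the new scale and other measures included in the survey. A sketch with hypothetical scores (all variable names and effect sizes here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Hypothetical scores: the new scale and an established similar
# measure both reflect the same underlying trait (convergent);
# the unrelated measure does not (divergent).
trait = rng.normal(size=n)
new_scale = trait + 0.5 * rng.normal(size=n)
similar_measure = trait + 0.5 * rng.normal(size=n)
unrelated_measure = rng.normal(size=n)

r_convergent = np.corrcoef(new_scale, similar_measure)[0, 1]
r_divergent = np.corrcoef(new_scale, unrelated_measure)[0, 1]
print(round(r_convergent, 2), round(r_divergent, 2))
```

A sizable convergent correlation alongside a near-zero divergent one is the pattern one hopes to report.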

17
Implications
  • Scale development is a sequential, thoughtful
    process
  • The more care you put into the process, the
    better result you will get
  • Demonstrating reliability and validity is very
    important; one needs to think of this at the
    beginning of the scale development process
  • In the end, one hopes the scale accurately
    assesses the construct one is interested in