1
Method Variance
  • Mike Garwood
  • Psych 562
  • Advanced Human Factors

2
Spector (1987): Method Variance as an Artifact in
Self-Reported Affect and Perceptions at Work:
Myth or Significant Problem?
  • Method variance is considered to be an artifact
    of measurement that biases results when relations
    are explored among constructs measured in the
    same way.
  • It is a frequent concern among researchers who
    rely on questionnaires.

3
Method Variance (cont.)
  • Many researchers in the I/O field believe method
    variance is responsible for much of the shared
    variance in self-reports, though there is very
    little research to support this belief.
  • This leads to the old I/O adage that "all
    self-report measures intercorrelate at .30."

4
The sources of method variance most frequently
discussed in the self-report literature are
acquiescence and social desirability (SD).
5
Acquiescence
  • Tendency for a respondent to agree (or disagree)
    with items, regardless of content.
  • Studied extensively in the 1940s and 1950s.
  • Cronbach (1950) provided evidence for its
    existence and suggested ways to minimize it.
  • Later, Husek (1961) showed that various measures
    of acquiescence were uncorrelated with one
    another, suggesting there is no single
    acquiescence trait common across instruments.

6
Social Desirability
  • The tendency for a respondent to choose the
    socially desirable response, regardless of the
    veracity of that response.
  • Research has shown that certain traits are highly
    correlated with social desirability.
  • Most researchers try to avoid including these
    traits when designing a questionnaire, or include
    them in a fixed-choice format to minimize the
    effect.

7
Multitrait-Multimethod (MTMM) Analysis
  • Created by Campbell and Fiske (1959) as a global
    test for method variance.
  • Can be conducted when at least two constructs are
    measured by at least two methods.

8
Tests for
  • Convergent Validity: the tendency of alternate
    measures of the same construct to correlate
    strongly.
  • Discriminant Validity: the tendency for measures
    of different constructs to correlate, at most,
    moderately.
  • Method Variance: the tendency for different
    traits measured with the same method to correlate
    more highly than different traits measured with
    different methods.

9
First, we test for convergent validity
  • If we use methods A and B to assess traits 1 and
    2, then there should be strong correlations
    between the same traits measured by different
    methods: A1 vs. B1, and A2 vs. B2.

10
Then we can test for method variance
  • Method variance is present if the correlation
    between different traits measured by the same
    method is greater than the correlation between
    different traits measured by different methods
    (a minimal numeric sketch follows).
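
The two checks can be sketched in a few lines of Python
(the correlation values below are hypothetical, chosen
only to illustrate the comparison):

  # Hypothetical MTMM correlations for traits 1 and 2 measured by methods A and B.
  # Labels follow the slides: A1 = trait 1 measured by method A, and so on.
  same_trait  = {"A1-B1": 0.62, "A2-B2": 0.58}  # same trait, different methods
  same_method = {"A1-A2": 0.45, "B1-B2": 0.41}  # different traits, same method
  diff_method = {"A1-B2": 0.20, "A2-B1": 0.23}  # different traits, different methods

  def mean(cells):
      return sum(cells.values()) / len(cells)

  # Step 1 (convergent validity): same-trait, cross-method correlations are strong.
  print("Convergent validity:", all(r >= 0.50 for r in same_trait.values()))

  # Step 2 (method variance): same-method correlations exceed different-method ones.
  print("Method variance present:", mean(same_method) > mean(diff_method))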

11
MTMM Shortcomings
  • The researcher must use their own judgment when
    assessing the results, sometimes leading to
    different conclusions from the same data.
  • Each trait must be measured by each method to get
    a complete table.
  • Does not allow the detection of the source or
    type of bias.

12
In the Present Study
  • Spector analyzed 10 published studies on job
    satisfaction using an MTMM matrix (four
    additional studies were excluded for lack of
    convergent validity).

13
Results
  • He was able to find method variance in only 1 of
    the 10 studies, and due to the nature of the
    MTMM, it was unclear exactly where the bias had
    come from.

14
Bagozzi and Yi (1990): Assessing Method Variance
in MTMM Matrices: The Case of Self-Reported
Affect and Perceptions at Work
  • Re-examines findings from Spector (1987), who
    found no evidence for method variance, and a
    subsequent study by Williams, Cote, and Buckley
    (1989) that apparently found strong evidence of
    method variance.

15
What We Missed in '89
  • Williams et al. (1989) re-analyzed the same data
    as Spector (1987), but used chi-square difference
    tests and variance partitioning with confirmatory
    factor analysis (CFA).
  • They determined that method variance was present
    in 9 of the 11 data sets and accounted for
    substantial variance in the measures.

16
Wait Just a Minute
  • Bagozzi and Yi noted some major shortcomings of
    the Williams et al. (1989) study.
  • For one, they only examined the overall effects
    of method factors and didn't provide information
    on individual measures.
  • A chi-square difference test on an MTMM model
    with 10 measures will show significant method
    variance even when only 1 of the 10 measures is
    significantly affected.

17
Also
  • While Williams et al. (1989) examined the
    chi-square goodness-of-fit statistic and the
    normed fit index, they ignored other fit
    indicators, leaving them open to invalid results
    due to a lack of power or to overfitting.

18
And Finally
  • By using a confirmatory factor analysis (CFA)
    model, they assumed that methods have an additive
    effect on measures.
  • Campbell and O'Connell's (1967) study found
    evidence that methods sometimes have a
    multiplicative effect on measures; that is, the
    higher the basic relationship between two traits,
    the more that relationship is increased when the
    same method is shared.
  • Since the CFA model only accounts for an additive
    (linear) effect, it might not be appropriate for
    all of the data sets.

19
For this study
  • Bagozzi and Yi (1990) proposed the use of a
    direct-product model (DPM) developed by Swain
    (1975) in addition to the CFA model to see which
    one is a better fit.
  • Basically, just as the CFA assumes an additive
    effect, the DPM assumes a multiplicative effect
    (contrasted schematically below).
  • If it is not clear from the data which model to
    use, the researcher should test both.
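
One common way to write the two assumptions (this
notation is added here and does not appear in the
original slides):

  \text{CFA (additive):}\quad x_{ij} = \lambda_{ij} T_j + \delta_{ij} M_i + \epsilon_{ij}

  \text{DPM (multiplicative):}\quad \operatorname{corr}(x_{ij}, x_{kl}) = \rho_M(i,k)\,\rho_T(j,l)

where x_{ij} is trait j measured by method i. Under the
DPM, sharing a method multiplies the trait correlation
by a constant factor, so stronger trait relationships
are inflated more in absolute terms, matching Campbell
and O'Connell's (1967) observation.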

20
Method
  • The 11 data sets used in both the Spector (1987)
    study and the Williams et al. (1989) study were
    reexamined for this study.
  • Before re-analyzing the data used in the previous
    studies, Bagozzi and Yi (1990) combed through the
    data and adjusted the sample sizes and numbers of
    variables.

21
Results
  • Overall, 5 of the 11 data sets showed significant
    method variance for half or more of their
    measures.
  • This conclusion falls between Spector's 1 of 10
    and Williams et al.'s 9 of 11.

22
CFA vs. DPM
  • Next, they examined the appropriateness of the
    CFA model versus the DPM model by computing
    standardized residuals and checking for improper
    estimates.

23
Standardized Residuals
  • Formed by taking the residuals between the
    observed and model-implied variance-covariance
    matrices and dividing these residuals by their
    asymptotic standard errors (formula below).
  • The presence of large standardized residuals
    indicates that a significant amount of variance
    remains unexplained and that the model may not be
    a good fit.
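
In formula form (standard notation, added here), the
standardized residual for element (i, j) is:

  z_{ij} = \frac{s_{ij} - \hat{\sigma}_{ij}}{\widehat{\operatorname{SE}}(s_{ij} - \hat{\sigma}_{ij})}

where s_{ij} is the observed covariance,
\hat{\sigma}_{ij} the model-implied covariance, and the
denominator is the asymptotic standard error of their
difference; absolute values well above roughly 2 are
conventionally treated as large.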

24
Improper Estimate
  • An improper estimate is one that is either
    illogical or outside the range of conventional
    acceptability, such as negative error variances,
    correlations greater than 1.00, or standardized
    factor loadings greater than 1.00.

25
CFA Results
  • No improper estimates were found.
  • The CFA model was found to be a good fit for 9 of
    11 data sets.

26
DPM Results
  • The same approach was taken to test the
    appropriateness of the DPM model.
  • Only one of the data sets (Gillet and Schwab,
    1975) provided support for the DPM.
  • (According to Table 7, the McCabe et al., 1980
    data set also shows support for the DPM, but the
    researchers noted an error message indicating an
    unidentified parameter while analyzing that data
    set, and it also fit the CFA model.)

27
Table 7
28
Conclusions
  • Bagozzi and Yi concluded that the assessment of
    method variance is a complex process, and that
    the contradictory results of Spector's (1987) and
    Williams et al.'s (1989) studies were
    attributable to incomplete analysis of the data
    in those studies.
  • After finding support for both additive and
    multiplicative properties of method variance, the
    researchers concluded that both models should be
    used when analyzing data, to determine which
    offers a better fit.

29
Tepper and Tepper (1993): The Effects of Method
Variance Within Measures
  • Survey instruments in the behavioral sciences
    customarily assess perceptual and affective
    constructs using uniform, fixed-alternative
    formats (e.g., a series of statements rated on a
    Likert scale).
  • This format reduces the difficulty of surveys and
    should increase participation.
  • However, method variance is most pronounced when
    questionnaire items share precisely the same
    response format.

30
The Problem Is Systematic Error
  • Computational formulas for reliability are
    designed to estimate and remove most of the
    random error and other noise from the true
    variance; however, any systematic error is lumped
    in with the true variance and inflates the
    reliability estimate.

31
Coefficient Alpha
  • The most frequently reported index of internal
    consistency reliability is coefficient alpha,
    which indicates the proportion of true variance
    within a measure (formula below).
  • So, an alpha coefficient of .70 indicates that
    70% of the observed variance is true variance.
  • Since method variance is a source of systematic
    error, it inflates the inter-item correlations
    and causes alpha to overestimate the true
    variance.
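
For reference, the standard formula for coefficient
alpha on a k-item scale (added here; it does not appear
on the original slide):

  \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)

where \sigma_i^2 is the variance of item i and
\sigma_X^2 is the variance of the total score. Because
systematic method variance raises the inter-item
covariances, it pushes \alpha upward.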

32
Implications
  • Williams et al. (1989) believed that method
    variance accounted for more than 25% of the
    observed variance in a study.
  • Thus, a study reporting an alpha coefficient of
    .70 would really be capturing only 45%
    (.70 - .25 = .45) of the true variance.

33
When a Good Alpha Goes Bad
  • Other measures of reliability and validity are
    affected by an inflated alpha coefficient:
  • Correction for attenuation
  • Standard error of measurement
  • Validity ceiling

34
Correction for attenuation
  • Estimates the correlation that would be expected
    had the two measures been perfectly reliable, by
    dividing the observed correlation between the
    variables by the square root of the product of
    their alpha coefficients (formula below).
  • When method variance inflates alpha, the
    formula's correction is smaller than it should be.
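
In symbols (standard notation, added here):

  r_{xy}^{*} = \frac{r_{xy}}{\sqrt{\alpha_x \, \alpha_y}}

An inflated \alpha_x or \alpha_y makes the denominator
too large, so the disattenuated correlation r_{xy}^{*}
understates the true relationship.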

35
Standard Error of Measurement
  • Used to estimate the amount of variability
    between observed scores and true scores
  • Calculated by multiplying the standard deviation
    of observed scores by the square root of 1 minus
    the alpha coefficient.
  • If the alpha coefficient is inflated, the
    standard error of measurement will be too low
    (see the formula below).
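
The calculation described above, with an illustrative
example (the numbers are not from the slides):

  \mathrm{SEM} = SD_X\sqrt{1 - \alpha}

For instance, with SD_X = 10, a reported \alpha = .70
gives \mathrm{SEM} = 10\sqrt{.30} \approx 5.5; but if
the method-variance-free alpha is really .45, the SEM
should be 10\sqrt{.55} \approx 7.4.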

36
Validity Ceiling
  • The highest possible validity coefficient based
    on a given reliability coefficient.
  • Calculated by taking the square root of alpha.
  • So, an inflated alpha will provide an inflated
    validity ceiling.
  • Using the example from before if alpha is .70
    the Validity ceiling would be .84, however, if we
    subtract Williams, et al.s (1989) assumed method
    variance of .25 from our alpha we get a validity
    ceiling of only .67.
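
The arithmetic behind the bullet:

  \text{ceiling} = \sqrt{\alpha}, \qquad \sqrt{.70} \approx .84, \qquad \sqrt{.70 - .25} = \sqrt{.45} \approx .67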

37
What Can Be Done?
  • The researchers offer five suggestions for
    improving survey reliability and validity:

38
1. Heterogeneous Item Formats
  • Varying fixed-response formats from item to item
    within a scale. This may involve using
    Likert-type, bipolar, metric, and other formats
    within the same measure, or varying the structure
    of items so they look different even when they
    share the same basic format.

39
2. Juxtaposition of Constructually Dissimilar
Items
  • Placing items that measure different constructs
    next to each other rather than grouping items
    with similar content and format.

40
3. Random Placement of Dummy Items
  • Writing items that capture irrelevant content and
    randomly inserting them into the instrument.

41
4. Skip Patterns
  • Breaking up the respondents' concentration by
    instructing them to skip back and forth through
    the questionnaire.

42
5. Longitudinal Administration
  • Instructing the participants to pause at certain
    points in the questionnaire or administering
    questionnaire items over several sittings rather
    than on one occasion.

43
Shortcomings of Their Approach
  • They acknowledge that lengthy and/or mentally
    taxing surveys may provoke resistance from
    respondents and not be well received.
  • Since method error tends to inflate both
    reliability and validity estimates, it tends to
    improve the apparent quality of a study and to
    increase its chances of being published.