1
Meta-analysis
  • Funded through the ESRC's Researcher Development Initiative

Session 1.2: Introduction
Department of Education, University of Oxford
2
Why a course on meta-analysis?
  • Meta-analysis is an increasingly popular tool for
    summarising research findings
  • Cited extensively in research literature
  • Relied upon by policymakers
  • Important that we understand the method, whether
    we conduct or simply consume meta-analytic
    research
  • Should be one of the topics covered in all
    introductory research methodology courses

3
Difference between meta-analysis and systematic
review
  • Meta-analysis: a statistical analysis of a set of estimates of an effect (the effect sizes), with the goal of producing an overall (summary) estimate of the effect. Often combined with analysis of variables that moderate/predict this effect
  • Systematic review: a comprehensive, critical, structured review of studies dealing with a certain topic, characterised by a scientific, transparent approach to study retrieval and analysis
  • Most meta-analyses start with a systematic review

4
A blend of qualitative and quantitative approaches
  • Coding: the process of extracting information from the literature included in the meta-analysis. Involves noting the characteristics of the studies in relation to a priori variables of interest (qualitative)
  • Effect size: the numerical outcome to be analysed in a meta-analysis; a summary statistic of the data in each study included in the meta-analysis (quantitative)
  • Summarise effect sizes: central tendency, variability, relations to study characteristics (quantitative)

5
The meta-analytic process
6
Steps in a meta-analysis
7
Steps in a meta-analysis
8
Establish research question
  • Comparison of treatment and control groups
  • What is the effectiveness of a reading skills program for a treatment group compared to an inactive control group?
  • Pretest-posttest differences
  • Is there a change in motivation over time?
  • Correlation between two variables
  • What is the relation between teaching effectiveness and research productivity?
  • Moderators of an outcome
  • Does gender moderate the effect of a peer-tutoring program on academic achievement?

9
Establish research question
  • Do you wish to generalise your findings to other
    studies not in the sample?
  • Do you have multiple outcomes per study? E.g.:
  • achievement in different school subjects
  • 5 different personality scales
  • multiple criteria of success
  • Such questions determine the choice of
    meta-analytic model
  • fixed effects
  • random effects
  • multilevel

10
Example abstract
Brown, S. A. (1990). Studies of educational interventions and outcomes in diabetic adults: A meta-analysis revisited. Patient Education and Counseling, 16, 189-215.
11
Steps in a meta-analysis
12
Defining a population of studies and finding
publications
  • Need to have explicit inclusion and exclusion
    criteria
  • The broader the research domain, the more
    detailed they tend to become
  • Refine criteria as you interact with the
    literature
  • Components of a detailed set of criteria:
  • distinguishing features
  • research respondents
  • key variables
  • research methods
  • cultural and linguistic range
  • time frame
  • publication types

13
Example inclusion criteria
Brown, S. A., Upchurch, S. L., & Acton, G. J. (2003). A framework for developing a coding scheme for meta-analysis. Western Journal of Nursing Research, 25, 205-222.
14
Locate and collate studies
  • Search electronic databases (e.g., ISI,
    Psychological Abstracts, Expanded Academic ASAP,
    Social Sciences Index, PsycINFO, and ERIC)
  • Examine the reference lists of included studies
    to find other relevant studies
  • If including unpublished data, email researchers
    in your discipline, take advantage of Listservs,
    and search Dissertation Abstracts International

15
Search example
  • "motivation OR job satisfaction" produces ALL articles that contain EITHER motivation OR job satisfaction anywhere in the text
  • inclusive, larger yield
  • "motivation AND job satisfaction" will capture only the subset that has BOTH motivation AND job satisfaction anywhere in the text
  • restrictive, smaller yield (see the sketch below)
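
As a toy illustration (not part of the original slides), the difference in yield can be mimicked by filtering a small hypothetical set of abstracts in Python:

  # Toy illustration of OR vs. AND search logic on hypothetical abstracts.
  abstracts = [
      "effects of motivation on achievement",
      "job satisfaction among nurses",
      "motivation and job satisfaction in teachers",
      "classroom climate and attainment",
  ]

  or_hits = [a for a in abstracts if "motivation" in a or "job satisfaction" in a]
  and_hits = [a for a in abstracts if "motivation" in a and "job satisfaction" in a]

  print(len(or_hits), len(and_hits))  # OR is inclusive (3 hits), AND restrictive (1 hit)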

16
Step: are the studies eligible for inclusion? If the initial n is large, check abstracts and titles first.
17
Locate and collate studies
  • Inclusion process usually requires several steps
    to cull inappropriate studies
  • Example from Bazzano, L. A., Reynolds, K., Holder, K. N., & He, J. (2006). Effect of folic acid supplementation on risk of cardiovascular diseases: A meta-analysis of randomized controlled trials. JAMA, 296, 2720-2726.

18
Steps in a meta-analysis
19
Developing the code sheet
  • The researcher must have a thorough knowledge of
    the literature.
  • The process typically involves (Brown et al., 2003):
  • reviewing a random subset of studies to be
    synthesized,
  • listing all relevant coding variables as they
    appear during the review,
  • including these variables in the coding sheet,
    and
  • pilot testing the coding sheet on a separate
    subset of studies.

20
Common details to code
  • Coded data usually fall into the following four basic categories:
  • methodological features
  • Study identification code
  • Type of publication
  • Year of publication
  • Country
  • Participant characteristics
  • Study design (e.g., random assignment, representative sampling)
  • substantive features
  • Variables of interest (e.g., theoretical framework)
  • study quality
  • Total measure of the quality of the study design
  • outcome measures: effect size information

21
Developing a code book
  • The code book guides the coding process
  • Almost like a dictionary or manual
  • "...each variable is theoretically and operationally defined to facilitate intercoder and intracoder agreement during the coding process. The operational definition of each category should be mutually exclusive and collectively exhaustive" (Brown et al., 2003, p. 208).

22
Develop code materials
Code Sheet
Code Book
  • __ Study ID
  • _ _ Year of publication
  • __ Publication type (1-5)
  • __ Geographical region (1-7)
  • _ _ _ _ Total sample size
  • _ _ _ Total number of males
  • _ _ _ Total number of females
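
A hypothetical sketch (field names invented for illustration) of how the code sheet above might be kept as a machine-readable record, so coded entries stay structured and checkable:

  # Hypothetical representation of the code sheet as a Python record.
  from dataclasses import dataclass

  @dataclass
  class CodedStudy:
      study_id: int
      pub_year: int
      pub_type: int    # coded 1-5 per the code book
      region: int      # coded 1-7 per the code book
      n_total: int
      n_male: int
      n_female: int

  entry = CodedStudy(study_id=1, pub_year=1998, pub_type=2,
                     region=3, n_total=120, n_male=58, n_female=62)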

23
Example code materials
  • From Brown et al. (2003):
  • Code sheet: Table 1
  • Code book: Table 4

24
Steps in a meta-analysis
25
Pilot coding
  • Random selection of papers coded by both coders
  • Meet to compare code sheets
  • Where there is discrepancy, discuss to reach
    agreement
  • Amend code materials/definitions in code book if
    necessary
  • May need to do several rounds of piloting, each
    time using different papers

26
Inter-rater reliability
  • Coding should ideally be done independently by two or more researchers to minimise errors and subjective judgements
  • Ways of assessing the amount of agreement between the raters (the first two are sketched below):
  • Percent agreement
  • Cohen's kappa coefficient
  • Correlation between different raters
  • Intraclass correlation
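
A minimal sketch of the first two measures, assuming two coders' category assignments are available as parallel lists (hypothetical data):

  # Percent agreement and Cohen's kappa for two coders (hypothetical data).
  from collections import Counter

  coder_a = ["RCT", "quasi", "RCT", "RCT", "quasi", "RCT"]
  coder_b = ["RCT", "quasi", "quasi", "RCT", "quasi", "RCT"]
  n = len(coder_a)

  # Percent agreement: share of studies coded identically.
  pa = sum(a == b for a, b in zip(coder_a, coder_b)) / n

  # Cohen's kappa corrects for agreement expected by chance.
  counts_a, counts_b = Counter(coder_a), Counter(coder_b)
  pe = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
  kappa = (pa - pe) / (1 - pe)

  print(f"percent agreement = {pa:.2f}, kappa = {kappa:.2f}")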

27
Steps in a meta-analysis
28
Effect sizes
  • Lipsey & Wilson (2001) present many formulae for calculating effect sizes from different information
  • However, all effect sizes need to be converted into a common metric, typically the metric natural to research in the area, e.g. (sketched below):
  • Standardized mean difference
  • Odds-ratio
  • Correlation coefficient
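
Minimal sketches of these three metrics, computed from hypothetical summary data extracted from a primary study:

  # Three common effect size metrics from hypothetical summary data.
  import math

  # Standardized mean difference (d) from group means and SDs.
  m_t, sd_t, n_t = 52.0, 10.0, 40   # treatment group
  m_c, sd_c, n_c = 47.0, 11.0, 42   # control group
  sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
  d = (m_t - m_c) / sd_pooled

  # (Log) odds ratio from a 2x2 table of event counts.
  a, b = 15, 25   # treatment: events, non-events
  c, e = 25, 15   # control: events, non-events
  log_or = math.log((a * e) / (b * c))

  # Correlations are usually analysed after Fisher's z transform.
  r = 0.35
  fisher_z = 0.5 * math.log((1 + r) / (1 - r))

  print(d, log_or, fisher_z)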

29
Effect size calculation
  • Standardized mean difference
  • Group contrasts
  • Treatment groups
  • Naturally occurring groups
  • Inherently continuous construct
  • Odds-ratio
  • Group contrasts
  • Treatment groups
  • Naturally occurring groups
  • Inherently dichotomous construct
  • Correlation coefficient
  • Association between variables

30
Effect size calculation
  • d and its standard error (SE) can be computed from many kinds of reported statistics (see the sketch below):
  • means and standard deviations
  • correlations
  • p-values
  • F-statistics
  • t-statistics
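
A hedged sketch of a few such conversions into d and its standard error, using formulae of the kind collated by Lipsey & Wilson (2001):

  # Converting commonly reported statistics into d and its SE.
  import math

  def d_from_t(t, n1, n2):
      """Standardized mean difference from an independent-groups t-statistic."""
      return t * math.sqrt(1 / n1 + 1 / n2)

  def d_from_f(f, n1, n2):
      """From a one-way F with 1 numerator df, since t**2 == F."""
      return d_from_t(math.sqrt(f), n1, n2)

  def d_from_r(r):
      """From a point-biserial correlation."""
      return 2 * r / math.sqrt(1 - r**2)

  def se_d(d, n1, n2):
      """Approximate standard error of d."""
      return math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

  d = d_from_t(2.1, 30, 32)
  print(d, se_d(d, 30, 32))
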
31
Example of extracting outcome data
  • From Brown et al. (2003).
  • Table 3

32
Steps in a meta-analysis
33
Fixed effects assumptions
  • Includes the entire population of studies to be considered; we do not want to generalise to other studies not included (e.g., future studies).
  • All of the variability between effect sizes is due to sampling error alone. Thus, the effect sizes are weighted only by the within-study variance (see the sketch after this list).
  • Effect sizes are independent.
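
A minimal sketch of the resulting fixed effects summary (inverse-variance weighting, plus the Q homogeneity statistic), using hypothetical effect sizes:

  # Fixed effects summary: inverse-variance weighted mean and Q statistic.
  import math

  effect_sizes = [0.30, 0.45, 0.12, 0.52]   # e.g., values of d (hypothetical)
  variances    = [0.04, 0.09, 0.02, 0.06]   # within-study (sampling) variances

  weights = [1 / v for v in variances]
  mean_es = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
  se_mean = math.sqrt(1 / sum(weights))

  # Q: homogeneity test statistic, chi-square with k - 1 df.
  q = sum(w * (es - mean_es) ** 2 for w, es in zip(weights, effect_sizes))

  print(f"summary effect = {mean_es:.3f} (SE {se_mean:.3f}), Q = {q:.2f}")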

34
Conducting fixed effects meta-analysis
  • There are two general ways of conducting a fixed effects meta-analysis: the ANOVA analogue and multiple regression
  • The analogue to the ANOVA homogeneity analysis is
    appropriate for categorical variables
  • Looks for systematic differences between groups
    of responses within a variable
  • Multiple regression homogeneity analysis is more
    appropriate for continuous variables and/or when
    there are multiple variables to be analysed
  • Tests the ability of groups within each variable
    to predict the effect size
  • Can include categorical variables in multiple regression as dummy variables (ANOVA is a special case of multiple regression; see the sketch below)
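
A sketch of the regression form with a dummy-coded moderator, using statsmodels WLS on hypothetical data (note that in the meta-analytic known-variance case the WLS standard errors would need rescaling; this sketch shows only the point estimates):

  # Fixed effects moderator analysis as inverse-variance weighted regression.
  import numpy as np
  import statsmodels.api as sm

  es  = np.array([0.30, 0.45, 0.12, 0.52, 0.20])   # hypothetical effect sizes
  var = np.array([0.04, 0.09, 0.02, 0.06, 0.05])   # within-study variances
  published = np.array([1, 1, 0, 1, 0])            # dummy-coded moderator

  X = sm.add_constant(published)
  fit = sm.WLS(es, X, weights=1 / var).fit()
  print(fit.params)   # intercept and moderator effect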

35
Random effects assumptions
  • Only a sample of studies from the entire population of studies is considered; we want to generalise to other studies not included (including future studies).
  • Variability between effect sizes is due to
    sampling error plus variability in the population
    of effects.
  • Effect sizes are independent.

36
Random effects models
  • Variations in sampling schemes can introduce heterogeneity into the result, i.e., the presence of more than one intercept in the solution
  • Heterogeneity: between-study variation in effect estimates is greater than random (sampling) variance
  • Could be due to differences in the study design, the measurement instruments used, the researcher, etc.
  • Random effects models attempt to account for between-study differences

37
Random effects models
  • If the homogeneity test is rejected (it almost
    always will be), it suggests that there are
    larger differences than can be explained by
    chance variation (at the individual participant
    level). There is more than one population in
    the set of different studies.
  • The random effects model helps to determine how
    much of the between-study variation can be
    explained by study characteristics that we have
    coded.
  • The total variance associated with the effect sizes has two components: one associated with differences within each study (participant-level variation) and one with between-study variance (see the sketch below)
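
A sketch of this two-component decomposition via the DerSimonian-Laird method-of-moments estimate of the between-study variance tau^2 (a standard choice, though the slides do not name an estimator):

  # Random effects summary with a DerSimonian-Laird tau^2 estimate.
  import math

  effect_sizes = [0.30, 0.45, 0.12, 0.52]   # hypothetical
  variances    = [0.04, 0.09, 0.02, 0.06]
  k = len(effect_sizes)

  w = [1 / v for v in variances]
  fixed_mean = sum(wi * es for wi, es in zip(w, effect_sizes)) / sum(w)
  q = sum(wi * (es - fixed_mean) ** 2 for wi, es in zip(w, effect_sizes))

  # tau^2: the between-study variance component (floored at zero).
  c = sum(w) - sum(wi**2 for wi in w) / sum(w)
  tau2 = max(0.0, (q - (k - 1)) / c)

  # Random effects weights include both variance components.
  w_re = [1 / (v + tau2) for v in variances]
  re_mean = sum(wi * es for wi, es in zip(w_re, effect_sizes)) / sum(w_re)
  se_re = math.sqrt(1 / sum(w_re))

  print(f"tau^2 = {tau2:.3f}, random effects mean = {re_mean:.3f} (SE {se_re:.3f})")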

38
Multilevel modelling assumptions
  • Meta-analytic data are inherently hierarchical (i.e., effect sizes nested within studies) and have random error that must be accounted for.
  • Effect sizes are not necessarily independent
  • Allows for multiple effect sizes per study

39
Multilevel model structure example
  • Level 2 (study component): publications
  • Level 1 (outcome-level component): effect sizes

40
Conducting multilevel model analyses
  • Similar to a multiple regression equation, but
    accounts for error at both the outcome (effect
    size) level and the study level
  • Start with the intercept-only model, which incorporates both the outcome-level and the study-level components (analogous to the random effects multiple regression model)
  • Expand the model to include predictor variables, to explain systematic variance between the study effect sizes (see the sketch below)
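
A rough sketch of the intercept-only two-level structure using statsmodels MixedLM on a tiny hypothetical dataset; unlike dedicated meta-analytic software, this simple fit estimates the level 1 variance rather than fixing it at the known sampling variances:

  # Intercept-only multilevel model: effect sizes (level 1) nested in studies (level 2).
  import numpy as np
  import pandas as pd
  import statsmodels.api as sm

  df = pd.DataFrame({
      "es":    [0.30, 0.35, 0.45, 0.50, 0.12, 0.20],          # hypothetical effect sizes
      "study": ["s1", "s1", "s2", "s2", "s3", "s3"],          # multiple outcomes per study
  })

  model = sm.MixedLM(df["es"], np.ones((len(df), 1)), groups=df["study"])
  print(model.fit().summary())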

41
Model selection
  • Fixed, random, or multilevel?
  • Generally, if more than one effect size per study is included in the sample, a multilevel model should be used
  • However, if there is little variation at study
    level and/or if there are no predictors included
    in the model, the results of multilevel modelling
    meta-analyses are similar to random effects
    models

42
Model selection
  • Do you wish to generalise your findings to other
    studies not in the sample?
  • Do you have multiple outcomes per study?

43
Steps in a meta-analysis
44
Supplementary analysis
  • Publication bias
  • Fail-safe N (Rosenthal, 1991); see the sketch below
  • Trim and fill procedure (Duval & Tweedie, 2000a, 2000b)
  • Sensitivity analysis
  • E.g., Vevea & Woods (2005)
  • Power analysis
  • E.g., Muncer, Craigie, & Holmes (2003)
  • Study quality
  • Quality weighting (Rosenthal, 1991)
  • Use of the kappa statistic in determining the validity of quality filtering for meta-analysis (Sands & Murphy, 1996)
  • Regression with quality as a predictor of effect size (see Valentine & Cooper, 2008)
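
A minimal sketch of the fail-safe N idea: how many unpublished null studies would be needed to bring the combined result above the .05 threshold (one-tailed), assuming per-study z scores are available (hypothetical values):

  # Rosenthal's fail-safe N from per-study z scores (hypothetical data).
  z_values = [2.1, 1.8, 2.5, 1.6]
  k = len(z_values)

  z_crit = 1.645   # one-tailed alpha = .05
  # Combined z is sum(z)/sqrt(k); solve sum(z)/sqrt(k + N) = z_crit for N.
  n_fs = (sum(z_values) ** 2) / z_crit ** 2 - k
  print(f"fail-safe N = {n_fs:.1f}")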

45
This course...
46
Steps in a meta-analysis
47
References
  • Brown, S. A., Upchurch, S. L., & Acton, G. J. (2003). A framework for developing a coding scheme for meta-analysis. Western Journal of Nursing Research, 25, 205-222.
  • Duval, S., & Tweedie, R. (2000a). A nonparametric "trim and fill" method of accounting for publication bias in meta-analysis. Journal of the American Statistical Association, 95, 89-98.
  • Duval, S., & Tweedie, R. (2000b). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56, 455-463.
  • Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage Publications.
  • Muncer, S. J., Craigie, M., & Holmes, J. (2003). Meta-analysis and power: Some suggestions for the use of power in research synthesis. Understanding Statistics, 2, 1-12.
  • Rosenthal, R. (1991). Quality-weighting of studies in meta-analytic research. Psychotherapy Research, 1, 25-28.
  • Sands, M. L., & Murphy, J. R. (1996). Use of kappa statistic in determining validity of quality filtering for meta-analysis: A case study of the health effects of electromagnetic radiation. Journal of Clinical Epidemiology, 49, 1045-1051.
  • Valentine, J. C., & Cooper, H. M. (2008). A systematic and transparent approach for assessing the methodological quality of intervention effectiveness research: The Study Design and Implementation Assessment Device (Study DIAD). Psychological Methods, 13, 130-149.
  • Vevea, J. L., & Woods, C. M. (2005). Publication bias in research synthesis: Sensitivity analysis using a priori weight functions. Psychological Methods, 10, 428-443.