Title: Module 2: Appraisal, Extraction and Pooling of Quantitative Data
1. Module 2: Appraisal, Extraction and Pooling of Quantitative Data
2. Systematic Review of Evidence of Effectiveness
3. The Concept of Effectiveness
- Relating cause and effect
- Establishing that the effect is attributable to an intervention
- The gold standard is therefore the Randomised Controlled Trial
- Lower levels of evidence are generated through other quantitative approaches: cohort, interrupted time series and case control studies
4. Meta Analysis of Statistics Assessment and Review Instrument (MAStARI)
- A refinement and expansion of RevMan
5. Critical Appraisal
6. Why Critically Appraise?
- Combining the results of poor quality research may lead to biased or misleading estimates of effectiveness
7. The Aims of Critical Appraisal - Effectiveness
- To minimise bias (i.e. to establish validity)
8. Important Concepts
- Validity
- Precision
- False positive conclusions
9. Sources of Bias
- Selection
- Performance
- Detection
- Attrition
10. Selection Bias
- Systematic differences between participant characteristics at the start of a trial
- Systematic differences occur during allocation to groups
- Can be avoided by truly random assignment and by concealing the allocation sequence from investigators and/or participants (see the sketch below)
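In practice the allocation sequence is generated in advance, so that it can be held by a third party and concealed from recruiters. A minimal sketch, assuming a two-arm trial allocated in permuted blocks of four; the block size, arm labels and seed are illustrative assumptions, not part of any particular trial protocol:

```python
import random

def block_randomise(n_participants, block_size=4, seed=42):
    """Generate a permuted-block allocation sequence for two arms (A/B).

    Producing the whole sequence before recruitment starts is what makes
    allocation concealment possible.
    """
    rng = random.Random(seed)  # fixed seed here only for reproducibility
    sequence = []
    while len(sequence) < n_participants:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)  # each block stays balanced; order is random
        sequence.extend(block)
    return sequence[:n_participants]

print(block_randomise(10))  # e.g. ['A', 'B', 'B', 'A', ...]
```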
12. Performance Bias
- Systematic differences in the intervention of interest, or the influence of concurrent interventions
- Systematic differences occur during the intervention phase of a trial
- Can be avoided by blinding of investigators and/or participants to group
14. Detection Bias
- Systematic differences in how the outcome is assessed between groups
- Systematic differences occur at measurement points during the trial
- Can be avoided by blinding of the outcome assessor
16. Attrition Bias
- Systematic differences in loss to follow up between groups
- Systematic differences occur at measurement points during the trial
- Can be avoided by
  - accurate reporting of losses
  - use of intention-to-treat (ITT) analysis (see the sketch below)
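ITT analysis keeps every randomised participant in the group they were allocated to, so dropout cannot unbalance the comparison. A toy contrast of ITT versus per-protocol event rates; every number below is invented for illustration, and counting dropouts as non-events is one common, explicitly stated assumption rather than the only ITT convention:

```python
# Hypothetical trial arm: 100 randomised, 20 lost to follow-up,
# 30 of the 80 completers experienced the outcome event.
randomised = 100
completers = 80
events_in_completers = 30

# Per-protocol: analyse completers only (vulnerable to attrition bias).
per_protocol_rate = events_in_completers / completers   # 0.375

# Conservative ITT: analyse as randomised, here treating the 20
# dropouts as non-events (assumption stated above).
itt_rate = events_in_completers / randomised            # 0.300

print(f"Per-protocol: {per_protocol_rate:.3f}, ITT: {itt_rate:.3f}")
```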
18. Ranking the Quality of Evidence of Effectiveness
- To what extent does the study design minimise bias and demonstrate validity?
- Quality is generally linked to the actual study design when ranking evidence of effectiveness
- Thus, a hierarchy of evidence is most often used, with levels of quality equated with specific study designs
19. Hierarchy of Evidence - Effectiveness: EXAMPLE 1
- Grade I - systematic reviews of all relevant RCTs
- Grade II - at least one properly designed RCT
- Grade III-1 - controlled trials without randomisation
- Grade III-2 - cohort or case control studies
- Grade III-3 - multiple time series, or dramatic results from uncontrolled studies
- Grade IV - opinions of respected authorities, descriptive studies
(NHMRC 1995)
20. Hierarchy of Evidence - Effectiveness: EXAMPLE 2
- Grade I - systematic review of all relevant RCTs
- Grade II - at least one properly designed RCT
- Grade III-1 - well designed pseudo-randomised controlled trials
- Grade III-2 - cohort studies, case control studies, interrupted time series with a control group
- Grade III-3 - comparative studies with historical control, two or more single-arm studies, or interrupted time series without a control group
- Grade IV - case series
(NHMRC 2001)
21. JBI Levels of Evidence - Effectiveness
22. The Critical Appraisal Process
- Every review must set out to use an explicit appraisal process. Essentially:
  - a good understanding of research design is required of appraisers, and
  - the use of an agreed checklist is usual
23. RCT/Pseudo-Randomised Trial Checklist
- Was the assignment to treatment groups truly random?
- Were participants blinded to treatment allocation?
- Was allocation to treatment groups concealed from the allocator?
- Were the outcomes of people who withdrew described and included in the analysis?
- Were those assessing outcomes blind to the treatment allocation?
24. RCT/Pseudo-Randomised Trial Checklist (cont.)
- Were the control and treatment groups comparable at entry?
- Were groups treated identically other than for the named interventions?
- Were outcomes measured in the same way for all groups?
- Were outcomes measured in a reliable way?
- Was an appropriate statistical analysis used?
25. Other Experimental Study Types
- Criteria to appraise other experimental study types vary.
- There are JBI appraisal checklists available for:
  - descriptive studies
  - case series studies
  - comparable cohort studies
  - case control studies
26. Assessing Studies
27. Group Work 2.1
- Critical Appraisal of Evidence of Effectiveness
- Reporting Back
28. Quantitative Data Extraction and Meta-Analysis
29. Quantitative Data Extraction
- The data for the systematic review are the results from the individual studies.
- Difficulties related to the extraction of data include:
  - different populations used
  - different outcome measures
  - different scales or measures used
  - interventions administered differently
  - reliability of data extraction (i.e. between reviewers)
30. Quantitative Data Extraction
- Strategies to minimise the risk of error when extracting data from studies include:
  - utilise a data extraction form (developed specifically for each review in Cochrane reviews, but standardised across reviews by JBI)
  - pilot test the extraction form prior to commencement of the review
  - train and assess data extractors
  - have two people extract data from each study
  - blind extraction before conferring (see the agreement sketch below)
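Independent double extraction lets the team quantify extraction reliability before the reviewers confer. A minimal Cohen's kappa sketch (chance-corrected agreement); the item codings below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two data extractors."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance from each rater's label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two reviewers' independent codings of the same items (hypothetical).
a = ["yes", "yes", "no", "unclear", "yes", "no"]
b = ["yes", "no",  "no", "unclear", "yes", "no"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # ~0.74 here
```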
31. MAStARI Comparative Data Extraction Format
32. Quantitative Data Extraction Form (cont.)
33. Data Analysis/Meta-Analysis
34. General Analysis - What Can be Reported and How
- What interventions/activities have been evaluated
- The effectiveness/appropriateness/feasibility of the intervention/activity
- What interventions may be effective
- Contradictory findings and conflicts
- Limitations of study methods
- Issues related to study quality
- The use of inappropriate definitions
- Specific populations excluded from studies
- Future research needs
35. Meta-Analysis
36. When is meta-analysis useful?
- If studies report different treatment effects.
- If studies are too small (insufficient power) to detect meaningful effects.
- Single studies rarely, if ever, provide definitive conclusions regarding the effectiveness of an intervention (see the power sketch below).
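A standard sample-size calculation shows why single small trials are rarely definitive. This sketch uses the usual normal-approximation formula for comparing two proportions; the event rates 30% vs 20% are illustrative assumptions:

```python
import math

def n_per_arm(p1, p2):
    """Approximate participants per arm to detect p1 vs p2
    (normal approximation, two-sided alpha = 0.05, 80% power)."""
    z_alpha, z_beta = 1.96, 0.84          # standard normal quantiles
    p_bar = (p1 + p2) / 2
    term = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return math.ceil(term ** 2 / (p1 - p2) ** 2)

# Detecting a fall in event rate from 30% to 20% needs ~293 per arm,
# larger than many single trials; pooling studies recovers this power.
print(n_per_arm(0.30, 0.20))   # -> 293
```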
37. When Meta-Analysis Can be Used
- Meta-analysis can be used if studies:
  - have the same population
  - use the same intervention, administered in the same way
  - measure the same outcomes
- Homogeneity: studies are sufficiently similar to estimate an average effect (a statistical check is sketched below).
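"Sufficiently similar" is commonly checked with Cochran's Q (weighted squared deviations of each study's effect from the pooled effect) and the I² statistic (the share of variation beyond chance). A stdlib-only sketch; the per-study log odds ratios and variances are invented:

```python
# Hypothetical per-study effect estimates (log odds ratios) and variances.
effects   = [0.8, 0.1, -0.3, 0.6]
variances = [0.04, 0.09, 0.06, 0.05]

weights = [1 / v for v in variances]                 # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled effect.
Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100                    # % non-chance variation

print(f"pooled = {pooled:.3f}, Q = {Q:.2f}, I^2 = {I2:.1f}%")  # I^2 ~ 78%
```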
38. Calculating an Average
- Odds Ratio
  - for dichotomous data, e.g. the outcome is either present or not
  - e.g. odds of 51/49 = 1.04
  - (no difference between groups: OR = 1)
- Weighted Mean Difference
  - for continuous data, such as weight
  - (no difference between groups: WMD = 0)
- Confidence Interval
  - the range in which the real result lies, with the given degree of certainty (a worked example follows)
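The slide's 51/49 is the odds within one group; a full odds ratio compares the odds across two groups. A worked sketch with an invented 2x2 table, using the standard Woolf method for the 95% confidence interval (z = 1.96):

```python
import math

# Hypothetical 2x2 table: events/non-events in treatment and control.
a, b = 51, 49    # treatment: events, non-events
c, d = 49, 51    # control:   events, non-events

odds_ratio = (a * d) / (b * c)                 # OR = 1 means no difference

# 95% CI on the log-odds scale (Woolf method).
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f} to {hi:.2f}")

# Mean difference for continuous data (e.g. weight): 0 means no difference.
mean_treat, mean_ctrl = 71.2, 73.5             # invented group means (kg)
print(f"MD = {mean_treat - mean_ctrl:.1f} kg")
```

The confidence interval crossing 1 here is exactly what the forest plot makes visible: the trial is compatible with no effect.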
39. The Metaview Graph
40. Heterogeneity
41. Insufficient Power
42. Using RevMan to Conduct Meta-Analysis
- Note that this function is part of MAStARI (the underlying pooling step is sketched below)
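Behind the software, the basic fixed-effect pooling step is a weighted average of study effects, each weighted by the inverse of its variance. A minimal sketch of this inverse-variance method; the log odds ratios and variances are invented stand-ins for values a tool like RevMan or MAStARI derives from each study's 2x2 table:

```python
import math

# Per-study (log odds ratio, variance) pairs - hypothetical inputs.
studies = [(-0.42, 0.08), (-0.18, 0.05), (-0.33, 0.12)]

weights = [1 / var for _, var in studies]
pooled_log_or = sum(w * e for w, (e, _) in zip(weights, studies)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))   # pooled SE shrinks as studies add up

or_pooled = math.exp(pooled_log_or)
ci = (math.exp(pooled_log_or - 1.96 * se_pooled),
      math.exp(pooled_log_or + 1.96 * se_pooled))
print(f"Pooled OR = {or_pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

Note that the pooled confidence interval is narrower than any single study's: this is the gain in power that motivates meta-analysis.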
67. Meta Analysis of Statistics Assessment and Review Instrument (MAStARI)
- A refinement and expansion of RevMan
86. Group Work 2.2
- Quantitative Data Extraction
- Reporting Back
87. Group Work 2.3
- RevMan Trial and Meta-Analysis
- MAStARI Trial and Meta-Analysis
88. Group Work 2.4: Further Protocol Development