Title: Systematic reviews and meta-analysis
1. Systematic reviews and meta-analysis
- Too much information, too little time
2. Rationale for Reviews
- Decisions about treatment choices should be based on reliable information
- Clinicians are overwhelmed by the amount of new information
- Reviews efficiently bring clinicians up to date
3. Rationale for Systematic Reviews
- Not all reviews are good: those done by experts are susceptible to bias and error
- Bias: something that will cause a systematic deviation from the truth
  - may appear in the collection, appraisal and summarising stages of a review
- Systematic reviews aim to minimise bias and error
4. Ways to reduce bias and error
- Have at least 2 reviewers
- Tell people what you are going to do before you do it (publish a protocol)
- Have reproducible and transparent methods:
  - Literature search
  - Data extraction
  - Data pooling
- Have a good editor!
5. Steps in a review
- Define the question: intervention, subjects, outcomes
- Locate all studies that address the question
- Sift the studies to select relevant ones
- Assess the quality of studies: include those meeting set criteria (e.g. RCTs)
- Calculate the results of each study
- Combine if appropriate
- Interpret results
6. What is The Cochrane Collaboration?
"It is surely a great criticism of our profession that we have not organised a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomized controlled trials." (Archie Cochrane)
7. What is The Cochrane Collaboration?
- An international network of individuals who voluntarily prepare, maintain and promote the accessibility of systematic reviews of the effects of health care interventions
- Organised:
  - Geographically: Cochrane Centres (12)
  - By speciality: Review groups (51)
- Produces 300 new reviews per year and updates reviews every 2-3 years
8. Meta-analysis
- What is a meta-analysis?
- When can you do a meta-analysis?
- What are the stages of a meta-analysis?
- How are the results displayed?
- How are the results interpreted?
- When should you not do a meta-analysis?
9. What is a meta-analysis?
- a way to calculate an average
- estimates an average or common effect
- statistically combines results from 2 or more separate studies
10. What is a meta-analysis?
- Optional part of a systematic review
(Diagram: meta-analyses shown as a subset of systematic reviews)
11. Why perform a meta-analysis?
- increase power
- improve precision of estimate
- quantify effect sizes and their uncertainty
- assess consistency of results
- answer questions not posed by individual studies (factors that differ across studies)
- settle controversies from conflicting studies, or generate new hypotheses
12. When can you do a meta-analysis?
- when more than one study has estimated an effect
- when there are no major differences in the study characteristics (participants/interventions/outcomes)
- when the outcome and treatment effect have been measured in a similar way
13. Steps in doing a meta-analysis
- define comparisons for your review
  - e.g. indomethacin vs control
- Review: Prophylactic intravenous indomethacin for preventing mortality and morbidity in preterm infants
14. Steps in doing a meta-analysis
- define comparisons for your review
- decide on appropriate study results (outcomes) for each comparison
  - e.g. indomethacin vs control: a. Death, b. Severe IVH, c. PDA ligation, and so on
15. Steps in doing a meta-analysis
- define comparisons for your review
- decide on appropriate study results (outcomes) for each comparison
- select an appropriate summary statistic for each comparison: this depends on the type of data you collect
18. Steps in doing a meta-analysis
- define comparisons for your review
- decide on appropriate study results (outcomes) for each comparison
- select an appropriate summary statistic for each comparison
- assess the similarity of study results within each comparison
19. Steps in doing a meta-analysis
- define comparisons for your review
- decide on appropriate study results (outcomes) for each comparison
- select an appropriate summary statistic for each comparison
- assess the similarity of study results within each comparison
- consider the reliability of the summaries
20. For example
- 8 controlled trials studying the effect of hypothermia on death rates in newborn infants with hypoxic ischemic encephalopathy (HIE)
- how can we summarise the effect of hypothermia across these trials?
21. Summary statistic for each study
- calculate a single summary statistic to represent the effect found in each study
- for binary data:
  - ratio of risks (relative risk)
  - difference in risks (risk difference)
  - ratio of odds (odds ratio)
- for continuous data:
  - difference between means
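The choice of summary statistic above can be sketched in code. A minimal example, using hypothetical counts and means (not data from the slides):

```python
# Sketch: candidate summary statistics for a single study,
# using hypothetical data (not taken from the slides).

def binary_summaries(events_t, total_t, events_c, total_c):
    """Relative risk, risk difference and odds ratio from a 2x2 table."""
    risk_t, risk_c = events_t / total_t, events_c / total_c
    odds_t = events_t / (total_t - events_t)
    odds_c = events_c / (total_c - events_c)
    return {"relative_risk": risk_t / risk_c,
            "risk_difference": risk_t - risk_c,
            "odds_ratio": odds_t / odds_c}

def mean_difference(mean_t, mean_c):
    """For continuous outcomes: difference between group means."""
    return mean_t - mean_c

# Hypothetical trial: 15/60 events with treatment vs 24/60 with control.
print(binary_summaries(15, 60, 24, 60))
# Hypothetical continuous outcome: mean temperature in each group.
print(mean_difference(36.1, 36.8))
```

With these counts the relative risk is 0.625, the risk difference -0.15 and the odds ratio 0.5, illustrating that ratio and difference measures summarise the same 2x2 table differently.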
22. For example
- 8 studies, relative risks of death (95% CI):
- 0.72 (0.17, 3.09)
- 0.18 (0.01, 3.41)
- 0.87 (0.61, 1.25)
- 0.94 (0.14, 6.24)
- 0.48 (0.06, 3.69)
- 0.74 (0.16, 3.48)
- 0.74 (0.38, 1.41)
- 0.64 (0.47, 0.98)
23. Averaging studies
- a simple average gives each study equal weight
- this seems intuitively wrong
- some studies are more likely to give an answer closer to the true effect than others
24. Weighting studies
- more weight to the studies which give us more information:
  - more participants
  - more events
  - lower variance
- weight is proportional to inverse variance
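A minimal sketch of fixed-effect inverse-variance pooling, applied to the eight relative risks listed earlier. One assumption: each standard error is recovered from the reported 95% CI on the log scale, se = (ln upper - ln lower) / (2 x 1.96).

```python
import math

# Fixed-effect inverse-variance pooling of the 8 relative risks
# (with 95% CIs) from the earlier slide. SEs are recovered from
# the CI width on the log scale (an assumption of this sketch).

studies = [  # (RR, lower, upper)
    (0.72, 0.17, 3.09), (0.18, 0.01, 3.41), (0.87, 0.61, 1.25),
    (0.94, 0.14, 6.24), (0.48, 0.06, 3.69), (0.74, 0.16, 3.48),
    (0.74, 0.38, 1.41), (0.64, 0.47, 0.98),
]

log_rr = [math.log(rr) for rr, lo, hi in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for rr, lo, hi in studies]
weights = [1 / s**2 for s in se]          # weight = inverse variance

pooled_log = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_rr = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
print(f"pooled RR = {pooled_rr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

Note how the narrow-CI studies dominate the pooled estimate, while the tiny imprecise trials contribute almost nothing: this is the weighting idea above in action.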
27. Displaying results graphically
- a forest of lines
28. There's a label to tell you what the comparison is and what the outcome of interest is.
30. The data shown in the graph are also given numerically. The label above the graph tells you what statistic has been used.
- Each study is given a blob, placed where the data measure the effect (point estimate).
- The size of the blob is proportional to the weight.
- The horizontal line is called a confidence interval and is a measure of how we think the result of this study might vary with the play of chance.
- The wider the horizontal line is, the less confident we are of the observed effect.
31. The vertical line in the middle is where the treatment and control have the same effect: there is no difference between the two.
32. At the bottom there's a horizontal line. This is the scale measuring the treatment effect. Here the outcome is death, and towards the left the scale is less than one, meaning the treatment has made death less likely. Take care to read what the labels say: things to the left do not always mean the treatment is better than the control.
33. The pooled analysis is given a diamond shape, where the widest bit in the middle is located at the calculated best guess (point estimate), and the horizontal width is the confidence interval.
Note on interpretation: if the confidence interval crosses the line of no effect, this is equivalent to saying that we have found no statistically significant difference between the effects of the two interventions.
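The forest-plot anatomy described above can be sketched with matplotlib: one blob and CI line per study, a vertical line of no effect at RR = 1, and a pooled estimate at the bottom. Three of the trials from the earlier slide are used; the pooled value shown is illustrative only.

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen, no display needed
import matplotlib.pyplot as plt

studies = ["Trial A", "Trial B", "Trial C"]
rr = [0.72, 0.87, 0.64]          # point estimates (blobs)
lo = [0.17, 0.61, 0.47]          # lower 95% CI limits
hi = [3.09, 1.25, 0.98]          # upper 95% CI limits
pooled, p_lo, p_hi = 0.74, 0.59, 0.93   # illustrative pooled result

fig, ax = plt.subplots()
y = list(range(len(studies), 0, -1))     # studies from top to bottom
err = [[r - l for r, l in zip(rr, lo)], [h - r for r, h in zip(rr, hi)]]
ax.errorbar(rr, y, xerr=err, fmt="o", color="black")   # blobs + CI lines
ax.errorbar([pooled], [0], xerr=[[pooled - p_lo], [p_hi - pooled]],
            fmt="D", color="black")       # pooled "diamond" at the bottom
ax.axvline(1.0, linestyle="--")           # vertical line of no effect
ax.set_xscale("log")                      # ratio measures go on a log scale
ax.set_xlabel("Relative risk (log scale)")
ax.set_yticks(y + [0])
ax.set_yticklabels(studies + ["Pooled"])
fig.savefig("forest.png")
```

Plotting the ratio scale logarithmically keeps "halved risk" and "doubled risk" the same distance from the line of no effect.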
36. Why is it wrong to simply add studies together?
- because it can give the wrong answer
- imbalances within trials introduce bias
- breaks the power of randomisation
- tends to overestimate significance, as it underestimates variance (ignores the difference between samples)
- overweights large studies (think of power calculations)
- can't investigate variation between studies
37. Interpretation
- consistency of result: how similar are the results?
  - informal assessment by inspection
  - formal assessment by test
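A minimal sketch of the usual formal heterogeneity checks, Cochran's Q and the I^2 statistic, for a set of study effects (log relative risks) and their standard errors. The numbers are illustrative, not from the slides.

```python
# Cochran's Q and I^2 for assessing consistency of study results.
# Effects are on the log-RR scale; numbers are hypothetical.

def heterogeneity(effects, ses):
    weights = [1 / s**2 for s in ses]           # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

log_rrs = [-0.33, -0.14, -0.30, -0.45]   # hypothetical log relative risks
ses = [0.74, 0.18, 0.33, 0.19]           # hypothetical standard errors
q, i2 = heterogeneity(log_rrs, ses)
print(f"Q = {q:.2f} on {len(log_rrs) - 1} df, I^2 = {i2:.0f}%")
```

Here Q is well below its degrees of freedom, so I^2 is 0%: no more variation between these results than chance alone would produce.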
39. Subgroup analyses
- where it is suspected in advance that certain features may alter the effect of an intervention, e.g.:
  - age
  - gender
  - dose
41. Sensitivity analysis
- does the result change according to small variations in the data and methods?
  - choice of treatment effects or method for pooling
  - inclusion/exclusion of dubious data
  - inclusion/exclusion of trials (e.g. by quality)
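One common form of the trial inclusion/exclusion check above is leave-one-out re-pooling: drop each trial in turn and recompute the pooled estimate. A stable result suggests no single trial is driving the conclusion. A sketch with hypothetical log relative risks and standard errors:

```python
import math

# Leave-one-out sensitivity analysis for a fixed-effect pooled estimate.
# Effects and SEs are hypothetical log relative risks.

def pool(effects, ses):
    weights = [1 / s**2 for s in ses]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def leave_one_out(effects, ses):
    results = []
    for i in range(len(effects)):
        eff = effects[:i] + effects[i + 1:]     # drop trial i
        se = ses[:i] + ses[i + 1:]
        results.append(math.exp(pool(eff, se)))  # back to the RR scale
    return results

log_rrs = [-0.33, -0.14, -0.30, -0.45]
ses = [0.74, 0.18, 0.33, 0.19]
for i, rr in enumerate(leave_one_out(log_rrs, ses)):
    print(f"without trial {i + 1}: pooled RR = {rr:.2f}")
```

If one of these re-pooled values differed sharply from the rest, that trial's data and quality would deserve a closer look.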
43. Other issues in interpretation
- likelihood of bias
  - publication bias
  - reporting bias
- does the result make sense?
  - biological plausibility
  - applicability
44. The limitations of Systematic Review and Meta-analysis
- May remain too small
- May be slower than a single large-scale trial
45. The limitations of Systematic Review and Meta-analysis
- The review is only as good as the included studies (garbage in, garbage out)
- a narrow confidence interval around a combination of biased studies is worse than the biased studies on their own
46. The limitations of Systematic Review and Meta-analysis
- mixing apples with oranges: studies must address the same question
- a meta-analysis may be meaningless, and genuine effects may be obscured, if studies are too clinically diverse
47. The contribution of Systematic Review and Meta-analysis
- Identifies unanswered questions
  - implications for practice
  - implications for research
48. The contribution of Systematic Review and Meta-analysis
- Empowering
- Subverts authority
49. Epilogue: Important definitions
- The risk describes the number of participants having the event in a group, divided by the total number of participants
- The odds describe the number of participants having the event, divided by the number of participants not having the event
- The relative risk (risk ratio) describes the risk of the event in the intervention group divided by the risk of the event in the control group
- The odds ratio describes the odds of the event in the intervention group divided by the odds of the event in the control group
- The risk difference describes the absolute change in risk that is attributable to the experimental intervention
- The number needed to treat (NNT) gives the number of people you would have to treat with the experimental intervention (compared with the control) to prevent one event; it is the reciprocal of the risk difference.
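The risk/odds distinction and the NNT calculation above, applied to one hypothetical trial (the counts below are illustrative, not from the slides):

```python
# Sketch: risk, odds, risk difference and NNT from the definitions above,
# for a hypothetical trial (intervention 10/50 events, control 20/50).

events_i, total_i = 10, 50     # intervention group
events_c, total_c = 20, 50     # control group

risk_i = events_i / total_i                 # risk = events / total
risk_c = events_c / total_c
odds_i = events_i / (total_i - events_i)    # odds = events / non-events
odds_c = events_c / (total_c - events_c)

risk_difference = risk_i - risk_c           # absolute change in risk
nnt = 1 / abs(risk_difference)              # NNT = 1 / |risk difference|

print(f"risks: {risk_i} vs {risk_c}; odds: {odds_i} vs {odds_c:.2f}")
print(f"risk difference = {risk_difference:.1f}, NNT = {nnt:.0f}")
```

With a risk difference of -0.2, five people would need the intervention to prevent one event; note also that the odds (0.25 vs 0.67) are not the same numbers as the risks (0.2 vs 0.4).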