Title: Choice of study design: randomized and non-randomized approaches
1. Choice of study design: randomized and non-randomized approaches
PAHO/PAHEF WORKSHOP: EDUCATION FOR CHILDHOOD OBESITY PREVENTION: A LIFE-COURSE APPROACH
Aruba, June 2012
- Iná S. Santos
- Federal University of Pelotas
- Brazil
2. Outline of the presentation
- Introduction
- Types of evidence
- Internal and external validity
- Randomized controlled trials
- Non-randomized designs
- Victora et al. Evidence-based public health: moving beyond randomized trials. Am J Public Health 2004;94(3):400-405
- Habicht JP et al. Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact. Int J Epidemiol 1999;28:10-18
3. Part I
- Introduction
- Types of evidence
- Internal and external validity
4. Types of epidemiological evidence for Public Health

Type of evidence                 Type of epidemiological study
Frequency of disease             Descriptive
Frequency of exposure            Descriptive
Exposure/disease relationship    Experimental (or observational)
Coverage of intervention         Descriptive
Efficacy of intervention         Experimental (or observational)
Programme effectiveness          Observational
5. Validity: internal and external
Nested populations, from broadest to narrowest:
- External population
- Target population
- Actual population
- Sample
6. Validity
- Internal validity
  - Are the study results true for the target population?
  - Are there errors that affect the study findings?
    - Systematic error (bias, confounding)
    - Random error (precision)
- External validity
  - Generalizability
  - Are the study results applicable to other settings?
7. Validity
- Internal validity
  - May be judged on the basis of the study methods
- External validity
  - Requires a value judgment
8. Part II
- Randomized controlled trials (RCTs)
9. Internal validity in probability studies

Comparability of    Probability study (RCT)    Bias avoided
Populations         Randomization              Selection bias
Observations        Blinding                   Information bias
Effects             Use of placebo             Hawthorne effect, placebo effect

RCTs are the gold standard for internal validity.
10. RCT (from the Cochrane Collaboration)
- In an RCT, participants are assigned by chance to receive either an experimental or a control treatment.
- When an RCT is done properly, the effect of a treatment can be studied in groups of people who are the same at the outset and treated in the same way, except for the intervention being studied.
- Any differences then seen between the groups at the end of the trial can be attributed to the difference in treatment alone, and not to bias or chance.
11. Randomised controlled trials
- Prioritise internal validity
  - random allocation reduces selection bias and confounding (see the sketch below)
  - blinding reduces information bias
- Gained popularity through clinical trials of new drugs
- Essential for determining the efficacy of new biological agents
- Adequate for short causal chains
  - biological effects of drugs, vaccines, nutritional supplements, etc.

drug → pharmacological reaction → disease cure or alleviation
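To make "random allocation" concrete, the sketch below (not part of the original slides) generates a permuted-block allocation sequence so that chance alone, rather than the investigator, decides who receives the experimental or the control treatment. The arm labels, block size, and seed are illustrative assumptions.

```python
import random

def blocked_randomization(n_participants, block_size=4,
                          arms=("intervention", "control"), seed=2012):
    """Allocation sequence in permuted blocks, keeping the arms balanced
    throughout recruitment (illustrative sketch, not a trial-ready tool)."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)   # assumes block_size is a multiple of the number of arms
    sequence = []
    while len(sequence) < n_participants:
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)              # chance, not the investigator, decides each allocation
        sequence.extend(block)
    return sequence[:n_participants]

print(blocked_randomization(10))
```

Blocking simply keeps the arms balanced as recruitment proceeds; simple (unblocked) randomization would also be a valid allocation scheme.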
12. Pooling data from RCTs
- Systematic review
  - comprehensive search for all high-quality scientific studies on a specific subject
  - e.g. on the effects of a drug, vaccine, surgical technique, behavioural intervention, etc.
- Meta-analysis
  - groups data from different studies to determine an average effect (see the pooling sketch below)
  - improves the precision of the available estimates by including a greater number of people
  - but data from different studies cannot always be combined
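As a minimal illustration of what "determining an average effect" can mean, the sketch below pools hypothetical study results with fixed-effect (inverse-variance) weighting. The effect estimates and standard errors are invented for the example, and a real meta-analysis would also examine heterogeneity before deciding whether the studies can be combined.

```python
import math

def inverse_variance_pool(estimates, standard_errors):
    """Fixed-effect (inverse-variance) pooling of study effect estimates.
    Returns the pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical log risk ratios and standard errors from three trials
log_rr = [-0.22, -0.10, -0.35]
se = [0.10, 0.08, 0.15]
pooled, pooled_se = inverse_variance_pool(log_rr, se)
# The pooled standard error is smaller than any single study's, i.e. precision improves
print(f"pooled log RR = {pooled:.3f}, SE = {pooled_se:.3f}")
```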
13. What does an RCT show?
- The probability that the observed result is due to the intervention
- But additional evidence is required to make this result conceptually plausible
  - Biological plausibility
  - Operational plausibility
14. Special issues in RCTs
- Intent-to-treat analyses
  - individuals/groups should remain in the group to which they were originally assigned
- Units of analysis
  - it is incorrect to allocate by group (e.g. health centres, communities) and then analyse the data at the individual level as if individuals had been randomized
  - this has implications for sample size calculation and for the analysis methods (see the sketch below)
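The sample-size consequence of allocating by group is usually expressed through the design effect, 1 + (m − 1) × ICC, where m is the average cluster size and ICC the intraclass correlation. The sketch below uses invented numbers (not from any trial cited here) to show how much a cluster-allocated trial has to inflate the sample size that individual randomization would require.

```python
def design_effect(cluster_size, icc):
    """Variance inflation from allocating clusters (e.g. health centres)
    instead of individuals."""
    return 1 + (cluster_size - 1) * icc

def cluster_sample_size(n_individual, cluster_size, icc):
    """Sample size per arm needed under cluster allocation, starting from
    the size an individually randomized trial would need."""
    return n_individual * design_effect(cluster_size, icc)

# Illustrative assumptions: 300 children per arm with individual randomization,
# 20 children per health centre, intraclass correlation of 0.05
print(design_effect(20, 0.05))             # 1.95
print(cluster_sample_size(300, 20, 0.05))  # 585.0 children per arm
```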
15. CONSORT Statement
- Allocation
- Rationale
- Eligibility
- Interventions
- Objectives
- Outcomes
- Sample size
- Randomization
- Sequence generation
- Concealment
- Implementation
- Blinding (masking)
- Statistical methods
- Participant flow
- Recruitment
- Baseline data
- Numbers analyzed
- Outcomes and estimation
- Ancillary analyses
- Adverse events
- Interpretation
- Generalizability
- Overall evidence
16. Major steps in Public Health trials
- Central-level provision of the intervention to local outlets (e.g. health facilities)
- Local providers' compliance with delivery of the intervention
- Recipients' compliance with the intervention
- Biological effect of the intervention

Source: Victora, Habicht, Bryce, AJPH 2004
17. Example of a Public Health intervention: Nutrition Counselling Trial
Causal chain from programme to impact:
- National programme is implemented
- Health workers are trained
- HW knowledge increases
- HW performance improves
- Maternal knowledge increases
- Child diets change
- Energy intake increases
- Nutritional status improves
Source: Santos, Victora et al. J Nutr 2001
18. Example of a Public Health intervention: Nutrition Counselling Trial (continued)
(The original slide repeats the causal chain above, with the numerical results observed in the trial attached to the steps of the figure.)
Source: Santos, Victora et al. J Nutr 2001
19. Are RCT findings generalizable to routine programmes?
- The dose of the intervention may be smaller
  - behavioural effect modification
    - provider behaviour
    - recipient behaviour
- The dose-response relationship may be different
  - biological effect modification

The longer the causal chain, the more likely effect modification becomes.

Source: Victora, Habicht, Bryce, AJPH 2004
20. Curvilinear associations
Figure: a curvilinear dose-response curve, annotated "Trials often done here" on one segment and "Results often applied here" on another; because the slope differs between the two segments, the effect measured in the trial need not hold where the results are applied.
Source: Victora, Habicht, Bryce, AJPH 2004
21. Why do RCTs have a limited role in large-scale effectiveness evaluations?
- Often impossible to randomize
  - unethical, politically unacceptable, rapid scaling up
- The evaluation team affects service delivery
  - service delivery becomes at least best-practice
- Effect modification is the rule
  - are meta-analyses of complex programmes meaningful?
  - need for local data
- Need for supplementary approaches for evaluations in Public Health
22. Part III
- Non-randomized designs (quasi-experiments)
23. Types of inference in impact evaluations
- Adequacy (descriptive studies)
  - the expected changes are taking place
- Plausibility (observational studies)
  - the observed changes seem to be due to the programme
- Probability (RCTs)
  - a randomised trial shows that the programme has a statistically significant impact

Source: Habicht, Victora, Vaughan, IJE 1999
24. Ensuring internal validity in probability and plausibility studies

Comparability of populations
- Probability (RCT): randomization
- Plausibility (quasi-experiment): matching; understanding the determinants of implementation; handling contextual factors
Comparability of observations
- Probability (RCT): blinding
- Plausibility (quasi-experiment): avoiding information bias
Comparability of effects
- Probability (RCT): use of placebo
- Plausibility (quasi-experiment): being aware of the Hawthorne effect and of the placebo effect
25. Adequacy evaluations
- Questions
  - Were the initial goals achieved?
    - e.g. reduce under-five mortality by 20% (see the sketch below)
  - Were the observed trends in impact indicators
    - in the expected direction?
    - of adequate magnitude?
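As a numerical illustration of the adequacy question, the sketch below checks whether an indicator fell by at least a stated target; the under-five mortality figures and the 20% threshold are invented for the example, not data from any programme.

```python
def relative_reduction(baseline, current):
    """Observed proportional decline in an impact indicator."""
    return (baseline - current) / baseline

def goal_achieved(baseline, current, target_reduction=0.20):
    """Adequacy criterion: did the indicator fall by at least the target amount?"""
    return relative_reduction(baseline, current) >= target_reduction

# Illustrative under-five mortality rates (per 1,000 live births), before and after
print(relative_reduction(50, 38))  # 0.24, i.e. a 24% decline
print(goal_achieved(50, 38))       # True: the 20% goal was reached
```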
26. Plausibility evaluations
- Question
  - Is the observed impact likely to be due to the intervention?
- Requires ruling out the influence of external factors
  - need for a comparison group
  - adjustment for confounders
- Also known as quasi-experiments
27. Adequacy/plausibility designs (1)
- Design: cross-sectional
- Measurement points: once
- Outcome: difference or ratio
- Control group
  - individuals who did not receive the intervention
  - groups/areas without the intervention
- Dose-response analyses, if possible
28. ORT and diarrhea deaths in Brazil
Figure: scatter plot across Brazilian states; each dot = 1 state. Spearman r = -0.61 (p = 0.04).
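For readers who want to see how such an ecological dose-response correlation is computed, here is a minimal sketch with invented state-level values (not the Brazilian data behind the figure); it assumes scipy is available.

```python
from scipy.stats import spearmanr

# Invented state-level values: ORT coverage (%) and diarrhea deaths per 100,000 children
ort_coverage = [20, 35, 40, 48, 55, 60, 68, 72, 80, 85]
diarrhea_deaths = [120, 95, 110, 80, 70, 85, 50, 60, 40, 35]

rho, p_value = spearmanr(ort_coverage, diarrhea_deaths)
# A negative r suggests that states with higher coverage had fewer deaths
print(f"Spearman r = {rho:.2f}, p = {p_value:.3f}")
```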
29. Adequacy/plausibility designs (2)
- Design: longitudinal (before-and-after)
- Measurement points: twice or more
- Outcome: change
- Control group
  - the same or similar individuals, before the intervention
  - the same groups/areas, before the intervention
- Time-trend analyses, if possible
30. Hib vaccine in Uruguay
In Uruguay, reported Hib cases declined by over 95 percent after the introduction of routine infant Hib immunisation in 1994.
Source: PAHO, 2004
31. Adequacy/plausibility designs (3)
- Design: longitudinal-control
- Measurement points: twice or more
- Outcome: relative change (the change in the intervention group compared with the change in the control group; see the sketch below)
- Control group
  - the same or similar individuals before the intervention, plus individuals not receiving the intervention, measured at the same time points
  - the same groups/areas before the intervention, plus groups/areas without the intervention
- Time-trend analyses, if possible
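A minimal sketch of the "relative change" estimated in a longitudinal-control design, computed as a difference in before-and-after changes (a difference-in-differences); the stunting prevalences used here are invented, not results from any study cited in the slides.

```python
def change(before, after):
    """Before-and-after change in an impact indicator."""
    return after - before

def relative_change(intervention_before, intervention_after,
                    control_before, control_after):
    """Change in the intervention group beyond the change in the control group
    (difference-in-differences)."""
    return (change(intervention_before, intervention_after)
            - change(control_before, control_after))

# Illustrative stunting prevalences (%): intervention area 40 -> 30, control area 38 -> 35
print(relative_change(40, 30, 38, 35))  # -7.0 percentage points beyond the control-area change
```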
32. Adequacy/plausibility designs (4)
- Design: case-control
- Measurement points: once
- Comparison: exposure to the intervention (an odds-ratio sketch follows below)
- Groups
  - cases: individuals with the disease of interest
  - controls: a sample of the population from which the cases originated
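In a case-control evaluation, the comparison of exposure to the intervention is typically summarized as an odds ratio; the sketch below uses an invented 2x2 table, where an odds ratio below 1 would suggest a protective intervention.

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds of exposure to the intervention among cases divided by the odds among controls."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Invented 2x2 table: 'exposed' = received the intervention
print(round(odds_ratio(30, 70, 55, 45), 2))  # 0.35: cases were less often exposed than controls
```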
33. Stunting in Tanzania
Figure: stunting prevalence among children aged 24-59 months; p (mean HAZ): 0.05.
Source: Schellenberg J et al.
34. TREND Statement
- Transparent Reporting of Evaluations with Nonrandomized Designs (TREND)
- Similar to the CONSORT guidelines
- Includes
  - conceptual frameworks used
  - intervention and comparison conditions
  - research design
  - methods of adjusting for possible biases
- AJPH, March 2004
Source: Des Jarlais, Lyles, Crepaz and the TREND Group, AJPH 2004
35. Conclusions (1)
- RCTs are essential for
  - clinical studies
  - community studies establishing the efficacy of relatively simple interventions
- RCTs require additional evidence from non-randomised studies to increase their external validity
36. Conclusions (2)
- Given the complexity of many Public Health interventions, adequacy and plausibility studies are essential in different populations
  - even for interventions whose efficacy has been proven by RCTs
- Adequacy evaluations should become part of the routine of decision-makers
  - and plausibility evaluations too, when possible