Title: Clinical Trials
1. Clinical Trials
Research Methodology Workshop
Baghaei AM. MD.
2. Objectives
- To define a clinical trial.
- To define common types and phases of clinical trials in biomedical research.
- To recognize the characteristics of a well-designed clinical trial (validity estimation).
- To outline ethical considerations in RCTs.
- To define community interventions.
- To introduce meta-analysis.
3. Definition of Clinical Trial
- A study in which the investigator controls the subjects and the exposure.
- In this kind of study we look for a cause-and-effect relationship between variables.
4. Clinical Trials as Experimental and Quasi-experimental Studies
5. Common Types of Clinical Trials
- Treatment trials test new treatments, new combinations of drugs, or new approaches to surgery or radiation therapy.
- Prevention trials look for better ways to prevent disease in people who have never had the disease, or to prevent a disease from returning. These approaches may include medicines, vitamins, vaccines, minerals, or lifestyle changes.
- Diagnostic trials are conducted to find better tests or procedures for diagnosing a particular disease or condition.
- Screening trials test the best way to detect certain diseases or health conditions.
- Quality of life trials (or supportive care trials) explore ways to improve comfort and quality of life for individuals with a chronic illness.
6. Common Types of Study Design
- Uncontrolled
- Controlled
- Before/after
- Historical Control
- Concurrent, not randomized
- Randomized
- Parallel
- Cross-Over
- Cross-Cross Over
- Factorial Design
- Sequential Trial Design
- ..
7. Comparing Treatments
- Fundamental principle
  - Groups must be alike in all important aspects and differ only in the intervention each group receives.
  - In practical terms, comparable treatment groups means alike on the average.
- Randomization
  - Each participant has the same chance of receiving any of the interventions under study.
  - Allocation is carried out using a chance mechanism, so that neither the participant nor the investigator knows in advance which intervention will be assigned.
- Placebo
- Blinding
  - Avoidance of conscious or subconscious influence
  - Fair evaluation of outcomes
8. Simple Randomization
Note: Randomization is a design method; it differs from random sampling.
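The idea of simple randomization can be sketched as an independent coin flip for each participant. This is a minimal illustrative sketch; the function name and arm labels are assumptions, not part of the lecture:

```python
import random

def simple_randomization(n_participants, arms=("treatment", "control"), seed=None):
    """Assign each participant independently to an arm with equal probability.

    Note: with small samples the arms can end up unequal in size,
    which is one motivation for blocked or stratified schemes.
    """
    rng = random.Random(seed)  # seed only for reproducibility in examples
    return [rng.choice(arms) for _ in range(n_participants)]

# Example: allocate 10 participants.
assignments = simple_randomization(10, seed=42)
```

Because each flip is independent, group sizes are balanced only on average, not in any single trial.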
9. Stratified Randomization
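Stratified randomization can be sketched by randomizing separately within each stratum, here using a shuffled balanced block per stratum so the arms stay comparable on the stratifying variable. The data layout and helper name below are assumptions for illustration:

```python
import random
from collections import defaultdict

def stratified_randomization(participants, arms=("treatment", "control"), seed=None):
    """Randomize within each stratum using a shuffled balanced block.

    `participants` is a list of (participant_id, stratum) pairs.
    Returns a dict mapping participant_id -> assigned arm.
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for pid, stratum in participants:
        by_stratum[stratum].append(pid)

    assignment = {}
    for stratum, pids in by_stratum.items():
        # Balanced block of arm labels for this stratum, then shuffle it.
        block = [arms[i % len(arms)] for i in range(len(pids))]
        rng.shuffle(block)
        for pid, arm in zip(pids, block):
            assignment[pid] = arm
    return assignment

# Example: 20 participants stratified by sex.
participants = [(i, "male" if i % 2 else "female") for i in range(20)]
allocation = stratified_randomization(participants, seed=1)
```

Within each stratum the two arms differ in size by at most one, so the stratifying variable cannot be confounded with treatment.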
10. Non-randomized Trials May Be Appropriate
- Early studies of new and untried therapies
- Uncontrolled early-phase studies where the standard treatment is relatively ineffective
- Investigations that cannot be done within the current climate of controversy (no clinical equipoise)
- A truly dramatic response
11. Study Population
The study population is the subset of the general population determined by the eligibility criteria. The recruitment funnel:
General population → (eligibility criteria) → Study population → (enrollment) → Study sample → Observed
12. Sample Size
- The study is an experiment in people.
- Need enough participants to answer the question.
- Should not enroll more than needed to answer the question.
- Sample size is an estimate, based on guidelines and assumptions.
- Beta error vs. alpha error
- Statistical vs. clinical difference
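The interplay of alpha error, beta error (power), and the clinically relevant difference can be made concrete with the standard normal-approximation formula for comparing two proportions. This is a sketch; the example rates are hypothetical, not from the lecture:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided comparison
    of two proportions, using the normal-approximation formula:
    n = (z_{1-alpha/2} + z_{1-beta})^2 * (p1(1-p1) + p2(1-p2)) / (p1-p2)^2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)            # critical value for power (1 - beta)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return ceil(n)

# Example (hypothetical rates): detecting 30% vs 50% success
# at two-sided alpha = 0.05 and 80% power needs roughly 90 per group.
n_per_group = sample_size_two_proportions(0.30, 0.50)
```

Raising the required power, or shrinking the difference considered clinically important, both increase the required sample size.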
13. MCID: Minimal Clinically Important Difference
The minimal clinically important difference (MCID) of a therapy is defined as the smallest treatment effect that would result in a change in patient management, given its side effects, costs, and inconveniences (the delta value).
14. Statistical Significance vs. Clinical Importance
Example: Investigators turned their attention to otitis media and found, in a study of 40,000 children, that imipenem was superior to amoxicillin (cure rate 96.1% vs. 95.2%, p = 0.002). This difference is statistically significant but clinically unimportant. Contrast this with the findings of the asthma study (n = 30), where success rates were 27% for Atrovent and 47% for salbutamol (p = 0.55). The results are clinically important but statistically non-significant.
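The contrast above can be reproduced approximately with a pooled two-proportion z-test. The group sizes and event counts below are assumptions (the slide reports only totals and rates, and does not say which test was used), so the computed p values will not match the quoted ones exactly; the qualitative conclusion is the same:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions (pooled variance).

    Returns (difference in proportions, two-sided p value).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1 - p2, p_value

# Large trial (assumed 20,000 per arm): a 0.9-point difference
# in cure rate is statistically significant but clinically trivial.
diff_big, p_big = two_proportion_z_test(19220, 20000, 19040, 20000)

# Small trial (assumed 15 per arm, ~47% vs ~27% success): a large
# difference that does not reach statistical significance.
diff_small, p_small = two_proportion_z_test(7, 15, 4, 15)
```

With huge samples, even a tiny difference yields p < 0.05; with 15 per arm, a 20-point difference does not.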
When interpreting the outcome of any study,
physicians should consider both the reported
effect size (best estimate of truth) and the p
value (likelihood that this difference arose by
chance). If the effect size was clinically
important, but the p value insignificant, the
sample size was probably too small (underpowered
study). If the effect size was clinically
unimportant but the p value significant, the
sample size was probably too large (overpowered
study).
Physicians should be aware that pharmaceutical
manufacturers may be tempted to design
overpowered studies, in the hopes of finding
small but statistically significant differences
that will increase use of their product. These
differences may be clinically unimportant, and
one of our great challenges as physicians is to
establish consensus about what are clinically
important outcome differences for various disease
states.
15. Regular Follow-up
- Routine Procedures (report forms)
- Interviews
- Examinations
- Laboratory Tests
- Adverse Event Detection/Reporting
- Quality Assurance
16. What Must Be Defined in an RCT
- The baseline variables of the subjects
- The eligibility criteria, in detail
- The interventions, in detail
- The outcome of the study
- The conditions under which the study will be terminated
- The manner of follow-up
17. Phases of Clinical Trials
- In Phase I trials, researchers test a new drug or treatment in a small group of people (20-80) for the first time to evaluate its safety, determine a safe dosage range, and identify side effects.
- In Phase II trials, the study drug or treatment is given to a larger group of people (100-300) to see if it is effective and to further evaluate its safety.
- In Phase III trials, the study drug or treatment is given to large groups of people (1,000-3,000) to confirm its effectiveness, monitor side effects, compare it to commonly used treatments, and collect information that will allow the drug or treatment to be used safely.
- In Phase IV trials, post-marketing studies delineate additional information, including the drug's risks, benefits, and optimal use.
18. Components of Internal and External Validity of Controlled Clinical Trials
Internal validity: the extent to which systematic error (bias) is minimized in clinical trials.
- Selection bias: biased allocation to comparison groups
- Performance bias: unequal provision of care apart from the treatment under evaluation
- Detection bias: biased assessment of outcome
- Attrition bias: biased occurrence and handling of deviations from protocol and loss to follow-up
19. Components of Internal and External Validity of Controlled Clinical Trials (continued)
External validity: the extent to which the results of trials provide a correct basis for generalization to other circumstances.
- Patients: age, sex, severity of disease and risk factors, comorbidity
- Treatment regimens: dosage and route of administration, type of treatment within a class of treatments, concomitant treatments
- Settings: level of care (primary to tertiary) and experience and specialization of the care provider
- Modalities of outcomes: type or definition of outcomes and duration of follow-up assessed
20. Community Intervention Trials (Reference vs. Intervention Communities); Ethical Issues in RCTs