1
An Update on Statistical Issues Associated with
the International Harmonization of Technical
Standards for Clinical Trials (ICH)
  • Robert O'Neill, Ph.D.
  • Director, Office of Biostatistics, CDER, FDA

22nd Spring Symposium, New Jersey Chapter of ASA,
Wed., June 6, 2001
2
Outline of talk
  • International harmonization of technical
    standards: efficacy, safety, quality
  • Statistics - where does it fit in?
  • Resources - who are the people and what are the
    processes?
  • A focus on a few ICH Guidances of interest
  • A few issues of particular statistical concern
  • The future - where do we go from here?

3
Harmonization of technical standards
  • ICH (Europe, Japan, United States)
  • Began in 1989; ICH 1 in Brussels, 1991
  • ICH continues today
  • Outside of ICH:
  • APEC - bridging study initiative, Taipei meeting
  • Canada, observers, WHO

4
Statistical Resources in the ICH regions
  • United States
  • CDER, CBER
  • Europe
  • U.K., Germany, Sweden
  • CPMP
  • Japan
  • MHW advisors, university
  • China, Taiwan, Canada, Korea

5
Web addresses for information and guidances
  • www.fda.gov/cder/guidance/index.htm
  • www.ifpma.org/ich1
  • www.emea.eu.int/

6
ICH Guidances with statistical content
  • E1 Extent of population exposure to assess
    clinical safety
  • E3 Structure and content of clinical study
    reports (CONSORT statement)
  • E4 Dose-response information to support drug
    registration
  • E5 Ethnic factors in the acceptability of
    foreign clinical data
  • E9 Statistical principles for clinical trials
  • E10 Choice of control group
  • E11 Clinical investigation of medicinal products
    in the pediatric population

7
ICH Guidances with statistical content
  • Safety
  • carcinogenicity
  • Quality
  • Stability (expiration dating) Q1A, Q1E

8
New initiatives from the European Regulators
(CPMP) - Points to Consider documents
  • On Validity and Interpretation of Meta-Analyses,
    and One Pivotal Study (Jan, 2001)
  • On Missing Data (April, 2001)
  • On Choice of delta
  • On switching between superiority and
    non-inferiority
  • On some multiplicity issues and related topics in
    clinical trials

9
Efficacy Working Party (EWP) Points to Consider
  • CPMP/EWP/1776/99 Points to Consider on Missing
    Data (released for consultation January 2001)
  • CPMP/EWP/2330/99 Points to Consider on Validity
    and Interpretation of Meta-Analyses, and One
    Pivotal Study (released for consultation October
    2000)
  • CPMP/EWP/482/99 Points to Consider on Switching
    between Superiority and Non-inferiority (adopted
    July 2000)
10
ICH E9: Statistical Principles for Clinical
Trials - Contents
  • Introduction (purpose, scope, direction)
  • Considerations for Overall Clinical Development
  • Study Design Considerations
  • Study Conduct
  • Data Analysis
  • Evaluation of safety and tolerability
  • Reporting
  • Glossary of terms

11
Study Design: A Major Focus of the Guideline
  • Prior planning
  • Protocol considerations

12
Prospective Planning
  • Design of the trial
  • Analysis of outcomes

13
Confirmatory Study vs. Exploratory Study
  • Confirmatory: a hypothesis stated in advance and
    evaluated
  • Exploratory: data-driven findings

14
Design Issues
  • Endpoints
  • Comparisons
  • Choice of study type
  • Choice of control group
  • Superiority
  • Non-inferiority
  • Equivalence
  • Sample size
  • Assumptions, sensitivity analysis

15
Choice of Study Type
  • Parallel group design
  • Cross-over design
  • Factorial design
  • Multicenter design

16
Analysis: Outcome Assessment
  • Multiple endpoints
  • Adjustments

17
Assessing Bias and Robustness of Study Results
  • Analysis sets

18
Analysis Sets
  • ITT principle
  • All randomized population
  • Full Analysis population
  • Per Protocol

19
Data Analysis Considerations
  • Prespecification of the Analysis
  • Analysis sets
  • Full analysis set
  • Per Protocol Set
  • Roles of the Different Analysis Sets
  • Missing Values and Outliers

20
Statistical Analysis Plan (SAP)
  • A more technical and detailed elaboration of the
    principal features stated in the protocol.
  • Detailed procedures for executing the statistical
    analysis of the primary and secondary variables
    and other data.
  • Should be reviewed and possibly updated during
    blind review, and finalized before breaking the
    blind.
  • Results from analyses envisaged in the protocol
    (including amendments) are regarded as
    confirmatory.
  • May be written as a separate document.

21
Analysis Sets
  • The ideal: the set of subjects whose data are to
    be included in the analysis
  • all subjects randomized into the trial
  • satisfied entry criteria
  • followed all trial procedures perfectly
  • no loss to follow-up
  • complete data records

22
Full Analysis Set
  • Used to describe the analysis set which is as
    complete as possible and as close as possible to
    the intention-to-treat principle
  • May be reasonable to eliminate, from the set of
    ALL randomized subjects, those who fail to take
    at least one dose, or those without data post
    randomization.
  • Reasons for eliminating any randomized subject
    should be justified and the analysis is not
    complete unless the potential biases arising from
    exclusions are addressed and reasonably dismissed.

23
Per Protocol Set
  • Sometimes described as "valid cases", "efficacy
    sample", or "evaluable subjects"
  • Defines a subset of the subjects in the full
    analysis set
  • May maximize the opportunity for a new treatment
    to show additional efficacy
  • May or may not be conservative
  • Bias arises when adherence to the protocol is
    related to treatment and/or outcome

24
Roles of the Different Analysis Sets
  • Advantageous to demonstrate a lack of sensitivity
    of the principal trial results to alternative
    choices of the set of subjects analyzed.
  • The full analysis set and per protocol set play
    different roles in superiority trials, and in
    equivalence or non-inferiority trials.
  • The full analysis set is the primary analysis in
    superiority trials - it avoids the optimistic
    efficacy estimate from the per protocol set,
    which excludes non-compliers. The full analysis
    set is not always conservative in an equivalence
    trial. (A sketch of how these sets are derived
    follows.)
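A minimal sketch of how these analysis sets might be derived from subject-level data, written here in Python/pandas; the column names, inclusion rules, and data are illustrative assumptions, not definitions taken from ICH E9:

# Sketch of deriving analysis sets from subject-level trial data.
# Column names and inclusion rules are illustrative assumptions.
import pandas as pd

subjects = pd.DataFrame({
    "subject_id":               [1, 2, 3, 4, 5, 6],
    "randomized":               [True, True, True, True, True, True],
    "took_any_dose":            [True, True, False, True, True, True],
    "has_post_baseline_data":   [True, True, False, True, True, True],
    "major_protocol_violation": [False, True, False, False, True, False],
})

# All-randomized set: every subject randomized into the trial.
all_randomized = subjects[subjects["randomized"]]

# Full analysis set: as close as possible to intention-to-treat; here we
# exclude only subjects with no dose taken or no post-randomization data,
# and any such exclusion would need to be justified (slide 22).
full_analysis_set = all_randomized[
    all_randomized["took_any_dose"] & all_randomized["has_post_baseline_data"]
]

# Per-protocol set: subset of the full analysis set without major protocol
# violations (slide 23); potentially biased, so it is used alongside the
# full analysis set as a sensitivity analysis (slide 24).
per_protocol_set = full_analysis_set[~full_analysis_set["major_protocol_violation"]]

print(len(all_randomized), len(full_analysis_set), len(per_protocol_set))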

25
Impact on Drug Development
  • On sponsor design and analysis of clinical trials
    used as evidence to support claims
  • On regulatory advice and evaluation of sponsor
    protocols and completed clinical trials
  • On maximizing quality and utility of clinical
    studies in later phases of drug development
  • On multidisciplinary understanding of key
    concepts and issues
  • Enhanced attention to planning and protocol
    considerations

26
Will the Guideline Help to Avoid Problem Areas
in the Future - Maybe !
  • Not a substitute for professional advice - will
    require professional understanding and
    implementation of the principles stated
  • Will not assure correct analysis and
    interpretation
  • Most of the guideline topics reflect areas where
    problems have been observed frequently in
    clinical trials in drug development

27
ICH Chemistry
  • Q1E Bracketing and Matrixing Designs for
    Stability Testing of Drug Substances and Drug
    Products
  • Considerable new work, including extensive
    simulations to evaluate the size of studies and
    the ability to detect important changes to
    expiration-date setting (incomplete blocks,
    aliasing, etc.); see the sketch below.
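For context, the statistical core of shelf-life (expiration-date) estimation in the Q1A/Q1E framework is a regression of the stability attribute on storage time, with the shelf life read off where the 95% confidence bound for the mean crosses the acceptance criterion. A minimal single-batch sketch in Python; the assay data, the 90% lower specification limit, and the one-sided bound are assumptions for illustration:

# Shelf-life sketch for one batch: regress assay (% label claim) on time and
# find where the lower one-sided 95% confidence bound for the mean response
# crosses an assumed lower specification limit of 90%.
import numpy as np
from scipy import stats

months = np.array([0, 3, 6, 9, 12, 18])
assay = np.array([100.1, 99.5, 99.0, 98.3, 97.9, 96.8])  # illustrative data
spec_lower = 90.0

n = len(months)
slope, intercept, *_ = stats.linregress(months, assay)
resid = assay - (intercept + slope * months)
s = np.sqrt(np.sum(resid**2) / (n - 2))          # residual standard deviation
t95 = stats.t.ppf(0.95, df=n - 2)

def lower_bound(t):
    # One-sided 95% lower confidence bound for the mean assay at time t.
    se = s * np.sqrt(1 / n + (t - months.mean())**2 /
                     np.sum((months - months.mean())**2))
    return intercept + slope * t - t95 * se

# Shelf life: earliest time at which the lower bound falls below the spec.
grid = np.linspace(0, 60, 6001)
below = grid[lower_bound(grid) < spec_lower]
shelf_life = below[0] if below.size else grid[-1]
print(f"Estimated shelf life: {shelf_life:.1f} months")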

28
ICH E10 Choice of Control Group and Related
Design Issues in Clinical Trials
  • Section 1.5 is very statistically oriented,
    involving issues such as
  • Assay sensitivity
  • Historical evidence of sensitivity to drug
    effects
  • Choice of a margin for a non-inferiority (don't
    show a difference) trial

29
Assay Sensitivity in Non-inferiority designs
  • Assay sensitivity is a property of a clinical
    trial defined as the ability to distinguish an
    effective treatment from a less effective or
    ineffective treatment
  • Note that this property is more than just the
    statistical power of a study to demonstrate an
    effect - it also deals with the conduct and
    circumstances of a trial

30
The presence of assay sensitivity in a
non-inferiority trial may be deduced from two
determinations
  • 1) Historical evidence of sensitivity to drug
    effects, i.e., that similarly designed trials in
    the past regularly distinguished effective
    treatments from less effective or ineffective
    treatments, and
  • 2) Appropriate trial conduct, i.e., that the
    conduct of the (current) trial did not undermine
    its ability to distinguish effective treatments
    from less effective or ineffective treatments;
    this can be fully evaluated only after the active
    control non-inferiority trial is completed.

31
Successful use of a non-inferiority trial thus
involves four critical steps
  • 1) Determining that historical evidence of
    sensitivity to drug effect exists. Without this
    determination, demonstration of efficacy from a
    showing of non-inferiority is not possible and
    should not be attempted.
  • 2) Designing a trial. Important details of the
    trial design, e.g. study population, concomitant
    therapy, endpoints, run-in periods, should adhere
    closely to the design of the placebo-controlled
    trials for which historical sensitivity to drug
    effects has been determined.

32
Successful use of a non-inferiority trial thus
involves four critical steps (cont.)
  • 3) Setting a margin. An acceptable
    non-inferiority margin should be defined, taking
    into account the historical data and relevant
    clinical and statistical considerations.
  • 4) Conducting the trial. The trial conduct should
    also adhere closely to that of the historical
    trials and should be of high quality.

33
Choosing the Non-inferiority margin
  • Prior to the trial, a non-inferiority margin,
    sometimes called a delta, is selected.
  • This margin is the degree of inferiority of the
    test treatment to the control that the trial
    will attempt to exclude statistically.
  • The margin chosen cannot be greater than the
    smallest effect size that the active drug would
    be reliably expected to have compared with
    placebo in the setting of the planned trial. The
    margin, based on both statistical reasoning and
    clinical judgement, should reflect uncertainties
    in the evidence and be suitably conservative
    (see the sketch below).
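One way this is often made operational (an illustrative convention, not something prescribed by E10) is to start from a conservative estimate of the historical active-versus-placebo effect, such as the lower 95% confidence bound of a meta-analytic estimate, and retain only a clinically chosen fraction of it. A sketch in Python; the effect estimate, its standard error, and the 50% preservation fraction are assumed values:

# Illustrative choice of a non-inferiority margin (delta) from historical data.
# The effect estimate, its standard error, and the preservation fraction are
# assumptions for the sketch, not values from ICH E10.
from scipy import stats

hist_effect = 0.12       # historical active-vs-placebo risk difference (meta-estimate)
hist_se = 0.03           # standard error of that meta-estimate
preserve_fraction = 0.5  # fraction of the control effect to preserve (clinical judgement)

z = stats.norm.ppf(0.975)
effect_lower_95 = hist_effect - z * hist_se   # conservative estimate of the control effect

# Margin: give up at most (1 - preserve_fraction) of the conservatively
# estimated control effect; by construction it is smaller than that effect.
delta = (1 - preserve_fraction) * effect_lower_95
print(f"Lower 95% bound on control effect: {effect_lower_95:.3f}")
print(f"Non-inferiority margin delta:      {delta:.3f}")

This calculation leans on the constancy assumption discussed later (slide 46): the historical effect must still apply in the setting of the new trial.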

34
Outline of the Issues
  • What is the non-inferiority design?
  • What are the various objectives of the design?
  • Complexities in choosing the margin of treatment
    effect - it depends upon the strength of evidence
    for the treatment effect of the active control
  • Literature on historical controls, and on the
    heterogeneity of treatment effects among studies
  • The statistical approaches to each objective, and
    their critical assumptions
  • Cautions and concluding remarks

35
Non-Inferiority Design
  • A study design used to show that a new treatment
    produces a therapeutic response that is worse
    than that of a proven treatment (active control)
    by no more than a pre-specified amount, from
    which it is then inferred that the new treatment
    is effective. The new treatment could be
    similarly effective to, or more effective than,
    the existing proven treatment.
  • A non-inferiority margin Δ is pre-selected as
    the allowable reduction in therapeutic response.
    The margin Δ is chosen based on the historical
    evidence of the efficacy of the active control
    and other clinical and statistical considerations
    relevant to the new treatment and the current
    study.
  • ICH E10: "This delta can not be greater than
    the smallest effect size that the active drug
    would be reliably expected to have compared with
    placebo in the setting of a planned trial." -
    the concept of reliably and repeatedly being able
    to demonstrate a treatment effect of a specified
    size!

36
Non-Inferiority Design (contd)
  • A test treatment is declared clinically
    non-inferior to the active control if
  • the trial has the necessary assay sensitivity for
    the trial to be valid for non-inferiority testing
  • the one-sided 97.5% confidence interval for the
    treatment difference lies entirely to the right
    of -Δ (see the numerical sketch below)
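A numerical sketch of that decision rule for a difference in response rates; the response counts, sample sizes, and margin Δ = 0.10 below are hypothetical:

# Sketch of the non-inferiority decision rule: conclude non-inferiority if the
# lower limit of the two-sided 95% CI (equivalently the one-sided 97.5% bound)
# for (test - control) lies entirely above -delta. Data are illustrative.
import math
from scipy import stats

delta = 0.10                      # pre-specified non-inferiority margin
x_test, n_test = 166, 200         # responders / subjects on test treatment
x_ctrl, n_ctrl = 168, 200         # responders / subjects on active control

p_test, p_ctrl = x_test / n_test, x_ctrl / n_ctrl
diff = p_test - p_ctrl
se = math.sqrt(p_test * (1 - p_test) / n_test + p_ctrl * (1 - p_ctrl) / n_ctrl)

z = stats.norm.ppf(0.975)
lower_97_5 = diff - z * se        # one-sided 97.5% lower confidence bound

print(f"Difference (test - control): {diff:.3f}")
print(f"One-sided 97.5% lower bound: {lower_97_5:.3f}")
print("Non-inferiority shown" if lower_97_5 > -delta else "Non-inferiority not shown")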

37
Inference for Non-Inferiority
  • [Figure: 95% confidence intervals for the
    treatment difference plotted against the delta
    limit -Δ and 0. Intervals lying entirely above
    -Δ show non-inferiority; an interval crossing
    -Δ does not; an interval lying entirely above 0
    would also support a superiority claim. Axis:
    treatment difference, from "Control Better" to
    "Test Agent Better".]
38
What are the various objectives of the
non-inferiority design?
  • To prove efficacy of test treatment by indirect
    inference from the active control treatment
  • To establish a similarity of effect to a known
    very effective therapy - e.g. anti-infectives
  • To infer that the test treatment would have been
    superior to an imputed placebo, i.e., had a
    placebo group been included for comparison in
    the current trial - a new and controversial
    area; choice of margin is the key

39
What is the evidence supporting the treatment
effect of the active control, and how convincing
is it?
  • Large treatment effects vs. small or modest
    effects
  • Large treatment effects - anti-infectives
  • Modest treatment effects - difficulties in
    reliably demonstrating the effect - Sensitivity
    to drug effects
  • Amount of prior study data available to estimate
    an effect
  • One single study
  • Several studies, of different sizes and quality
  • No estimate or study directly on the comparator -
    standard of care

40
How is the margin Δ chosen based upon prior
study data?
  • For a large treatment effect, it is easier - a
    clinical decision of how similar a response rate
    needs to be to justify efficacy of a test
    treatment - anti-infectives are an example.
  • For modest and variable effects, it is more
    difficult and some approaches suggest margin
    selection based upon several objectives.

41
Complexities in choosing the margin (how much of
the control treatment effect to give up)
  • Margins can be chosen depending upon which of
    these questions is addressed
  • how much of the treatment effect of the
    comparator should be preserved in order to
    indirectly conclude the test treatment is
    effective - a clinical decision for very large
    effects, a statistical problem for small and
    modest effects
  • how much of a treatment effect would one require
    for the test treatment to be superior to placebo,
    had a placebo been used in the current active
    control study - a lesser standard than the above

42
How convincing is the prior evidence of a
treatment effect?
  • Do clinical trials of the comparator treatment
    consistently and reliably demonstrate a treatment
    effect - and when they do not, what is the
    reason?
  • Study is too small to detect the effect -
    underpowered for a modest effect size
  • The treatment effect is variable, and the
    estimate of the magnitude will vary from study to
    study, sometimes with NO effect in a given study
    - a BIG problem for active controlled studies
    (Sensitivity to drug effect)

43
How do you know which treatment effect size is
appropriate for the current active control? How
much protection should be built into the choice
of the margin to account for unknown bias and
uncertainty in study differences?
44
Inherently, the answer relies upon historical
controls and their applicability to the current
study
  • Choice of the margins should take into account
    all sources of variability as well as the
    potential biases associated with
    non-comparability of the current study with the
    historical comparisons.
  • A need to balance the bias built into the
    comparison against the amount of treatment
    effect preserved, as a function of the relative
    amounts of data from the historical studies and
    the current study

45
Use of historical controls in current RCTs
  • Pocock, S. The combination of randomized and
    historical controls in clinical trials. J.
    Chronic Diseases, 29, 175-188 (1976)
  • Lists six conditions to be met for valid use of
    historical controls alongside the controls in a
    current trial
  • "Only if all these conditions are met can one
    safely use the historical controls as part of a
    randomized trial. Otherwise, the risk of a
    substantial bias occurring in treatment
    comparisons cannot be ignored."

46
Importance of the assumption of constancy of the
active control treatment effect derived from
historical studies
  • It is relevant to the design and sample size of
    the current study, to the choice of the margin,
    to the amount of bias built into the comparisons,
    to the amount of effect size one can preserve
    (both of these are likely confounded), and to the
    statistical uncertainty of the conclusion.
  • Before one can decide how much of the effect to
    preserve, one should estimate an effect size for
    which there is evidence of a consistently
    demonstrated effect of that size.

47
Explaining Heterogeneity among independent
studies: Lessons from meta-analyses
  • Variation in baseline risk as an explanation of
    heterogeneity in meta-analysis. S.D. Walter,
    Stat. in Medicine, 16, 2883-2900 (1997)
  • An empirical study of the effect of the control
    rate as a predictor of treatment efficacy in
    meta-analysis of clinical trials. Schmid, Lau,
    McIntosh and Cappelleri, Stat. in Medicine, 17,
    1923-1942 (1998)

48
Explaining Heterogeneity among independent
studies: Lessons from meta-analyses (cont.)
  • Explaining heterogeneity in meta-analysis a
    comparison of methods. Thompson and Sharp, Stat.
    In Medicine, 18, 2693-2708 (1999)
  • Assessing the potential for bias in meta-analysis
    due to selective reporting of subgroup analyses
    within studies. Hahn, Williamson, Hutton, Garner
    and Flynn, Stat. In Medicine, 19, 3325-3336 (2000)

49
Explaining Heterogeneity among independent
studies: Lessons from meta-analyses (cont.)
  • Large trials vs. meta-analysis of smaller trials
    - how do their results compare? Cappelleri,
    Ioannidis, Schmid, de Ferranti, Aubert, Chalmers,
    Lau. JAMA, 276(16), 1332-1338 (1996)
  • Discordance between meta-analysis and large-scale
    randomized controlled trials: examples from the
    management of acute myocardial infarction. Borzak
    and Ridker, Ann. Internal Med., 123, 873-877
    (1995)
  • Discrepancies between meta-analysis and
    subsequent large randomized controlled trials.
    LeLorier, Gregoire, Benhaddad, Lapierre,
    Derderian. NEJM, 337, 536-542 (1997)

50
Use of meta-analysis - necessary but not
sufficient
  • Distinguish underpowered studies from
    well-powered studies for a common effect size -
    if possible
  • How many trials are consistent with no effect,
    rather than an effect of some size?
  • Determine between-trial variability as an
    additional factor to consider in choosing a
    conservative margin (see the sketch below)
  • How do you know whether the current study comes
    from the same trial population, and where it sits
    in the trial distribution? - critical to the
    assumptions about the control group rate and the
    constancy of the treatment effect
  • Resorting to meta-analysis of all studies, when
    few individual studies reject the null, tells you
    something!
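As one way to quantify between-trial variability before choosing a margin, a DerSimonian-Laird random-effects calculation can be run over the historical effect estimates. A sketch with hypothetical trial-level estimates and standard errors:

# DerSimonian-Laird random-effects meta-analysis sketch: quantify between-trial
# heterogeneity (tau^2) in the historical estimates of the control effect.
# The trial effect estimates and standard errors are hypothetical.
import numpy as np

effects = np.array([0.15, 0.08, 0.20, 0.02, 0.11])   # per-trial effect estimates
ses     = np.array([0.05, 0.06, 0.07, 0.05, 0.04])   # per-trial standard errors

w = 1 / ses**2                                        # fixed-effect weights
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed)**2)                  # Cochran's Q statistic
k = len(effects)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (ses**2 + tau2)                            # random-effects weights
re_mean = np.sum(w_re * effects) / np.sum(w_re)
re_se = np.sqrt(1 / np.sum(w_re))

print(f"Q = {Q:.2f} on {k - 1} df, tau^2 = {tau2:.4f}")
print(f"Random-effects mean effect: {re_mean:.3f} (SE {re_se:.3f})")
print(f"Lower 95% bound: {re_mean - 1.96 * re_se:.3f}")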

51
Three approaches to the problem
  • Indirect confidence interval comparisons (ICIC)
    (CBER/FDA-type method, etc.)
  • - thrombolytic agents in the treatment of acute
    MI
  • Virtual method (Hasselblad and Kong, Fisher,
    etc.; see the sketch below)
  • - clopidogrel, aspirin, placebo
  • Bayesian approach (Gould, Simon, etc.)
  • - treatment of unstable angina and non-Q-wave MI
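A rough sketch of the indirect ("virtual") calculation underlying the first two approaches: the test-versus-control estimate from the current trial is combined with the historical control-versus-placebo estimate to obtain an indirect test-versus-placebo estimate. All numbers are hypothetical, and this is a simplified illustration rather than the method of any specific submission or guidance:

# Indirect ("virtual") comparison sketch: combine (test - control) from the
# current trial with (control - placebo) from historical trials to estimate
# (test - placebo). All numbers are hypothetical.
import math

# Current active-control trial: test minus control (estimate, standard error)
d_tc, se_tc = -0.02, 0.035
# Historical meta-estimate: control minus placebo (estimate, standard error)
d_cp, se_cp = 0.12, 0.030

# Indirect estimate of test minus placebo, assuming constancy of the
# historical control effect in the current setting (slide 46).
d_tp = d_tc + d_cp
se_tp = math.sqrt(se_tc**2 + se_cp**2)
lower_95 = d_tp - 1.96 * se_tp

print(f"Indirect test-vs-placebo estimate: {d_tp:.3f} (SE {se_tp:.3f})")
print(f"Lower 95% bound: {lower_95:.3f} -> "
      + ("efficacy supported" if lower_95 > 0 else "efficacy not supported"))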

52
When may it not be possible to estimate a margin
or to use the non-inferiority design to infer
efficacy?
  • There is a known "creep" in the standard of care
    over time and/or in the active control treatment,
    which renders any past estimates of the active
    control treatment effect not comparable or valid
    for the current comparison under the conditions
    of medical practice in the new study
  • e.g., use of surfactants in neonatal treatment

53
ICH E5: Ethnic Factors in the Acceptability of
Foreign Clinical Data
54
Key Features of E5
  • Operational definition of ethnic factors
  • Clinical Data Package Fulfilling Regulatory
    Requirements in New Region
  • Extrapolation of Foreign Clinical Data to New
    Region (role of ethnic factors)
  • Bridging Studies
  • Global Development Strategies

55
Ethnic Factor Definition
  • Intrinsic factors: characteristics associated
    with the drug recipient (ADME studies)
  • race, age, gender, organ dysfunction, genetic
    polymorphism
  • Extrinsic factors: characteristics associated
    with the environment and culture in which one
    lives (clinical outcomes)
  • clinical trial conduct, diet, tobacco and alcohol
    use, compliance with prescribed medications

56
Assessing a medicine's sensitivity to ethnic
factors (part of the screening process)
  • Properties of a compound making it more likely to
    be sensitive
  • Metabolism by enzymes known to show genetic
    polymorphism
  • High likelihood of use in a setting of multiple
    co-medications

57
Assessment of the Clinical Data Package (CDP) for
acceptability
  • Question 1: Meets regulatory requirements? -
    yes/no
  • Question 2: Extrapolation of foreign data
    appropriate? - yes/no
  • Question 3: Further clinical study(ies) needed
    for acceptability by the new region? - yes/no
  • Question 4: Acceptability in the new region? -
    yes/no

58
Meets regulatory requirements
  • Issues of evidence
  • Confirmatory evidence: two or more studies
    showing treatment effects
  • Interpreting results of foreign clinical trials
    which provide that evidence (may be one study, or
    all studies, or part of a study)
  • Which study designs provide evidence?
  • Active control / non-inferiority designs
  • Placebo or active control / show-a-difference
    designs

59
The sources of data for an application
(implementation)
  • All clinical studies for efficacy performed in
    foreign region
  • One study in the United States, one or more
    foreign clinical studies
  • Multi-center/ multi-region clinical trials form
    the basis for efficacy

60
Considerations for evaluating clinical efficacy
between regions
  • Study design differences
  • Magnitude of treatment effect sizes
  • Effect size variability; subgroup differences
  • Impact of intrinsic factors - determined when?
  • Impact of Extrinsic factors
  • trial conduct and monitoring
  • usage of concomitant medications
  • protocol adherence

61
Bridging Studies
  • When
  • Why
  • What type

E5 is purposely vague on how to do this or what
their design should be
62
Study design and study objectives (need examples
and experience)
  • What type of bridging study would be helpful for
    extrapolation -
  • PK/PD
  • Another clinical trial of the primary clinical
    endpoint
  • equivalence/non-inferiority treatment effect
    acceptably close - margin or delta
  • dose response study
  • superiority design - estimate treatment effect
    size for comparison

63
E5 allows for a new study in the new region -
why is that needed?
  • When all the clinical data are derived from a
    foreign region and extrapolation is an issue
  • When the experience with clinical trials in that
    region is minimal
  • When there is concern about the ability to
    confirm a finding from a study (or studies)
  • A confirmatory clinical trial is the bridging
    study

64
Developmental Strategies for Global Development
  • Early vs. later strategies
  • Designing population pk/pd into clinical studies
  • Planning to explain effect size differences among
    regions
  • Design of bridging studies early in development

65
Study Design
  • Better planning in Phase I, II, III and more
    efficient study designs to address several
    subgroup questions simultaneously
  • Design Phase III with some knowledge of PK / PD
    differences in Phase I / II
  • Address multiple questions simultaneously for
    efficiency (age, gender, ethnic)

66
Study Design
  • Assessing the influence of ethnic factors in each
    study phase (I, II, III), so as to identify that
    influence earlier and account for it by design
  • Ethnic factors as another subgroup
  • Age, gender, renal status, etc.
  • Ethnic factors integrated with
  • Dose response
  • Geriatrics
  • Population exposure for safety

67
Remarks
  • Little experience at this time with bridging
    studies
  • Little experience with Japanese trials in NDA
    applications, or trials from Asia
  • More experience with foreign trials from Europe -
    possible heterogeneity of treatment effects being
    evaluated; concern about experience in new
    regions like Eastern Europe

68
The future
  • Appears to be increasingly dependent on
    statistical input, methods, study design,
    interpretation, etc.
  • Statistical resources (people) are needed in the
    regulatory agencies in all countries/regions
    serious about inference - they are not always
    present or maintained; guidance documents and
    consensus positions cannot be developed without
    them, nor can one rely on guidances alone
  • Global drug development is beginning to recognize
    the need for early planning for multi-regional
    inference - the questions and study designs are
    just unfolding