PREDICTION DISCREPANCIES FOR THE EVALUATION OF NONLINEAR MIXED-EFFECTS MODELS

1
PREDICTION DISCREPANCIES FOR THE EVALUATION OF
NONLINEAR MIXED-EFFECTS MODELS
  • France Mentré
  • Accepted for publication in Journal of
    Pharmacokinetics and Pharmacodynamics in June
    2005
  • Applied in two published studies
  • Mesnil, Mentré, Dubruc, Thénot, Mallet.
    Population pharmacokinetic analysis of
    mizolastine and validation from sparse data on
    patients using the nonparametric maximum
    likelihood method, JPKPD (1998).
  • Comets, Ikeda, Hoff, Fumoleau, Wanders,
    Tanigawara. Comparison of the pharmacokinetics of
    S-1, an oral anticancer agent, in Western and
    Japanese patients, JPKPD (2003).

2
Outline
  • Introduction
  • Standard validation methods
  • New validation method based on prediction
    discrepancies
  • Evaluation by simulation
  • methods
  • results
  • Conclusion

3
Introduction (1)
  • Model evaluation, validation, adequacy,
    assessment, checking, appropriateness,
    performance
  • Large literature (especially in Bayesian
    statistics)
  • Gelfand (1996)
  • "Responsible data analysis must address the
    issue of model determination, which consists of
    two components: model assessment or checking, and
    model choice or selection. Since, in practice,
    apart from rare situations, a model specification
    is never correct, we must ask (i) is a given
    model adequate? and (ii) within a collection of
    models under consideration, which is the best?"
  • Here: tools for evaluation of model adequacy,
    topic (i)

4
Introduction (2)
  • FDA guidance on Population Pharmacokinetics
    (1999)
  • "If the population PK analysis results will be
    incorporated in the drug label, model validation
    is encouraged and model validation procedures
    should be an integral part of the protocol"
  • Validation is not only
  • analyze the same data set with two different
    estimation methods with the same statistical
    model
  • goodness-of-fit evaluation
  • Validation can be defined as (Mentré and Ebelin,
    COST, 1997)
  • evaluate the predictability of the model and
    estimates from a learning data set
  • on a validation data set not used for model
    building and estimation
  • NB: same definition in the FDA guidance

5
Introduction (3)
  • Development of a criterion to evaluate the
    distance between observed values (in a validation
    set) and model predictions
  • Problem of investigating whether a given null
    model H0 is compatible with data where the
    assumed model has unknown parameters
  • Based on the predictive distribution (Gelfand et
    al., 1992; Gelman et al., 1995)
  • Extension to non-Bayesian estimation: posterior
    predictive check (PPC) by Yano, Beal and Sheiner
    (JPKPD, 2001)
  • "Degenerate" distribution: posterior distribution
    approximated by a discrete distribution with one
    location at the maximum likelihood estimate (SE
    not taken into account)
  • Approach called "plug-in" (Bayarri and Berger,
    JASA, 2000; Robins, van der Vaart and Ventura,
    JASA, 2000)

6
Models and Notations
  • Structural model
  • y = f(t; θ) + ε h(t; θ, β)
  • θ: pharmacokinetic parameters
  • ε ~ N(0, σ²I)
  • h(t; θ, β): error function involving error
    parameters β
  • Parametric distribution
  • θi = μ × exp(ηi) or θi = μ + ηi
  • μ: fixed effects; ηi: random effects with
    ηi ~ N(0, Ω)
  • population parameters: μ, vec(Ω), σ and β
  • Non-parametric distribution
  • no assumption on p(θ)
  • p is discrete, composed of K locations θk with
    probabilities αk
  • Estimation by maximum likelihood

7
Standard Validation method
  • Validation from data on a separate data set
  • composed of N observations yi at times ti
  • random split of the original set or subsequent
    data
  • Standard approach: standardized prediction
    errors
  • for each observation yi
  • given the population parameters and the model
  • evaluation of the mean predicted concentration mi
  • evaluation of the associated SD si
  • spei = (yi - mi) / si
  • spei should be ~ N(0, 1)
  • limitations
  • based on a first-order approximation of the model
  • assumes yi normal
  • assumes E(y) = f(E(θ), t)
8
New validation method (1)
  • Predictive distributions
  • for each observation yi
  • given the population parameters and the model
  • predictive distribution pi(y) at time ti
  • cumulative distribution function Fi(y)
  • New approach: evaluation of pseudo-residuals (or
    prediction discrepancies)
  • based on the cdf Fi(y) evaluated at yi
  • pri = Fi(yi)
  • pri are ~ U[0, 1]
  • Φ: cdf of N(0, 1)
  • Φ⁻¹(pri): normalized pseudo-residuals
  • npri ~ N(0, 1)

9
New validation method (2)
  • Evaluation of the pseudo-residuals
  • approximation of the parametric distribution by a
    discrete distribution
  • stochastic simulation of K values ηk in N(0, Ω),
    then θk
  • αk = 1/K
  • from the assumption on the error model
  • p(y | θ, t): normal pdf with mean f(t; θ) and SD
    σ h(t; θ, β)
  • therefore pri = (1/K) Σk Φ((yi - f(ti; θk)) /
    (σ h(ti; θk, β)))
  • where Φ is the cdf of N(0, 1)
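
The Monte Carlo approximation above can be sketched as follows (Python). The structural model f and its signature are illustrative assumptions, as is the proportional error model, i.e. SD = σ·f, matching the constant-CV error used in the simulation study:

```python
import numpy as np
from scipy.stats import norm

def prediction_discrepancies(y, t, f, mu, omega, sigma, K=1000, seed=0):
    """pd_i = F_i(y_i), with F_i approximated by a K-component normal mixture.

    Assumptions (for illustration): theta_k = mu * exp(eta_k) with
    eta_k ~ N(0, Omega), and a proportional error model SD = sigma * f.
    """
    rng = np.random.default_rng(seed)
    eta = rng.multivariate_normal(np.zeros(len(mu)), omega, size=K)
    theta = np.asarray(mu) * np.exp(eta)            # K simulated parameter vectors
    pd = np.empty(len(y))
    for i, (yi, ti) in enumerate(zip(y, t)):
        fk = np.array([f(ti, th) for th in theta])  # f(t_i; theta_k), k = 1..K
        # pd_i = (1/K) * sum_k Phi((y_i - f_ik) / (sigma * f_ik))
        pd[i] = norm.cdf((yi - fk) / (sigma * fk)).mean()
    return pd
```

Under the null model the returned pdi should be approximately U(0, 1).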

10
New validation method (3)
  • Diagnostic
  • qqplots, histograms of pri or npri
  • plots of pri versus mi, versus ti
  • Statistical test
  • if all yi independent
  • one-sample Kolmogorov-Smirnov (KS) test
  • test whether pri are U[0, 1]
  • if several observations in same patient
  • correlation between pri within a patient
  • use of KS test ?
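
A sketch of the independent-observations test with SciPy; the pd values here are simulated placeholders, not output of the method:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
pd_values = rng.uniform(size=300)   # placeholder pd_i (case 1: 300 x 1)

# One-sample KS test of the pd_i against U(0, 1); H0: the model is adequate
stat, pvalue = kstest(pd_values, "uniform")

# With n = 300 and alpha = 0.05, reject when the KS statistic D exceeds
# approximately 1.358 / sqrt(300), i.e. about 0.078
reject = stat > 1.358 / np.sqrt(300)
```

With correlated pri within a patient, the nominal KS threshold no longer applies, which is why an empirical threshold from simulations is used in case 2.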

11
Evaluation by simulation (1)
  • Simulation Features
  • PK model: 1 compartment with first-order
    absorption
  • Ka, CL, V with exponential random effects (CV
    30%)
  • constant CV error (15%)
  • 6 sampling times (chosen randomly)
  • Validation sample of 300 observations
  • two cases
  • case 1: 300 patients with 1 observation
  • case 2: 100 patients with 3 observations
  • 1000 replications
  • evaluation of pseudo-residuals
  • KS test
  • case 1: type I error 0.05, threshold D = 0.078
  • case 2: for type I error 0.05, empirical
    threshold D defined from simulations
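
The case-1 simulation setup might be sketched as below; the typical parameter values, dose, and candidate sampling times are hypothetical, since the slide does not give them:

```python
import numpy as np

def conc_1cpt_oral(t, ka, cl, v, dose=100.0):
    """1-compartment model with first-order absorption, single oral dose."""
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(1)
N = 300                                   # case 1: 300 patients, 1 obs each
mu = {"ka": 1.0, "cl": 4.0, "v": 20.0}    # hypothetical typical values
cv_eta, cv_err = 0.30, 0.15               # CV of random effects / of error

# exponential random effects: theta_i = mu * exp(eta_i), eta_i ~ N(0, cv^2)
theta = {p: m * np.exp(rng.normal(0.0, cv_eta, N)) for p, m in mu.items()}
t = rng.choice([0.5, 1.0, 2.0, 4.0, 8.0, 12.0], size=N)  # 6 sampling times
f = conc_1cpt_oral(t, theta["ka"], theta["cl"], theta["v"])
y = f * (1.0 + rng.normal(0.0, cv_err, N))  # constant-CV (proportional) error
```

Each of the 1000 replications then feeds the simulated yi into the pseudo-residual computation and the KS test.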

12
Example of observations in one simulated sample
13
Evaluation by simulation (2)
  • In three cases
  • Case I: 300 x 1
  • Case IIa: 100 x 3
  • Case IIb: 100 x 1 (randomly chosen)
  • Simulation under H0: evaluation of type I error
  • Simulation under H1: evaluation of the power of
    the KS test
  • Several alternative models
  • mean parameters multiplied by 2
  • CV of random effects multiplied by 2
  • CV of error model multiplied by 2
  • 2 cp PK model (with same Ka, CL and V)
  • random effects of CL from a mixture of normal
    distributions (with same total variance)

14
Validation results under H0 in one set of Case
IIa (spe and pd)
15
Validation results for CV of CL multiplied by 2
16
Validation results for 2 cp PK model
19
Conclusion (1)
  • Good estimation method for nonlinear
    mixed-effects models
  • Not yet good statistical tests for validation
    and/or goodness-of-fit in nonlinear mixed-effects
    models
  • Limitation of validation using standardized
    prediction errors
  • New validation method based on the whole
    predictive distribution
  • definition of pseudo-residuals
  • diagnostic plots
  • KS test with good performance in this simulation
  • Further developments
  • take into account correlations within patients ?
  • use other tests (better power)?
  • Anderson-Darling, Cramer-von Mises

20
Conclusion (2)
  • Validation depends upon objectives of the
    analysis
  • Standardization of validation procedure is needed
    (?)
  • Statistical ongoing developments
  • Definition of validation set
  • Evans, JASA, 2000: 25% of original data set
  • Model building and validation results also depend
    on the data
  • validation of a model and design are strongly
    linked
  • One can only invalidate a model ?
  • find the experimental conditions

21
FDA guidance on Pop PK (99)
  • There is no right or wrong model, nor is there a
    right or wrong method of fitting
  • Subjectivity, therefore, plays a large role in
    model choice, validation and interpretation of
    the results
  • Currently, there is no consensus on an
    appropriate statistical approach for validation
    in pop PK models
  • The choice of a validation approach depends on
    the objective of the analysis
  • Validation methods are still being evaluated and
    may require future testing
  • Innovative approaches are strongly encouraged

22
  • We do not like to ask, "Is our model true or
    false?", since probability models in most data
    analyses will not be perfectly true.
  • The most relevant question is, "Do the model's
    deficiencies have a noticeable effect on
    substantive inferences?"
  • (Gelman et al., 1995)

23
  • Modelling in science remains, partly at least,
    an art.
  • A first principle is that all models are wrong;
    some, though, are more useful than others, and we
    should seek those.
  • A second principle (which applies also to
    artists!) is not to fall in love with one model
    to the exclusion of alternatives.
  • A third principle recommends thorough checks on
    the fit of a model to the data.
  • Such diagnostic procedures are not yet fully
    formalized, and perhaps never will be.
  • Some imagination or introspection is required in
    order to determine the aspects of the model that
    are most important and most suspect.
  • (McCullagh and Nelder, 1983)