PROFILING, RANKING

Transcript and Presenter's Notes
1
  • PROFILING, RANKING
  • League Tables

2
(No Transcript)
3
RANKING IN THE NEWS
4
LETTERMAN'S TOP 10 LIST
5
NEW YORK'S MOST DEADLY CARDIAC SURGEONS!!!!
6
(No Transcript)
7
HOPKINS IS THE LEADING SPH!!!
8
PROFILING (League Tables)
  • The process of comparing "units" on an outcome
    measure with relative or normative standards
  • Quality of care, use of services, cost
  • Educational quality
  • Disease rates in small areas
  • Gene expression
  • Best of breed livestock
  • Developing and implementing performance indices
    to compare physicians, hospitals, schools,
    teachers, genes, ........

9
PROFILING OBJECTIVES (in health services)
  • Estimate and compare provider-specific
    performance measures
  • Utilization/cost
  • Process measures
  • Clinical outcomes
  • Patient satisfaction/QoL
  • Compare using a normative (external) or a
    relative (internal) standard

10
(No Transcript)
11
RANKING IS EASY
  • Just compute estimates and order them (a
    minimal sketch follows)
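
A minimal sketch of this naive approach in Python;
the SMR point estimates below are hypothetical:

    import numpy as np

    # Hypothetical point estimates (e.g., MLE SMRs) for five units
    smr_hat = np.array([1.40, 0.85, 1.10, 0.95, 1.25])

    # Naive ranking: rank 1 = smallest estimated SMR
    ranks = smr_hat.argsort().argsort() + 1
    print(ranks)  # [5 1 3 2 4]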

12
MLE ESTIMATED SMRs
13
RANKING IS DIFFICULT
  • Need to trade off the estimates and their
    uncertainties

14
MLE ESTIMATED SMRs & 95% CIs
15
Statistical Challenges
  • Need a valid method of adjusting for case mix
    and other features
  • Patient, physician and hospital characteristics
  • But beware of over-adjustment
  • Need a valid model for stochastic properties
  • Account for variation at all levels
  • Account for within-hospital, within-patient
    correlations
  • Need to adjust for systematic variation, and to
    estimate and account for statistical variation

16
PROPER USE OF STATISTICAL SUMMARIES
  • The challenge
  • Differences in standard errors of
    hospital-specific estimates invalidate direct
    comparisons
  • In any case, large SEs make comparisons imprecise
  • Consequence
  • Even after valid case mix adjustment, differences
    in directly estimated performance are due, in
    part, to sampling variability
  • (Partial) solution: use
  • Shrinkage estimates to balance and reduce
    variability (a sketch follows this list)
  • Goal-specific estimates to hit the right target
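
A minimal sketch of such a shrinkage estimate,
assuming a simple normal-normal model with known
overall mean and between-unit variance; all of the
numbers below are hypothetical:

    import numpy as np

    # Hypothetical direct estimates and their standard errors
    y  = np.array([1.40, 0.85, 1.10])  # unit-specific direct estimates (MLEs)
    se = np.array([0.30, 0.10, 0.20])  # their standard errors
    mu, tau2 = 1.05, 0.04              # assumed overall mean, between-unit variance

    # Posterior-mean (shrinkage) estimate: units with larger
    # SEs are pulled harder toward the overall mean
    B = se**2 / (se**2 + tau2)
    shrunk = (1 - B) * y + B * mu
    print(shrunk)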

17
Comparing performance measures
  • Ranks/percentiles of
  • Direct estimates (MLEs)
  • Shrunken estimates (BLUPs, Posterior Means)
  • Z-scores testing H0 that a unit is just like
    others
  • Optimal (best) ranks or percentiles
  • Other measures
  • Probability of a large difference between
    unit-specific true and H0-generated event rates
  • Probability of excess mortality (see the sketch
    after this list)
  • For the typical patient, on average or for a
    specific patient type
  • Z-score/P-value declarations
  • ....

18
(No Transcript)
19
USRDS
20
USRDS
21
(No Transcript)
22
(No Transcript)
23
(No Transcript)
24
MLE ESTIMATED SMRs & CIs
25
Poisson-Normal Model (inputs: N, Y[k], emort[k])
  model {
    prec ~ dgamma(0.00001, 0.00001)
    for (k in 1:N) {
      logsmr[k] ~ dnorm(0, prec)     # exchangeable log-SMRs
      smr[k] <- exp(logsmr[k])
      rate[k] <- emort[k] * smr[k]   # expected deaths x SMR
      Y[k] ~ dpois(rate[k])          # observed deaths
    }
  }
  • Monitor the smr[k]
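
For comparison with the model's output, the direct
(MLE) SMR and an approximate 95% CI can be computed
outside BUGS; a sketch with hypothetical counts,
using the usual Poisson log-scale SE approximation:

    import numpy as np

    Y     = np.array([12, 4, 30])        # observed deaths (hypothetical)
    emort = np.array([10.0, 6.5, 22.0])  # expected deaths after case-mix adjustment

    smr_mle = Y / emort                  # direct estimate Y[k] / emort[k]
    se_log  = 1 / np.sqrt(Y)             # approx. SE of log(SMR) under the Poisson model
    lo = smr_mle * np.exp(-1.96 * se_log)
    hi = smr_mle * np.exp(+1.96 * se_log)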

26
MLE, SE & POSTERIOR MEAN SMRs (using a
log-normal/Poisson model)
[Figure: SE, MLE, and PM (posterior mean) values]
27
Posterior Mean estimated SMRs & CIs using a
log-normal/Poisson model (original scale)
28
Posterior Mean estimated SMRs & CIs using a
Gamma/Poisson model (expanded scale)
29
Caterpillar Plot (Hofer et al., JAMA 1999)
  • Estimated relative, physician-specific visit
    rate and 95% CI (a plotting sketch follows)
  • Adjusted for patient demographics and case mix
  • (1.0 is the typical rate)
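
A sketch of how a caterpillar plot like this can be
drawn with matplotlib; the rates and standard
errors below are simulated stand-ins:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    rate = rng.lognormal(0.0, 0.2, 50)   # hypothetical relative visit rates
    se   = rng.uniform(0.05, 0.25, 50)   # hypothetical standard errors

    order = rate.argsort()               # sort physicians by point estimate
    x = np.arange(rate.size)
    plt.errorbar(x, rate[order], yerr=1.96 * se[order], fmt="o", markersize=3)
    plt.axhline(1.0, linestyle="--")     # 1.0 = the typical rate
    plt.xlabel("Physician (sorted by estimate)")
    plt.ylabel("Relative visit rate")
    plt.show()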

30
  • Amount that physician-specific laboratory costs
    for diabetic patients deviate from the mean for
    all physicians, per patient-year
  • Lines show the path from the direct estimate
    (the MLE) to the shrunken estimate (Hofer et
    al., JAMA 1999)
[Figure: DIRECT vs ADJUSTED estimates]
31
Example using BUGS for hospital performance
ranking
32
(No Transcript)
33
BUGS Model specification
  model {
    for (k in 1:K) {
      b[k] ~ dnorm(0, prec)      # exchangeable hospital effects
      r[k] ~ dbin(p[k], n[k])    # events out of n[k] cases
      logit(p[k]) <- mu + b[k]
    }
    pop.mean <- exp(bb) / (1 + exp(bb))  # rate for a new, randomly drawn hospital
    mu ~ dnorm(0, 1.0E-6)
    prec ~ dgamma(0.0001, 0.0001)
    tausq <- 1/prec
    add ~ dnorm(0, prec)
    bb <- mu + add
  }
  • Monitor the p[k] and ask for ranks

34
Summary Statistics
35
Posterior distributions of the ranks
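
A sketch of how these distributions can be
tabulated from monitored draws of the p[k]
(simulated stand-ins below): compute each unit's
rank within every MCMC draw, then tally.

    import numpy as np

    rng = np.random.default_rng(2)
    p = rng.beta(2, 20, size=(4000, 12))  # stand-in for monitored draws of p[k]

    K = p.shape[1]
    # Rank within each draw (1 = lowest rate)
    sample_ranks = p.argsort(axis=1).argsort(axis=1) + 1

    # rank_dist[r-1, k] = P(unit k has rank r | data)
    rank_dist = np.stack([(sample_ranks == r).mean(axis=0)
                          for r in range(1, K + 1)])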
36
(No Transcript)
37
(No Transcript)
38
(No Transcript)
39
LOS
[Figure: X = posterior mean-based ranks vs optimal
ranks]
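
The contrast in the plot can be reproduced from
posterior draws: ranking the posterior means (the
X's) versus one standard choice of optimal ranks,
the posterior mean of each unit's rank under
squared-error loss. A sketch with simulated draws:

    import numpy as np

    rng = np.random.default_rng(3)
    theta = rng.gamma(2.0, 0.5, size=(4000, 10))  # stand-in posterior draws

    # Ranks of the posterior means
    pm_ranks = theta.mean(axis=0).argsort().argsort() + 1

    # Optimal ranks under squared-error loss on the ranks:
    # posterior mean of each unit's rank, re-ranked to integers 1..K
    sample_ranks = theta.argsort(axis=1).argsort(axis=1) + 1
    Rbar = sample_ranks.mean(axis=0)
    optimal_ranks = Rbar.argsort().argsort() + 1

    # The two orderings can disagree when posterior
    # uncertainty varies across units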
40
LOS
41
BACK TO THE USRDS, SMR EXAMPLE
42
Relations among percentiling methods: 1998 USRDS
percentiles
43
False detection and non-detection
44
Minimize OC
45
Advantages of Pk
  • Relates percentiles to a substantive scale
  • Far easier to compute than
  • Shows that the Normand et al. (JASA 1997)
    approach is loss-function based

46
(No Transcript)
47
(No Transcript)
48
(No Transcript)
49
K is large and we can use a completely
non-parametric prior
50
ρ = (1 − B) = τ² / (τ² + σ²) = ICC
51
Probability of being in the upper 10% as a
function of true percentile & intra-class
correlation
52
(No Transcript)