Title: External Quality Assessment for AFB Smear Microscopy
1. External Quality Assessment for AFB Smear Microscopy
2. WHY THIS SYSTEM?
- Not to:
  - confirm diagnoses
  - rate individual technicians / labs
  - prove that controllers are better
3. RECHECKING MERITS
- Final aim: improve quality
  - screening for unacceptable performance
  - problem solving
- Control of routine smears: reflects reality
- Continuous motivation and education
4. RECHECKING PROBLEMS
- Disadvantage: workload!
- Impossible / useless without a good programme:
  - regular supervision visits
  - highly functional network of intermediate labs!!
  - corrective measures
5. OVERVIEW
- First step: correct identification of possible problem centres
  - LQAS sampling for least effort
  - not sure that all failing centres are weak
  - feed-back visit: evolution of type / numbers of errors
  - type of problem: FP or FN?
  - correct technique for QA!
  - Laos example
7. OVERVIEW
- Second step: investigation of causes
- False positives:
  - lack of training / experience
  - very poor microscope
  - administrative errors
  - (crystals, artifacts)
- False negatives:
  - poor stain or staining technique
  - very thick smear
  - poor microscope / poor light
  - poor examination or overload
9. ORGANIZATIONAL ASPECTS
- Integrated with the supervision system
- Feed-back for maximum reliability
- Complementary levels of control
- Workload depends on:
  - total turn-over and positivity rate of the labs
  - level of decentralization
  - desired statistical accuracy
10. SAMPLING PRINCIPLES
- As few slides as possible:
  - overload is counterproductive
  - balance statistical and technical accuracy
- False positives: none expected, or a systematic error
  - statistics not required for significance
  - --> check only a few, at the frequency of occurrence
  - controllers / controlled directly comparable
12. SAMPLING PRINCIPLES (2)
- False negatives: a few are unavoidable
  - target a minimal sensitivity
  - --> Lot Quality Assurance Sampling (LQAS)
  - critical value of false negatives surpassed?
- Include low positives (scanty, per IUATLD/WHO)
  - difficult to confirm, but sensitive
13. TECHNICAL ISSUES
- Sampling: random and representative
  - all slides kept, properly identified
  - sampling by the supervisor, not the technician!!
  - select from the register
  - check on completeness of the collection
  - calculate the sampling interval (see the sketch under slide 44)
14. TECHNICAL ISSUES (2)
- Screening by first controllers:
  - blind checking absolutely necessary
  - avoid overloading the controllers
    - better to sacrifice statistical power
    - max. 10 slides per day (+ other work)
  - same technique (i.e. number of fields), good microscopes
15. TECHNICAL ISSUES (3)
- Re-staining:
  - fading of fuchsin colour: true positives --> false positives
  - very fast under extreme conditions
  - always re-stain discordants
18. TECHNICAL ISSUES (4)
- Re-staining before first screening: no consensus
  - effect of fading is more extreme on low positives
    - so false negatives --> undetectable by controllers
  - poorly stained AFB fade faster
  - effect not only in extreme climates
  - re-stain not to miss gross deficiencies of stain/staining
  - so better to re-stain all slides before first controls??
20. TECHNICAL ISSUES (5)
- Gold standard???
  - controllers also make errors
- Countercheck: second control on discordants only
  - higher level, or head technician
  - no new bias: false positive --> false negative of the controller
  - false negative usually confirmed
    - may require a long search
    - re-stain if not done earlier
  - blinding: only to the origin of both results
21. TECHNICAL ISSUES (6)
- Controllers must also be evaluated!
  - sample from their routine work??
- First controllers: compare FN rates with the periphery
  - second control results --> error by whom? what type?
  - rates: denominators are the numbers of reported results
  - only for the sum of controlled vs. controller
  - validity problem if:
    - no errors for a bigger number of centres together
    - controller made more FN than the periphery
- Evaluation of second controllers:
  - send back slides with serious errors
22. SAMPLE SIZE
- Sample size: LQAS method
  - from industry; smallest sample
  - one-sided test: no exact results, wide confidence limits
    - except for a bigger area, i.e. a country
  - example: 3 false negatives when the sample size is 90 (see the sketch below)
    - 1 centre: 0.8 to 9.1%
    - average of 12 centres: 2.1 to 4.2%
    - average of 850 centres: 2.9 to 3.1%
- Probably not more than x errors in the lot:
  - accept if not more than d errors in the sample
  - reject if more than d errors in the sample
  - but the reverse is not guaranteed!!
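A minimal sketch of the confidence-limit example above, using exact (Clopper-Pearson) binomial intervals; the function name is illustrative, and the slide's figures likely come from a slightly different interval method, so the outputs will not match them digit for digit. The point it reproduces is that pooling centres narrows the interval.

```python
from scipy.stats import beta

def clopper_pearson(errors, n, conf=0.95):
    """Exact two-sided binomial confidence interval, in percent."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, errors, n - errors + 1) if errors > 0 else 0.0
    hi = beta.ppf(1 - a, errors + 1, n - errors) if errors < n else 1.0
    return 100 * lo, 100 * hi

# 3 false negatives in a sample of 90 per centre, then pooled over centres:
for centres in (1, 12, 850):
    lo, hi = clopper_pearson(3 * centres, 90 * centres)
    print("%4d centre(s): %.1f%% to %.1f%%" % (centres, lo, hi))
```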
23. SAMPLE SIZE (2)
- LQAS sample size tables: parameters (a sketch of the underlying calculation follows this list)
  - acceptance number "d" (specificity, power)
  - confidence level (95%)
  - definition of the size of a lot
    - lab's annual turn-over recommended
  - critical value for false negatives, calculated using:
    - positivity rate of the labs
    - acceptable lower limit of sensitivity
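A minimal sketch of how such a table entry can be computed, assuming a simple binomial model (reasonable when the sample is small relative to the lot; published tables often use the hypergeometric for finite lots, so values may differ slightly). The function name and the example critical value are illustrative; the critical value itself is derived in the sketch under slide 25.

```python
from scipy.stats import binom

def lqas_sample_size(p_critical, d, conf=0.95):
    """Smallest n such that a lot at the critical error rate yields
    more than d errors (i.e. is rejected) with probability >= conf."""
    n = d + 1
    while binom.cdf(d, n, p_critical) > 1 - conf:
        n += 1
    return n

# e.g. critical FN proportion of 2.2% among negatives, acceptance number d = 0:
print(lqas_sample_size(0.022, 0))   # about 135 slides at 95% confidence
```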
25. Critical value
- An upper threshold for the proportion of false negatives
- To be chosen from an estimate:
  - historical false negative rates
- Calculation based on (a worked sketch follows):
  - prevalence of positives
  - expected parameters of sensitivity and specificity compared to the controller
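A worked sketch of one common way to derive this threshold; the exact formula is not on the slide, so treat it as an assumption. Given the slide positivity rate (prevalence of positives) and the minimum acceptable sensitivity and specificity versus the controller, it computes the expected share of false negatives among slides reported negative.

```python
def critical_fn_proportion(spr, sensitivity, specificity=1.0):
    """Expected share of false negatives among slides reported negative."""
    fn = spr * (1 - sensitivity)      # positives (per controller) read as negative
    tn = (1 - spr) * specificity      # negatives correctly read as negative
    return fn / (fn + tn)

# e.g. SPR 10%, acceptable sensitivity 80%, specificity taken as 100%:
print("critical value: %.2f%%" % (100 * critical_fn_proportion(0.10, 0.80)))
# -> 2.17%, the kind of value fed into the LQAS calculation on slide 23
```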
29. Blinded Rechecking
- Determine sample size
- Properly store slides
- Collect a random, representative sample
- Ensure blinding
- Recheck smears
- Resolve discrepancies between the original result and the result of the rechecking laboratory
- Feedback to province, district and local labs
30. Determining Sample Size
- Slide Positivity Rate (SPR):
  - the proportion of positive AFB smears among all slides (diagnostic and monitoring) in the laboratory from which the sample is being taken
  - estimated using the lab registers from the previous year or the preceding four quarters
- Sample sizes can be set using the average positivity rate for a laboratory, region, or country.
31. Slide Positivity Rate (SPR)
- SPR = (number of positive smears per year / annual slide volume) × 100
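As a worked illustration with made-up numbers: a laboratory that reported 300 positive smears out of an annual volume of 6,000 slides has SPR = (300 / 6000) × 100 = 5%.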
32. SENSITIVITY
- Sensitivity, as defined here, is the detection of all positives, including low positives (1-9 AFB/100 fields)
- The expected performance in detecting positives, as compared to the microscopists rechecking the slides
- Acceptable sensitivity is determined by the National Reference Laboratory (NRL)
- A sensitivity of 75-80% is recommended
33. Determination of annual sample size
34. Sample Size Determination Example
35. Sample Size Determination Example
36. Sample Size Determination Example
Note: if variation in slide volume or positivity rate among the centres in a supervisor's area is considered excessive, a few choices depending on the ranges of volume and positivity rate may be given. In areas with extreme variability, collectors might even be given a list with individual sample sizes per laboratory, based on each laboratory's performance in the previous year.
37. Sample Size Determination Example
Note: the sample size does not vary considerably once the annual workload exceeds 1,000; therefore rounding off will not affect the calculations.
38. Sample Size Determination Example
39. Sample Size Determination Example
40. Sample Size Determination Example
41. Sample Size Determination Example
42. Collecting Slides (Sampling)
- Slide Storage
- Slide Collection
- Slide Selection
43.
- It is recommended that one quarter of the total sample size be collected during each quarterly supervisor visit.
44. Sampling Example
- Laboratory Register
- Begin with a random number; consider using a digit from currency (KZT)
- Example: sample every 10th slide (see the sketch below)
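A minimal sketch of this systematic selection, assuming the register assigns consecutive serial numbers for the year; the function name and parameters are illustrative, not from the source.

```python
import random

def systematic_sample(annual_volume, sample_size, start=None):
    """Systematic sampling from the register: a fixed interval with a
    random start within the first interval (e.g. a digit taken from a
    banknote, as suggested above)."""
    interval = max(1, round(annual_volume / sample_size))
    if start is None:
        start = random.randint(1, interval)
    return list(range(start, annual_volume + 1, interval))

# every 10th slide from a 1,200-slide register, annual sample of 120:
picked = systematic_sample(1200, 120, start=7)   # start digit 7 from a KZT note
print(len(picked), picked[:4])                   # 120 slides: 7, 17, 27, 37, ...
```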
46. ANALYSIS OF RESULTS
- Numbers of errors
- Grading of errors:
  - cut-off for positivity: serious / minor error
  - HFP, HFN vs. LFP, LFN (high / low false positives and negatives)
- Priority for intervention: serious errors
- Quantification errors secondary:
  - indicative of stain problems if re-staining was done
48. ANALYSIS OF RESULTS
- Identify possibly poor centres:
  - FN acceptance number surpassed? HFP?
- Feed-back supervision visit:
  - listing of slides with serious errors
  - identify and solve problems
  - further validation (especially FP)
  - check on second-level controllers
50. Interpretation
- If there are no errors, the performance goal has been met (95% confidence level).
- Even though the sample size is based on sensitivity, it is still likely that one or more errors will be found, even in laboratories performing at or above the expected level (see the sketch below).
- If errors are detected, the interpretation and appropriate action may differ depending on the number and type of errors, as well as the resources and capacity of the programme.
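A minimal sketch of why errors are expected even at acceptable performance; the sample size and error rate here are illustrative, not from the source.

```python
from scipy.stats import binom

# Chance of finding at least one false negative in a 135-slide sample
# drawn from a lab whose true FN rate among negatives is only 1%:
n, p = 135, 0.01
print("P(>= 1 error) = %.0f%%" % (100 * (1 - binom.pmf(0, n, p))))
# -> about 74%, so a single error does not mean the lab is failing
```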
51. LIMITATIONS OF THE SYSTEM
- False alarms: chance errors
  - LFP always doubtful
  - (ignore when few and equally distributed)
- Deliberate cheating ....
52. CAUSES OF ERRORS
- Many HFP and also many HFN (nonsense results):
  - does not know at all
  - gross technical problem (microscope?)
53. CAUSES OF ERRORS (2)
- HFP:
  - single: administrative mistake
  - few, with some LFP:
    - poor identification, registration
    - confusing AFB with artefacts
    - (contaminated stain)
54. CAUSES OF ERRORS (3)
- LFP:
  - ignore if just a few, equally divided
  - majority of scanty false, plus some HFP:
    - confusing AFB with artefacts
    - excessive decentralization?
55. CAUSES OF ERRORS (4)
- HFN (and LFN):
  - poor stains
  - poor technique (smearing, staining)
  - poor equipment (light, fungus)
  - often poor work / overload
  - rarely poor training
  - chance finding (by exclusion, follow-up)
56. CAUSES OF ERRORS (5)
- QE (quantification errors):
  - little experience
  - poor work / overload
  - poor equipment (light, fungus)
- May point to:
  - stain(ing) problems (if re-stained for controls) --> too low
  - fading problem (if not re-stained) --> too high