Title: Widely Researched Instruments
1. Widely Researched Instruments
- Interviews (SCID, DIS, etc.)
- Rating scales (Achenbach CBCL, Conners, etc.)
- Cognitive tests
  - Those intended primarily for normal-range assessment
  - Learning disability/specific skills: reading, math, etc.
  - Neuropsychology: traditional and laboratory-type tests
- Personality/psychopathology tests
  - Objective: MMPI, Millon (personality, personality disorders)
  - Projective: Rorschach, TAT, DAP, sentence completion
2. Things Too Boring to Discuss
- Developing brand-new test from scratch
- Re-norming, updating old test
- Translating test into Urdu, etc.
- Short vs. long form of test: reliability, validity
- Does keyboard spring strength affect results of
computerized administration?
3. More Interesting Topics
- Reliability, test bias, etc. in realistic field settings (not highly trained/selected raters, etc.)
- Predictive validity for important practical purposes (e.g., performance in combat, success in therapy)
- Validity for constructs in theories with decent empirical track records (e.g., schizotypy)
- Incremental validity (getting new, true info)
4. More Interesting Topics, cont'd
- Clinicians' cognitive processing of info: accuracy, biases, reasoning processes, attempts to eliminate biases/minimize errors
- Malingering/defensiveness in high-stakes situations (prevalence, detectability)
- Application of a test to a socially important new group/problem (e.g., MMPI with Hmong immigrants)
5. More Interesting Topics, cont'd
- Structure of a test (construct/s)
  - Does the test tap dimensions or categories?
  - How many? What are they?
- Computerized assessment: administration, data integration, report writing
- Computer as decision aid (advice from computer vs. replacement by computer)
- Demonstration that giving a test actually improves ultimate client well-being, functioning, etc.
6. Research Designs
- Simple reliability: test-retest, interrater
- Correlational studies: individual
  - Convergent/discriminant validity, pre- and postdiction
- Correlational studies: group (case-control)
  - Pre- and postdiction, test bias, diagnostic validity
- Experimental studies
  - Experimental studies of personality, with instruments
  - Comparing validity (computer vs. clinician)
7. Convergent/Discriminant Validity
- Get a bunch of measures of 3 traits (e.g., extraversion)
  - Extraversion 1, Ex 2, Ex 3, ...
- Do the same for other traits like Constraint (Con 1, Con 2, Con 3, ...), Stress-Reactivity (SR 1, SR 2, SR 3, ...)
- Make sure measures use 3 different methods
  - Ex 1: interview measure of extraversion
  - Ex 2: self-report checklist measure of extraversion
  - Ex 3: behavioral measure of extraversion, ...
8. Convergent/Discriminant Validity, cont'd
- Use the same methods across all traits
- Administer all measures to a heterogeneous sample
- Get a multitrait, multimethod (correlation) matrix (sketched below)
- Examine intercorrelations of Ex measures (monotrait, heteromethod), checklist measures (heterotrait, monomethod), etc. Pray that monotrait correlations are high and monomethod correlations low.
- When this doesn't work, invent post-hoc excuses
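A minimal sketch in Python, using hypothetical simulated data (the trait/method labels, sample size, and effect sizes are all invented), of what the multitrait-multimethod matrix looks like and which blocks get examined:

```python
# Hypothetical multitrait-multimethod (MTMM) illustration:
# 3 traits (Ex, Con, SR), each measured by 3 methods.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200                                   # heterogeneous sample (made up)
methods = ["interview", "checklist", "behavioral"]
traits = {"Ex": rng.normal(size=n), "Con": rng.normal(size=n), "SR": rng.normal(size=n)}
method_factor = {m: rng.normal(size=n) for m in methods}   # shared method variance

# Each measure = trait + some method variance + error, so same-trait measures
# correlate more strongly than same-method measures of different traits.
data = {f"{t}_{m}": score + 0.5 * method_factor[m] + rng.normal(scale=0.8, size=n)
        for t, score in traits.items() for m in methods}

mtmm = pd.DataFrame(data).corr()          # the full MTMM correlation matrix

# Monotrait-heteromethod correlations (same trait, different methods): want these high.
print(mtmm.loc[["Ex_interview", "Ex_checklist", "Ex_behavioral"],
               ["Ex_interview", "Ex_checklist", "Ex_behavioral"]].round(2))

# Heterotrait-monomethod correlations (different traits, same method): want these low.
print(mtmm.loc[["Ex_checklist", "Con_checklist", "SR_checklist"],
               ["Ex_checklist", "Con_checklist", "SR_checklist"]].round(2))
```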
9. Incremental Validity
- Definition
  - Does assessment give increment in validity of judging/predicting criterion?
  - Is what it tells us NEW, and TRUE?
  - Therefore, inherently relative to prior info
- Pragmatic questions
  - How big an increment?
  - Relative to what prior info?
  - Importance in cost-effective health care?
10. Incremental Validity, cont'd
- Statistical vs. clinical incremental validity
  - Statistical: does adding a rating/score to the equation improve statistical prediction of the criterion? (see the sketch after this list)
  - Clinical: does giving the rating/score to clinicians increase their ability to predict the criterion?
- Statistical I.V. never negative, can be zero (or tiny)
- Clinical I.V. can be negative, zero, or positive
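A minimal sketch of the statistical version, in Python with hypothetical simulated data (predictors, effect sizes, and sample size are invented): fit a model on prior info alone, add the test score, and compare R-squared.

```python
# Statistical incremental validity: does adding a test score to prior info
# improve prediction of the criterion? All data here are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 300
prior_info = rng.normal(size=(n, 2))            # e.g., biographical data sheet variables
test_score = rng.normal(size=n)                 # e.g., a test scale score (made up)
criterion = prior_info @ [0.5, 0.3] + 0.2 * test_score + rng.normal(size=n)

X_full = np.column_stack([prior_info, test_score])
r2_base = LinearRegression().fit(prior_info, criterion).score(prior_info, criterion)
r2_full = LinearRegression().fit(X_full, criterion).score(X_full, criterion)

print(f"R^2, prior info only:   {r2_base:.3f}")
print(f"R^2, prior info + test: {r2_full:.3f}")
print(f"Incremental validity (delta R^2): {r2_full - r2_base:.3f}")  # never negative in-sample
```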
11. Incremental Validity Designs (Within-Judge)
- Give judge/group info a piece at a time, e.g.:
  - No info (stereotype only, Mean Average Patient)
  - Face sheet (biographical data sheet, BDS)
  - Face sheet + MMPI
  - Face sheet + MMPI + Rorschach
  - Face sheet + MMPI + Rorschach + interview
- Compare prediction accuracy at each step (see the sketch below)
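A minimal sketch of that step-by-step comparison, with entirely hypothetical judge predictions and criterion scores (correlation with the criterion stands in for "prediction accuracy"):

```python
# Within-judge design: accuracy (correlation with the criterion) after each
# added piece of information. Judge predictions below are simulated, not real data.
import numpy as np

rng = np.random.default_rng(2)
criterion = rng.normal(size=40)                 # e.g., therapist ratings (hypothetical)

steps = {                                       # hypothetical predictions at each info step
    "no info (stereotype)": 0.2 * criterion + rng.normal(size=40),
    "face sheet":           0.4 * criterion + rng.normal(size=40),
    "+ MMPI":               0.5 * criterion + rng.normal(size=40),
    "+ Rorschach":          0.5 * criterion + rng.normal(size=40),
    "+ interview":          0.55 * criterion + rng.normal(size=40),
}

for label, judgments in steps.items():
    r = np.corrcoef(judgments, criterion)[0, 1]   # accuracy at this step
    print(f"{label:22s} r = {r:.2f}")
```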
12. Incremental Validity Design (Between-Judge)
- If 4 info sources, 2^4 = 16 different info sets (enumerated in the sketch below), e.g.:
  - Face sheet + MMPI + Rorschach + interview
  - Face sheet
  - Face sheet + Rorschach + interview
  - MMPI + Rorschach + interview
  - etc.
- Randomize info sets to judges
- Compare prediction accuracy across sets
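A quick check of that combinatorial count, enumerating every possible info set (including the empty, stereotype-only set) from the four sources named above:

```python
# 4 information sources yield 2**4 = 16 possible info sets.
from itertools import combinations

sources = ["face sheet", "MMPI", "Rorschach", "interview"]
info_sets = [c for r in range(len(sources) + 1) for c in combinations(sources, r)]
print(len(info_sets))                      # 16
for s in info_sets:
    print(", ".join(s) if s else "(no info: stereotype only)")
```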
13. Sines (1959) Study
- Subjects: VA outpatients
- Judges: U of MN clinical students
- Judgments: describe personality/psychopathology by Q-sort
- Q-sort: sort descriptors into 7 piles by fit (pile heights follow a normal distribution; illustrated below)
- Criterion: personality/psychopathology as rated by therapist Q-sort after 10 sessions
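A minimal sketch of what a forced normal distribution of pile sizes looks like, assuming a hypothetical 100-item deck and arbitrary z-scale cut points (not Sines's actual parameters):

```python
# Forced-distribution Q-sort: 7 piles whose sizes roughly follow a normal curve.
import numpy as np
from scipy.stats import norm

n_items, n_piles = 100, 7
cuts = np.linspace(-2.0, 2.0, n_piles - 1)              # cut points on a z-scale
probs = np.diff(norm.cdf(np.concatenate(([-np.inf], cuts, [np.inf]))))
pile_sizes = np.round(probs * n_items).astype(int)
pile_sizes[n_piles // 2] += n_items - pile_sizes.sum()  # absorb rounding error in the middle pile
print(pile_sizes)   # few descriptors in the extreme piles, many in the middle
```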
14. Sines (1959) Study, cont'd
- Sources of info
  - Base rate info (Mean Patient Stereotype)
  - Face sheet info (Biographical Data Sheet)
  - MMPI
  - Rorschach
  - Sentence completion
  - Interview
15. Sines (1959) Study, cont'd
- Design: Design II (between-judge)
- Strengths: rigorous separation of I.V. across several common assessment sources
  - Inclusion of Bio. Data Sheet, stereotype info, interview
- Limitations: between-judge effect is noise, use of inexpert judges, small N, crummy criterion problem
16. Sines (1959) Study Findings
- Mean average patient stereotype not too bad
- BDS has appreciable validity
- Tests, interview don't improve much on this
- Interview works a little better than tests
- MMPI: very small positive I.V.
- Rorschach: zero to negative I.V.
- Validity ceiling pretty low
17. I.V. Studies: Implications
- Potential wide usefulness
  - Neuropsychology batteries
  - Multistage screening (EAPs, community)
  - Forensics (where long, expensive evaluations are common)
- Limited actual use
  - Results often go counter to what many believe ("more is better")
  - Goes counter to what test/interview proponents want you to believe ("the MMPI is wonderful")
18. Diagnostic/Malingering Studies
- Correlational vs. experimental designs
  - Diagnostic: correlational (case-control)
  - Malingering: experimental (instructed faking)
  - Very hard to study minimization/denial (opposite of malingering): crummy criterion problem
- Diagnostic study design example
  - Detection of psychosis vs. neurosis vs. normal by MMPI, Rorschach (or both)
19. Little & Schneidman (1959)
- Designed in part to rebut studies like Sines (1959), which used novice clinicians
- Idea: evaluate each test separately, using several very expert clinicians with each test
- Criterion: diagnosis (N = 3 each: normal, neurotic, psychotic, psychophysiological)
20. Little & Schneidman, cont'd
- Blind test readings, no background info
- Readers told of groups, but not base rates
- A major diagnostic result
  - All normal Ss called psychotic by a majority of Rorschach readers (usual diagnosis: paranoid schizophrenia); none called psychotic by MMPI readers
  - MMPI readers (almost) all called normal subjects normal
21. Albert, Fox & Kahn (1980)
- (Pretty) good malingering study
- Background: repeated decades-old assertion that the Rorschach is unfakeable
- Idea: test this fairly, by using expert test readers (fellows of the Rorschach society)
- Also look at coached vs. uncoached faking
22. Albert et al. Study Groups (N = 6)
- Psychotic inpatients; non-faking normals
- Uncoached fakers (act like a paranoid schizophrenic)
- Coached fakers ("here's info about paranoid schizophrenics; now act like one")
- Pay fakers for success (standard procedure)
23. Albert et al. Judgment Task
- Judges (N = 46 fellows of SPA, 20-year average experience) told protocols included fakers and patients, but not types or base rates
- No other info on testees at all
- Relied solely on written protocols (pre-videotape era study!); judges did not administer the Rorschachs themselves
24. Albert et al. Diagnostic Results