Psychology 328: Psychological Assessment

Transcript and Presenter's Notes

1
Psychology 328: Psychological Assessment
  • Department of Psychology
  • University of Michigan-Flint
  • Fall 2005

2
Chapter 1
  • Psychological Testing and Assessment

3
I. Testing and Assessment
  • A. Definitions of Testing and Assessment
  • B. Tools of Assessment
  • 1. The Test
  • 2. The Interview
  • 3. The Portfolio
  • 4. Case History Data
  • 5. Role play test
  • 6. Behavioral Observation
  • 7. Computers

4
II. Who, What, and Why?
  • A. Who are the Parties?
  • 1. The test developer
  • 2. The test user
  • 3. Society at large
  • B. Where and Why
  • 1. Education Settings
  • 2. Counseling Settings

5
II. Who, What, and Why?
  • B. Where and Why
  • 1. Education
  • 2. Geriatric Settings
  • 3. Counseling
  • 4. Clinical
  • 5. Business
  • 6. Governmental and Organizational Credentialing

6
III. Evaluating the Quality of Tests
  • B. Reference Sources for Tests
  • 1. Test Catalogues
  • 2. Test Manuals
  • 3. Test Reviews
  • a. Mental Measurements Yearbook
  • b. Tests in Print
  • 4. Journal articles
  • 5. On-line

7
Chapter 2
  • Historical, Cultural, and Legal/Ethical
    Considerations

8
I. A Historical Perspective
  • A. Antiquity to the Nineteenth Century
  • 1. China
  • 2. Ancient Greece

9
I. A Historical Perspective
  • B. The Nineteenth Century
  • 1. Charles Darwin (1809 - 1882)
  • 2. Francis Galton (1822 - 1911)
  • 3. Karl Pearson (1857 - 1936)
  • 4. Wilhelm Max Wundt (1832 - 1920)
  • 5. James McKeen Cattell (1860 - 1944)

10
I. A Historical Perspective
  • C. The Twentieth Century
  • 1. Intelligence Assessment
  • a. Alfred Binet (1857 - 1911)
  • b. David Wechsler
  • c. Group testing
  • 2. Personality Assessment
  • a. Personal Data Sheet (Woodworth)
  • b. Projective testing
  • 3. Academic vs. Applied Settings

11
II. Culture and Assessment
  • A. Issues
  • 1. Language
  • 2. Non-verbal communication
  • 3. Standards used
  • B. Tests and Group Membership

12
III. Legal and Ethical Issues
  • A. Public concerns
  • 1. Legislation
  • B. Profession's Concerns
  • 1. Test-user qualifications
  • 2. Testing people with disabilities
  • 3. Computerized testing

13
III. Legal and Ethical Issues
  • C. The Rights of Testtakers
  • 1. Informed consent
  • 2. Informed of findings
  • 3. Privacy not invaded
  • 4. Least stigmatizing label
  • 5. Confidentiality

14
Chapter 2
  • Historical, Cultural, and Legal/Ethical
    Considerations

15
I. A Historical Perspective
  • A. Antiquity to the Nineteenth Century
  • 1. China
  • 2. Ancient Greece

16
I. A Historical Perspective
  • B. The Nineteenth Century
  • 1. Charles Darwin (1809 - 1882)
  • 2. Francis Galton (1822 - 1911)
  • 3. Karl Pearson (1857 - 1936)
  • 4. Wilhelm Max Wundt (1832 - 1920)
  • 5. James McKeen Cattell (1860 - 1944)

17
I. A Historical Perspective
  • C. The Twentieth Century
  • 1. Intelligence Assessment
  • a. Alfred Binet (1857 - 1911)
  • b. David Wechsler
  • c. Group testing
  • 2. Personality Assessment
  • a. Personal Data Sheet (Woodworth)
  • b. Projective testing
  • 3. Academic vs. Applied Settings

18
II. Culture and Assessment
  • A. Issues
  • 1. Language
  • 2. Non-verbal communication
  • 3. Standards used
  • B. Tests and Group Membership

19
III. Legal and Ethical Issues
  • A. Public concerns
  • 1. Minimum competency testing
  • 2. Truth-in-testing
  • B. Profession's Concerns
  • 1. Test-user qualifications
  • Level A (manual), Level B (special knowledge),
  • Level C (knowledge, experience)
  • 2. Testing people with disabilities
  • 3. Computerized testing

20
III. Legal and Ethical Issues
  • C. Legislation
  • 1. Americans with Disabilities Act (1990)
  • 2. Civil Rights Act
  • 3. No Child Left Behind Act (2001)
  • D. Litigation
  • Hobson v. Hansen (1967)
  • Larry P. v. Riles (1979)
  • Griggs v. Duke Power Company (1971)
  • Regents of the University of California v. Bakke
    (1978)
  • Grutter v. Bollinger (2003)

21
III. Legal and Ethical Issues
  • E. The Rights of Testtakers
  • 1. Informed consent
  • 2. Informed of findings
  • 3. Privacy not invaded
  • 4. Least stigmatizing label
  • 5. Confidentiality

22
Chapter Three
  • A Statistics Refresher

23
I. Scales of Measurement
  • A. Nominal
  • B. Ordinal
  • C. Interval
  • D. Ratio

24
II. Describing Data
  • A. Frequency Distributions
  • 1. Normal
  • 2. Bimodal
  • 3. Positively skewed
  • 4. Negatively skewed
  • 5. J-shaped
  • 6. Rectangular
  • B. Graphs: Histogram, Bar Graph, Frequency
    Polygon

25
(No Transcript)
26
II. Describing Data
  • C. Measures of Central Tendency
  • 1. Mean
  • 2. Median
  • 3. Mode
  • D. Measures of Variability
  • 1. Range
  • 2. Interquartile range
  • 3. Average Deviation
  • 4. Standard Deviation and Variance (see the sketch below)

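The central tendency and variability measures above can be illustrated with a minimal Python sketch (not part of the original slides), using only the standard library and a made-up set of test scores:

```python
import statistics

scores = [12, 15, 15, 18, 20, 22, 25, 30]      # hypothetical raw test scores

mean = statistics.mean(scores)                  # arithmetic mean
median = statistics.median(scores)              # middle score
mode = statistics.mode(scores)                  # most frequent score

score_range = max(scores) - min(scores)                      # range
avg_dev = sum(abs(x - mean) for x in scores) / len(scores)   # average deviation
variance = statistics.pvariance(scores)                      # population variance
sd = statistics.pstdev(scores)                               # standard deviation

print(mean, median, mode, score_range, avg_dev, variance, sd)
```
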
27
II. Describing Data
28
II. Describing Data
  • E. Skewness
  • F. Kurtosis

29
III. The Normal Curve
30
III. The Normal Curve
  • B. Standard Scores
  • 1. z scores: Mean = 0, SD = 1
  • 2. T scores: Mean = 50, SD = 10
  • 3. Other Standard Scores (see the sketch below)

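As a quick illustration of these standard-score conversions, here is a minimal Python sketch (not from the slides); the raw scores are hypothetical:

```python
import statistics

raw_scores = [85, 90, 100, 110, 115]            # hypothetical raw scores
mean = statistics.mean(raw_scores)
sd = statistics.pstdev(raw_scores)

# z score: how many SDs a raw score falls from the mean (mean 0, SD 1)
z_scores = [(x - mean) / sd for x in raw_scores]

# T score: linear transformation of z (mean 50, SD 10)
t_scores = [50 + 10 * z for z in z_scores]

print([round(z, 2) for z in z_scores])
print([round(t, 1) for t in t_scores])
```
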
31
III. The Normal Curve
32
Chapter Four
  • Of Tests and Testing

33
I. Assumptions in Testing and Assessment
  • A. Psychological Traits and States Exist
  • B. Traits and States can be measured
  • C. Test behavior predicts non-test behavior
  • D. Tests have Strengths and Weaknesses
  • E. Various Sources of Error are part of the
    Assessment Process
  • F. Testing and Assessment Can Be Conducted in a
    Fair and Unbiased Manner
  • G. Testing and Assessment Benefit Society

34
II. Good Tests
  • A. Reliability
  • B. Validity
  • C. Good Norms

35
III. Norms
  • A. Introduction
  • B. Standardization and Sampling
  • C. Types of Norms
  • 1. Percentiles
  • 2. Age Norms
  • 3. Grade Norms
  • 4. National Norms/National Anchor Norms
  • 5. Subgroup Norms
  • 6. Local Norms

36
III. Norms
  • D. Fixed Reference Group Scoring Systems
  • E. Norm-Referenced Versus Criterion-Referenced
    Interpretation

37
IV. Correlation and Regression
  • A. Introduction
  • B. Pearson r

38
IV. Correlation and Regression
  • C. Spearman rho: Ranks
  • D. Biserial: Continuous with Dichotomized
  • E. Point-biserial: Continuous with Dichotomy
  • F. Tetrachoric: Two Dichotomized
  • G. Phi: Two Dichotomies (see the sketch below)

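A minimal sketch of two of these coefficients in Python (hypothetical data): Pearson r for two continuous variables, and the point-biserial r, which is simply Pearson r computed with one variable coded as a 0/1 dichotomy. Spearman rho would be the same formula applied to ranks.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test = [10, 12, 14, 15, 18, 20]          # continuous test scores (hypothetical)
gpa = [2.1, 2.4, 2.8, 3.0, 3.4, 3.8]     # continuous criterion (hypothetical)
passed = [0, 0, 1, 0, 1, 1]              # true dichotomy, coded 0/1

print(round(pearson_r(test, gpa), 3))     # Pearson r: continuous with continuous
print(round(pearson_r(test, passed), 3))  # point-biserial: continuous with dichotomy
```
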
39
V. Multiple Regression
  • A. Regression (see the sketch below)
  • Y = a + bX
  • B. Multiple Regression
  • Y = a + b1X1 + b2X2

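A minimal least-squares sketch of the simple regression equation Y = a + bX, using hypothetical predictor and criterion scores; a multiple regression with two predictors would extend this by solving for b1 and b2 simultaneously, typically with matrix methods.

```python
def simple_regression(x, y):
    """Return intercept a and slope b for the least-squares line Y = a + bX."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

test_scores = [10, 12, 15, 18, 20]         # predictor X (hypothetical)
job_ratings = [3.0, 3.2, 3.9, 4.4, 4.8]    # criterion Y (hypothetical)

a, b = simple_regression(test_scores, job_ratings)
predicted = [round(a + b * x, 2) for x in test_scores]   # predicted Y = a + bX
print(round(a, 3), round(b, 3), predicted)
```
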
40
(No Transcript)
41
(No Transcript)
42
(No Transcript)
43
Chapter Five
  • Reliability

44
I. The Concept of Reliability
  • A. Introduction
  • 1. True Variance: X = T + E (see the sketch below)
  • 2. Error Variance
  • B. Sources of Error Variance
  • 1. Test Construction
  • 2. Test Administration
  • 3. Test Scoring and Interpretation
  • 4. Other Sources of Error

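A minimal simulation sketch of the classical true-score model X = T + E (all numbers hypothetical): observed-score variance is the sum of true and error variance, and reliability is the proportion of observed variance that is true variance.

```python
import random
import statistics

random.seed(1)

true_scores = [random.gauss(100, 15) for _ in range(5000)]   # T: true scores
errors = [random.gauss(0, 5) for _ in range(5000)]           # E: random error
observed = [t + e for t, e in zip(true_scores, errors)]      # X = T + E

var_t = statistics.pvariance(true_scores)
var_x = statistics.pvariance(observed)

# Reliability = true variance / observed variance; here about 225/250 = .90
print(round(var_t / var_x, 3))
```
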
45
II. Reliability Estimates
  • A. Test-Retest Reliability Estimates
  • B. Parallel-Forms and Alternate Forms
  • C. Split-Half Reliability Estimates
  • 1. Spearman-Brown Formula
  • D. Internal Consistency
  • 1. The Kuder-Richardson formulas
  • 2. Coefficient alpha (see the sketch below)
  • E. Inter-Scorer Reliability

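A minimal sketch of the split-half estimate with the Spearman-Brown correction and of coefficient alpha, using a tiny hypothetical matrix of item scores (rows are examinees, columns are items):

```python
import statistics

def pearson_r(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

items = [                      # hypothetical 0/1 item scores
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 1],
]

# Split-half: correlate odd-item and even-item half scores, then apply
# the Spearman-Brown correction to estimate full-length reliability
odd = [sum(row[0::2]) for row in items]
even = [sum(row[1::2]) for row in items]
r_half = pearson_r(odd, even)
spearman_brown = (2 * r_half) / (1 + r_half)

# Coefficient alpha = (k / (k - 1)) * (1 - sum of item variances / total variance)
k = len(items[0])
item_vars = [statistics.pvariance([row[i] for row in items]) for i in range(k)]
total_var = statistics.pvariance([sum(row) for row in items])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(round(r_half, 3), round(spearman_brown, 3), round(alpha, 3))
```
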
46
III. Interpreting a Coefficient of Reliability
  • A. Introduction
  • B. Choice of a Reliability Coefficient
  • 1. Homogeneity versus heterogeneity of test items
  • 2. Dynamic versus static
  • 3. Restriction versus Inflation of range
  • 4. Speed versus Power
  • 5. Criterion-referenced
  • C. Recent developments in reliability

47
IV. Reliability and Individual Scores
  • A. Standard Error of Measurement
  • s_meas = s √(1 - r_xx) (see the sketch below)
  • B. Standard Error of the Difference

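A minimal sketch of both quantities under hypothetical values (SD = 15, reliabilities .90 and .85): the standard error of measurement builds a confidence band around one obtained score, and the standard error of the difference tests whether two scores differ by more than chance.

```python
import math

sd = 15          # test standard deviation (hypothetical)
r_xx = 0.90      # reliability of test X (hypothetical)
r_yy = 0.85      # reliability of test Y (hypothetical)

# Standard error of measurement: s_meas = SD * sqrt(1 - r_xx)
sem = sd * math.sqrt(1 - r_xx)

# 95% confidence band around an obtained score of 110
score = 110
band = (score - 1.96 * sem, score + 1.96 * sem)

# Standard error of the difference between two scores: SD * sqrt(2 - r_xx - r_yy)
se_diff = sd * math.sqrt(2 - r_xx - r_yy)

print(round(sem, 2), tuple(round(v, 1) for v in band), round(se_diff, 2))
```
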
48
Chapter Six
  • Validity

49
I. The Concept of Validity
  • A. Introduction
  • 1. Types: Content, Criterion, Construct
  • 2. Face Validity
  • B. Content Validity
  • 1. Estimates
  • 2. Cultural Relativity

50
II. Criterion-Related Validity
  • A. The Criterion Problem
  • 1. Characteristics
  • B. Concurrent Validity
  • C. Predictive Validity
  • 1. Validity Coefficient
  • 2. Incremental Validity
  • 3. Expectancy Data
  • 4. Decision Theory

51
(No Transcript)
52
Multitrait-Multimethod Matrix
53
(No Transcript)
54
(No Transcript)
55
(No Transcript)
56
(No Transcript)
57
(No Transcript)
58
(No Transcript)
59
III. Construct Validity
  • A. Evidence
  • 1. Homogeneity
  • 2. Changes with Age
  • 3. Pre/Posttest Changes
  • 4. Contrasting Groups
  • 5. Convergent Validity
  • 6. Divergent Validity
  • 7. Factor Analysis

60
IV. Test Bias
  • A. Definitions of Bias
  • 1. Rating Error
  • a. Leniency
  • b. Severity
  • c. Central Tendency
  • d. Halo
  • 2. Legal Status
  • B. Fairness

61
Chapter Seven
  • Test Development

62
I. Test Conceptualization
  • A. Preliminary Questions
  • 1. What is the test designed to measure?
  • 2. What is the objective of the test?
  • 3. Is there a need for this test?
  • 4. Who will use this test?
  • 5. Who will take this test?
  • 6. What content will the test cover?
  • 7. How will the test be administered?

63
I. Test Conceptualization
  • A. Preliminary Questions
  • 8. What is the ideal format of the test?
  • 9. Should more than one form of the test be
    developed?
  • 10. What special training will be required of
    test users for administering or interpreting the
    test?
  • 11. What types of responses will be required by
    testtakers?

64
I. Test Conceptualization
  • A. Preliminary Questions
  • 12. Who benefits as the result of an
    administration of this test?
  • 13. Is there any potential for harm as the
    result of an administration of this test?
  • 14. How will meaning be attributed to scores on
    this test?
  • B. Norm-referenced vs. Criterion-referenced

65
I. Test Conceptualization
  • C. Pilot Work

66
II. Test Construction
  • A. Scaling
  • 1. Types of scales
  • 2. Scaling methods
  • a. Rating, summative
  • b. Likert
  • c. Paired comparisons
  • d. Categorical
  • e. Guttman

67
II. Test Construction
  • B. Writing Items
  • 1. Item formats
  • a. Selected-response format: multiple choice,
    matching, true/false
  • b. Stem, correct alternative, distractors
  • 2. Constructed-response format: completion, short
    answer, essay
  • 3. Computerized Adaptive Testing
  • C. Scoring Items
  • 1. Category
  • 2. Cumulative

68
III. Test Tryout
  • Evaluation of Items

69
IV. Item Analysis
  • A. Item-Difficulty Index
  • B. Item-Validity Index
  • C. Item-Reliability Index
  • D. Item-Discrimination Index (see the sketch below)
  • E. Item-Characteristic Curves
  • 1. Latent-trait model

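A minimal sketch of the item-difficulty index (proportion passing) and a simple item-discrimination index (upper-group minus lower-group proportion correct), using a hypothetical response matrix sorted from highest to lowest total score:

```python
responses = [           # rows = examinees sorted by total score, cols = items (0/1)
    [1, 1, 1, 1],       # highest scorer
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],       # lowest scorer
]

n = len(responses)
n_items = len(responses[0])

# Item-difficulty index p: proportion of examinees answering the item correctly
difficulty = [sum(row[i] for row in responses) / n for i in range(n_items)]

# Item-discrimination index d = p(upper group) - p(lower group)
upper, lower = responses[:2], responses[-2:]
discrimination = [
    sum(r[i] for r in upper) / len(upper) - sum(r[i] for r in lower) / len(lower)
    for i in range(n_items)
]

print(difficulty)
print(discrimination)
```
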
70
IV. Item Analysis
  • F. Other factors
  • 1. Guessing
  • 2. Item fairness
  • 3. Speeded tests
  • G. Qualitative Item Analysis

71
V. Test Revision
  • A. Need for Revision
  • 1. Test Materials
  • 2. Norms
  • 3. Revised theory
  • B. Cross-Validation
  • 1. Shrinkage

72
Chapter Eight
  • Intelligence and Its Measurement

73
I. What Is Intelligence?
  • A. Introduction
  • B. Francis Galton
  • C. Alfred Binet
  • D. David Wechsler
  • E. Jean Piaget
  • F. Factor Analysis
  • G. Information-Processing

74
Factor Analysis
75
(No Transcript)
76
(No Transcript)
77
(No Transcript)
78
Spearman's Theory
79
L.L. Thurstone's Primary Mental Abilities
80
Vernon's Model of Intelligence
81
Guilford's Structure of Intellect
82
Gardner's Theory of Seven Intelligences
  • Linguistic
  • Logical-Mathematical
  • Bodily-kinesthetic
  • Spatial
  • Musical
  • Interpersonal
  • Intrapersonal

83
Sternberg's Triarchic Theory
84
II. Measuring Intelligence
  • A. In Infancy
  • B. In Children
  • C. In Adults
  • D. Special Populations
  • 1. Disabilities
  • 2. Psychological Disorders
  • 3. Gifted

85
III. Intelligence Issues
  • A. Nature vs. Nurture
  • B. Stability of Intelligence
  • C. Issues
  • 1. Measurement Process
  • 2. Personality
  • 3. Gender
  • 4. Family environment
  • 5. Culture

86
Chapter Nine
  • Tests of Intelligence

87
I. The Stanford-Binet, Fifth Edition
  • A. Introduction
  • 1. 1905: Binet
  • 2. 1916: Stanford-Binet
  • 3. 1937: Form L and Form M
  • 4. 1960/1973: Form L-M
  • 5. 1986: SB Fourth Edition
  • 6. 2003: SB Fifth Edition

88
Subtests
  • 10 subtests (5 verbal, 5 nonverbal)
  • 8 subtests are composed of 5 or 6 testlets and
    use the functional-level format
  • 2 subtests (routing subtests) use a point-scale
    format and do not have testlets

89
(No Transcript)
90
Fluid Reasoning, Knowledge, Quantitative Reasoning,
Visual-Spatial Processing, Working Memory
91
Fluid Reasoning
Ability to solve verbal and nonverbal problems
using inductive or deductive reasoning
92
Knowledge
  • Accumulated store of general information acquired
    at home, school, work, or in life
  • Often referred to as crystallized ability

93
Quantitative Reasoning
  • Facility with numbers and numerical problem
    solving, whether with word problems or with
    figural relationships
  • Emphasizes the problem-solving process more than
    academic mathematical knowledge

94
Visual-Spatial Processing
Ability to see patterns, relationships, spatial
orientation, or the gestalt among diverse
pieces of a visual display
95
Working Memory
Short-term processing of information, whether
verbal or visual, emphasizing transformations
or sorting out of diverse information
Subtest activities: Nonverbal (Delayed Response,
Block Span); Verbal (Memory for Sentences, Last Word)
96
Verbal Domain
97
Nonverbal Domain
Nonverbal subtests and activities (levels): activities
are shown with the levels at which they appear.
98
Hierarchy of Components in the SB5
99
Development of the SB5: 5 Major Stages
1. Planning
2. Pilot studies
3. Tryout edition
4. Standardization edition
5. Final publication
100
Development of the SB5
  • 7-year project that began in 1995
  • Pilot and tryout phases: 1,000 items
  • Standardization phase: 375 items
  • Final publication: 284 items

101
Technical Qualities
  • Norming sample of 4,800 individuals between the
    ages of 2 and 85
  • Additional 3,000 included in special studies
  • Representative of the 2001 U.S. Census update
  • Co-normed with the Bender Visual-Motor Gestalt
    Test, Second Edition (Bender-Gestalt II)

Education level was based on 1999 data.
102
Standardization Sample
Nationally representative and matched to
stratification variables in U.S. Census Bureau
(2001) publications
  • Size: 4,800
  • Ages 2 to 85 years
  • Collected over a 12-month period (2001-2002)
  • Approximately 5% of the norm sample was enrolled
    in special education (mainstreamed for >50% of the
    school day)
  • Stratification variables: age, gender, ethnicity,
    geographic region, and socioeconomic level

103
Age and Gender
For stratified sampling purposes, 30 age groups
were defined. Smaller intervals were used at
ages where cognitive abilities change rapidly.
  • by 6 month intervals at 2-4 years
  • by 12 month intervals at 5-19
  • by 5 year intervals at 20-29 and above 60
  • by 10 year intervals at 30-59

Gender: 50% split between males and females, except
at the oldest age levels, where census data show a
higher percentage of females.
104
Ethnicity/Race
  • American Indian or Alaskan Native
  • Asian
  • Native Hawaiian or other Pacific Islander
  • African American
  • Anglo-American

Included as a separate question
  • Hispanic, Latino, Spanish

105
Geographic Region
  • 4 census regions in the U.S.
  • Northeast
  • Midwest
  • South
  • West
  • Rural vs. urban data were collected but not used
    as a stratification variable

106
Socio-economic Levels
  • Educational attainment was used as an indicator.
  • Adults: years of education completed
  • Children (<18): years of education completed by
    parents or guardians
  • Use of occupation and income as indicators was
    determined to be problematic.

107
Final Item Selection
1. Extensive item analyses conducted
2. Excellent fit to the one-parameter logistic (Rasch) model for each of the 5 dimensions
3. Strong recommendations from examiners
4. High subtest internal-consistency and inter-scorer reliability and high discrimination indexes
5. Appropriateness of difficulty and range of difficulty
6. Positive contribution to the factor structure and total test
7. Evidence of validity (content, criterion, and construct)
8. High ratings by users of previous versions
108
Fairness
Quantitative analyses were performed to ensure
items had no bias across groups. Qualitative bias
reviews were performed by experts.
109
Reliability: Internal Consistency
Split-half reliability (note: mean reliabilities
across all ages)
110
Reliability: Test-Retest
  • SB5 measures abilities that are relatively stable
    across time
  • Retest scores may show some increase due to
    practice effects and familiarity with testing
    procedures
  • Overall, IQ scores on the SB5 appear to be quite
    stable and less affected by practice effects
  • Retesting may be possible after 6 months vs. the
    typical 12-month interval

111
Reliability: Inter-Scorer Agreement
Refers to how consistently two or more examiners
score the multiple-point responses of the same
examinee. Items with poor inter-scorer agreement
were deleted from the final edition. The median
inter-scorer correlation is .90 (see the sketch
below).
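A minimal sketch of an inter-scorer correlation on hypothetical ratings of the same examinees by two scorers (the SB5 figure of .90 is the median of such correlations across items):

```python
import statistics

scorer_1 = [2, 3, 1, 4, 2, 3, 0, 4]   # hypothetical ratings by scorer 1
scorer_2 = [2, 3, 2, 4, 1, 3, 0, 4]   # hypothetical ratings by scorer 2

mx, my = statistics.mean(scorer_1), statistics.mean(scorer_2)
num = sum((a - mx) * (b - my) for a, b in zip(scorer_1, scorer_2))
den = (sum((a - mx) ** 2 for a in scorer_1)
       * sum((b - my) ** 2 for b in scorer_2)) ** 0.5

print(round(num / den, 3))   # Pearson correlation between the two scorers
```
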
112
Content Validity
  • Professional judgment of content
  • Researchers, experts, and examiners reviewed content
  • item bank of all SB5 items
  • Coverage of important constructs
  • Items reviewed and rated by experts in CHC theory
  • design and test specification developed
  • Empirical item analyses
  • Classical and item-response methods employed
  • Item discrimination, percentage correct at
    successive age levels, model-data-fit statistics,
    and differential item functioning analyses

113
Concurrent Validity: Full Scale IQ
114
Construct-Related Evidence of Validity
  • Age trends
  • Intercorrelations of tests, factors, and IQs
  • Evidence for general ability, or g
  • Confirmatory factor analysis
  • Cross-battery factor analysis

115
Age Trends
116
Evidence of General Ability, or g
Averaged across all ages, g loadings are .70 or
higher on all subtests except Nonverbal Fluid
Reasoning (.66); .70 or greater is considered good,
and .50 to .69 is considered fair.
The proportion of variance attributed to g within
the SB5 ranges from 56% to 61% of total subtest
variance.
117
Construct Validity: Confirmatory Factor Analysis
  • The five factors were confirmed for all age
    groups. Factor loadings exceed .40 at all ages.
  • Ages 2-5
  • Ages 6-10
  • Ages 11-16
  • Ages 17-50
  • Ages 51-85

118
(No Transcript)
119
II. Wechsler Tests: Verbal Subtests
  • 1. Information
  • 2. Comprehension
  • 3. Similarities
  • 4. Arithmetic
  • 5. Vocabulary
  • 6. Receptive Vocabulary
  • 7. Picture Naming
  • 8. Digit Span/Sentences (WPPSI)
  • 9. Letter-Number Sequencing (WAIS-III)

120
II. Wechsler Tests: Performance Subtests

121
II. Wechsler Tests: Performance Subtests
  • 1. Picture Arrangement
  • 2. Picture Completion
  • 3. Block Design
  • 4. Object Assembly
  • 5. Digit Symbol-Coding
  • 6. Symbol Search
  • 7. Matrix Reasoning
  • 8. Word Reasoning
  • 9. Picture Concepts
  • 10. Cancellation

122
II. Wechsler Tests
  • WPPSI-III: Receptive Vocabulary, Picture Naming,
    Word Reasoning, Picture Concepts
  • WISC-III: Cancellation, Mazes, Symbol Search
  • WAIS-III: Digit Symbol

123
II. Wechsler Tests: WAIS-III
  • Verbal Tests
  • Vocabulary
  • Similarities
  • Arithmetic
  • Digit Span
  • Information
  • Comprehension
  • (Letter-Number Sequencing)
  • Performance Tests
  • Picture Completion
  • Digit Symbol-Coding
  • Block Design
  • Matrix Reasoning
  • Picture Arrangement
  • (Symbol Search)
  • (Object Assembly)

124
II. Wechsler Tests
  • C. WAIS-III
  • 1. WB, WAIS, WAIS-R
  • 2. Standardization
  • a. N = 2,450
  • b. Age 16 to 89
  • 3. Psychometrics
  • a. Reliability
  • b. Validity

125
II. Wechsler Tests
  • D. WISC-IV
  • 1. WISC, WISC-R
  • 2. Development
  • E. WPPSI-III
  • 1. Standardization
  • 2. Development
  • 3. Psychometrics

126
WISC-IV Factor Structure
127
III. Other Measures of Intelligence
  • A. Individual
  • 1. Kaufman Assessment Battery for Children
  • 2. Kaufman Brief Intelligence Test
  • 3. Kaufman Adolescent and Adult Int. Test
  • B. Group
  • 1. Advantages and Disadvantages
  • 2. In Schools
  • 3. In the Military Armed Services Vocational
    Aptitude Battery

128
Chapter Ten
  • Preschool and Educational Assessment

129
I. Preschool Assessment
  • A. Issues in Preschool Assessment
  • B. Preschool tests

130
II. Achievement Tests
  • A. Measures of General Achievement
  • B. Measures of Achievement in Specific Subject
    Areas

131
III. Aptitude Tests
  • A. Issues
  • B. Elementary School
  • C. Secondary School
  • D. College Level

132
IV. Diagnostic Tests
  • A. Reading
  • 1. Woodcock Reading Mastery Tests-Revised
  • B. Math
  • C. Learning disabilities

133
V. Psychoeducational Test Batteries
  • A. Kaufman Assessment Battery for Children
  • 1. Standardization
  • 2. Administration, Scoring and Interpretation
  • B. Woodcock-Johnson Psycho-Educational
    Battery-Revised (WJ-R)
  • C. The Cognitive Assessment System
  • D. The Differential Ability Scales

134
VI. Other Assessment Tools
  • A. Performance, Portfolio, and Authentic
    Assessment
  • B. Peer Appraisal Techniques
  • C. Study Habits, Interests, and Attitudes

135
Chapter Eleven
  • Personality Assessment
  • An Overview

136
I. Personality and Personality Assessment Defined
  • A. Defined: An individual's unique
    constellation of psychological traits and states.
  • B. Personality Assessment
  • C. Traits, Types and States
  • 1. Personality traits: Relatively enduring
  • 2. Personality types: a category within a
    taxonomy
  • 3. Personality states: Temporary

137
II. Personality Assessment: Some Basic Questions
  • A. Who
  • 1. Self-report
  • 2. Another: parent, teacher, peer, spouse, etc.
  • B. What
  • 1. Content sampled
  • 2. Response styles: Socially Desirable,
    Acquiescence, Nonacquiescence, Deviance
    (Fake-bad), Extreme, Gambling, Positive
    (Fake-good)

138
II. Personality Assessment: Some Basic Questions
  • C. Where
  • D. How: Structured vs. Unstructured
  • 1. True/False
  • 2. Like/Dislike
  • 3. Forced Choice
  • 4. Adjective Checklist
  • 5. Pictures
  • 6. Ambiguous Stimuli
  • 7. Situations

139
III. Developing Tools to Assess Personality
  • A. Rational Approach: Logic and Reason
  • 1. Symptom Checklist-90-R
  • B. Theoretical Approach
  • 1. Edwards Personal Preference Schedule
  • C. Factor Analytic Approach
  • 1. 16 PF
  • D. Criterion Approach
  • 1. Contrasting Groups

140
IV. The MMPI
141
(No Transcript)
142
V. Trends in Personality Assessment
  • A. Self-Report
  • B. Peer Ratings
  • C. Computerized Testing
  • 1. Administration
  • 2. Scoring
  • 3. Interpretation: Actuarial Prediction

143
Chapter Twelve
  • Personality Assessment Methods

144
I. Objective Methods
  • A. Structured

145
II. Projective Methods
  • A. The Rorschach
  • 1. Administration
  • a. Free Association
  • b. Inquiry
  • c. Testing the Limits
  • 2. Scoring and Interpretation: Exner System
  • a. Location: W, D, Dd, S
  • b. Determinants: F, C, C', T
  • c. Content: H, Hd, A, Ad

146
Herman Rorschach, MD
Brad Pitt
147
II. Projective Methods
  • B. Pictures
  • 1. Thematic Apperception Test
  • 2. Children's Apperception Test
  • C. Words
  • 1. Word Association
  • 2. Sentence Completion
  • D. Drawings

148
III. Behavioral Assessment
  • A. Characteristics
  • B. Who, What, When, Where, and How
  • 1. Stimulus
  • 2. Organismic Variables
  • 3. Response
  • 4. Contingencies
  • 5. Consequences

149
III. Behavioral Assessment
  • B. Approaches to Behavioral Assessment
  • 1. Behavioral observation and rating scales
  • 2. Analogue studies
  • 3. Self-Monitoring
  • 4. Situational performance measures
  • 5. Role play
  • 6. Psychophysiological methods
  • 7. Unobtrusive measures

150
Chapter Fourteen
  • Neuropsychological Assessment

151
I. The Nervous System and Behavior
  • A. Structures
  • 1. Occipital Lobes
  • 2. Parietal Lobes
  • 3. Temporal Lobes
  • 4. Frontal Lobes
  • 5. Thalamus
  • 6. Hypothalamus
  • 7. Cerebellum

152
I. The Nervous System and Behavior
  • A. Structures
  • 8. Reticular Formation
  • 9. Limbic System
  • 10. Spinal Cord
  • B. Neuropsychological Damage

153
II. The Neuropsychological Examination
  • A. History
  • B. Mental Status Examination
  • C. Physical Examination

154
III. Tools of Neuropsychological Assessment
  • A. Interviews and Rating Scales
  • B. Case History
  • C. Tests
  • 1. General Intellectual Ability: WAIS-III
  • 2. Verbal Functioning: Aphasia Screening
  • 3. Memory: WMS-III
  • 4. Perceptual and Motor Skills: Bender-Gestalt

155
IV. Neuropsychological Test Batteries
  • A. Halstead-Reitan Neuropsychological Battery
  • 1. Category Test
  • 2. Tactual Performance Test
  • 3. Seashore Rhythm Test
  • 4. Speech Sounds Perception Test
  • 5. Finger-Tapping Test
  • 6. Trails A & B

156
IV. Neuropsychological Test Batteries
  • B. Luria-Nebraska Battery