Title: Developing General Education Course Assessment Measures
1. Developing General Education Course Assessment Measures
- Anthony R. Napoli, PhD
- Lanette A. Raymond, MA
- Office of Institutional Research & Assessment
- Suffolk County Community College
- http://sccaix1.sunysuffolk.edu/Web/Central/IT/InstResearch/
2. Why Validity & Reliability?
- Assessment results must represent student achievement of course learning objectives
- Evaluation of the validity and reliability of the assessment measure provides the evidence that it does so
3. Types of Measures
- Performance Measures
- Objective Measures
4. Validity for Performance Measures
- Identified learning outcomes represent the course (domain sampling)
- The measure addresses the learning outcomes (content validity)
- There is a match between the measure and the rubric (criteria for evaluating performance)
- Rubric scores can be linked to the learning outcomes, and indicate the degree of student achievement within the course
5. Validity for Objective Measures
- Identified learning outcomes represent the course (domain sampling)
- The items on the measure address specific learning outcomes (content validity)
- Scores on the measure can be applied to the learning outcomes, and indicate the degree of student achievement within the course
6. Content Validity (MA23)
7. Content Validity (MA23)
8. Content Validity (MA23)
9. Content Validity (MA61)
10. Content Validity (MA61)
11. Content Validity (SO11)
- Description
- Identify the basic methods of data collection
- Demonstrate an understanding of basic sociological concepts and social processes that shape human behavior
- Apply sociological theories to current social issues
A 30-item test measured students' mastery of the objectives.
12. Content Validity (SO11)
13. Content Validity (SO11)
14. Content Validity (SO11)
15. Reliability
- Can it be done consistently?
- Can the rubric be applied consistently across raters? -- Inter-rater reliability
- Can each of the items act consistently as a measure of the construct? -- Inter-item reliability
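The slides do not state which inter-rater statistic was computed; one common choice is Cohen's kappa, which corrects raw agreement for chance. The sketch below is illustrative, not the authors' method, and the example scores are hypothetical.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# Illustrative only -- the deck does not specify its inter-rater statistic.

def cohens_kappa(rater_a, rater_b):
    """Kappa for two equal-length lists of category labels (e.g. rubric scores)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed proportion of exact agreements.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal label rates.
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)
```

Percent agreement or an intraclass correlation would be reasonable alternatives; kappa is shown because it is simple to compute by hand from a pair of rating columns like the MA23 data set.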
16. Inter-Rater Reliability (MA23): The Rubric, Item 1A
17. Inter-Rater Reliability (MA23): The Rubric, Item 1B
18. Inter-Rater Reliability (MA23): The Data Set
19. Inter-Rater Reliability (MA23): Results
20. Inter-Item Reliability (MA61)
21. Face Validity and Reliability: Is this enough?
- Measures with face validity and adequate levels of reliability can produce misleading/inaccurate results.
- Even content-valid measures cannot guarantee accurate estimates of student achievement (MA23 earlier pilot).
22. Criterion-Related Validity (MA23 earlier pilot)
23. Criterion-Related Validity (MA23)
24. Criterion-Related Validity (MA61)
25. Motivational Comparison (PC11)
- 2 Groups
- Graded Embedded Questions
- Non-Graded Form Motivational Speech
- Mundane Realism
26. Motivational Comparison (PC11)
- Graded condition produces higher scores (t(78) = 5.62, p < .001).
- Large effect size (d = 1.27).
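The reported t(78) and Cohen's d can be computed from raw scores as sketched below: a pooled-variance independent-samples t statistic and the standardized mean difference. The score lists a caller would pass in are hypothetical; the deck reports only the results, not the underlying data.

```python
# Independent-samples t and Cohen's d, computed from two lists of raw scores.
# Sketch only -- the PC11 data themselves are not reproduced in the deck.
from statistics import mean, stdev

def pooled_sd(x, y):
    """Pooled standard deviation of two independent samples."""
    nx, ny = len(x), len(y)
    return (((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
            / (nx + ny - 2)) ** 0.5

def t_statistic(x, y):
    """Independent-samples t assuming equal variances (df = nx + ny - 2)."""
    nx, ny = len(x), len(y)
    return (mean(x) - mean(y)) / (pooled_sd(x, y) * (1 / nx + 1 / ny) ** 0.5)

def cohens_d(x, y):
    """Effect size: mean difference in pooled-SD units."""
    return (mean(x) - mean(y)) / pooled_sd(x, y)
```

With two groups of 40 (as t(78) implies), `t_statistic(graded, non_graded)` and `cohens_d(graded, non_graded)` would reproduce the reported 5.62 and 1.27 from the original score lists.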
27. Motivational Comparison (PC11)
- Minimum competency
- 70 or better
- Graded condition produces greater competency (Z = 5.69, p < .001).
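A competency comparison like this is typically a two-proportion z test: the share of students at or above the cutoff in each condition. A minimal sketch follows; the counts a caller would pass in are hypothetical, since the deck reports only Z = 5.69.

```python
# Two-proportion z test for the difference in competency rates.
# Sketch only -- the actual PC11 pass counts are not given in the deck.

def two_proportion_z(hits1, n1, hits2, n2):
    """z statistic for the difference between two independent proportions."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)           # pooled success rate
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    return (p1 - p2) / se
```

Here `hits1`/`hits2` would be the number of students scoring at or above the cutoff in the graded and non-graded conditions, and `n1`/`n2` the group sizes.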
28. Motivational Comparison (PC11)
- In the non-graded condition this measure is neither reliable nor valid
- KR-20 (non-graded) = 0.29
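The KR-20 statistic cited above can be computed from a matrix of scored 0/1 item responses (rows = students, columns = items). The sketch below uses the standard formula with the population variance of total scores; the matrix a caller would supply is hypothetical, as the deck reports only the resulting value of 0.29.

```python
# Kuder-Richardson Formula 20: inter-item reliability for dichotomous items.
# Sketch only -- the PC11 response matrix is not reproduced in the deck.

def kr20(responses):
    """KR-20 for a list of equal-length rows of 0/1 item scores."""
    n = len(responses)                   # number of students
    k = len(responses[0])                # number of items
    totals = [sum(row) for row in responses]
    mean_total = sum(totals) / n
    # Population variance of total test scores.
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # Sum of p*q over items, where p is the proportion answering correctly.
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)
```

A value near 0.29, as in the non-graded condition, means item responses barely hang together as a measure of one construct, which is the unreliability the slide is flagging.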
29. Motivational Comparison (PC11)
30. Developing General Education Course Assessment Measures
- Anthony R. Napoli, PhD
- Lanette A. Raymond, MA
- Office of Institutional Research & Assessment
- Suffolk County Community College
- http://sccaix1.sunysuffolk.edu/Web/Central/IT/InstResearch/