Using Multiple Choice Tests for Assessment Purposes: Designing Multiple Choice Tests to Reflect and Foster Learning Outcomes

1
Using Multiple Choice Tests for Assessment
Purposes: Designing Multiple Choice Tests to
Reflect and Foster Learning Outcomes
  • Terri Flateby, Ph.D.
  • tlflateby@gmail.com

2
Overview of Assessment Process
  • Select or develop measurable learning outcomes
    (course or program)
  • Select or develop measures consistent with the
    outcomes
  • Measure learning outcomes
  • Analyze learning results
  • Make adjustments in curriculum, instructional
    strategies, or activities to address weaknesses
  • Re-evaluate learning outcomes

3
Purposes of Classroom Achievement Tests
  • Measure Individual Students' Learning
  • Evaluate Class Performance
  • Evaluate Test and Improve Learning
  • Support Course and Program Outcomes

4
Why Use Multiple-Choice Tests to Measure
Achievement of Learning Outcomes?
  • Efficient
  • More content coverage in less time
  • Faster to evaluate
  • Methods to evaluate test items
  • In some cases, can provide a proxy for
    constructed-response measures

5
Above All
  • Testing and Assessment should Promote Learning

6
To Promote Learning, Tests Must Be
  • Valid: Tests should be an accurate indicator of
    content and level of learning (content validity)
  • Reliable: Tests should produce consistent results

7
Validity
  • Tests must measure what you want your students to
    know and be able to do with the content (reach
    the cognitive demands of the outcomes).
  • Tests must be consistent with instruction and
    assignments, which should foster the cognitive
    demands.

8
Process of Ensuring Validity
  • Table of Item Specifications, also called a Test
    Blueprint: useful for classroom tests and for
    guiding assessment
  • Review item performance after administering the test

9
Test Blueprint Reflects the Important Content
and Cognitive Demands

  Content/components of outcomes | Knowledge | Comprehension | Application and above | Analysis
  1.                             |           |               |                       |
  2.                             |           |               |                       |
  3.                             |           |               |                       |
  4.                             |           |               |                       |
10
Bloom's Taxonomy of Educational Objectives (use
to develop tests and outcomes; listed from highest to lowest level)
  • Evaluation
  • Synthesis
  • Analysis
  • Application
  • Comprehension
  • Knowledge

11
Develop Tests to Reflect Outcomes at Program or
Course Levels
  • Create summative test
  • Develop sets of items to embed in courses
    indicating progress toward outcomes (formative)
  • Develop course level tests that reflect program
    level objectives/outcomes

12
Institutional Outcome/Objective
  • Students will demonstrate the critical thinking
    skills of analysis and evaluation in the general
    education curriculum and in the major.

Course Outcome
  • Students will analyze and interpret
    multiple-choice tests and their results.

13
Constructing the Test Blue Print
  1. List important course content or topics and link
    to outcomes.
  2. Identify cognitive levels expected in outcomes.
  3. Determine the number of items for the entire test
    and for each cell, based on emphasis, time, and
    importance (see the allocation sketch below).
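
As a concrete illustration of step 3, here is a minimal sketch that allocates a fixed number of items across blueprint cells in proportion to emphasis weights. The topics, weights, cognitive levels, and item total are hypothetical, not taken from the presentation.

```python
# Minimal sketch: allocate test items across blueprint cells
# in proportion to emphasis weights (all values illustrative).

topics = {"Topic 1": 0.40, "Topic 2": 0.30, "Topic 3": 0.20, "Topic 4": 0.10}
levels = {"Knowledge": 0.25, "Comprehension": 0.25, "Application and above": 0.50}
total_items = 25

for topic, t_weight in topics.items():
    for level, l_weight in levels.items():
        n_items = round(total_items * t_weight * l_weight)
        print(f"{topic:8} | {level:21} | {n_items} items")
```

Because each cell is rounded independently, the cell counts may not sum exactly to the intended total; in practice a cell or two is adjusted by hand.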

14
Base Test Blueprint on
  • Actual Instruction
  • Classroom Activities
  • Assignments
  • Curriculum at the Program Level

15
Questions
  • Validity
  • How to use the Test Blueprint

16
Reliability: Repeatable or Consistent Results
  • If a test is administered one day and an
    equivalent test is administered another day, the
    scores should remain similar from one
    administration to the next.
  • This is typically estimated from the correlation of
    the two sets of scores (the standard formula is
    shown below), yet this approach is unrealistic in
    the classroom setting.
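
For reference, the correlation meant here is the standard Pearson product-moment correlation between the scores x on one form and y on the equivalent form:

\[
r_{xy} \;=\; \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \,\sum_i (y_i - \bar{y})^2}}
\]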

17
Internal Consistency Approach: KR-20
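The slide names KR-20 but does not show it; the standard Kuder-Richardson formula 20 is:

\[
\mathrm{KR\text{-}20} \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i\, q_i}{\sigma_x^{2}}\right)
\]

where k is the number of items, p_i is the proportion of students answering item i correctly, q_i = 1 - p_i, and \sigma_x^{2} is the variance of the total test scores.
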
18
Guidelines to Increase Reliability
  • Develop longer tests with well-constructed items.
  • Make sure items are positive discriminators:
    students who perform well on the test overall
    should generally answer the individual question
    correctly (see the sketch after this list).
  • Develop items of moderate difficulty: extremely
    easy or extremely difficult questions do not add to
    reliability estimates.
  • Guide for Writing and Improving Achievement
    Tests
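
Below is a minimal sketch of one common way to check the second guideline: the upper-lower discrimination index D, computed from a 0/1 scored response matrix. The data and the 27% group fraction are assumptions for illustration, not from the presentation.

```python
# Sketch: upper-lower discrimination index D for each item.
# `scores` is a hypothetical matrix: one row per student,
# one 0/1 entry per item (1 = correct).
# D = p(upper group) - p(lower group); a positive D means students
# who did well overall also tended to get the item right.

def discrimination_index(scores, item, fraction=0.27):
    ranked = sorted(scores, key=sum, reverse=True)   # best total scores first
    n = max(1, int(len(ranked) * fraction))          # size of each group
    upper, lower = ranked[:n], ranked[-n:]
    p_upper = sum(student[item] for student in upper) / n
    p_lower = sum(student[item] for student in lower) / n
    return p_upper - p_lower

scores = [          # illustrative data only
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]
for i in range(len(scores[0])):
    print(f"Item {i + 1}: D = {discrimination_index(scores, i):+.2f}")
```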

19
Multiple Choice Items
  • Refer to handout

20
Guidelines for Developing Effective
Items: Resources
  • Guide for Improving Classroom Achievement
    Tests, T.L. Flateby
  • Assessment of Student Achievement, N.E.
    Gronlund, Allyn and Bacon, 2008
  • Developing and Validating Multiple-Choice Test
    Items, Thomas Haladyna, Lawrence Erlbaum
    Associates, 2004
  • Additional articles and booklets are available at
    http://fod.msu.edu/OIR/Assessment/multiple-choice.asp

21
Questions
  • How to ensure Reliability and Validity

22
Evaluate Test Results
  1. KR-20: a value of .70 or higher (see the sketch
    after this list).
  2. Item discriminators should be positive.
  3. Difficulty index (p-value).
  4. Analysis of distractors.
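
A minimal sketch of checks 1 and 3 on a 0/1 scored response matrix follows; the .70 target is the one stated in check 1, and the data are illustrative assumptions only.

```python
# Sketch: KR-20 (check 1) and item difficulty p-values (check 3)
# from a hypothetical 0/1 response matrix (rows = students, columns = items).

def p_values(scores):
    n_students = len(scores)
    n_items = len(scores[0])
    return [sum(row[i] for row in scores) / n_students for i in range(n_items)]

def kr20(scores):
    k = len(scores[0])
    totals = [sum(row) for row in scores]
    mean = sum(totals) / len(totals)
    variance = sum((t - mean) ** 2 for t in totals) / len(totals)
    pq = sum(p * (1 - p) for p in p_values(scores))
    return (k / (k - 1)) * (1 - pq / variance)

scores = [              # illustrative data only
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
]
print("p-values:", [round(p, 2) for p in p_values(scores)])
print("KR-20:   ", round(kr20(scores), 2))   # target from check 1: .70 or higher
```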

23
Item Analysis
  • Refer to the 8-item handout

24
Use Results for Assessment Purposes
  • Analyze performance on each item according to the
    outcome evaluated.
  • Determine reasons for poor test performance:
    • Faulty item
    • Lack of student understanding
  • Make adjustments to remedy these problems.

25
Questions
  • Contact Terri Flateby at tlflateby@gmail.com,
    813.545.5027, or http://teresaflateby.com