An automated feedback system based on adaptive testing: a case study

1
  • An automated feedback system based on adaptive
    testing: a case study
  • Dr Trevor Barker
  • Dr Mariana Lilley
  • Dr Iain Werry
  • Department of Computer Science
  • University of Hertfordshire

2
Contents
  • Background
  • Computer Adaptive Testing (CAT)
  • Previous research
  • Feedback and CAT
  • Extending the model
  • Simplifying and generalising
  • Discussion
  • Questions

3
Reasons for automated approaches to testing and
learning
  • Vast investment in infrastructure
  • Availability of MLE systems such as UH Studynet
  • Changes in nature of Higher Education
  • Online and distance education
  • Increase in student numbers (rising student-staff
    ratios, SSR)
  • Increasing pressures on time and cost

4
Computer-Adaptive Test
  • Based on Item Response Theory (IRT)
  • If a student answers a question correctly, the
    estimate of his/her ability is raised and a more
    difficult question is presented
  • If a student answers a question incorrectly, the
    estimate of his/her ability is lowered and an
    easier question follows
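The adaptive loop above can be sketched in a few lines. This is an illustrative simplification, not the authors' implementation: the item bank, the fixed step size, and the nearest-difficulty selection rule are all assumptions.

```python
# Sketch of the CAT loop: a correct answer raises the ability estimate and a
# harder question follows; an incorrect answer lowers it and an easier one follows.

def update_ability(ability, correct, step=0.5):
    """Raise the estimate after a correct answer, lower it otherwise."""
    return ability + step if correct else ability - step

def next_question(item_bank, ability):
    """Pick the remaining item whose difficulty is closest to the estimate."""
    return min(item_bank, key=lambda item: abs(item["difficulty"] - ability))

# Hypothetical item bank; difficulty on an arbitrary 1-10 scale.
bank = [{"id": i, "difficulty": d} for i, d in enumerate([2, 4, 5, 6, 8])]

ability = 5.0
first = next_question(bank, ability)             # the difficulty-5 item
bank.remove(first)
ability = update_ability(ability, correct=True)  # estimate rises to 5.5
second = next_question(bank, ability)            # a harder (difficulty-6) item follows
```

A full IRT implementation would replace the fixed step with a maximum-likelihood ability update, but the selection principle is the same.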

5
Computer Adaptive Testing
  • Computer-Based Tests (CBTs) mimic aspects of a
    paper-and-pencil test
  • Accuracy and speed of marking
  • Predefined set of questions presented to all
    participants and thus questions are not tailored
    for each individual student
  • Computer-Adaptive Tests (CATs) mimic aspects of
    an oral interview
  • Accuracy and speed of marking
  • Questions are dynamically selected according to
    student performance

6
Benefits of the adaptive approach
  • Questions that are too easy or too difficult are
    likely to
  • Be de-motivating
  • Provide little or no valuable information about
    student knowledge
  • The CAT level identifies a unique boundary
    between what the student knows and what he or she
    does not know

7
Level of difficulty
  • One of the underlying ideas within Bloom's
    taxonomy of cognitive skills (Anderson &
    Krathwohl, 2001) is that tasks can be arranged in
    a hierarchy from less to more complex.

8
Bloom's taxonomy
9
Graphical user interface
10
Calibration of the question database
  • This is undertaken in the first place by subject
    experts, who rank questions according to
    difficulty and Bloom's levels
  • The database is then adapted after each run of
    the assessment. Questions that are answered
    correctly more often have their difficulty level
    lowered, and vice versa.
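The recalibration rule above can be sketched as follows. The exact adjustment the system uses is not given on the slide, so the target facility and step size here are assumptions for illustration.

```python
# Items answered correctly more often than expected drift down in difficulty,
# and vice versa (assumed target facility and step size).

def recalibrate(items, target=0.5, step=0.25):
    """Nudge each item's difficulty against its observed facility (proportion correct)."""
    for item in items:
        facility = item["correct"] / item["attempts"]
        if facility > target:
            item["difficulty"] -= step   # answered correctly too often: easier than ranked
        elif facility < target:
            item["difficulty"] += step   # answered correctly too rarely: harder than ranked
    return items

items = [
    {"id": 1, "difficulty": 6.0, "attempts": 40, "correct": 30},  # facility 0.75
    {"id": 2, "difficulty": 3.0, "attempts": 40, "correct": 10},  # facility 0.25
]
recalibrate(items)
# item 1 drops to 5.75; item 2 rises to 3.25
```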

11
Summary of research to date
  • Our research so far has related to
  • Establishment of test conditions
  • E.g. ability to review questions, test stopping
    conditions
  • The reliability of CAT measures
  • Test-retest (reliability studies)
  • The fairness of the method
  • Comparison to other testing methods (validity
    studies)
  • Student perception of test difficulty
  • Student and staff attitude to CAT method
  • The adaptive questions database
  • Use of CAT in formative and summative tests
  • Using CAT model to provide automated feedback

12
Providing individual feedback based on CAT
  • An application of the CAT approach is in the
    provision of automated individual feedback
  • This approach has been in operation for several
    years at the University of Hertfordshire in two
    BSc Computer Science modules
  • Recently this model has been extended to make it
    easier to use on other modules

13
About the Feedback
  • Learners received feedback on
  • Overall proficiency level
  • Performance in each topic
  • Recommended topics for revision
  • Cognitive level (Bloom)
  • Feedback on assessment performance was initially
    made available to learners via a web-based
    application

14
(No Transcript)
15
Extending the model
  • Need to simplify
  • Setting questions and feedback
  • Calibration of database
  • Delivery of feedback to learners
  • Simple guidelines provided for tutors
  • Examples and simple training available

16
Setting questions and feedback
  • Important to consider question topics
  • Test should cover each topic at a range of
    difficulty levels
  • A good guide is to have approximately four times
    as many questions as are to be presented
  • A 20-question test therefore needs an item bank
    with a minimum of 80 questions

17
Setting up feedback database
  • For each question write
  • A general comment about the topic area
  • A statement if the question was answered
    correctly
  • A statement if the question was answered
    incorrectly
  • A link to course material / information where the
    topic was covered
  • A link forwards to related / more challenging
    materials
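The five elements listed above suggest a simple record per question. One possible shape is sketched below; the field names, texts, and URLs are illustrative placeholders, not the authors' schema.

```python
# A per-question feedback record with the five elements from the slide.
feedback_item = {
    "question_id": 17,
    "topic_comment": "This topic covers Bloom's taxonomy of cognitive skills.",
    "if_correct": "Well done - you answered this question correctly.",
    "if_incorrect": "Revise this topic before attempting harder questions.",
    "revision_link": "https://example.org/module/unit3",         # back to course material
    "feed_forward_link": "https://example.org/module/advanced",  # more challenging material
}

def feedback_for(item, answered_correctly):
    """Assemble the comment, verdict, and link matching the student's response."""
    verdict = item["if_correct"] if answered_correctly else item["if_incorrect"]
    link = item["feed_forward_link"] if answered_correctly else item["revision_link"]
    return f'{item["topic_comment"]} {verdict} See: {link}'
```

Given such records, a student's full feedback is just the concatenation of `feedback_for` over the questions they were presented.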

18
Calibration of database
  • Rank the questions in order of difficulty
    yourself
  • Consider Bloom's levels
  • We suggest easier questions relate to knowledge
  • Harder questions to understanding
  • The most difficult questions to application
  • Ask a colleague to do the same
  • Take the average ranking
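The rank-and-average step above can be sketched for two rankers; the question ids and ranks below are made up for illustration.

```python
# Average the tutors' rankings to get the calibrated ordering (1 = easiest).

def average_ranking(*rankings):
    """Each ranking maps question id -> rank. Returns ids ordered by mean rank."""
    ids = list(rankings[0])
    mean = {q: sum(r[q] for r in rankings) / len(rankings) for q in ids}
    return sorted(ids, key=lambda q: mean[q])

tutor = {"Q1": 1, "Q2": 3, "Q3": 2}
colleague = {"Q1": 2, "Q2": 3, "Q3": 1}
order = average_ranking(tutor, colleague)
# Q1 and Q3 tie on mean rank 1.5; Q2 (mean 3.0) is ranked hardest
```

Ties in mean rank (as with Q1 and Q3 here) would need a tie-break in practice, for example a third ranker or the run-time recalibration described earlier.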

19
Example questions
20
(No Transcript)
21
CAT variables
22
Nature of feedback to learners
  • The CAT application produces the following
  • An overall grade for each topic
  • A set of feedback items based on the questions
    answered by an individual
  • Statements related to Bloom's levels for each
    topic in the test for an individual
  • Revision and Feed-forward links
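The first output listed, a grade for each topic, can be sketched as below. The grading formula is an assumption; here it is simply the percentage of questions answered correctly per topic.

```python
# Aggregate an individual's responses into a per-topic grade.
from collections import defaultdict

def topic_grades(responses):
    """responses: list of (topic, answered_correctly) pairs -> percent correct per topic."""
    totals = defaultdict(lambda: [0, 0])  # topic -> [correct, attempted]
    for topic, correct in responses:
        totals[topic][0] += int(correct)
        totals[topic][1] += 1
    return {t: 100 * c / n for t, (c, n) in totals.items()}

grades = topic_grades([
    ("Databases", True), ("Databases", False),
    ("Networks", True), ("Networks", True),
])
# {'Databases': 50.0, 'Networks': 100.0}
```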

23
Feedback in the database
24
Delivery of feedback
  • Simple feedback was accomplished using email
    merge in Word
  • This was delivered to each student's university
    email account individually, soon after the test
    results were published

25
(No Transcript)
26
Login page
27
Overall proficiency level
28
Performance Summary
29
Points for Revision
30
Student attitude to feedback
31
Staff opinion: three studies
  • Session 1: A group of 10 computer science
    teachers, experts in software design and also
    interested in the provision of online educational
    systems.
  • A 30-minute presentation followed by a 30-minute
    moderated, focussed discussion.
  • Session 2: A group of 50 university lecturers at a
    university conference presentation on MLEs.
  • A 25-minute presentation followed by a 5-minute
    question session and a short questionnaire.
  • Session 3: A group of 20 university teachers
    interested in online and blended teaching and
    learning.
  • A 30-minute presentation and a 30-minute
    moderated, focussed discussion.

32
Questionnaire
33
Questionnaire
34
Results summary
  • Tutors consider that the fast feedback provided
    by a CAT is as good as, or better than, that
    currently provided in many cases.
  • The link to Bloom's levels was viewed positively
  • The approach was considered to be efficient,
    possibly freeing time for other activities
  • The CAT was considered best as a formative tool,
    rather than for summative assessment
  • Some tutors were concerned that the approach was
    impersonal
  • There is a need for a monitoring role for tutors,
    for practical and ethical reasons

35
In summary
  • Larger class sizes and greater use of online and
    distance assessment mean that feedback is often
    too slow and too general to be of any real use to
    learners.
  • Personalised automated feedback is likely to
    become increasingly important in the future. It
    is being used in two modules currently at UH.
  • Learners and tutors accept the need for automated
    feedback and most appreciate the benefits of such
    systems.

36
In summary
  • A CAT is an ideal tool upon which to base
    automated feedback since
  • It identifies a unique boundary between what an
    individual knows and what he or she does not know
  • The method is objective, fair, valid and reliable
  • It can be applied widely to a range of subject
    areas
  • It can relate to cognitive skills as well as
    subject knowledge

37
Contact Details
  • Dr Trevor Barker
  • T.1.Barker@herts.ac.uk
  • Dr Mariana Lilley
  • M.Lilley@herts.ac.uk