Title: Evaluating uses of Learning Technology
1 Evaluating uses of Learning Technology
- Martin Oliver, MST/London Knowledge Lab
2 Overview
- Some general definitions and history
- What is it we're discussing, exactly?
- Issues for evaluating learning technology
- What problems do people face when doing this?
- Data and distance
- What can we find, and what does it tell us?
- Tools to support people (very brief)
- What resources exist to support people?
- The ELT toolkit project (a detailed look at a tool)
- How can tools like these make a difference?
- What can we learn from them?
- Conclusions
3 Prelude
- This first section is really about orientation
- Think about your experiences of evaluation
- If you can think of a positive experience
- What was it that made it good?
- even if you can't think of one
- What was it about your experiences that was bad?
- Capture these and revisit them later
4 What do I mean by evaluation?
- A contested term
- Judgements about the value (benefits) and worth (costs) of something
- A way of describing something (ethnographic)
- What evaluators do
- What I don't mean
- Entirely personal judgements like reviews (or checklists), without data collection
- Assessing student learning
- entirely valid things, but not what I'm talking about
5 A brief history of educational evaluation
- The beginning
- A tradition that grew from measurement theory
- Firmly rooted in the experimental method
- Educational interventions as things applied to populations
- and the backlash (c. 1970s)
- Alternative traditions rejected this approach
- Illuminative, ethnographic, naturalistic approaches arose
- Sought to re-define what counted as valid evaluation, but politically weaker
- This pattern still evident
- US legislation endorses controlled experiments
6 A paradigm war?
- Certainly two opposed traditions
- Battle lines seem drawn around methods
- but Hammersley suggests it's more about philosophy
- Logical positivists looking for stable, controllable interventions - even if they're doing qualitative grounded theory
- Relativists looking to interpret what's happening and make recommendations based on personal judgements
- Points out that many people are actually eclectic, rather than hidebound (principled?)
7 A third way
- Patton: utilization-focused evaluation
- Most commissioned reports are never read
- A good evaluation isn't one that's methodologically rigorous - it's one that helps people act (make decisions)
- Principle of designing for intended use by intended users
- No good working for people who won't act - choose a different audience
- No good working for those who are powerless to act - instead, influence those with power (there are ethical issues here)
8 Repositioning evaluation
- Evaluation positioned as a social, political activity, not as value-free science
- Emphasis on rhetoric - persuading an audience (understood not as a type or role but as a list of names)
- Stakeholders - whose voice will be included in this process? What authority, if any, will it have? (Will they just provide data, or help frame the study, interpret, present, etc.?)
- Recognition that the evaluator has a stake in the process too - reflects on their credibility and integrity
9 Repositioning evaluation
- Also treating evaluation as an educational intervention - process use
- What can those involved learn? From negotiating its scope, gathering data, contributing to analysis, debating interpretations?
- Looking at opportunities for feedback - ongoing interventions for improvement (action), not just summative judgement (cf. feedback in learning)
- Creating opportunities for dialogue between stakeholders - socially constructed understandings, involving multiple perspectives
10 Revisiting your experiences
- Review the list from the start
- To what extent are the good features consistent with utilization-focused evaluation?
- Are the bad experiences linked to a particular approach?
- Are any of you unwitting utilization-focused sympathisers?!
11 End of the first part
- Overview of relevant issues from educational evaluation
- Intended to provide common ground and insights into the specific problems of learning technology
- Also directly relevant to any other evaluation you do
- Next, on to our specific concern
12 So what about Learning Technology?
- This section will look at the things that are distinctive about evaluation in relation to learning technology
- Evaluation within this context echoes wider shifts in educational evaluation
- although usually a few years later
- Same contestation between paradigms
13 Evaluating Learning Technology
- Some novel features
- Large number of practitioner-researchers
- No formal training as evaluators
- Common-sense evaluators - no theoretical foundation to their work
- Large number of funded projects are told to evaluate their work (esp. after TLTP phase I - lots of development, no information about its value!)
- Many have to evaluate the project that pays their salary, sometimes with an external check
14 Evaluating Learning Technology
- Same power-laden confrontation between paradigms
- Qualitative, interpretative perspective common in action research/practitioner studies
- Quantitative, positivistic perspective championed by policymakers and Evidence-Based Practice
- Say "what works", not explain why (easy answers for funders)
- Hierarchy of evidence from RCTs down to qualitative methods and GOBSAT ("good old boys sat around a table")
15 Evaluating Learning Technology
- Particular problems with comparative studies
- A difficulty for all educational evaluation
- If one condition is believed to be better, can you justify withholding it, ethically?
- If outcomes are affected by teaching, changing outcomes changes what was learnt - so you can't use the same assessment, as it is no longer appropriate (Constructive Alignment)
- Particular difficulty for Learning Technology
- What is E-Learning, Blended Learning, etc. anyhow?
16 Evaluating Learning Technology
- Is e-learning better than traditional forms of learning?
- What we mean by e-learning today isn't what we meant a year ago
- It's not the type, it's the specific instance - is this well designed?
- It's not what it is, it's what you do with it - are you using it well?
- What exactly is "traditional learning"? Do we really want to assume this is a stable point of comparison?
- "No significant difference" phenomenon - tends to be different, not better, unless the study is designed to measure what e-learning does well
- Are books better than other resources for learning?
17 Evaluating Learning Technology
- The difficulty of attribution
- One innovation amongst many - post-compulsory education is riddled with new initiatives; what's the root of any particular change?
- False negatives - students learn differently but cover this up in order to perform on normal tests (learn new and old forms of knowledge)
- False positives - technology a symptom of a wider change in attitudes or practice (is technology the symptom or cause of widening access in post-compulsory education, or are both symptoms of something else?)
- Coincidence - groups may just be different (even if randomly created, but particularly if cohorts)
18 Evaluating Learning Technology
- What kind of comparisons can we draw?
- People can and do undertake comparative studies
- Tend to compare preferences for one intervention or another
- Sometimes measure group performance against some invariant test (e.g. standardised exam) as a point of reference
- Typically try to "control out" things interpretative researchers find interesting (influence of teacher, etc.)
- Can be informative and useful if you're treating this as a case study - an insight into specific performance, rather than building a general law
19 The problem of impact
- A common request
- Establish the impact of this new form of teaching
- Remarkably hard to answer
- A brief exercise (2-3 minutes)
- List ways in which introducing a new form of technology can have an impact
- Consider a variety of roles/people
- What kinds of evidence might you look for in each case?
20 The problem of impact
- The TLTP III EFFECTS project
- National initiative to see whether accreditation and staff support were an incentive for lecturers to adopt new technology
- Required evidence of selecting, planning, implementing, evaluating and disseminating technology use
- Now established as a SEDA award (PDF-ELT)
- External evaluation: establish the impact of the project
- How could we make sense of this?
21 The problem of impact
- Multi-layered model
- What was the impact on learners?
- What was the impact on lecturers?
- What was the impact on the organisation?
- What was the impact nationally?
22 The problem of impact
- Initial plan was different evaluators for different levels
- Lecturers evaluate impact on learners
- Local project team evaluate impact on lecturers and organisation
- Project evaluators evaluate national impact
- Worked OK, but
- Lecturers hadn't the time and found this hard
- Evaluation the weakest outcome in assessment across all sites
- A picture with lots of holes
23 The problem of impact
- As an example: impact on staff
- Evidence of change in teaching practice (observation, documentation)
- Evidence of changed role (promotion, involvement in committees)
- Evidence of change in career direction (publishing educational research)
- Evidence of change in attitudes (self-report, changed use of discourse)
- Detailed picture, but not amenable to widespread study - relied on snapshots
24 The problem of impact
- To summarise
- Easy to find evidence of change, hard to establish what it means for our study
- Hard to draw out general conclusions when the focus is so ill-defined
- Much can be said that's relevant to local practice - evidence of some kind of impact is almost unavoidable (but is it the right kind?)
- Need a position to interpret this against, and good rhetoric to present this to others
- http://ifets.ieee.org/periodical/vol_3_2002/v_3_2002.html
25 End of the second part
- This section focused on the specifics of evaluating learning technology
- Highlighted issues of design and interpretation facing studies in this area
- Raised a number of general concerns, including the disposition and approach of those asked to evaluate
- Next section will look at the issues of gathering data
26 Data and distance
- The previous section has explained why it's hard to establish impact
- This section will look at some of the problems of getting hold of data at all
- Issues arise in relation to the point of new technologies
- Many introduced to increase flexibility
- Many claim to support new forms of learning
- What can we gather as evidence, and what can we infer from it?
27 Data and distance
- Informating (Zuboff)
- Computerised activities make things explicit and generate information
- Such information is, potentially, data
- Discussion archives, use logs, site hits, etc.
- A helpful source of information?
- Ready for processing - in electronic form
- Only a partial account of what took place
28 Data and distance
- Can we see what we need to?
- Chris Jones's ethnographic study of a course with online collaboration
- Incident of cheating observed face-to-face
- Calls into doubt the veracity of "easy" data
- Distance Pedagogics (Peters)
- Lecturing at a distance (e.g. video link) is still lecturing and needs no new pedagogy
- Private study may be on-campus, but provides flexibility and choice to students and raises issues familiar to traditional distance educators
- Similar issues here - public or private?
- Whose context?
29 Data and distance
- Importance of triangulation
- Each part reveals an element of the wider picture
- Interpreting multiple sources of data reassures and (potentially) explicates
- However
- Ongoing problem: most of what's important with learning is private, so how can we learn about this?
30 Data and distance
- Old methods
- Travel for observations - time intensive; fine for cases, but less good for general conclusions
- Travel for interviews, or interview by phone
- Surveys, etc.
- All are opportunities, but access becomes a serious issue (and no guarantees data will be provided)
31 Data and distance
- Old methods in new formats
- Online survey - higher response rates, but caution about missing out those least happy with technology (usually an important group)
- Online interviews/focus groups - readily-captured data, but different skills and pace required; more thoughtful, less spontaneous; if open (rather than selected), vocal minority an issue
- However, mostly self-report - what else can be accessed?
32 Data and distance
- New(er) methods
- Traces - hit logs: "dumb" data (about access, not use or intention) that needs interpretation (see the sketch after this slide)
- Discussion archives - access to exchanges that are fleeting in traditional settings
- Can raise ethical issues
- "spyware" for data
- status of comments (as permanent, as data) in online discussions
- Data Protection Act
- Can be easy but inappropriate to gather lots of data
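To make concrete why hit logs are "dumb" data, here is a minimal Python sketch (not from the original materials; the log format, user names and resource paths are invented for illustration). Counting accesses is trivial; interpreting what the counts say about use or intention is not.

```python
# Minimal sketch (hypothetical log format): counting hits from a VLE-style log.
# Each line is assumed to be "timestamp<TAB>user<TAB>resource" - real systems differ.
from collections import Counter
from datetime import datetime

raw_log = [
    "2004-03-01T09:15:00\talice\t/unit1/reading.html",
    "2004-03-01T09:16:10\talice\t/unit1/quiz",
    "2004-03-01T21:40:02\tbob\t/unit1/reading.html",
    "2004-03-02T21:41:30\tbob\t/unit1/reading.html",
]

def parse(line):
    """Split one log line into (timestamp, user, resource)."""
    stamp, user, resource = line.split("\t")
    return datetime.fromisoformat(stamp), user, resource

events = [parse(line) for line in raw_log]

# Counting is the easy part: hits per resource and per student.
hits_per_resource = Counter(resource for _, _, resource in events)
hits_per_student = Counter(user for _, user, _ in events)
print(hits_per_resource)
print(hits_per_student)

# The hard part is interpretation: a hit records access, not use or intention.
# Bob's repeated hits on the same page could mean re-reading, a lost session,
# or printing it out for a friend - the log alone cannot tell us which.
```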
33 Summary of part three
- General issues
- Getting the data you need is harder, as it's private and distributed
- Online data collection methods are harder to control, which may raise questions about interpretation
- Questions of interpretation raised - ethical justifications, relationship to context(s)
- Next section will look at tools designed to help with this
34 Supporting people who evaluate learning technology
- This section builds upon a problem mentioned in passing in the EFFECTS evaluation
- A pressure to evaluate technology use
- Lots of conceptual and methodological issues
- No support or training for teachers
35 Supporting people who evaluate learning technology
- The response
- A plethora of tools to fix this
- Assumption that a sensible teacher with a bit of information and guidance will get through OK
- Relevance to this
- Raise awareness of tools like this
- Highlight good (sanctioned) approaches to practice
36 Tools to support evaluation
- Existing tools support data analysis
- SPSS, NVivo, etc.
- but only if you know what they should be used for
- Another collection of tools focuses on evaluation design
- LTDI Evaluation Cookbook
- TLT Flashlight project (US)
- MEDA Evaluation tool for training software
- ELT Toolkit
37 Different kinds of tool
- MEDA - a handbook of questions
- Flashlight - a database of questions
- Works on the assumption that very different educators need to ask similar questions, using surveys
- LTDI - a cookbook of methods
- Assumes educators can choose a topic but might need help with methods
- ELT toolkit - tries to do both
- http://www.elt.ac.uk/materials.htm#evaldiss
38 The evolution of the ELT toolkit
- The brief: produce something that will help practitioners evaluate in spite of the difficulties
- The initial idea: a structured walkthrough supporting study design
- Selection of methodology
- Selection of methods (guided by methodology)
- Selection of data analysis methods
39 The evolution of the ELT toolkit
- It didn't work
- First study couldn't even get to using the toolkit
- Participant didn't know what they wanted to know
- Goals kept shifting
- Discussion led to re-framing endlessly
- Patton's "process use"
- Being involved in evaluation design was educational for the participant, but this didn't actually help them carry out a study!
40 The evolution of the ELT toolkit
- Revision of the tool
- Introduced new steps to address context
- Identification of stakeholders (individuals)
- List their concerns and turn these into questions
- (Existing three steps: methodology, data collection methods, data analysis methods)
- Approaches to communicate findings to audience(s)
- Explicitly framing this as a social process
- Similar to Draper/TILT's "inner" and "outer" steps for evaluation design
41 The evolution of the ELT toolkit
- Developed a paper-based version of the tool
- Tested it
- It worked fairly well, but took a long time
- Received funding from the JISC
- Implemented as the online Evaluation of Learning and Media Toolkit (incorporated another tool for curriculum design - badly)
- http://www.ltss.bris.ac.uk/jcalt/
- Expanded functionality
42 The evolution of the ELT toolkit
- Three main sections
- Evaluation planner
- Evaluation adviser
- Evaluation presenter
- Plus things we didn't want lost
- The methodology section JISC didn't like
- References and links
- Within each section, three types of activity
- Tell us something (free text entry) - usually context
- Make an open choice (list of suggestions with an "other: please specify" option)
- Enter data and a model recommends things
43 The evolution of the ELT toolkit
- Data entered is pulled together in a final report (a minimal data-model sketch follows this slide)
- Printed off as a 2-4 page evaluation plan
- Summarises decisions made
- Captures contextual information
- Presents this in an ordered way
- Amenable to sharing with others (managers, funders, research assistants) or using as an outline workplan
- Simple idea - huge success with users!
- Further development: option to share plans
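As an illustration of how entries might be pulled together into an ordered plan, here is a minimal sketch of a plan data model and report generator. The class, field and section names are hypothetical; this is not the toolkit's actual implementation, just one way a short, shareable plan could be assembled from the planner's entries.

```python
# Minimal sketch (hypothetical names, not the toolkit's actual implementation)
# of how planner entries might be pulled together into a short plan report.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationPlan:
    context: str                              # free-text "tell us something" entries
    stakeholders: List[str] = field(default_factory=list)
    questions: List[str] = field(default_factory=list)
    data_collection: List[str] = field(default_factory=list)
    analysis: List[str] = field(default_factory=list)
    presentation: List[str] = field(default_factory=list)

    def report(self) -> str:
        """Render the decisions as an ordered, shareable outline plan."""
        sections = [
            ("Context", [self.context]),
            ("Stakeholders", self.stakeholders),
            ("Evaluation questions", self.questions),
            ("Data collection methods", self.data_collection),
            ("Data analysis methods", self.analysis),
            ("Presentation of findings", self.presentation),
        ]
        lines = []
        for title, items in sections:
            lines.append(title)
            lines.extend(f"- {item}" for item in items)
            lines.append("")
        return "\n".join(lines)

# Example use: one plan for a small pilot study.
plan = EvaluationPlan(
    context="Pilot of an online discussion forum in a first-year module",
    stakeholders=["Module tutor", "Head of department"],
    questions=["How do students use the forum?"],
    data_collection=["Discussion archive analysis", "Short student survey"],
    analysis=["Thematic coding of postings"],
    presentation=["Two-page summary for the department"],
)
print(plan.report())
```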
44 An example
- Forming questions had been a problem
- Developed an activity to support question framing
- Start with concerns
- Rephrase these as a series of questions
- Combine or contrast different types of questions
- Pick the one that seems most useful
45 An example
- Concern: student learning
- Exploratory questions
- What do students learn? Who learns best with this resource? How do students use it? What do they think about it?
- Comparative questions
- Do some students use it differently to others? Does this group perform better on tests than a group that doesn't use it?
- Measurement questions
- How long do they use it for? Do they use it all? If students do better on tests, how much by? How many complaints were there about it?
46 An example
- Negative questions
- What was wrong with it? Why did students dislike it? Did it hinder their learning? Who found it hard to use? What problems did it cause for people?
- And then, combined questions
- How much better did these students do than those? Why did some students like it more than others? What led to problems arising for students?
- And finally - select the one question (OK, at most three) that your study will seek to answer
47 Evaluating the Evaluation Toolkit
- Had to do it to ourselves
- Used the toolkit to do our evaluation plan
- Implemented it
- Published in JCAL 18 (2), 2002
- What we learnt
- The usability of the tool was fairly poor
- Took about 3-4.5 hours to do a full plan
- Experts thought this was wonderful
- Novices thought this was far too long
- Editing shared plans - one way to reduce this
48 Evaluating the Evaluation Toolkit
- A success?
- It did its job - even complete novices produced credible plans
- In addition - experts prompted to think about methods they hadn't previously used
- Suggests people were learning from this as well as just designing studies
- However
- Comfort zone - if novices wanted to do a survey and it wasn't recommended, they'd over-ride the list - needs to be challenging enough
- No evidence of impact on practice - did anyone implement the plan? (Longitudinal studies)
49 Summary of part three
- Given the complexities of evaluating learning technology, it's no surprise people need support
- Tools are regularly developed as a way of providing this
- They can be interesting in their own right
- Impact of ELT Toolkit on design
- However, hard to judge their own impact
- But also interesting as a representation of "good practice"
50 Summary of part four
- Toolkit designed as a stand-alone resource
- although works best when introduced in a workshop, preferably with peer discussion
- Whether used or not, useful as a way of highlighting issues and suggesting structure
- Provides an outline of a decision-making process (stakeholders, questions, data collection, analysis, presentation) - useful for planning, but also for staff support, training, etc.
- Highlights complexities - each activity relates to an area worth thinking about; sensitises to issues
- Can be educational to look at tools like these, whether or not you follow them
51 Where does this leave us?
- A recap
- An overview of the issues in evaluating learning technology
- Overview of themes from wider educational evaluation
- Specific issues facing evaluation of learning technology
- In addition
- Examples of tools that might prove useful
- A detailed look at a particular tool, the ELT toolkit, to illustrate how it can help (at least with design - no evidence for practice)
- Something that places control of evaluation back in the hands of practitioners
52 Where does this leave us?
- My hope: any of the following
- Support you in being better-informed participants, commissioners or readers of evaluation studies
- If you're studying this yourself, provided ideas about how to do it
- Sources of evidence of impact
- Methods in relation to absent students
- Sounded notes of caution about how to interpret and present studies
- Causes, comparisons, interpretations, etc.