Title: Evaluation of Informatics Tools in Primary Care
1 Evaluation of Informatics Tools in Primary Care
- Frank Sullivan, Prof. of R&D, TCGP, Dundee
- Liz Mitchell, Research Fellow, Glasgow
- Claudia Pagliari, Lecturer in Psychology, Dundee
f.m.sullivan_at_dundee.ac.uk  h.c.pagliari_at_dundee.ac.uk  edm1a_at_clinmed.gla.ac.uk
2 Informatics
- The study of the acquisition, processing and use of information.
- Friedman CP, Wyatt JC. Evaluation methods in medical informatics. New York: Springer; 1997.
Informatik / Informatique
3 Primary Care in the Information Age
- Moving from Popper World 2 (notions, intuitions, judgements, mystique) to Popper World 3 (objective reality open to criticism and logical correction)
- Weed LL. Clinical judgement revisited. Meth Inf Med 1999;38:279-86
4 GP Computer Screens and Prompts
5 Information Age Consultations
[Diagram: the clinician-patient consultation supported by the electronic medical record, reference information (including guidelines), education, recall and retrieval. Sullivan FM, et al.]
6 Problem 1: Limited evaluation of informatics tools
- Failure to evaluate new resources is a major problem.
- Development is often top-down and technologist/manager-driven, with little involvement of end-users in the process.
- Expensively developed tools are often discarded due to unanticipated technical difficulties or people and organisational issues.
7 Problem 2: Inappropriate evaluation
- RCTs aimed at measuring hard clinical and economic outcomes may not always be appropriate for informatics systems because:
- a) they are not drugs but multifaceted procedural interventions, and
- b) the questions asked of an informatics evaluation are broader, dealing as much with end-users' acceptance and use of the system as with external outcomes.
8 Thinking about the WHOLE
- The RCT may provide useful information, but it can only give part of the story.
- Comprehensive evaluation of health informatics tools requires a broader range of research methods, involving both quantitative and qualitative approaches.
- The ideal method, or combination of methods, will be determined by the research questions and the context and timeframe in which the evaluation takes place.
9 Which research questions / whose perspective?
[Diagram: each stakeholder asks different questions of an evaluation — Developer, Purchaser, User, Patient — e.g. Does it work? Will they use it? Is it fast? Is it fun? What is the cost/benefit? Is it safe? Will it work?]
10 Two classes of research method for informatics evaluations
- Objectivist: concerned with objective assessment of clearly defined variables, usually measured quantitatively (e.g. via experimental or correlational studies).
- Subjectivist: based on the judgements of expert evaluators, system users, potential users or other stakeholders. Often relies on qualitative, anthropological research methods.
- Friedman and Wyatt, 1997
11 Objectivist approaches 1
- Comparison-Based
- Employs experiments and quasi-experiments. Comparisons are based on small numbers of outcome variables.
- e.g. Hypothesis: compliance with guideline recommendations to check diabetic patients' feet annually will increase following introduction of a computer-based reminder system (a toy analysis is sketched below).
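As an illustration of the kind of comparison such a hypothesis implies, the sketch below contrasts pre- and post-implementation compliance proportions with a chi-squared test. All counts are invented, and a real study randomised by practice would need a cluster-adjusted analysis (see Step 6).

```python
# Toy before/after comparison for the reminder-system hypothesis above.
# All counts are invented purely for illustration.
from scipy.stats import chi2_contingency

# Rows: before vs. after the reminder system; columns: feet checked vs. not checked
checked_before, not_checked_before = 110, 190   # 300 eligible patients
checked_after,  not_checked_after  = 180, 140   # 320 eligible patients

table = [[checked_before, not_checked_before],
         [checked_after,  not_checked_after]]
chi2, p_value, dof, expected = chi2_contingency(table)

rate_before = checked_before / (checked_before + not_checked_before)
rate_after = checked_after / (checked_after + not_checked_after)
print(f"Compliance before: {rate_before:.1%}, after: {rate_after:.1%}")
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
```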
12 Objectivist approaches 2
- Objectives-Based
- Aim is to determine whether the resource meets its designers' objectives.
- E.g. Are fully integrated patient records accessible to the GP within 2 minutes? (A minimal timing check is sketched below.)
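A check against that 2-minute objective could be as simple as timing retrievals for a sample of records. In the sketch below, fetch_patient_record and patient_ids are hypothetical placeholders for whatever retrieval call and sample the system under evaluation actually provides.

```python
# Objectives-based check: what fraction of retrievals meet the 2-minute target?
# `fetch_patient_record` is a hypothetical stand-in for the system's real call.
import time

TARGET_SECONDS = 120  # the designers' stated objective

def retrieval_objective_report(fetch_patient_record, patient_ids):
    timings = []
    for pid in patient_ids:
        start = time.perf_counter()
        fetch_patient_record(pid)                    # call the system being evaluated
        timings.append(time.perf_counter() - start)
    within = sum(t <= TARGET_SECONDS for t in timings)
    return within / len(timings), max(timings)       # proportion within target, worst case
```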
13 Objectivist approaches 3
- Decision Facilitation
- Focus on answering questions important to developers and administrators. Usually used in formative studies when developing new resources.
- e.g. Systematic study of various formats for presenting guideline information on-screen, conducted as part of the process of resource development (a toy comparison is sketched below).
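For a formative question like the on-screen format example, the sketch below compares how long pilot users took to find a recommendation under two hypothetical formats; the timings are invented and a simple t-test stands in for whatever analysis the study design actually calls for.

```python
# Toy formative comparison of two hypothetical on-screen guideline formats.
# Values are invented seconds-to-find-a-recommendation for pilot users.
from scipy.stats import ttest_ind

times_format_a = [34, 41, 29, 52, 38, 45, 31, 40]
times_format_b = [27, 30, 22, 35, 28, 33, 25, 29]

t_stat, p_value = ttest_ind(times_format_a, times_format_b)
mean_a = sum(times_format_a) / len(times_format_a)
mean_b = sum(times_format_b) / len(times_format_b)
print(f"Mean time A: {mean_a:.1f}s, B: {mean_b:.1f}s, p = {p_value:.3f}")
```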
14 Objectivist approaches 4
- Goal-Free
- Evaluators are blinded to the intended effects of the resource and must chart all its effects. Aims to reduce reporting bias and to uncover both unintended and intended effects.
- E.g. Conducting patient chart reviews before and after introduction of an information resource without telling the reviewer anything about the nature of the information resource.
15 Subjectivist approaches 1
- Responsive-Illuminative
- Focuses on the reports of users, e.g. feedback following a demonstration or a period of hands-on familiarisation with the tool. Useful for technical troubleshooting and for examining contextual factors which may affect implementation.
- E.g. Observations of prototypical users in a laboratory setting, followed by one-to-one interviews about the advantages and disadvantages of the resource and discussion of what has been observed.
16 Subjectivist approaches 2
- Art Criticism
- Analysis and review of a resource by a generic expert.
- E.g. A software review in a technical magazine, or inviting a noted consultant on user interface design to spend a day on site to offer suggestions regarding the prototype of a new system.
17 Subjectivist approaches 3
- Professional Review
- A management-consultancy-style approach using extended site visits by experienced peers to the environment in which the resource is installed. May employ a combination of methods, including speaking to users, observing the system in operation, etc.
- E.g. A site visit by a government review team to several research groups competing to have their patient management screens for asthma adopted nationally.
18 Subjectivist approaches 4
- Quasi-Legal
- A mock trial or other formal adversarial procedure used to judge a resource. Rarely used.
- E.g. Staging a mock debate at a research group retreat.
19 Tailoring methods to the problem
- Comprehensive evaluation may require a combination of research methods involving both objectivist and subjectivist approaches.
- The choice will relate to the specific research questions and the stage of the evaluation.
20 General steps in informatics evaluations
- Define and prioritise study questions
- Define the "system" to be studied
- Select or develop reliable, valid measurement methods
- Design the demonstration study
- Choose the appropriate methodology
- Ensure that study findings can be generalized
- Carry out the evaluation study
- (NB: demonstration and evaluation phases may overlap)
21 Step 1: Define and prioritise your study questions
- Decide exactly what you want to find out and specify your objectives.
- Ideally, questions should be agreed between the research team, system developers, clinical and non-clinical users, and patients.
- Find out what has been done before.
22 Step 2: Define the "system" to be studied
- Is the system simple or multifaceted? Is it one component or the system as a whole that is of interest? If the former, can you isolate and evaluate that part alone (e.g. a diabetes web-suite)?
- Develop a model for the evaluation to test. Results can be compared with the model to define the place of the new technology and further refine the model.
23 Step 3: Select or develop reliable, valid measurement methods
- The aim of so-called measurement studies is to ensure that the tools you use to assess outcomes are of as high a quality as the methodology allows.
- Try to use established measurement tools (e.g. questionnaires) if available. If not, there are clear procedures for developing them (see Friedman & Wyatt p71). A minimal reliability check is sketched below.
- It may be necessary to consult widely, or to interview potential system users individually or in groups, in order to determine which are the key variables to be studied.
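One routine part of such a measurement study is checking the internal consistency of a new questionnaire. The sketch below computes Cronbach's alpha from invented pilot responses; the item scores and the 0.7 rule of thumb are illustrative assumptions, not figures from the source.

```python
# Minimal internal-consistency (Cronbach's alpha) check for a new questionnaire.
# The 5-point item scores below are invented; in practice they would come from
# a pilot administration of the measurement tool.
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = questionnaire items."""
    scores = np.asarray(item_scores, dtype=float)
    n_items = scores.shape[1]
    sum_item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - sum_item_variances / total_variance)

pilot_responses = [
    [4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
    [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3],
]
print(f"Cronbach's alpha = {cronbach_alpha(pilot_responses):.2f}")  # >= ~0.7 is usually acceptable
```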
24 Step 4: Design the demonstration study
- Leaving the evaluation until after a system is in place restricts the degree to which the results of the evaluation can be used to modify the system, resulting in less-than-ideal implementation (meaning not only access but also acceptance and use).
- The gold-standard approach to evaluation involves a prototyping phase or demonstration study, in as realistic a context as possible, followed by one or more user-informed iterations of the system (i.e. the evaluation-development cycle).
- The demonstration study may assess several objective and subjective variables, including usability, attitudes, ideas for change and barriers to implementation (a simple usability-scoring sketch follows below).
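As one concrete way of turning "usability" into a measurable variable, the sketch below scores a standard ten-item System Usability Scale questionnaire. SUS is not mentioned in the source; it is offered only as a plausible example, and the participant ratings are invented.

```python
# System Usability Scale (SUS): ten 1-5 items, alternating positive and
# negative wording, scaled to a 0-100 score.
def sus_score(responses):
    """responses: list of ten ratings (1-5), item 1 first.
    Odd-numbered items are positively worded, even-numbered negatively worded."""
    if len(responses) != 10:
        raise ValueError("SUS expects exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw total to 0-100

# A single hypothetical participant's ratings
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # -> 80.0
```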
25 Step 5: Choose the appropriate methodology
- Approaches to evaluation that examine informatics resources from multiple perspectives, using several methodologies, are likely to produce more valuable results.
- Tailor methods to the research questions and stakeholder perspectives.
- Ensure methodological rigour. See the checklists by Johnston et al. and Sullivan & Mitchell for assessment criteria for experimental and non-experimental studies.
26 Step 6: Ensuring that study findings can be generalized
- Difficult to achieve in informatics research. Study effects can be context-dependent ("people don't use computers, organisations do").
- Qualitative research will focus on small (selected) samples, although it may indicate wider issues which could affect the generalisability of results.
- Experimental research may be more generalizable, but it is important to build in safeguards, e.g. increasing sample sizes when randomising by practice to correct for intra-cluster correlation (a worked example follows below).
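To make the intra-cluster correction concrete, the sketch below applies the usual design-effect inflation, 1 + (m - 1) * ICC, to an individually randomised sample size. The cluster size, ICC and baseline sample size are all invented for illustration.

```python
# Sketch of the usual sample-size inflation for cluster (by-practice) randomisation.
# Design effect = 1 + (m - 1) * ICC, where m is the average cluster size and
# ICC is the intra-cluster correlation coefficient. All numbers are illustrative.
def inflated_sample_size(n_individual, avg_cluster_size, icc):
    """Inflate an individually-randomised sample size to allow for clustering."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return design_effect, n_individual * design_effect

deff, n_needed = inflated_sample_size(n_individual=400, avg_cluster_size=20, icc=0.05)
print(f"Design effect = {deff:.2f}; inflated sample size = {n_needed:.0f}")
# -> Design effect = 1.95; inflated sample size = 780
```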
27 Step 7: Carry out the evaluation study
- Preparation
- Decide whether continuation is justified.
- Firm up the methodology.
- Convince the ethics committee.
- Identify and liaise with key stakeholders.
- Consider commercial implications and intellectual property rights.
28 Evaluation study (continued)
- Recruitment
- Remember: enthusiasts may not be representative.
- Design strategies for recruiting patients (and gaining consent).
- Detailed study planning
- Create a written manual of study procedures. (This focuses on the fine detail of who does what at the different stages of the project, and may change over time.)
29 Evaluation study (continued)
- Pilot as much of your study procedure as possible
- Think carefully about where you intend to do the pilot work. Sites need to be representative of those you intend to use in the main study. Use a small number of test-bed sites to learn of the problems with the resource.
- Other issues to consider during the study
- Respond immediately to any technical problems or concerns expressed by participants.
- Study sites and participants should be kept informed of progress.
- Reward participating practices if possible.