1
Evaluation of Informatics Tools in Primary Care
  • Frank Sullivan, Prof. of R&D, TCGP, Dundee
  • Liz Mitchell, Research Fellow, Glasgow
  • Claudia Pagliari, Lecturer in Psychology, Dundee

f.m.sullivan@dundee.ac.uk, h.c.pagliari@dundee.ac.uk, edm1a@clinmed.gla.ac.uk
2
Informatics
  • The study of the acquisition, processing and
    use of information.
  • Friedman CP, Wyatt JC. Evaluation methods in
    medical informatics. New York: Springer; 1997.

Informatik (German) / informatique (French)
3
Primary Care in the Information Age
  • Moving from Popper's World 2
  • Notions, intuitions, judgements, mystique
  • to Popper's World 3
  • Objective reality open to criticism and logical
    correction
  • Weed LL. Clinical judgement revisited.
    Methods Inf Med 1999;38:279-86.

4
GP Computer Screens and Prompts
5
Information Age Consultations
[Diagram: the clinician-patient consultation at the centre of the
information-age model, linked to the electronic medical record and
to reference information (including guidelines), education, recall
and retrieval functions. Sullivan FM, et al.]
6
Problem 1: Limited evaluation of informatics tools
  • Failure to evaluate new resources is a major
    problem
  • Development is often top-down and
    technologist/manager-driven, with little
    involvement of end-users in the process.
  • Expensively developed tools are often discarded
    due to unanticipated technical difficulties or
    "people and organizational" issues.

7
Problem 2: Inappropriate evaluation
  • RCTs aimed at measuring hard clinical and
    economic outcomes may not always be appropriate
    for informatics systems because
  • a) they are not drugs but multifaceted procedural
    interventions, and
  • b) the types of questions asked of an informatics
    evaluation are broader, dealing as much with
    end-users' acceptance and use of the system as
    with external outcomes.

8
Thinking about the WHOLE
  • The RCT may provide useful information but it can
    only give part of the story
  • Comprehensive evaluation of health informatics
    tools requires a broader range of research
    methods, involving both quantitative and
    qualitative approaches.
  • The ideal method, or combination of methods, will
    be determined by the research questions and by
    the context and timeframe in which the evaluation
    takes place.

9
Which research questions/whose perspective?
[Diagram: four stakeholder perspectives on an evaluation -
developer, purchaser, user and patient - each asking different
questions: Does it work? Will they use it? Is it fast? Is it fun?
What is the cost-benefit? Is it safe? Will it work?]
10
2 Classes of research method for informatics
evaluations
  • Objectivist: concerned with objective assessment
    of clearly defined variables, usually measured
    quantitatively (e.g. via experimental or
    correlational studies).
  • Subjectivist: based on the judgements of expert
    evaluators, system users, potential users or
    other stakeholders. Often relies on qualitative,
    anthropological research methods.
  • Friedman and Wyatt, 1997

11
Objectivist approaches 1
  • Comparison-Based
  • Employs experiments and quasi-experiments, with
    comparisons based on small numbers of outcome
    variables.
  • e.g. Hypothesis: compliance with guideline
    recommendations to check diabetic patients' feet
    annually will increase following the introduction
    of a computer-based reminder system (see the
    analysis sketch below).
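
A minimal sketch of how such a before/after comparison might be
analysed, assuming hypothetical audit counts and a simple
two-proportion z-test (the counts, sample sizes and choice of test
are illustrative assumptions, not from the presentation):

    from math import sqrt
    from statistics import NormalDist

    # Hypothetical audit counts: patients with a documented annual
    # foot check before vs. after the reminder system was introduced.
    checked_before, n_before = 120, 300   # 40% compliance pre-reminders
    checked_after,  n_after  = 180, 300   # 60% compliance post-reminders

    p1 = checked_before / n_before
    p2 = checked_after / n_after
    p_pool = (checked_before + checked_after) / (n_before + n_after)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_before + 1 / n_after))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

    print(f"compliance {p1:.0%} -> {p2:.0%}: z = {z:.2f}, p = {p_value:.4g}")

In practice such a study would also need to handle secular trends and
clustering of patients within practices (see Step 6).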

12
Objectivist approaches 2
  • Objectives-Based
  • Aim is to determine whether the resource meets
    its designers' objectives.
  • e.g. Are fully integrated patient records
    accessible to the GP within 2 minutes? (A timing
    sketch follows below.)
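
A minimal sketch of how that objective might be checked, where
fetch_patient_record is a hypothetical stand-in for the system's
real record-retrieval call:

    import time

    TARGET_SECONDS = 120  # designers' objective: record within 2 minutes

    def fetch_patient_record(patient_id):
        """Hypothetical stand-in for the real record-retrieval call."""
        ...

    start = time.perf_counter()
    fetch_patient_record("example-patient-id")
    elapsed = time.perf_counter() - start
    verdict = "meets" if elapsed <= TARGET_SECONDS else "misses"
    print(f"record retrieved in {elapsed:.1f}s -> {verdict} the objective")

In a real objectives-based study the timing would be repeated over many
records and sites to estimate typical and worst-case retrieval times.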

13
Objectivist approaches 3
  • Decision Facilitation
  • Focus on answering questions important to
    developers and administrators. Usually used in
    formative studies when developing new resources.
  • e.g. Systematic study of various formats for
    presenting guideline information on-screen,
    conducted as part of the process of resource
    development.

14
Objectivist approaches 4
  • Goal-Free
  • Evaluators are blinded to the intended effects of
    the resource and must chart all its effects. Aims
    to reduce reporting bias and to uncover both
    unintended and intended effects.
  • E.g. Conducting patient chart reviews before and
    after introduction of an information resource
    without telling the reviewer anything about the
    nature of the information resource.

15
Subjectivist approaches 1
  • Responsive-illuminative
  • Focuses on the reports of users, e.g. feedback
    following a demonstration or period of hands-on
    familiarisation with the tool. Useful for
    technical troubleshooting and for examining
    contextual factors which may affect
    implementation.
  • E.g. Observations of prototypical users in a
    laboratory setting, followed by one-to-one
    interviews about the advantages and disadvantages
    of the resource and discussion of what has been
    observed.

16
Subjectivist approaches 2
  • Art Criticism
  • Analysis and review of a resource by a generic
    expert.
  • E.g. Software review in a technical magazine.
    Inviting a noted consultant on user interface
    design to spend a day on site to offer
    suggestions regarding the prototype of a new
    system.

17
Subjectivist approaches 3
  • Professional review
  • Management consultancy-type approach using
    extended site visits by experienced peers to the
    environment in which the resource is installed.
    May employ a combination of methods, including
    speaking to users, observing the system in
    operation, etc.
  • E.g. A site visit by a government review team to
    several research groups competing to have their
    patient management screens for asthma adopted
    nationally.

18
Subjectivist approaches 4
  • Quasi-legal
  • Mock trial or other formal adversarial procedure
    to judge a resource. Rarely used.
  • E.g. Staging a mock debate at a research group
    retreat.

19
Tailoring methods to the problem
  • Comprehensive evaluation may require a
    combination of research methods involving both
    objectivist and subjectivist approaches.
  • The choice will relate to the specific research
    questions and the stage of the evaluation.

20
General steps in informatics evaluations
  • Define and prioritise study questions
  • Define the "system" to be studied
  • Select or develop reliable, valid measurement
    methods
  • Design the demonstration study
  • Choose the appropriate methodology
  • Ensure that study findings can be generalized
  • Carry out the evaluation study
  • (NB. Demonstration and evaluation phases may
    overlap)

21
Step 1: Define and prioritise your study questions
  • Decide exactly what you want to find out and
    specify your objectives.
  • Ideally, questions should be agreed between the
    research team, system developers, clinical and
    non-clinical users, and patients.
  • Find out what has been done before.

22
Step 2: Define the "system" to be studied
  • Is the system simple or multifaceted? Is it one
    component or the system as a whole that is of
    interest? If the former, can you isolate and
    evaluate that part alone? (e.g. a diabetes
    web-suite)
  • Develop a model for the evaluation to test.
    Results can be compared with the model to define
    the place of the new technology and further
    refine the model.

23
Step 3: Select or develop reliable, valid measurement methods
  • The aim of so-called measurement studies is to
    ensure that the tools you use to assess outcomes
    are of as high quality as the methodology allows.
  • Try to use established measurement tools (e.g.
    questionnaires) if available. If not, there are
    clear procedures for developing them (see
    Friedman & Wyatt, p. 71); a common reliability
    check is sketched below.
  • It may be necessary to consult widely, or to
    interview potential system users individually or
    in groups, in order to determine the key
    variables to be studied.
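
As one example of a measurement-study check, the sketch below computes
Cronbach's alpha, a standard internal-consistency statistic for
multi-item questionnaires; the ratings matrix is invented purely for
illustration:

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Internal consistency of a respondents x items score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Hypothetical 5-point ratings from six pilot users on a
    # four-item usability questionnaire.
    ratings = np.array([[4, 5, 4, 4],
                        [3, 3, 2, 3],
                        [5, 5, 4, 5],
                        [2, 2, 3, 2],
                        [4, 4, 4, 5],
                        [3, 4, 3, 3]])
    print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")

Values of roughly 0.7 and above are conventionally taken to indicate
acceptable internal consistency.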

24
Step 4: Design the demonstration study
  • Leaving the evaluation until after a system is in
    place restricts the degree to which the results
    of the evaluation can be used to modify the
    system, resulting in less-than-ideal
    implementation (meaning not only access but also
    acceptance and use).
  • The gold-standard approach to evaluation involves
    a "prototyping" phase or "demonstration study",
    in as realistic a context as possible, followed
    by one or more user-informed iterations of the
    system (i.e. the evaluation-development cycle).
  • May assess several objective and subjective
    variables, including usability, attitudes, ideas
    for change and barriers to implementation.

25
Step 5: Choose the appropriate methodology
  • Approaches to evaluation that examine informatics
    resources from multiple perspectives, using
    several methodologies, are likely to produce more
    valuable results.
  • Tailor methods to research questions and
    stakeholder perspectives.
  • Ensure methodological rigor. See the checklists
    by Johnston et al. and Sullivan & Mitchell for
    assessment criteria for experimental and
    non-experimental studies.

26
Step 6: Ensure that study findings can be generalized
  • Difficult to achieve in informatics research.
    Study effects can be context-dependent ("People
    don't use computers, organisations do").
  • Qualitative research will focus on small
    (selected) samples, although it may indicate
    wider issues which could affect the
    generalisability of results.
  • Experimental research may be more generalizable,
    but it is important to build in safeguards, e.g.
    increase sample sizes when randomising by
    practice to correct for intra-cluster
    correlation (a worked example follows below).
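
A worked example of that safeguard, using the standard design-effect
formula DEFF = 1 + (m - 1) x ICC for cluster-randomised designs; the
ICC, practice size and baseline sample size below are illustrative
assumptions:

    # Illustrative sample-size inflation for cluster randomisation.
    icc = 0.05           # assumed intra-cluster correlation coefficient
    m = 20               # assumed average patients recruited per practice
    n_individual = 200   # n required if patients were randomised individually

    deff = 1 + (m - 1) * icc          # design effect
    n_cluster = n_individual * deff   # inflated total sample size
    print(f"DEFF = {deff:.2f}; inflate n from {n_individual} "
          f"to about {round(n_cluster)}")

Even a small ICC can nearly double the required sample size when each
practice recruits many patients.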

27
Step 7: Carry out the evaluation study
  • Preparation
  • Decide whether continuation is justified.
  • Firm up the methodology.
  • Convince the ethics committee.
  • Identify and liaise with key stakeholders.
  • Consider commercial implications and intellectual
    property rights.

28
Carry out the evaluation study (continued)
  • Recruitment
  • Remember - enthusiasts may not be representative
  • Design strategies for recruiting patients (and
    gaining consent).
  • Detailed study planning
  • Create written manual of study procedures.
    (Focuses on the fine detail of who does what at
    the different stages of the project. May change
    over time.)

29
Carry out the evaluation study (continued)
  • Pilot as much of your study procedure as possible
  • Think carefully about where you intend to do the
    pilot work. Sites need to be representative of
    those you intend to use in the main study. Use a
    small number of test-bed sites to learn of the
    problems with the resource.
  • Other issues to consider during the study
  • Respond immediately to any technical problems or
    concerns expressed by participants.
  • Study sites and participants should be kept
    informed of progress.
  • Reward participating practices if possible.