Evaluating Disaster Mental Health Programs for Children and Families
Learn more at: https://www.nwcphp.org

Transcript and Presenter's Notes
1
CHILD AND FAMILY DISASTER RESEARCH TRAINING AND
EDUCATION
2
Federal Sponsors
  • NIMH: National Institute of Mental Health
  • NINR: National Institute of Nursing Research
  • SAMHSA: Substance Abuse and Mental Health Services
    Administration

3
Principal Investigators
  • Betty Pfefferbaum, MD, JD, University of Oklahoma
    Health Sciences Center
  • Alan M. Steinberg, PhD, University of California,
    Los Angeles
  • Robert S. Pynoos, MD, MPH, University of
    California, Los Angeles
  • John Fairbank, PhD, Duke University

4
Evaluating Disaster Mental Health Programs, Part I
Clark Johnson, PhD. Adapted/modified from
materials prepared by Fran Norris, PhD, Craig
Rosen, PhD, and Helena Young, PhD, National Center
for PTSD
5
Primary sources for presentation
Owen, J.M. (2007). Program Evaluation: Forms and
Approaches. New York: Guilford Press.
Rosen, C., Young, H., & Norris, F. (2006). On a
road paved with good intentions, you still need a
compass: Monitoring and evaluating disaster
mental health services. In C. Ritchie, P.
Watson, & M. Friedman (Eds.), Mental health
intervention following disasters or mass violence
(pp. 206-223). New York: Guilford Press.
6
Learning Objectives
  • After completing this module, you will be able to:
  • Identify evaluation methods that support the
    intervention program from conception to outcome
  • Engage in evaluation activities prior to a
    disaster
  • Recognize the barriers and challenges in
    conducting evaluations of disaster mental health
    programs
  • Understand the crucial role of both community and
    agency stakeholders as key informants and
    participants in all evaluation activities

7
Let's start with your experience
  • Give an example (past, present, or future) of a
    program evaluation, or one you wish would be
    evaluated(!)
  • Please focus on:
  • What is being evaluated?
  • Why?
  • What is the objective of this evaluation?
  • What is the product this evaluation should
    generate?
  • How?
  • Method(s)

8
Evaluation: Traditional Perspective
  • Program evaluation as a judgment of worth
  • How good is this program?
  • Did the program work?
  • Was the program worthwhile from a monetary
    perspective?

9
Logic of Evaluation
  • Establish criteria of worth
  • On what dimensions must the evaluand do well?
  • Construct standards
  • How well should the evaluand perform?
  • Measure performance / compare with standards
  • How well did the evaluand perform?
  • Synthesize: integrate the evidence (see the
    sketch after this list)
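The four steps above amount to a small, mechanical procedure, so a
minimal sketch in Python may help fix the logic. Every criterion,
standard, and measured value below is invented for illustration and
is not from the presentation.

```python
# Evaluation logic sketch: criteria of worth, standards for each,
# measured performance compared against the standard, then synthesis.
# All dimensions and numbers are hypothetical.
criteria = {
    # dimension: (standard, measured performance)
    "reach (% of target population served)": (80.0, 72.0),
    "client satisfaction (1-5 scale)":       (4.0, 4.3),
    "symptom reduction (scale points)":      (5.0, 6.1),
}

for dimension, (standard, measured) in criteria.items():
    verdict = "meets" if measured >= standard else "falls short of"
    print(f"{dimension}: {measured} {verdict} the standard ({standard})")

# Synthesis means integrating these per-dimension judgments into a single
# statement of worth, not just reporting each comparison in isolation.
```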

10
Steps in Conducting Program Evaluation
  1. Engage the stakeholders
  2. Describe how the program works
  3. Articulate evaluation questions and design
  4. Gather credible evidence
  5. Justify conclusions
  6. Share results
11
Evaluation Global Perspective
  • Before
  • What is needed?
  • How does this program meet these needs?
  • During
  • What is happening in this program?
  • How can we improve this program?
  • After
  • How good is this program?
  • Did the program work?

12
Categories of Evaluative Inquiry
  • Proactive
  • Guides the early planning so that it incorporates
    the views of stakeholders and the accumulated
    knowledge from previous work in the field
  • Clarificative
  • Quantifies both the programs process and
    objectives make program assumptions explicit
  • Interactive
  • Think of this as evaluation design to enable the
    program to make mid-course corrections
  • Impact
  • The traditional evaluation category

13
Proactive Evaluation
  • Purpose: Synthesis
  • What is already known should influence action.
  • Typical Issues:
  • What is the need?
  • What is known about this problem from:
  • experience,
  • relevant literature,
  • conventional wisdom
  • What is recognized as best practice in this area?
  • Who are the stakeholders, and how do their
    perspectives differ?

14
Engaging Stakeholders
  • Who are the stakeholders?
  • program leaders and staff
  • communities served by the program
  • funding and administrative agencies
  • Identifying and engaging stakeholders helps to
    create a sense of ownership by ensuring that
    their perspectives are understood and that
    essential elements of the program are not being
    ignored
  • However, it is also important to identify the
    primary client at the start of the process: Who
    will own the data, and who gets to put the spin
    on the results?

15
How Stakeholders are Engaged
  • Evaluators often begin by asking:
  • What will this evaluation do for you?
  • What is it that you want to know?
  • Whom do you have to answer to?
  • What does that mandating authority care about?
  • Evaluators often invite discussion about
    immediate, intermediate, and long-term concerns
  • Evaluators also often explore policies the
    stakeholder is attempting to inform or influence,
    and incorporate these choices into the design

16
Clarificative Evaluation
  • Purpose: Clarification
  • Define (make explicit) the internal structure and
    functioning of an intervention or program.
  • Typical Issues:
  • Define the program's
  • outcomes,
  • rationale, and
  • methods
  • How is the program designed to achieve its
    outcomes?
  • Is the program plausible?

17
Interactive Evaluation
  • Purpose: Improvement
  • Assist with ongoing service provision and
    structural arrangements, with a strong emphasis
    on process
  • Typical Issues:
  • What is this program trying to achieve?
  • Is the delivery
  • working?
  • consistent with the program plan?
  • How could the delivery be changed to maximize
    efficiency and effectiveness?
  • Is the program reaching the target population?
  • Is there a site that needs attention to ensure
    effective delivery?

18
Impact Evaluation
  • Purpose: Learning / accountability
  • Assess the effects of a completed program.
  • Determine what did (and did not) work, and why
  • Typical Issues:
  • Was the program implemented as planned?
  • Did the program achieve its stated goals and
    objectives?
  • What were the unintended outcomes?

19
So, what is our definition of Program Evaluation?
  • Program Evaluation is more than a judgment of
    worth; it also contributes to
  • Planning
  • Fine-tuning
  • Execution
  • This expanded definition emphasizes the
    production of useful knowledge for decision
    making

20
Categories of Evaluative Inquiry
  • Proactive
  • Clarificative
  • Interactive
  • Impact

Great! But how and when is this done? The next
slide series will focus on the methods associated
with the various categories.
21
Proactive Evaluation
  • Major focus: Program context
  • State of program: None
  • Key approaches:
  • Needs assessment
  • Research synthesis (evidence-based practice)
  • Review of best practice (benchmarking)
  • Generate input from stakeholders, key informants,
    and the target population

22
Needs Assessment Sidebar: Focus on problems, not
solutions
  • A sampling of needs assessment field notes:
  • "We need to minimize psychological trauma
    following a disaster."
  • "For that purpose we need a new health center in
    the neighborhood."
  • What kind of need is each of these?
  • 1) Need as the difference between pre- and
    post-disaster states
  • 2) Need as the solution
  • Always use the "need as discrepancy" definition
    when conducting a needs assessment (see the
    sketch after this list).
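To make the distinction concrete, here is a minimal sketch of
need-as-discrepancy in Python; the scores are invented and are not
from the presentation.

```python
# "Need as discrepancy": the need is the measured gap between the desired
# (pre-disaster) state and the current (post-disaster) state.
# All numbers below are hypothetical.
baseline_distress = 12.0  # hypothetical pre-disaster mean symptom score
current_distress = 31.0   # hypothetical post-disaster mean symptom score

need = current_distress - baseline_distress
print(f"Assessed need (discrepancy): {need:.1f} scale points")
# By contrast, "we need a new health center" names a solution; it belongs
# in program planning, not in the needs assessment itself.
```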

23
Key words for Google search (and other useful
references)
  • Concept mapping
  • Sutherland & Katz (2005). Concept mapping
    methodology: A catalyst for organizational
    learning. Evaluation and Program Planning, 28,
    257-269.
  • Focus groups
  • Strickland (1999). Conducting focus groups
    cross-culturally: Experiences with Pacific
    Northwest Indian people. Public Health Nursing,
    16(3), 190-197.
  • Needs assessment
  • Roth (1990). Needs and the needs assessment
    process. Evaluation Practice, 11(2), 39-44.

24
Clarificative Evaluation
  • Major focus: All elements
  • State of program: Development
  • Key approaches:
  • Evaluability assessment
  • Stakeholders: identify them and determine their
    perceptions, concerns, and interests
  • Logic development: identify assumed
    cause-and-effect relationships as well as the
    interplay of resources and activities
  • Ex-ante evaluation:
  • An investigation undertaken to estimate the
    impact of a future situation

25
Describing How the Program Works
Evaluation is grounded in an understanding of how
a program operates, known as the program theory
or logic model.
26
Example Logic Model
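The original slide is a diagram that does not survive in this
transcript. As a stand-in, here is a minimal sketch of a generic logic
model as a Python data structure; every entry is invented for
illustration and is not the presenters' example.

```python
# A generic logic model: the assumed cause-and-effect chain from
# resources through activities to long-term outcomes.
# All entries are hypothetical placeholders.
logic_model = {
    "inputs":      ["crisis counselors", "federal funding",
                    "training materials"],
    "activities":  ["community outreach visits",
                    "psychoeducation sessions"],
    "outputs":     ["families contacted", "sessions delivered"],
    "short-term outcomes": ["increased coping knowledge"],
    "long-term outcomes":  ["reduced post-disaster distress"],
}

for stage, items in logic_model.items():
    print(f"{stage}: {', '.join(items)}")
```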
27
Key words for Google search (and other useful
references)
  • Evaluability assessment
  • Smith (1989). Evaluability Assessment: A Practical
    Approach. Norwell, MA: Kluwer.
  • Program logic
  • Patton (1997). Utilization-Focused Evaluation
    (3rd ed.). Thousand Oaks, CA: Sage.
  • Ex-ante evaluation
  • Ex-ante Evaluation: A Practical Guide for
    Preparing Proposals for Expenditure Programmes
    (http://ec.europa.eu/budget/evaluation/pdf/ex_ante
    _guide_en.pdf)

28
Interactive Evaluation (New Program)
  • Major focus: Delivery
  • State of program: Development
  • Key approaches:
  • Responsive
  • Action research
  • Developmental
  • Empowerment
  • Quality review

29
Key words for Google search (and other useful
references)
  • Responsive
  • Stake (1980). Program evaluation, particularly
    responsive evaluation. In Dockrell & Hamilton
    (Eds.), Rethinking Educational Research. London:
    Hodder & Stoughton.
  • Empowerment
  • Fetterman & Wandersman (2004). Empowerment
    Evaluation Principles in Practice. New York:
    Guilford Publications.
  • Also read the summary overview sections in
  • Owen (2006). Program Evaluation: Forms and
    Approaches. New York: Guilford Publications,
    pp. 217-236.

30
Impact Evaluation
  • Major focus: Delivery / outcomes
  • State of program: Settled
  • Sidebar on study design

31
Designs For Outcome Evaluation
  • Pre-experimental, or pre-post
  • In the simplest case, consumers are compared with
    themselves before and after an intervention
  • Experimental
  • When people are randomly assigned to receive the
    intervention or not, the groups are equivalent in
    all ways other than receipt of the intervention,
    so it is reasonable to attribute differences to
    the intervention
  • Quasi-experimental
  • Sometimes it is possible to identify a reasonable
    comparison group even though people were not
    randomly assigned. When this is not possible,
    repeated measures are helpful (see the sketch
    after this list)
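As a concrete illustration of the first two designs, here is a minimal
simulation in Python. The data are invented, and the analysis choices
(paired and independent t-tests via SciPy) are ours, not the
presentation's.

```python
# Contrast a pre-post design with a randomized design on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100

# Pre-experimental (pre-post): each consumer is compared with themselves.
pre = rng.normal(30, 8, n)            # hypothetical baseline symptom scores
post = pre - 6 + rng.normal(0, 5, n)  # improvement plus noise
t, p = stats.ttest_rel(pre, post)
print(f"pre-post:   t={t:.2f}, p={p:.4f}")  # change shown, cause ambiguous

# Experimental: random assignment makes the groups equivalent except for
# the intervention, so a follow-up difference is attributable to it.
treated = rng.normal(24, 8, n)  # hypothetical follow-up scores, treated arm
control = rng.normal(30, 8, n)  # hypothetical follow-up scores, control arm
t, p = stats.ttest_ind(treated, control)
print(f"randomized: t={t:.2f}, p={p:.4f}")
```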

32
Pre-Post Designs: Hypothetical Results
  • Longitudinal measures
  • Change over time
  • Better than retrospective estimates of change

33
When Are Pre-Post Designs Adequate, and When Not?
  • Pre-post designs are adequate to assess an
    immediate outcome, such as knowledge gained, that
    normally would not change with time
  • Pre-post designs are typically inadequate for
    evaluating intermediate or long-term outcomes:
    other things not controlled for can account for
    the change. People receiving the intervention
    might have improved anyway because symptoms
    normally improve over time, and people may be
    most likely to seek help when their distress is
    at its peak (see the simulation after this list)
  • Pre-post designs are often used for pilot testing
    to justify the cost of experimental designs later
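A small invented simulation of the second point above: if people seek
help when distress peaks, scores drift back toward their usual level
even with no intervention at all (natural recovery plus regression to
the mean). None of this comes from the presentation.

```python
# Why uncontrolled pre-post change can mislead: people enroll at a peak,
# and later scores revert toward each person's typical level.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
usual = rng.normal(20, 5, n)                     # each person's typical level
at_enrollment = usual + 8 + rng.normal(0, 4, n)  # measured at a distress peak
later = usual + rng.normal(0, 4, n)              # later, with NO intervention

print(f"mean 'improvement' with no treatment: "
      f"{np.mean(at_enrollment - later):.1f} points")
# A sizeable drop appears even though nothing was done, which is why
# simple pre-post designs can overstate intermediate and long-term effects.
```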

34
Experimental and Quasi-Experimental Designs:
Hypothetical Results
  • The gold standard:
  • Randomized
  • Treatment vs. control
  • What can we infer?
  • New > service as usual
  • Persistent effect

35
Outcome Evaluation: What Do You Do When There Is
No Feasible Comparison Group?
  • An example: InCourage, the Baton Rouge Area
    Foundation's Mental Health Initiative

36
Repeated Assessment as a Quasi-Experimental
Strategy
  • In the BRAF initiative, each client receives
    Treatment for Postdisaster Distress, which
    requires 8-10 sessions. The first two sessions
    are psychoeducation, very much like crisis
    counseling. The heart of the treatment (including
    cognitive restructuring, or CR) begins at
    Session 3.
  • Each client is assessed (briefly) at five
    time-points (see the sketch after this list):
  • At the point of referral
  • At enrollment (beginning of the first session)
  • At the beginning of the third session
  • At the beginning of the last session
  • At follow-up (3 months after completion)
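Here is a minimal sketch, with invented mean scores, of how the five
assessments support tests of competing explanations: if change is flat
before Session 3 and concentrated during the active-treatment phase,
natural recovery alone becomes a less plausible account.

```python
# Phase-by-phase change across the five assessment points listed above.
# The mean scores are hypothetical, not BRAF data.
import numpy as np

timepoints = ["referral", "enrollment", "session 3",
              "last session", "3-month follow-up"]
mean_scores = np.array([32.0, 31.0, 30.0, 18.0, 17.0])  # invented means

for i in range(len(timepoints) - 1):
    change = mean_scores[i + 1] - mean_scores[i]
    print(f"{timepoints[i]:>17} -> {timepoints[i + 1]:<17} "
          f"change = {change:+.1f}")

# Little change before Session 3 (waiting and psychoeducation phases) plus
# a large drop once CR begins argues against "clients would have improved
# anyway"; a steady decline across all phases would be far more ambiguous.
```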

37
The treatment effect is plausible if, on average,
the data looked something like this
Why?
38
We'd have less confidence if, on average, the
data looked like this
Why?
39
What are the Lessons Here?
  • There is middle ground between clinical trials
    and simple pre-post designs. It is usually true
    that something is better than nothing.
  • Although there is only one group, the repeated
    assessments allow us to evaluate competing
    explanations of the observed improvements.
  • Comment: As indicated in the example, a
    quasi-experimental design can be used to
    demonstrate that no effect exists, but it usually
    will not provide convincing evidence (beyond
    plausibility) that an observed effect was caused
    by the intervention.

40
Let's take a break
  • When we come back, we'll focus on moving these
    concepts
  • From theory to practice

41
We will start the next session in about 10
minutes and will begin with a discussion of the
following text:
Disaster research is different from most other
fields in that much of the work is motivated by a
sense of urgency and concern. Disaster research
has both benefited and suffered from this. It
has benefited because the cadre of researchers is
fluid, and new ideas are accepted and welcomed.
It has benefited also because the result has been
an impressively diverse database that includes
samples from all different regions of the United
States.... However, disaster research has also
suffered from this situation. Scholarship is not
always the best because studies often are
undertaken under conditions where there simply is
not time to absorb a literature that is scattered
across a variety of journals and is mixed in
quality. Concerns about experimental designs and
scientific rigor must often take a back seat to
provider beliefs, consumer demands, and clinical
necessities. Most of the research is
atheoretical and little of it is programmatic.
On the basis of this review, we will state our
opinion unequivocally that we do not need more
research that establishes only that severely
exposed disaster victims develop psychological
disorders or, worse, that barely exposed disaster
victims do not. We need carefully conceived and
theory-driven studies of basic process that are
longitudinal in design. ... We need more
research that addresses the needs of diverse
populations. We need more complex studies of
family systems and community-level processes. We
need to identify and investigate novel approaches
to community intervention, where the intervention
itself has been designed to produce collective
rather than individual improvements.
  • Source:
  • Norris, F., Friedman, M., & Watson, P. (2002).
    60,000 disaster victims speak: Part II. Summary
    and implications of the disaster mental health
    research. Psychiatry, 65(3), 240-260.
