The Role of Evaluation in Program Planning

Transcript and Presenter's Notes
1
The Role of Evaluation in Program Planning
Implementation
  • Presentation by
  • Clay Gemmill,
  • Matt Underwood,
  • Paul Thomlinson

2
Research & Evaluation Lingo
  • Mean
  • The average of a group of scores.

3
Research & Evaluation Lingo
  • Sample
  • The group of participants drawn from a larger
    population for a study, experiment, or evaluation.
    Ideally it is representative of the population as
    a whole.
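
For illustration only (not part of the original slides), a minimal Python sketch of drawing a simple random sample from a population; the population size, sample size, and seed are made up.

    import random

    # Hypothetical population of 500 participant IDs (N = 500)
    population = list(range(1, 501))

    # Draw a simple random sample of n = 50 without replacement
    random.seed(42)  # fixed seed so the example is reproducible
    sample = random.sample(population, k=50)

    print(len(sample))  # n = 50, a subset of the population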

4
Research & Evaluation Lingo
  • Outcomes
  • The end results.

5
Research & Evaluation Lingo
  • Baseline
  • Scores or data collected prior to application of
    treatment or involvement in study. Used as a
    comparison to scores after the application of a
    treatment.

6
Research & Evaluation Lingo
  • n
  • The exact number of participants in the sample.
  • N
  • The exact number of individuals in the population.

7
Research & Evaluation Lingo
  • Correlation
  • The association of scores on two variables.
  • Correlation does not imply causation!
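
To make the idea concrete, here is a minimal Python sketch (not from the original slides) that computes a Pearson correlation for two made-up sets of paired scores; the variable names and data are hypothetical.

    from statistics import mean, pstdev

    # Hypothetical paired scores on two variables
    hours_attended = [2, 4, 6, 8, 10, 12]
    skill_score = [55, 60, 58, 70, 75, 80]

    # Pearson correlation: average product of deviations,
    # scaled by the two (population) standard deviations
    n = len(hours_attended)
    mx, my = mean(hours_attended), mean(skill_score)
    sx, sy = pstdev(hours_attended), pstdev(skill_score)
    r = sum((x - mx) * (y - my)
            for x, y in zip(hours_attended, skill_score)) / (n * sx * sy)

    print(round(r, 2))  # near +1 or -1 = strong association; near 0 = weak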

8
Research & Evaluation Lingo
  • Pretest/Posttest Design
  • An evaluation design in which participants are
    tested before (pretest) and after (posttest) a
    treatment.

9
Research & Evaluation Lingo
  • Control Group
  • Participants who are measured but do not receive
    the treatment. Their results are compared with
    those of the group that did receive the treatment
    to gauge the impact of that treatment.

10
Research and Evaluation Lingo
  • Margin of Error
  • A measure of how much a survey estimate is likely
    to differ from the true population value.
  • The larger the margin of error, the less precise
    the estimate; the smaller the margin of error, the
    more precise the estimate.
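
As a rough illustration (not from the original slides; it assumes a simple random sample and a 95% confidence level), the margin of error for a survey proportion can be approximated as z * sqrt(p(1-p)/n):

    from math import sqrt

    def margin_of_error(p, n, z=1.96):
        """Approximate 95% margin of error for a proportion p from n responses."""
        return z * sqrt(p * (1 - p) / n)

    # Hypothetical survey: 60% of respondents agree
    print(round(margin_of_error(0.60, n=100), 3))   # ~0.096, i.e. +/- 9.6 points
    print(round(margin_of_error(0.60, n=1000), 3))  # ~0.030, larger n -> smaller MOE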

11
Research & Evaluation Lingo
  • Statistically Significant
  • Indicates that results are unlikely to have
    occurred by chance; they are attributable to the
    treatment or intervention.
  • While important, this may not be the only
    criterion to use. Individual results and personal
    stories can also indicate change.
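
A minimal sketch of one common way to test this for a pretest/posttest design (slide 8) is a paired t-test; this assumes SciPy is available, and the scores below are made up.

    from scipy import stats  # assumes SciPy is installed

    # Hypothetical pretest/posttest scores for the same 8 participants
    pretest = [10, 12, 9, 14, 11, 13, 10, 12]
    posttest = [13, 15, 11, 16, 12, 17, 13, 14]

    # Paired t-test: is the change larger than chance alone would explain?
    t_stat, p_value = stats.ttest_rel(posttest, pretest)

    if p_value < 0.05:
        print(f"Statistically significant change (p = {p_value:.3f})")
    else:
        print(f"No statistically significant change (p = {p_value:.3f})")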

12
Research & Evaluation Lingo
  • Likert Scale
  • A measurement used to quantify qualitative data on
    a numeric scale.
  • e.g., Rate the statements from 1 to 5, with 1
    being strongly agree and 5 being strongly
    disagree.
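
For example (a hypothetical coding, not from the original slides), Likert responses can be converted to numbers and then summarized:

    # Hypothetical coding for a 5-point Likert item
    # (1 = strongly agree ... 5 = strongly disagree, as on the slide)
    LIKERT = {"strongly agree": 1, "agree": 2, "neutral": 3,
              "disagree": 4, "strongly disagree": 5}

    responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]

    scores = [LIKERT[r] for r in responses]  # qualitative -> numeric
    print(scores)                     # [2, 1, 3, 2, 4]
    print(sum(scores) / len(scores))  # mean rating = 2.4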

13
Research & Evaluation Lingo
  • Standard Deviation
  • The average amount that scores in a distribution
    vary from the mean.
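
A small Python sketch (not from the original slides; the scores are made up) computes both the mean from slide 2 and the standard deviation:

    from statistics import mean, stdev

    # Hypothetical group of scores
    scores = [72, 85, 90, 78, 88, 95, 70]

    print(round(mean(scores), 1))   # mean: the average of the scores
    print(round(stdev(scores), 1))  # standard deviation: typical spread around the mean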

14
Myth Busting
  • Myth: It's an event to get over with and then
    move on!
  • Buster: Nope. Outcomes evaluation is an ongoing
    process. It takes months to develop, test and
    polish -- however, many of the activities
    required to carry out outcomes evaluation are
    activities that you're either already doing or
    should be doing anyway.

15
Myth Busting
  • Myth: Evaluation is a whole new set of activities;
    we don't have the resources.
  • Buster: No! Most of the activities in the
    outcomes evaluation process are normal management
    activities that need to be carried out anyway in
    order to take your organization or program to the
    next level.

16
Myth Busting
  • Myth: There's a "right" way to do outcomes
    evaluation. What if I don't get it right?
  • Buster: No! Each outcomes evaluation process is
    somewhat different, depending on the needs and
    nature of the organization and its programs.
    Consequently, each organization must be the
    "expert" on its own outcomes plan. Therefore,
    start simple, but start, and learn as you go
    along in your outcomes planning and
    implementation.

17
Myth Busting
  • Myth: I always know what my clients need; I don't
    need outcomes evaluation to tell me whether I'm
    really meeting their needs or not.
  • Buster: You don't always know what you don't know
    about the needs of your clients; outcomes
    evaluation helps ensure that you do. It sets up
    structures in your organization so that you and
    your organization stay focused on the current
    needs of your clients. Also, you won't always be
    around; outcomes evaluation helps ensure that your
    organization remains focused on the most
    appropriate, current needs of clients even after
    you've left.

18
Myth Busting
  • Myth: Evaluation is a useless activity that
    generates lots of boring data with useless
    conclusions.
  • Buster: This was a problem with evaluations in
    the past, when methods were chosen largely on the
    basis of achieving complete scientific accuracy,
    reliability and validity. That approach often
    generated extensive data from which very
    carefully chosen conclusions were drawn.
    Generalizations and recommendations were avoided.
    As a result, evaluation reports tended to
    reiterate the obvious and left program
    administrators disappointed and skeptical about
    the value of evaluation in general. More recently,
    evaluation has focused on utility, relevance and
    practicality at least as much as on scientific
    validity.

19
Myth Busting
  • Myth: Evaluation is about proving the success or
    failure of a program.
  • Buster: This myth assumes that success means
    implementing the perfect program and never having
    to hear from employees, customers or clients
    again -- the program will now run itself
    perfectly. This doesn't happen in real life.
    Success is remaining open to continuing feedback
    and adjusting the program accordingly. Evaluation
    gives you this continuing feedback.

20
What is the difference between Research &
Evaluation?
  • Research is a systematic investigation designed
    to discover, develop, or contribute to knowledge.
  • Some examples of Research include:
  • Learning how cancer cells function in order to
    find a cure.
  • Learning about the universe in order to explore
    distant planets and stars.
  • Evaluation is a type or subset of Research that
    focuses its efforts on investigating the
    characteristics of one particular object.
  • Some examples of Evaluation include:
  • Testing an individual's ability to perform a
    task.
  • Learning how beneficial a new technology might be
    for cancer patients.
  • Learning the impact of a new program for children
    with a certain mental health disorder.

21
Program Evaluation is...
"The systematic collection of information about
the activities, characteristics, and outcomes of
programs to make judgments about the program,
improve program effectiveness, and/or inform
decisions about future program development."
(Michael Quinn Patton)
22
How Program Evaluation fits in with...
  • Planning asks what actions will best reach our
    goals and objectives. Evaluation results are raw
    material for this discussion.
  • Performance Measurement sets milestones/markers
    to monitor "how are we doing?" Evaluation
    complements it by looking behind the markers to
    find out "why are we doing well or poorly?"
  • Surveillance is continuous/routine data
    collection on various factors over regular
    intervals of time. Surveillance systems are a
    data source for evaluation, especially of
    long-term and population-based outcomes, and the
    main resource for formative (pre-implementation)
    evaluation.
23
What Can Evaluation Do?
  • Understand, verify or increase the impact of
    products or services on customers or clients.
  • Increasingly required for accountability.
  • Improve delivery mechanisms to be more efficient
    and less costly.
  • Over time, product or service delivery can drift
    into a collection of activities that are less
    efficient and more costly than they need to be.
    Evaluations can identify program strengths and
    weaknesses to improve the program.
  • Verify that you're doing what you think you're
    doing.
  • Produce data or verify results that can be used
    for public relations and promoting services in
    the community.
  • Produce valid comparisons between programs to
    decide which should be retained, e.g., in the
    face of pending budget cuts.
  • Fully examine and describe effective programs for
    duplication elsewhere.

24
Common Concerns
  • Concern 1: Evaluation diverts resources away
    from the program and therefore harms
    participants.
  • This is a common concern in most programs.
    However, because evaluation helps to determine
    what does and does not work in a program, it is
    actually beneficial to program participants.

25
Common Concerns
  • Concern 2: Evaluation increases the burden for
    program staff.
  • Often program staff are responsible for
    collecting evaluation information because they
    are most familiar with, and have the most contact
    with, program participants. Despite this potential
    for increased burden, staff can benefit greatly
    from evaluation because it provides information
    that can help them improve their work with
    participants, learn more about program and
    participant needs, and validate their successes.
    Also, the burden can be decreased somewhat by
    incorporating evaluation activities into ongoing
    program activities.

26
Common Concerns
  • Concern 3: Evaluation may produce negative
    results and lead to information that will make
    the program look bad.
  • An evaluation may reveal problems in
    accomplishing the work of the program, as well as
    successes. It is important to understand that
    both types of information are significant. The
    discovery of problems should not be viewed as
    evidence of program failure, but rather as an
    opportunity to learn and improve the program.

27
Types of Evaluation
  • Process Evaluation
  • Addresses questions about how well the program is
    functioning
  • Is useful for diagnosing outcomes
  • Is critical in quality improvement
  • Key questions in process evaluation:
  • Who is served?
  • What activities or services are provided?
  • Where is the program held?
  • When, and for how long?

28
Types of Evaluation
  • Outcome Evaluation
  • Gauges the extent to which a program produces the
    improvements it intends
  • Addresses effectiveness, goal attainment and
    unintended outcomes
  • Is critical in quality improvement
  • Key questions in outcome evaluation:
  • To what degree did the desired change(s) occur?
  • Outcomes can be initial, intermediate or
    longer-term
  • Outcomes can be measured at the patient, provider,
    organization or system level.

29
Six Basic Steps in Evaluation
  • Step 1: Assemble an evaluation team.
  • Step 2: Prepare for the evaluation. This planning
    phase includes deciding what to evaluate,
    building a program model, stating your objectives
    in measurable terms, and identifying the context
    for the evaluation.
  • Step 3: Develop an evaluation plan. An evaluation
    plan is a blueprint or a map for an evaluation.
  • Step 4: Collect evaluation information. This task
    will require selecting or developing information
    collection procedures and instruments.
  • Step 5: Analyze your evaluation information.
  • Step 6: Prepare the evaluation report. The report
    should also include an interpretation of the
    results for understanding program effectiveness.

30
Objectives should be
  • Specific: identifies the specific event or action
    that will take place
  • Measurable: quantifies the amount of change
  • Achievable: realistic with the given resources
  • Relevant: logical and relates to program goals
  • Time-bound: specifies the time by which the
    objective will be achieved

31
From General to Measurable
  • General objective: We expect to improve the
    parenting skills of program participants.
  • Measurable objective: Parents participating in
    the program will demonstrate significant
    increases in their scores on an instrument that
    measures parenting skills from intake to
    completion of the parenting education classes.
  • General objective: We expect to reduce the use of
    alcohol and other drugs by youth participating in
    the substance abuse intervention program.
  • Measurable objective: Youth will indicate
    significant decreases in their scores on an
    instrument that measures use of alcohol and other
    drugs from intake to after program participation.

32
Framework for Program Evaluation
33
Data Collection Methods
34
Data Collection Methods
35
Data Collection Methods
36
How to Decide?
  • In deciding the best sources for information,
    your evaluation team will need to answer three
    questions:
  • What source is likely to provide the most
    accurate information?
  • What source is the least costly or time
    consuming?
  • Will collecting information from a particular
    source pose an excessive burden on that person?
  • Possible Data Sources include:
  • Program records (case records, registration
    records, academic records, and other information)
  • Program reports and documents
  • Program staff
  • Program participants
  • Family members of participants
  • Staff of collaborating agencies

37
Making Sense of Results
  • Suppose your statistical test indicates that, for
    your population as a whole, understanding of
    child development did not change significantly
    between the two instrument administrations. That
    is, "program exit" scores were not significantly
    higher than "program intake" scores. This finding
    would presumably indicate that you were not
    successful in attaining this expected participant
    outcome.

38
Making Sense of Results
  • BUT: a lack of significant change among your
    participants as a group does not necessarily rule
    out program effectiveness. If you include the
    potential mediating variable of age in your
    analysis, you may find that older mothers (ages
    25 to 35) did demonstrate significant differences
    in before-and-after program scores but younger
    mothers (ages 17 to 24 years) did not. This would
    indicate that your program's interventions are
    effective for the older mothers in your target
    population, but not for the younger ones. You may
    then want to implement different types of
    interventions for the younger mothers, or you may
    want to limit your program recruitment to older
    mothers, who seem to benefit from what you are
    doing. And you would not have known this without
    the evaluation!
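
As a sketch of what that subgroup analysis might look like in code (assuming SciPy is available; the intake/exit scores and group splits below are entirely made up):

    from scipy import stats  # assumes SciPy is installed

    # Hypothetical intake/exit scores, split by the mediating variable of age
    older   = {"intake": [40, 42, 38, 45, 41, 43],
               "exit":   [48, 50, 44, 52, 47, 51]}
    younger = {"intake": [39, 41, 40, 37, 42, 38],
               "exit":   [40, 42, 39, 38, 43, 39]}

    # Run the same paired test separately within each age group
    for label, grp in [("older mothers", older), ("younger mothers", younger)]:
        t_stat, p_value = stats.ttest_rel(grp["exit"], grp["intake"])
        verdict = "significant" if p_value < 0.05 else "not significant"
        print(f"{label}: change is {verdict} (p = {p_value:.3f})")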

39
How to Choose an Evaluator
  • Assess your needs
  • The type of information desired and how it needs
    to be analyzed
  • What you're using the evaluation for
  • The complexity of the evaluation
  • Assess your resources
  • Money
  • Knowledge/experience already on staff
  • External vs. Internal Evaluator (contract vs.
    salary)
  • Geopolitical environment
  • Communication
  • Ability to communicate with a broad range of
    people
  • Check around
  • See if they have references
  • Ask others about their experiences with
    evaluators they have worked with

40
References and Resources
  • The Program Manager's Guide to Evaluation
    (December 2003). U.S. Dept. of Health & Human
    Services, Administration for Children & Families,
    Office of Planning, Research & Evaluation.
  • Field Guide to Nonprofit Program Design,
    Marketing and Evaluation (2000). By Carter
    McNamara; Authenticity Consulting.
  • How to Design a Program Evaluation (1987). By
    Fitz-Gibbon & Morris; Sage Publications.
  • The Focus Group Research Handbook (1999). By
    Holly Edmunds; NTC Business Books in conjunction
    with the American Marketing Association.
  • Statistics for Psychology, 3rd Edition (2003). By
    Arthur & Elaine Aron; Prentice Hall.
  • Research Methods in Psychology, 3rd Edition
    (2002). By Gary Heiman; Houghton Mifflin Company.