Transcript and Presenter's Notes

Title: Understanding Program Evaluation Module 1


1
Understanding Program Evaluation: Module 1
  • Essential Skills Series
  • An Introduction to Evaluation Concepts and
    Practice
  • Canadian Evaluation Society

Date_________________ Location______________
2
Workshop Agenda
  • Registration 8:30 - 9:00 a.m.
  • Introduction to Program Evaluation 9:00 - 10:15 a.m.
  • Break 10:15 - 10:30 a.m.
  • Program Planning and Evaluation 10:30 - 12:00 p.m.
  • Lunch 12:00 - 1:00 p.m.
  • Types of Evaluations 1:00 - 2:15 p.m.
  • Break 2:15 - 2:30 p.m.
  • Major Evaluation Roles and Approaches 2:30 - 3:45 p.m.
  • Evaluation Standards and Ethical Guidelines 3:45 - 4:00 p.m.
  • Discussion of Upcoming Modules 4:00 - 4:20 p.m.
  • Workshop Evaluation 4:20 - 4:30 p.m.

3
Workshop Objectives
  • Introduce terms and concepts used by evaluators
  • Provide an overview of historical and current
    trends
  • Identify major benefits and uses of evaluation
  • Examine the relationship between planning,
    management and evaluation
  • Review major models of evaluation
  • Review evaluation standards, ethics and
    fairness
  • Summarize the Canadian evaluation basics

4
What is Evaluation?
  • What do you think of when you hear the term
    evaluation?

5
Section 1.
  • Introduction to
  • Program Evaluation

6
Working Definition of Program Evaluation
  • Program evaluation is the systematic collection
    and analysis of information about program
    activities, characteristics, and outcomes to make
    judgements about the program, improve program
    effectiveness and/or inform decisions about
    future programming.
  • Source: Patton, M.Q. (1997). Utilization-Focused
    Evaluation. Sage Publications.

7
Working Definition of Evaluation Research
  • Evaluation research is the systematic application
    of social science research procedures in assessing
    social intervention programs.
  • Program evaluation is viewed as a form of applied
    social research.

Source: Rossi, P., Lipsey, M., & Freeman, H.
(2004). Evaluation: A Systematic Approach (7th
edition). Sage Publications.
8
Canadian Context
  • Treasury Board Secretariat Definition
  • The application of systematic methods to
    periodically and objectively assess effectiveness
    of programs in achieving expected results, their
    impacts, both intended and unintended, continued
    relevance and alternative or more cost-effective
    ways of achieving expected results.
  • Source: Results-Based Management Lexicon,
    http://www.tbs-sct.gc.ca/rma/lex-lex_e.asp,
    accessed November 2008

9
Historical Context Evaluation in Canada
  • 1960s - Emergence of evaluation as an amalgam
    of the social sciences
  • 1970s - Evaluation becomes more routinized
  • 1977: Treasury Board Secretariat Policy
  • 3-5 year cycle
  • 1980s - Distinct, mandated function; some
    common standards
  • 1981: Office of the Comptroller General Guide
  • Standardized issues
  • Evaluation units in most federal agencies (and
    many provincial agencies)
  • 1990s - Evaluation seen as serving
    organizational operations; strategic and tactical
    evaluation as part of review
  • 1994: Evaluation Policy
  • 2000s - Evaluation as a tool for
    accountability and management

Sources: Müller-Clemm and Barnes (1997), "A
Historical Perspective on Federal Program
Evaluation in Canada", Canadian Journal of
Program Evaluation, 12(1), 47-70; Segsworth
(2005), "Program Evaluation in Canada: Plus Ça
Change", Canadian Journal of Program Evaluation,
20(3), 195-197.
10
The Original Canadian Evaluation Issues
  • Program Rationale (does the program make sense?)
  • a) To what extent are the objectives and mandate
    of the program still relevant?
  • b) Are the activities and outputs of the program
    consistent with its mandate and plausibly linked
    to the attainment of the objectives and the
    intended impacts and effects?
  • Impacts and Effects (what has happened as a
    result of the program?)
  • a) What impacts and effects, both intended and
    unintended, resulted from carrying out the
    program?
  • b) In what manner and to what extent does the
    program complement, duplicate, overlap or work at
    cross purposes with other programs?
  • Objectives Achievement (has the program achieved
    what was expected?)
  • a) In what manner and to what extent were
    appropriate program objectives achieved as a
    result of the program?
  • Alternatives (are there better ways of achieving
    the results?)
  • a) Are there more cost-effective alternative
    programs which might achieve the objectives and
    intended impacts and effects?
  • b) Are there more cost-effective ways of
    delivering the existing program? (OCG, 1981a, p.
    7)

Source: Guide on the Program Evaluation
Function, Office of the Comptroller General,
1981.
http://www.tbs-sct.gc.ca/eval/pubs/pubs-to-1995/orig-gd_e.asp.
Accessed November 2008.
11
Evaluation vs. Performance Measurement
Contrasting Paradigms
  • Evaluation
  • Behavioural Sciences
  • Logic Model
  • Academic / international development / social
    development
  • Periodic
  • Strategic
  • Heretical
  • Performance Measurement
  • Accounting, Process Engineering, Marketing
  • Ledger / Scorecard
  • Business
  • Ongoing
  • Operational
  • Conformist

Source: Montague, S. (2005), Performance Planning,
Measurement and Reporting for Continuous
Improvement, CES-AEA Workshop.
12
Audit and Evaluation in Public Management
  • Definition. Audit: checking, comparing,
    compliance, assurance. Evaluation: assessment of
    merit, worth, value of administration, output and
    outcome of interventions.
  • Types. Audit: traditional financial and
    compliance; performance audit (substantive);
    systems and procedures. Evaluation: wide
    variability; many types noted in the literature.
  • Who does it? Audit: internal auditors (part of
    organization); external auditors (independent
    agency). Evaluation: internal evaluators (part of
    organization); external contracted consultants
    (not really independent?).
  • Roles. Audit: provide assurance; public
    accountability; improve management. Evaluation:
    not as well articulated; increase knowledge;
    improve delivery and management; (re)consider the
    rationale; varies by a long list of potential
    clients.
  • Methods. Audit: file review, interviews, focus
    groups, surveys, observations. Evaluation: wide
    variety of methods, from scientific and
    quasi-scientific designs to purely qualitative
    and interpretative methods and methods linked to
    testing program theory.
  • Reporting. Audit: attest to legislatures; direct
    to management. Evaluation: management; various
    stakeholders.
  • Strength. Audit: strong reputation; supported by
    professional associations; well established and
    followed standards; addresses issues of public
    concern (e.g. waste, mis-management, etc.).
    Evaluation: addresses attribution (explains
    "why?"); acknowledges complexity and uncertainty;
    flexible in design and practice.
  • Challenges. Audit: dealing with complexity;
    operating in a collaborating state. Evaluation:
    credibility; perceived relevance.

Source: Mayne, John (2006), "Audit and Evaluation
in Public Management", The Canadian Journal of
Program Evaluation, Vol. 21, No. 1.
13
Benefits of Program Evaluation
  • Evaluation provides information about:
  • Relevance to need
  • Program operations
  • Program strengths and weaknesses
  • Attainment of goals and performance
  • Program issues
  • Attributable impact
  • Efficiency and cost-effectiveness

Source: Love, A. (2007)
14
Uses of Program Evaluation
  • Symbolic use (evaluation as part of a token
    effort)
  • Instrumental use (evaluation for direct design
    and delivery improvement)
  • Conceptual use (evaluation to change the way
    people think or see a program)
  • Source: Weiss, C., Murphy-Graham, E. and
    Birkeland, S. (2005), "An Alternate Route to
    Policy Influence", American Journal of
    Evaluation, Vol. 26, No. 1.

15
Two Fundamental Uses
  • Development (learning)
  • Accountability
  • Source: Chelimsky, E. & Shadish, W.R.
    (eds.) (1997). Evaluation for the 21st Century: A
    Handbook. Thousand Oaks, CA: Sage.

16
Some Potential Misuses of Program Evaluation
  • Evaluation information can be misused to:
  • Postpone action
  • Whitewash a program
  • Eliminate a program
  • Justify a weak program

17
The Results of Effective Evaluation
  • Assessing the relevance of the program to support
    continued operation
  • Providing objective assessment of the extent to
    which program results are being achieved
  • Supporting submissions and Initiative Proposals
    (e.g. Memoranda to Cabinet)
  • Identifying areas of program improvement and/or
    alternative delivery means
  • Providing overall assessment of the
    cost-effectiveness of the program

Source: Treasury Board Secretariat of Canada,
Case Studies in Effective Evaluation.
http://www.tbs-sct.gc.ca/eval/tools_outils/impact/impact_e.asp#5.0.
Accessed November 2008.
18
Conditions Where Evaluation is Considered Useful
  • High quality / credibility / integrity
  • Provides clear support for decision making and
    action
  • Responsive to user information needs
  • Adequate organizational infrastructure and
    resources are available to support the function
  • Owned and embraced by users
  • Source: Cousins, Goh, Aubry, Lahey, Montague
    and Elliott (2006), "What Makes Evaluation Useful
    in Government? A Concept Mapping Study",
    American Evaluation Association presentation,
    November 2006.

19
Program Evaluation: Truth Test and Utility Test
  • Truth Test
  • Is it trustworthy?
  • Can I rely on it?
  • Will it hold up under scrutiny or attack?
  • Utility Test
  • Does it provide direction?
  • Does it yield guidance?
  • Does it have an action orientation?
  • Does it challenge the status quo?

Source: Love, A. (2007)
20
Section 2.
  • Program Planning
  • and Evaluation

21
What is a Program?
  • A program is "an organized set of activities
    whose objective is the production of changes in
    the recipients and/or their environment."
  • What about policies, initiatives, projects etc.?

22
  • Program and Desired Change
  • 1. Meals on Wheels for Senior Citizens:
    increased social interaction; nutritious, varied
    diet
  • 2. Emergency Shelter Beds in Winter: relief
    from exposure to cold nights; homeless person
    uses shelter
  • 3. Job Retraining: increase in employment
    levels, salary levels, job satisfaction

23
Group Exercise 1
  • Unintended Outcomes
  • Often not certain what changes are expected
  • Programs can produce unanticipated changes
  • Specify one desired and one unintended change for
    a program that promotes responsible gambling
  • Specify Desired Change
  • _______________________
  • _______________________
  • _______________________
  • _______________________
  • Specify the Unintended Change
  • _______________________
  • _______________________
  • _______________________
  • _______________________

24
Program Logic Elements
  • Inputs: The financial and non-financial
    resources used to produce outputs and accomplish
    outcomes.
  • Activities: An operation or work process
    internal to an organisation, intended to produce
    specific outputs (e.g. products or services).
    Activities are the primary link in the chain
    through which outcomes are achieved.
  • Outputs: Direct products or services stemming
    from the activities of a policy, program, or
    initiative, and delivered to a target group or
    population. Usually things you can count.
  • Outcomes: An external consequence attributed to
    an organisation, policy, program or initiative
    that is considered significant in relation to its
    commitments. Outcomes may be described as
    immediate, intermediate or final (end), direct or
    indirect, intended or unintended. A good outcome
    statement represents the type of change wanted,
    includes reference to the target population or
    intended beneficiary and does not include
    reference to the "how".

(Source: TBS Results-based Management Lexicon,
http://www.tbs-sct.gc.ca/rma/lex-lex_e.asp,
accessed Dec 15, 2008)
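
The inputs-activities-outputs-outcomes chain can be written down as a simple data structure. Below is a minimal, illustrative Python sketch (ours, not part of the original module; all field and variable names are hypothetical), populated with the Meals on Wheels example from slide 22:

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """One program's results logic: inputs -> activities -> outputs -> outcomes."""
    program: str
    inputs: list[str] = field(default_factory=list)      # resources consumed
    activities: list[str] = field(default_factory=list)  # internal work processes
    outputs: list[str] = field(default_factory=list)     # countable products/services
    outcomes: list[str] = field(default_factory=list)    # external changes in recipients

# Hypothetical instance based on slide 22; the inputs/activities are assumed.
meals_on_wheels = LogicModel(
    program="Meals on Wheels for Senior Citizens",
    inputs=["funding", "volunteer drivers", "kitchen facilities"],
    activities=["prepare meals", "deliver meals to clients' homes"],
    outputs=["meals delivered", "clients served"],
    outcomes=["nutritious, varied diet", "increased social interaction"],
)

Writing a program's logic this way forces an explicit statement of what counts as an output (countable) versus an outcome (an external change), which is exactly the distinction the definitions above draw.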
25
A Program as Seen From an Evaluation Perspective
[Diagram: a program within its environment. Needs
and the social value of inputs sit in the
environment around the program; program objectives
drive inputs, which flow through activities to
outputs and outcomes.]

Source: McDavid, J. & Hawthorn, L. (2006),
Program Evaluation and Performance Measurement,
Sage Publications. Adapted from Nagarajan, N. &
Vanheukelen, M. (1997), Evaluating EU Expenditure
Programmes: A Guide (p. 25).
26
Section 3.
  • Types of Evaluations

27
Two Fundamental Types
  • Formative: improvement and development oriented
  • Summative: accountability oriented
  • (Scriven, 1967)

28
Formative vs. Summative Evaluation
  • Formative evaluation is used for the improvement
    and development of an ongoing program. Based on
    the outcome(s) of the formative evaluation, the
    program can be modified to address problems or
    difficulties.
  • Summative evaluation usually serves an
    accountability function. At the end of the
    program, a summative evaluation is completed to
    describe the overall successes of the program and
    to determine whether the program should be
    continued.

29
Types of Evaluations
  • Needs Assessment
  • Utilization: Program Planning
  • Focus: Program Need; Gap between Actual and
    Desired State

30
Types of Evaluations
  • Evaluability Assessment
  • Utilization: Program Design
  • Focus: Program Rationale; Program Interventions
    and Strategies

31
Types of Evaluations
  • Process Evaluation
  • Utilization: Program Operations and
    Implementation
  • Focus: Program Monitoring; Efficiency

32
Types of Evaluations
  • Outcome Evaluation
  • Utilization: Program Results and Impacts
  • Focus: Program Results; Effectiveness

33
Evaluation and the Management Life-cycle
[Diagram: the management life-cycle as a loop.
From an initial situation, identify a need and
design an intervention; program/policy start-up
leads to immediate, intermediate and final
outcomes, tracked by ongoing performance/outcome
monitoring and measurement, which informs the
decision to reconsider, redesign, expand, reduce
or end the program.]

Source: Adapted from Birch-Jones, J., "Integrating
PM and Evaluation: Bridging the Chasm", CES-NCR,
2002.
34
Key Evaluation Questions: Needs Assessment
  • Who needs the program?
  • What kinds of services do they need?
  • Are the needed services available?
  • Are the needed services accessible?
  • Are the people in need aware that the services
    exist?
  • Are there enough program resources to address the
    need?

35
Key Evaluation Questions: Evaluability Assessment
  • What are the program components?
  • What are the goals of the program?
  • What are the indicators or criteria of goal
    achievement?
  • Are the program's goals and objectives compatible
    with the organization's vision and mission?

36
Key Evaluation Questions: Process Evaluation
  • To what extent is the program being implemented
    as designed?
  • Who uses the program? What activities are
    participants involved in?
  • How are time, money and personnel allocated?
  • Are program participants satisfied with the
    program?
  • Are they receiving quality services?
  • How does the program vary from one site to
    another?

37
Key Evaluation Questions: Outcome Evaluation
  • To what extent is the program meeting its goals?
  • How does the program compare against accepted
    standards?
  • Is the program effective?
  • How does the program compare with competitive
    programs?
  • Should the program be continued?
  • Should the program be expanded?
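
As a compact reference, the key questions from slides 34-37 can be grouped by evaluation type. The following is an illustrative Python sketch (ours, not the module's; the dictionary keys are hypothetical names), abridged to a few questions per type:

# Key evaluation questions by evaluation type, abridged from slides 34-37.
KEY_QUESTIONS = {
    "needs_assessment": [
        "Who needs the program?",
        "What kinds of services do they need?",
        "Are the needed services available and accessible?",
    ],
    "evaluability_assessment": [
        "What are the program components and goals?",
        "What are the indicators or criteria of goal achievement?",
    ],
    "process_evaluation": [
        "To what extent is the program being implemented as designed?",
        "Are program participants satisfied with the program?",
    ],
    "outcome_evaluation": [
        "To what extent is the program meeting its goals?",
        "Should the program be continued or expanded?",
    ],
}

# Example: list the leading question for each evaluation type.
for evaluation_type, questions in KEY_QUESTIONS.items():
    print(evaluation_type, "->", questions[0])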

38
Who Should Conduct the Evaluation?
  • Internal evaluation uses a staff member to
    evaluate an organization's programs
  • External evaluation uses someone who is not
    directly supervised by an organization to
    evaluate its programs

39
Comparative Advantages of Internal and External
Evaluation
  • Internal
  • Issues matched to managers' needs
  • Costs lower
  • Results readily accepted by staff
  • Data better understood by staff
  • External
  • Outside expertise and experience
  • Credibility to funders and stakeholders
  • Staff not diverted from normal tasks
  • Brings fresh perspective

40
Comparative Disadvantages of Internal and
External Evaluation
  • Internal
  • Evaluation skills may not be available
  • Staff busy and evaluations not completed on time
  • Staff rarely independent
  • Difficult to separate program from personnel
    evaluation
  • External
  • Need time to become familiar with program
  • Illusion of independence
  • Financial costs can be high
  • Consultants require careful management

41
Understanding Program Evaluation: Small Group
Exercise 2
  • Rockwood Community Services
  • Founded in the 1970s, Rockwood Community Services
    is a multiservice agency that helps over 10,000
    diverse individuals of all ages annually achieve
    greater independence and support recovery from
    illness by providing community-based health and
    mental health services.
  • After reviewing the annual program statistics,
    Rockwood's Board Planning Committee has
    identified a disturbing new trend: each month,
    more and more clients are seeking help for
    serious gambling problems. It seems as if
    Rockwood has been caught in a "perfect storm"
    caused by recent changes to gambling legislation,
    increased access to gambling venues, and the
    recent surge in online gambling. Gambling is now
    being strongly promoted not only as an enjoyable
    form of entertainment, but as part of the modern
    lifestyle and perhaps even as a civic duty,
    because of the huge revenues generated by
    government-sponsored gambling. These revenues
    fund a wide variety of important social and
    educational programs that would not exist
    otherwise. In the space of a few years, gambling
    is said to have become the fastest growing
    industry in Canada and the United States.
  • The Board wants to know whether Rockwood has an
    effective response to deal with the apparent
    problem gambling epidemic. The Board is placing
    Rockwood's Executive Director under tremendous
    pressure to evaluate the effectiveness of the
    agency's programs to prevent and treat problem
    gambling. Because Rockwood prides itself on
    delivering high quality, evidence-based services,
    the Board wants a rigorous summative evaluation
    of these programs.
  • Rockwood's clinical directors and staff have not
    responded well to these demands. They point out
    that their programs for problem gambling are in
    the developmental stages and that it is too early
    to evaluate their effectiveness. Because the
    surge of clients with gambling problems has
    caught Rockwood and other service providers by
    surprise, they are not sure about the nature and
    extent of the problem and the characteristics of
    the clients affected. They are in the process of
    assessing their needs and designing programs to
    prevent and treat problem gambling. They argue
    that now is not the time for an evaluation, and
    that summative evaluation of their current
    programs would be a waste of time and money.

42
Small Group Exercise 2 Worksheet
  • Rockwood Community Services
  • 1. What are the reasons for conducting an
    evaluation of this program now?
  • 2. What are the reasons against conducting an
    evaluation of this program now?
  • 3. In your opinion, where are Rockwood's programs
    for problem gambling on the program / management
    development life cycle?
  • 4. What type of evaluation would be appropriate
    for programs at this stage of the program
    development cycle?

43
Section 4.
  • Major Evaluation Roles and Approaches

44
Role of the Evaluator
  • Researcher
  • Management consultant
  • Facilitator

45
Evaluation Approaches
  • Approaches differ in the degree of evaluator:
  • Independence
  • Control over the design
  • Lead of the process
  • Some important types:
  • Goal based
  • Participatory
  • Empowerment
  • Developmental

46
Traditional vs. Developmental Evaluation
(Each row contrasts traditional evaluations with
complexity-based, developmental evaluations.)
  • Traditional: render definitive judgments of
    success or failure. Developmental: provide
    feedback, generate learnings, support direction
    or affirm changes in direction.
  • Traditional: measure success against
    predetermined goals. Developmental: develop new
    measures and monitoring mechanisms as goals
    emerge and evolve.
  • Traditional: position the evaluator outside to
    assure independence and objectivity.
    Developmental: position evaluation as an
    internal, team function integrated into action
    and ongoing interpretive processes.
  • Traditional: design the evaluation based on
    linear cause-effect logic models. Developmental:
    design the evaluation to capture system dynamics,
    interdependencies, and emergent interconnections.
  • Traditional: aim to produce generalizable
    findings across time and space. Developmental:
    aim to produce context-specific understandings
    that inform ongoing innovation.
  • Traditional: accountability focused on and
    directed to external authorities and funders.
    Developmental: accountability centered on the
    innovators' deep sense of fundamental values and
    commitments.
  • Traditional: accountability to control and
    locate blame for failures. Developmental:
    learning to respond to lack of control and stay
    in touch with what's unfolding, and thereby
    respond strategically.
  • Traditional: evaluator controls the evaluation
    and determines the design based on the
    evaluator's perspective about what is important.
    Developmental: evaluator collaborates in the
    change effort to design a process that matches
    philosophically and organizationally.
  • Traditional: evaluation engenders fear of
    failure. Developmental: evaluation supports
    hunger for learning.

Source: Patton, Michael Q., "Evaluation for the
Way We Work", The Nonprofit Quarterly, Spring
2006, pp. 28-33.
47
Evaluator Role for Each Evaluation Model
(Adapted from Love, 1998)
  • Goal-Based Model: Evaluator directs the
    evaluation process.
  • Participatory Model: Evaluator guides the
    evaluation process. Evaluator is facilitator and
    resource.
  • Empowerment Model: Team has total authority and
    resources to evaluate and improve performance.
    Evaluator is empowerment facilitator.
  • Developmental Model: Evaluator supports
    teamwork. Ownership is shared by all. Evaluator
    is advisor to program team.
48
Small Group Exercise 3: Understanding Program
Evaluation
  • Selecting an Evaluation Approach
  • The Board Planning Committee of Rockwood
    Community Services has decided to strike a
    Problem-Gambling Task Force to examine the extent
    of the gambling problem and develop a range of
    feasible options in response.
  • The Problem-Gambling Task Force invited
    Rockwood's senior managers and clinical leads to
    discuss the problem gambling situation. It was
    obvious from this meeting that Rockwood lacked
    fundamental knowledge about problem gambling and
    that both senior managers and clinical staff
    would benefit greatly from receiving training by
    experts in the field.
  • Rockwood's Problem-Gambling Task Force contracted
    with an organization with experienced trainers
    who were also therapists specializing in
    problem gambling. The trainers provided
    evidence-based training and support materials
    designed to develop core competencies related to
    gambling and problem gambling.
  • This organization developed a six-session
    training program delivered one half-day per week
    for six weeks. The sessions were scheduled to
    cause minimal disruption to service delivery and
    to develop competencies that could be immediately
    used at Rockwood. Training topics included an
    overview of gambling terms and concepts, signs of
    problem gambling, assessment and screening tools,
    different evidence-based prevention and treatment
    models, strategies for supporting families,
    working with specific populations (youth,
    seniors, women, families, specific ethno-cultural
    groups), and brief referral and support services.
  • There was a great deal of debate among the
    Problem-Gambling Task Force members and
    Rockwood's evaluation staff about the appropriate
    approach to the evaluation. Some members felt
    that a goal-based model was right because the
    purpose of the evaluation was to assess whether
    or not the training program developed the needed
    competencies. The key evaluation task was to
    assess competencies before and after training.
  • Others disagreed. They felt that it was important
    for clinical staff and other stakeholders to
    participate in the evaluation process or for
    evaluation to be integrated into project
    development to maximize learning from the
    evaluation process. Still others felt that the
    end result of the evaluation should be to empower
    the clinical staff with the knowledge and skills
    they needed to develop programs unique to
    Rockwood and the specific needs of its clients.

49
Small Group Exercise 3 Worksheet
  • Selecting an Evaluation Approach
  • Review the proposed Rockwood case.
  • 1. In your view, what are the primary purposes of
    this evaluation?
  • 2. Select one evaluation approach (either
    goal-based, participatory, developmental or
    empowerment) and discuss how selecting that
    approach would affect the design of the
    evaluation (e.g., focus of the evaluation, types
    of questions asked, methods used to collect data,
    ownership of the evaluation).
  • 3. Now select another evaluation approach
    (either goal-based, participatory, developmental
    or empowerment) and discuss how selecting this
    second approach would affect the design of the
    evaluation (e.g., focus of the evaluation, types
    of questions asked, methods used to collect data,
    ownership of the evaluation).

50
Small Group Exercise 4
  • Which evaluation model would you personally use
    within your organization?
  • List your reasons for using this model.
  • How well does this model fit your organizational
    structure and culture?

51
Section 5.
  • Evaluation Standards
  • and
  • Ethical Guidelines

52
The Canadian Program Evaluation Standards
  • Utility Standards
  • Ensure evaluation will serve the practical
    information needs of users: informative, timely,
    and influential
  • Feasibility Standards
  • Ensure evaluation will be realistic, prudent,
    diplomatic, and economical
  • Propriety Standards
  • Ensure evaluation will be conducted legally and
    ethically
  • Accuracy Standards
  • Ensure evaluation will be technically adequate

Source: The Joint Committee on Standards for
Educational Evaluation, 1994.
53
American Evaluation Association Guiding
Principles for Evaluators
  • Systematic Inquiry
  • Evaluators conduct systematic, data-based
    inquiries
  • Competence
  • Evaluators provide competent performance to
    stakeholders
  • Integrity / Honesty
  • Evaluators display honesty and integrity in their
    own behavior and attempt to ensure the honesty
    and integrity of the entire evaluation process
  • Respect for People
  • Evaluators respect the security, dignity, and
    self-worth of respondents, program participants,
    clients, and other evaluation stakeholders
  • Responsibilities for General and Public Welfare
  • Evaluators articulate and take into account the
    diversity of general and public interests and
    values

Source: American Journal of Evaluation, March 2008.
54
Canadian Evaluation Society Ethical Guidelines
  • Competence
  • Evaluators are to be competent
  • Integrity
  • Evaluators are to act with integrity
  • Accountability
  • Evaluators are to be accountable

55
Small Group Exercise 5: Evaluation Standards and
Ethical Guidelines
Match the situations in the first column with the
correct standards or ethical guidelines in the
second column.
  • _____ Utility
  • _____ Feasibility
  • _____ Propriety
  • _____ Technical Adequacy
  1. A program manager used a Web-based survey package
    to measure client satisfaction. There was only a
    5% response rate and little information about
    who responded and who did not. The evaluator
    recommended that decisions should not be based on
    the findings from this survey. Which standard or
    ethical guideline was the evaluator following?
  2. An evaluator who conducted an evaluation using
    focus groups with program clients described the
    limitations of this methodology clearly in
    presentations and reports of the evaluation
    findings. Which standard or ethical guideline was
    the evaluator following?
  3. Before designing an evaluation, the program
    evaluator met with key stakeholders to assess
    their evaluation information needs. The evaluator
    also was careful to clarify the purpose for the
    evaluation, how the evaluation findings might be
    used and factors in the program context that
    might affect the evaluation. Which standard or
    ethical guideline was the evaluator following?
  4. An external evaluator presented a draft of the
    evaluation findings to the program manager and
    staff for review. The program manager met with
    the evaluator and demanded that the program's
    strengths be emphasized and the weaknesses
    "buried" in the report; otherwise funding would
    be jeopardized. What standard or ethical
    guideline should the evaluator follow in this
    situation?
  5. A small nonprofit program with an annual budget
    under $10,000 must evaluate outcomes to meet
    funding requirements. The evaluator recommended a
    simple evaluation design using a combination of
    internal and external resources to keep costs low
    and burden to staff and clients at a minimum.
    Which standard or ethical guideline was the
    evaluator following?
  6. Before conducting an evaluation, the evaluator
    provided each client with a description of the
    evaluation, its benefits and risks, and asked for
    their written permission to participate. Which
    standard or ethical guideline was the evaluator
    following?
  7. An evaluator who had little experience in
    evaluating problem gambling programs notified the
    Evaluation Steering Committee of this limitation.
    The Evaluation Steering Committee then hired an
    expert in problem gambling to work with the
    evaluator on the study. Which standard or ethical
    guideline were the evaluator and Evaluation
    Steering Committee following?

56
Quick Summary: Three Pillars of Evaluation in
Canada
  • There are three keys to Canadian program
    evaluation:
  • Evaluations are mostly issue driven
  • Evaluations are results logic (program theory)
    focused
  • Evaluations rely on multiple lines of enquiry

57
Preview of the Upcoming Modules
Module 2: Planning for Evaluations
Module 3: Monitoring, Formative Evaluation and Data Collection
Module 4: Outcome Evaluation
58
A Program as Seen From an Evaluation Perspective
[Diagram: the program model from slide 25
(environment, needs, social value of inputs,
program objectives, inputs, activities, outputs,
outcomes), overlaid with three evaluation
activities: needs assessment, monitoring and
measurement, and outcome / impact evaluation.]

Source: McDavid, J. & Hawthorn, L. (2006),
Program Evaluation and Performance Measurement,
Sage Publications. Adapted from Nagarajan, N. &
Vanheukelen, M. (1997), Evaluating EU Expenditure
Programmes: A Guide (p. 25).
59
For Next Week
  • Look at some programs (select a case)
  • Consider the results logic
  • Inputs
  • Activities
  • Outputs
  • Outcomes
  • Come prepared to discuss