Title: Introduction to Impact Evaluation: The Motivation
1 Introduction to Impact Evaluation: The Motivation
- Emmanuel Skoufias
- The World Bank
- PRMPR
- PREM Learning Week April 21-22, 2008
2 Outline of presentation
- Role of IE in the Results Agenda
- Impact evaluation: Why and When?
- Evaluation vs. Monitoring
- Necessary ingredients of a good Impact Evaluation
3 The Role of IE in the Results Agenda
- Demand for evidence of the results of development assistance is increasing.
- Among monitoring and evaluation techniques, impact evaluation provides an important tool to show the effect of interventions.
- Given the power of this tool, the Bank is supporting an increasing number of impact evaluations (figure 1).
4 [Figure 1]
5 Status of IE within the Bank (1)
- Although the number of impact evaluations is growing overall, some Regions and Networks are more active than others.
- Most ongoing impact evaluations are in the social sectors (figure 2), which reflects not only the support provided by the HD Network, but also that there is more of an evaluation tradition in these areas and that the projects are more amenable to impact evaluation techniques.
6 WB Lending and IE by Sector
7 Status of IE within the Bank (2)
- The regional picture is also a skewed one. Africa is the leader with 47 ongoing evaluations, followed by SAR (27), LAC (26), and EAP (17). MENA and ECA have 2 each.
8 WB Lending and IE by Region
9 2. Impact Evaluation: Why and When?
10 Impact evaluation
- Ex-ante vs. ex-post
- Impact is the difference between outcomes with the program and without it (see the sketch after this list)
- The goal of impact evaluation is to measure this difference in a way that attributes the difference to the program, and only the program.
- Challenges in evaluating SDN operations:
  - difficult to find a comparison group
  - need quasi-experimental methods
  - take advantage of sub-national variation
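A compact way to write this, using the standard potential-outcomes notation of the evaluation literature (this sketch is an addition, not part of the original slides): let Y_i(1) be unit i's outcome with the program, Y_i(0) its outcome without it, and D_i = 1 mark participants.

```latex
% Impact for unit i: outcome with the program minus outcome without it
\Delta_i = Y_i(1) - Y_i(0)

% Average treatment effect on the treated (ATT): the expected impact
% among program participants (D_i = 1)
\mathrm{ATT} = \mathbb{E}\bigl[\, Y_i(1) - Y_i(0) \mid D_i = 1 \,\bigr]
```

Only one of the two potential outcomes is ever observed for a given unit, which is the missing-counterfactual problem taken up in section 4 below.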
11 Why conduct an Impact Evaluation?
- Knowledge / Learning
  - Improve the design and effectiveness of the program
- Economic Reasons
  - To make resource allocation decisions: comparing program impacts allows government to reallocate funds from less to more effective programs and thus to increase social welfare
- Social Reasons
  - Increases transparency and accountability
  - Supports public sector reform / innovation
- Political Reasons
  - Credibility / break with bad practices of the past
12 When Is It Time to Make Use of Evaluation? (1)
- When you want to determine the roles of both design and implementation in project, program, or policy outcomes
- Resource and budget allocations are being made across projects, programs, or policies
- A decision is being made whether or not to expand a pilot
- When regular results measurement suggests actual performance diverges sharply from planned performance.
13 When Is It Time to Make Use of Evaluation? (2)
- There is a long period with no evidence of improvement in the problem situation
- Similar projects, programs, or policies are reporting divergent outcomes
- There are conflicting political pressures on decision-making in ministries or parliament
- Public outcry over a governance issue
- To identify issues around an emerging problem, e.g. children dropping out of school
14 Summary: An impact evaluation informs...
15 3. Evaluation vs. Monitoring
16 Definitions
(Results-Based) Monitoring: a continuous process of collecting and analyzing information to compare how well a project, program, or policy is performing against expected results.
(Results-Based) Evaluation: an assessment of a planned, ongoing, or completed intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intent is to incorporate lessons learned into the decision-making process.
17 Monitoring and Evaluation
18 Evaluation Addresses
19 Six Types of Evaluation
20 Complementary Roles of Results-Based Monitoring and Evaluation
21 Summary (1)
- Results-based monitoring and evaluation are generally viewed as distinct but complementary functions
- Each provides a different type of performance information
- Both are needed to be able to better manage policy, program, and project implementation
22 Summary (2)
- Implementing results-based monitoring and evaluation systems can strengthen WB and public sector management
- Implementing results-based monitoring and evaluation systems requires commitment by leadership and staff alike
23 4. Necessary ingredients of a good Impact Evaluation: A good counterfactual and robustness checks
24 What we need for an IE
- The difference in outcomes with the program versus without the program, for the same unit of analysis (e.g. individual, community, etc.)
- Problem: individuals only have one existence; we never observe the same unit both with and without the program
- Hence, we have a missing counterfactual: a problem of missing data
25 Thinking about the counterfactual
- Why not compare individuals before and after (the reflexive comparison)? (see the note after this list)
- The rest of the world moves on, and you are not sure what was caused by the program and what by the rest of the world
- We need a control/comparison group that will allow us to attribute any change in the treatment group to the program (causality)
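As a hedged note (an addition in the same potential-outcomes notation used above, not from the original slides), the before/after change mixes the program impact with whatever would have happened anyway:

```latex
% Reflexive (before/after) comparison for a participant i
Y_{i,\mathrm{after}} - Y_{i,\mathrm{before}}
  = \Delta_i
  + \underbrace{\bigl( Y_{i,\mathrm{after}}(0) - Y_{i,\mathrm{before}}(0) \bigr)}_{\text{change that would have occurred without the program}}
```

A comparison group is what allows that second term, the counterfactual trend, to be estimated and subtracted out.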
26 We observe an outcome indicator... [Figure: outcome indicator over time, intervention marked]
27 ...and its value rises after the program [Figure: the indicator rising after the intervention]
28 Having the ideal counterfactual... [Figure: observed outcome alongside its counterfactual, intervention marked]
29 ...allows us to estimate the true impact [Figure]
30 Comparison Group Issues
- Two central problems:
  - Programs are targeted: program areas will differ in observable and unobservable ways precisely because the program intended this
  - Individual participation is (usually) voluntary: participants will differ from non-participants in observable and unobservable ways (selection on observables such as age and education, and on unobservables such as ability, motivation, and drive)
- Hence, a comparison of participants and an arbitrary group of non-participants can lead to heavily biased results (see the decomposition after this list)
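The size of that bias can be made explicit with a standard decomposition (again an illustrative addition in potential-outcomes notation, not from the slides):

```latex
% Naive comparison of participants (D_i = 1) with non-participants (D_i = 0)
\mathbb{E}[Y_i \mid D_i = 1] - \mathbb{E}[Y_i \mid D_i = 0]
  = \mathrm{ATT}
  + \underbrace{\mathbb{E}[Y_i(0) \mid D_i = 1] - \mathbb{E}[Y_i(0) \mid D_i = 0]}_{\text{selection bias}}
```

Targeting and voluntary participation are precisely what make the selection-bias term non-zero; the methods on the next slide are different strategies for eliminating or controlling it.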
31 Impact Evaluation methods
- Differ in how they construct the counterfactual
- Experimental methods / Randomization
- Quasi-experimental methods
  - Propensity score matching (PSM)
  - Regression discontinuity design (RDD)
- Other econometric methods
  - Before and After (reflexive comparisons)
  - Difference in Differences (Diff-in-Diff; see the sketch after this list)
  - Instrumental variables
  - Encouragement design
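As one concrete illustration of the quasi-experimental toolkit listed above, here is a minimal difference-in-differences sketch in Python on simulated data; all variable names, coefficients, and data are hypothetical assumptions, not taken from the presentation.

```python
import numpy as np
import pandas as pd

# Simulated two-period data (hypothetical): 'treated' marks program areas,
# 'post' marks the period after the intervention, 'y' is the outcome indicator.
rng = np.random.default_rng(0)
n = 1000
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)

# Assumed data-generating process: a true program impact of 2.0, a common
# time trend of 1.0, and a fixed level difference of 0.5 between treated
# and comparison areas (e.g. because the program was targeted).
y = 0.5 * treated + 1.0 * post + 2.0 * treated * post + rng.normal(0, 1, n)
df = pd.DataFrame({"y": y, "treated": treated, "post": post})

# Difference-in-differences: the change over time in treated areas minus the
# change in comparison areas removes both the common trend and the fixed
# level difference, leaving the program impact (about 2.0 here).
means = df.groupby(["treated", "post"])["y"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"Difference-in-differences estimate of the impact: {did:.2f}")
```

The same estimate can be obtained by regressing y on treated, post, and their interaction; the interaction coefficient is the difference-in-differences impact estimate.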
32 Thank you