Title: Workshop on Using Contribution Analysis to Address Cause-Effect Questions
1 Workshop on Using Contribution Analysis to Address Cause-Effect Questions
- Danish Evaluation Society Conference
- Kolding, September 2008
- John Mayne, Advisor on Public Sector Performance
- john.mayne@rogers.com
2 Workshop Objectives
- Understand the need to address attribution
- Understand how contribution analysis can help
- Have enough information to undertake a
contribution analysis on your own
3 Outline
- Dealing with attribution
- Contribution analysis
- Working a case
- Levels of contribution analysis
- Conclusions
4 The challenge
- Attribution for outcomes is always a challenge
- Strong evaluations (such as RCTs) are not always available or possible
- A credible performance story needs to address attribution
- Sensible accountability needs to address attribution
- Complexity significantly complicates the issue
- What can be done?
5 The idea
- Based on the theory of change of the program,
- Buttressed by evidence validating the theory of change,
- Reinforced by examination of other influencing factors,
- Contribution analysis builds a reasonably credible case about the difference the program is making
6 The typical context
- A program has been funded to achieve intended results
- The results have occurred, perhaps more or less
- It is recognized that several factors likely caused the results
- We need to know what the program's role was in this
7 Two measurement problems
- Measuring outcomes
- Linking outcomes to actions (activities and outputs), i.e. attribution
- Are we making a difference with our actions?
8 Attribution
- Outcomes are not controlled; there are always other factors at play
- Conclusive causal links don't exist
- We are trying to understand better the influence we are having on intended outcomes
- We need to understand the theory of the program to establish plausible association
- Something like contribution analysis can help
9 The need to say something
- Many evaluations and most public reporting are silent on attribution
- Credibility is greatly weakened as a result
- In evaluations, in performance reporting and in accountability, something should be said about attribution
10 Proving Causality
- The gold standard debate (RCTs et al.)
- Intense debate underway, especially in development impact evaluation
- Some challenges to RCTs (e.g. Scriven)
- It does appear that RCTs have limited applicability
- Then what do we do?
11 Proving Causality
- AEA and EES: many methods are capable of demonstrating scientific rigour
- Methodological appropriateness for the given evaluation questions
- Causal analysis: the auto mechanic, air crashes, forensic work, doctors; Scriven's Modus Operandi approach
12 Theory-based evaluation
- Reconstructing the theory of the program
- Assessing/testing the credibility of the micro-steps in the theory (links in the results chain)
- Developing and confirming the results achieved by the program
13 Contribution analysis: the theory
- There is a postulated theory of change
- The activities of the program were implemented
- The theory of change is supported by evidence
- Other influencing factors have been assessed and accounted for
- Therefore:
- The program very likely made a contribution
14 Steps in Contribution Analysis
- 1. Set out the attribution problem to be addressed
- 2. Develop the postulated theory of change
- 3. Gather the existing evidence on the ToC
- 4. Assemble and assess the contribution story
- 5. Seek out additional evidence
- 6. Revise and strengthen the contribution story
- 7. Develop the complex contribution story
15 1. Set out the attribution problem
- Acknowledge the need to address attribution
- Scope the attribution problem
- What is really being asked?
- What level of confidence is needed?
- Explore the contribution expected
- What are the other influencing factors?
- How plausible is a contribution?
16 Cause-Effect Questions
- Traditional attribution questions
- Has the program caused the outcome?
- How much of the outcome is caused by the program?
- Contribution questions
- Has the program made a difference?
- How much of a difference?
17 Cause-Effect Questions
- Management questions
- Is it reasonable to conclude that the program made a difference?
- What conditions are needed to make this type of program succeed?
- Why has the program failed?
18 Building an evaluation office contribution story: Step 1
- Evaluation aim is to make a difference (an outcome)
- e.g., improvements in management and reporting, more cost-effective public service, enhanced accountability, etc.
- Evaluation products (outputs)
- Evaluations and evaluation reports
- Advice and assistance
19 2. Develop the ToC and Risks to It
- Build the postulated results chain and ToC
- Identify roles played by other influencing
factors - Identify the risks to the assumptions
- Determine how contested the ToC is
20 A results chain
- Activities (how the program carries out its work). Examples: negotiating, consulting, inspecting, drafting legislation
- Outputs (goods and services produced by the program). Examples: checks delivered, advice given, people processed, information provided, reports produced
- Immediate outcomes (the first-level effects of the outputs). Examples: actions taken by the recipients, or behaviour changes
- Intermediate outcomes (the benefits and changes resulting from the outputs). Examples: satisfied users, jobs found, equitable treatment, illegal entries stopped, better decisions made
- End outcomes (the final or long-term consequences). Examples: environment improved, stronger economy, safer streets, energy saved
- External factors also influence results along the chain
21 Results chain links
- The results chain above, annotated at the link from outputs to immediate outcomes with the question: Why will these immediate outcomes come about?
22 Theories of change
- A results chain with embedded assumptions and risks identified (see the sketch below)
- An explanation of why the results chain is expected to work: what has to happen
- Assumptions: target is reached, message is heard, message is convincing, no other major influences at work
- Risks: target not reached, poor message, peer pressure very strong
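A minimal sketch of this idea in Python (the class and field names are illustrative, not from the workshop): each link in the results chain carries the assumptions that must hold and the risks that could break it.

    # Minimal sketch: a results chain whose links record assumptions and risks
    from dataclasses import dataclass, field

    @dataclass
    class Link:
        from_stage: str                               # e.g. "outputs"
        to_stage: str                                 # e.g. "immediate outcomes"
        assumptions: list[str] = field(default_factory=list)
        risks: list[str] = field(default_factory=list)

    results_chain = ["activities", "outputs", "immediate outcomes",
                     "intermediate outcomes", "end outcomes"]

    links = [
        Link("outputs", "immediate outcomes",
             assumptions=["target is reached", "message is heard",
                          "message is convincing"],
             risks=["target not reached", "poor message"]),
        Link("immediate outcomes", "intermediate outcomes",
             assumptions=["no other major influences at work"],
             risks=["peer pressure very strong"]),
    ]

    for link in links:
        print(f"{link.from_stage} -> {link.to_stage}: "
              f"{len(link.assumptions)} assumptions, {len(link.risks)} risks")

Writing the theory of change down this explicitly makes it easier, in later steps, to ask which assumptions have evidence behind them and which do not.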
25 Theory of Change for an Evaluation Office: Step 2
Results chain (our contribution story line):
- Outputs: evaluation studies (with participation), evaluation reports (findings and conclusions, recommendations), advice
- Immediate outcomes: enhanced value of evaluative thinking, better-informed management, acceptance of recommendations and advice
- Intermediate outcomes: implementation of recommendations and advice, better-designed programs, better data for evaluations (changes not planned anyway)
- Final outcomes: more effective programs (informed decision-making, productive operations, cost-effective programs), better benefits to citizens
- Assumptions along the chain: recommendations work; managers and the organisation take initiatives
- Other influencing factors: better management practices
26 3. Gather existing evidence
- Assess the logical robustness of the ToC
- Gather available evidence on
- Results
- Assumptions
- Other influencing factors
27 4. Assemble and assess the contribution story
- Set out the contribution story
- Assess its strengths and weaknesses
- Refine the ToC
28 Theory of change analysis
- Need to identify which links in the results chain have the weakest evidence
- Some may be supported by prior research
- Some may be well accepted
- But some may be a large leap of faith, or the subject of debate
- With limited resources, these contested links are where effort should be focused (a small sketch of this triage follows)
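An illustrative sketch of that triage (the link names and ratings below are hypothetical, not from the slides): rate the evidence behind each link and sort so the contested, leap-of-faith links surface first.

    # Minimal sketch: rank results-chain links by evidence strength so scarce
    # evaluation effort goes to the weakest, most contested links first.
    evidence_strength = {
        "activities -> outputs": 3,                 # well accepted
        "outputs -> immediate outcomes": 2,         # supported by prior research
        "immediate -> intermediate outcomes": 1,    # contested / leap of faith
        "intermediate -> end outcomes": 1,
    }

    for link, strength in sorted(evidence_strength.items(), key=lambda kv: kv[1]):
        flag = "focus effort here" if strength <= 1 else "adequate for now"
        print(f"{link}: strength {strength} -> {flag}")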
29 5. Seek out additional evidence
- Determine what is needed
- Gather new evidence
30 Strengthening Techniques
- Refine the results chain and/or gather additional results data
- Survey knowledgeable others involved
- Track program variations and their impacts (time, location, strength)
- Undertake case studies
- Identify relevant research or evaluation
- Use multiple lines of evidence
- Do a focused mini-evaluation
31 The Agr Research Org's evaluation
- CA done:
- Theory of change developed
- Other influencing factors recognized
- The theory of change was revised based on lessons learned
- CA that could have been done:
- A more structured CA approach
- More analysis of other factors
- More attention to the risks faced
32 6. Revise and strengthen the contribution story
- Build the more credible contribution story
- Reassess its strengths and weaknesses
- Revisit step 5
33 A CA Case Study
- Patton (2008). Advocacy Impact Evaluation. JMDE, 5(9), 1-10.
- A collaboration of agencies spent over 2M on a campaign to influence a Supreme Court decision
- Evaluation issue: Did it work?
- Conclusion: the campaign contributed significantly to the Court's decision
34 Features
- It was a stealth campaign
- The evaluation used Scriven's General Elimination Method, or the modus operandi approach
- Undertook considerable document review and interviews: an in-depth case study which served as the evidence for the evaluation
35 Cause-effect
- Attribution vs contribution
- Attribution concepts don't work well in complex settings
- Contribution analysis identifies likely influences
- The case examined two alternative possible influences
36 Levels of contribution analysis
- Minimalist contribution analysis
- Contribution analysis of direct influence
- Contribution analysis of indirect influence
37 Minimalist CA
- Develop the theory of change
- Confirm that the expected outputs were delivered
- then,
- Based on the strength of the theory of change,
conclude the program made a contribution
38 Other influencing factors
- Literature and knowledgeable others can identify the possible other factors
- Reflecting on the theory of change may provide some insight on their plausibility
- Prior evaluation/research may provide insight
- Relative size compared to the program intervention can be examined
- Knowledgeable others will have views on the relative importance of other factors
39 CA of direct influence
- Minimalist CA, plus:
- Verifying that the expected direct outcomes occurred
- Confirming the assumptions associated with the direct outcomes
- Accounting for other influencing factors
40 CA of indirect influence
- CA of direct influence, plus:
- Verifying that the intermediate and final outcomes occurred
- Confirming the assumptions associated with these indirect outcomes
- Accounting for other influencing factors (the three levels are summarised in the sketch below)
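An illustrative sketch of how the three levels nest (the check names paraphrase the slides; the list structure itself is an assumption): each level is the previous level plus further verification checks.

    # Minimal sketch: levels of contribution analysis as cumulative checklists
    MINIMALIST = [
        "theory of change developed",
        "expected outputs delivered",
    ]
    DIRECT_INFLUENCE = MINIMALIST + [
        "expected direct outcomes occurred",
        "assumptions behind direct outcomes confirmed",
        "other influencing factors accounted for",
    ]
    INDIRECT_INFLUENCE = DIRECT_INFLUENCE + [
        "intermediate and final outcomes occurred",
        "assumptions behind indirect outcomes confirmed",
        "other influencing factors accounted for (indirect)",
    ]

    for name, checks in [("minimalist", MINIMALIST),
                         ("direct influence", DIRECT_INFLUENCE),
                         ("indirect influence", INDIRECT_INFLUENCE)]:
        print(f"{name}: {len(checks)} checks")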
41 A credible contribution statement
- Description of program context and other influencing factors
- A plausible theory of change
- Confirmed program activities, outputs and outcomes
- CA findings: evidence supporting the ToC and assessment of other influencing factors
- Discussion of the quality of evidence
42 When is CA useful?
- Program is not experimental
- Funding is based on a theory of change
- Program has been in place for some time
- No real scope for varying the intervention(s)
43 Contribution analysis
- Builds evidence on:
- Immediate/intermediate outcomes, the behavioural changes
- Links in the results chain
- Other influencing factors at play
- Other explanations for observed outcomes
- Contribution Evaluation