1
Monitoring and evaluation
2
Our Approach
  1. What is monitoring and evaluation? Conceptual
    differences and terminologies.
  2. Approaches to Monitoring and Evaluation.
  3. Establishing an M&E System.
  4. How to do Monitoring and Evaluation.

3
What is monitoring?
  • Day-to-day follow up of activities during
    implementation to measure progress and identify
    deviations
  • Routine follow up to ensure activities are
    proceeding as planned and are on schedule
  • Routine assessment of activities and results
  • Answers the question: What are we doing?

4
Why monitor activities?
  • Tracks inputs and outputs and compares them to
    the plan (see the sketch after this list)
  • Identifies and addresses problems
  • Ensures effective use of resources
  • Ensures quality and learning to improve
    activities and services
  • Strengthens accountability
  • Serves as a program management tool
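
To make "compares them to the plan" concrete, the following is a minimal monitoring sketch in Python; the activity names, targets, and tolerance are hypothetical illustrations, not figures from the presentation.

```python
# Minimal monitoring sketch: compare actual outputs to planned targets
# and flag deviations beyond a tolerance. All figures are hypothetical.

PLAN = {  # activity -> planned output for the reporting period
    "training_sessions": 20,
    "households_visited": 150,
    "kits_distributed": 300,
}

ACTUAL = {  # outputs actually recorded this period
    "training_sessions": 14,
    "households_visited": 148,
    "kits_distributed": 210,
}

TOLERANCE = 0.10  # flag anything more than 10% behind plan

for activity, planned in PLAN.items():
    actual = ACTUAL.get(activity, 0)
    progress = actual / planned
    status = "ON TRACK" if progress >= 1 - TOLERANCE else "BEHIND"
    print(f"{activity:20s} {actual:4d}/{planned:<4d} ({progress:5.1%}) {status}")
```

Run routinely, a check like this surfaces the deviations that monitoring is meant to catch while implementation can still be corrected.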

5
What is evaluation?
  • It is a time-bound exercise that attempts to
    assess systematically and objectively the
    relevance, performance and success of ongoing and
    completed programmes and projects.
  • Designed specifically with the intention of
    attributing changes to the intervention itself
  • Answers the question: What have we achieved, and
    what impact have we made?
  • Evaluation commonly aims to determine the
    relevance, efficiency, effectiveness, impact and
    sustainability of a programme or project.

6
  • Relevance: The degree to which the outputs,
    outcomes or goals of a programme remain valid and
    pertinent as originally planned or as
    subsequently modified owing to changing
    circumstances within the immediate context and
    external environment of that programme.
  • Efficiency: A measure of how economically or
    optimally inputs (financial, human, technical and
    material resources) are used to produce outputs.
  • Effectiveness: A measure of the extent to which a
    programme achieves its planned results (outputs,
    outcomes and goals).

7
  • Impact: Positive and negative long-term effects
    on identifiable population groups produced by a
    development intervention, directly or indirectly,
    intended or unintended. These effects can be
    economic, socio-cultural, institutional,
    environmental, technological or of other types.
  • Sustainability: Durability of programme results
    after the termination of the technical
    cooperation channelled through the programme.
  • Static sustainability: The continuous flow of
    the same benefits, set in motion by the completed
    programme, to the same target groups.
  • Dynamic sustainability: The use or adaptation of
    programme results to a different context or
    changing environment by the original target
    groups and/or other groups.

8
Why evaluate activities?
  • Determines program effectiveness
  • Shows impact
  • Strengthens financial responsibility and
    accountability
  • Promotes a learning culture focused on service
    improvement
  • Promotes replication of successful interventions.

9
Types of Evaluation
  • Ex-ante Evaluation: An evaluation that is
    performed before implementation of a development
    intervention. Related term: appraisal.
  • Ex-post Evaluation: A type of summative
    evaluation of an intervention, usually conducted
    after it has been completed.
  • External Evaluation: An evaluation conducted by
    individuals or entities free of control by those
    responsible for the design and implementation of
    the development intervention to be evaluated
    (synonym: independent evaluation).
  • Internal Evaluation: Evaluation of a development
    intervention conducted by a unit and/or
    individual(s) reporting to the donor, partner, or
    implementing organization for the intervention.

10
  • Formative Evaluation: A type of process
    evaluation undertaken during programme
    implementation to furnish information that will
    guide programme improvement.
  • Impact Evaluation: A type of outcome evaluation
    that focuses on the broad, longer-term impact or
    results of a programme.
  • Joint Evaluation: An evaluation conducted with
    other partners, bilateral donors or international
    development banks.
  • Meta-evaluation: A type of evaluation that
    aggregates findings from a series of evaluations.

11
  • Process Evaluation: A type of evaluation that
    examines the extent to which a programme is
    operating as intended by assessing ongoing
    programme operations. A process evaluation helps
    programme managers identify what changes are
    needed in design, strategies and operations to
    improve performance.
  • Qualitative Evaluation: A type of evaluation that
    is primarily descriptive and interpretative, and
    may or may not lend itself to quantification.
  • Quantitative Evaluation: A type of evaluation
    involving the use of numerical measurement and
    data analysis based on statistical methods.

12
  • Summative Evaluation: A type of outcome and
    impact evaluation that assesses the overall
    effectiveness of a programme.
  • Thematic Evaluation: Evaluation of selected
    aspects or cross-cutting issues in different
    types of interventions.

13
Terms often confused with evaluation
  • Audit
  • Appraisal
  • Inspection

14
(No Transcript)
15
Approaches to evaluation (Approach / Major Focus /
Typical Question / Likely Methodology):
  • Goal Based (Strategic Approach)
    Major Focus: Assessing achievement of goals and
      objectives.
    Typical Question: Were the goals achieved?
      Efficiently? Were they the right goals?
    Likely Methodology: Comparing baseline and
      progress data; finding ways to measure
      indicators.
  • Decision Making (System Approach)
    Major Focus: Providing information.
    Typical Question: Is the project effective?
      Should it continue? How might it be modified?
    Likely Methodology: Assessing the range of
      options related to the project context, inputs,
      process, and product; establishing some kind of
      decision-making consensus.
  • Goal Free (Inductive Approach)
    Major Focus: Assessing the full range of project
      impacts, intended and unintended.
    Typical Question: What are all the outcomes?
      What value do they have?
    Likely Methodology: Independent determination of
      needs and standards to judge project worth;
      qualitative and quantitative techniques to
      uncover any possible results.
  • Expert
    Major Focus: Use of expertise.
    Typical Question: How does an outside
      professional rate this project?
    Likely Methodology: Critical review based on
      experience, informal surveying, and subjective
      insights.
  • Participatory Approach
    Major Focus: Stakeholder satisfaction.
    Typical Question: How do the stakeholders rate
      the project?
    Likely Methodology: Participatory workshops.
16
M&E Tools
  • Evaluating programme strategy and direction:
    Log-frames, Stakeholder Analysis.
  • Evaluating programme management: Horizontal
    Evaluation, Appreciative Inquiry.
  • Evaluating programme outputs: evaluating academic
    articles and research reports, evaluating
    websites, After Action Reviews.
  • Evaluating outcomes and impacts: Outcome Mapping,
    Most Significant Change, Episode Studies.

17
M&E Tools
  • The following is a non-exhaustive list of M&E
    tools:
  • Performance indicators
  • Formal surveys
  • Rapid appraisal methods
  • Participatory methods
  • Cost-benefit and cost-effectiveness analysis
  • Impact evaluation

18
1. Performance indicators
  • Performance indicators are measures of inputs,
    processes, outputs, outcomes, and impacts for
    development projects, programs, or strategies.
  • Uses
  • Setting performance targets and assessing
    progress toward achieving them.
  • Identifying problems via an early warning system
    to allow corrective action to be taken.
  • Problems
  • Poorly defined indicators are not good measures
    of success.
  • Tendency to define too many indicators, or
    indicators without accessible data sources.
  • Often a trade-off between picking the optimal or
    desired indicators and having to accept the
    indicators that can be measured using existing
    data.

19
How To Make Indicators
  1. Identify the problem situation you are trying to
    address.
  2. Develop a vision for how you would like the
    problem area to look. This will give you
    impact indicators.
  3. Develop a process vision for how you want things
    to be achieved. This will give you process
    indicators.
  4. Develop indicators for effectiveness.
  5. Develop indicators for efficiency (a sketch of
    indicators built this way follows).
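
As one way to apply steps 2-5, each indicator can carry its type, baseline, and target so that progress is computed uniformly across indicators. This is a minimal sketch in Python; the indicator names and values are hypothetical, not taken from the presentation.

```python
# Sketch of a small indicator registry. Indicator names, kinds and
# values are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    kind: str       # "impact", "process", "effectiveness" or "efficiency"
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        span = self.target - self.baseline
        return (self.current - self.baseline) / span if span else 0.0

indicators = [
    Indicator("literacy rate (%)", "impact", baseline=40, target=60, current=47),
    Indicator("sessions delivered", "process", baseline=0, target=120, current=90),
    Indicator("cost per trainee ($)", "efficiency", baseline=50, target=35, current=42),
]

for ind in indicators:
    print(f"[{ind.kind:11s}] {ind.name:22s} progress: {ind.progress():5.1%}")
```

Because progress is measured as distance covered from baseline to target, the same formula works for indicators that should rise (literacy rate) and ones that should fall (cost per trainee).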

20
2- Formal Surveys
  • Formal surveys can be used to collect
    standardized information from a carefully
    selected sample of people or households.
  • Uses
  • Providing baseline data against which the
    performance of the strategy, program, or project
    can be compared.
  • Comparing different groups at a given point in
    time.
  • Comparing changes over time in the same group.
  • Comparing actual conditions with the targets
    established in a program or project design (a
    baseline-versus-endline sketch follows).
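
A minimal sketch of the "comparing changes over time in the same group" use, assuming a small set of hypothetical survey responses:

```python
# Sketch: compare baseline and endline survey means for the same group
# of respondents. All values are hypothetical.

baseline = [2.1, 3.4, 2.8, 3.0, 2.5, 3.2, 2.9, 3.1]  # e.g. household income
endline  = [2.9, 3.8, 3.1, 3.6, 2.7, 3.9, 3.3, 3.5]  # same households later

mean_base = sum(baseline) / len(baseline)
mean_end = sum(endline) / len(endline)
changes = [e - b for b, e in zip(baseline, endline)]
mean_change = sum(changes) / len(changes)

print(f"baseline mean: {mean_base:.2f}")
print(f"endline mean:  {mean_end:.2f}")
print(f"mean change:   {mean_change:+.2f} "
      f"({mean_change / mean_base:+.1%} relative to baseline)")
```

In a real survey this comparison would also involve sampling weights and a significance test, which are omitted here for brevity.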

21
2- Formal Surveys
  • Problems
  • With the exception of CWIQ, results are often not
    available for a long period of time.
  • The processing and analysis of data can be a
    major bottleneck for the larger surveys even
    where computers are available.
  • LSMS household surveys are expensive and
    time-consuming.
  • Many kinds of information are difficult to obtain
    through formal interviews.

22
Different types of survey
  1. Multi-Topic Household Survey (also known as the
    Living Standards Measurement Survey, LSMS).
  2. Core Welfare Indicators Questionnaire (CWIQ).
  3. Client Satisfaction (or Service Delivery) Survey.
  4. Citizen Report Card.

23
3- Rapid appraisal methods
  • Rapid appraisal methods are quick, low-cost ways
    to gather the views and feedback of beneficiaries
    and other stakeholders, in order to respond to
    decision-makers' needs for information.
  • Uses
  • Providing rapid information for management
    decision-making, especially at the project or
    program level.
  • Providing qualitative understanding of complex
    socioeconomic changes, highly interactive social
    situations, or people's values, motivations, and
    reactions.
  • Providing context and interpretation for
    quantitative data collected by more formal
    methods.

24
3- Rapid appraisal methods
  • Problems
  • Findings usually relate to specific communities
    or localities; it is thus difficult to generalize
    from the findings.
  • Less valid, reliable, and credible than formal
    surveys.

25
3- Rapid appraisal methods
  1. Key informant interview
  2. Community group interview
  3. Focus group discussion
  4. Direct Observation
  5. Mini surveys

26
4- Participatory methods
  • Participatory methods provide active involvement
    in decision-making for those with a stake in a
    project, program, or strategy and generate a
    sense of ownership in the M&E results and
    recommendations.
  • Uses
  • Learning about local conditions and local
    people's perspectives and priorities to design
    more responsive and sustainable interventions.
  • Evaluating a project, program, or policy.
  • Providing knowledge and skills to empower poor
    people.

27
4- Participatory methods
  • Problems
  • Sometimes regarded as less objective.
  • Time-consuming if key stakeholders are involved
    in a meaningful way.
  • Potential for domination and misuse by some
    stakeholders to further their own interests.

28
4- Participatory methods
  1. Participatory rural appraisal
  2. Participatory monitoring and evaluation

29
5- Cost-benefit and cost-effectiveness analysis
  • Cost-benefit and cost-effectiveness analysis are
    tools for assessing whether or not the costs of
    an activity can be justified by the outcomes and
    impacts. Cost-benefit analysis measures both
    inputs and outputs in monetary terms.
    Cost-effectiveness analysis estimates inputs in
    monetary terms and outcomes in non-monetary
    quantitative terms.
  • Uses
  • Informing decisions about the most efficient
    allocation of resources.
  • Identifying projects that offer the highest rate
    of return on investment (a worked discounting
    sketch follows).
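
The arithmetic behind these uses can be sketched with discounted cash flows; the discount rate and the yearly cost and benefit figures below are hypothetical assumptions, not from the presentation.

```python
# Sketch of cost-benefit arithmetic: discount yearly costs and benefits
# to present value, then compare. All cash flows are hypothetical.

RATE = 0.08                       # assumed discount rate
costs    = [1000, 200, 200, 200]  # costs in years 0..3 (monetary units)
benefits = [0, 600, 700, 800]     # benefits in years 0..3

def npv(flows, rate):
    """Present value of a stream of yearly flows starting at year 0."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

pv_costs = npv(costs, RATE)
pv_benefits = npv(benefits, RATE)

print(f"PV of costs:    {pv_costs:8.1f}")
print(f"PV of benefits: {pv_benefits:8.1f}")
print(f"Net benefit:    {pv_benefits - pv_costs:8.1f}")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")

# Cost-effectiveness analysis keeps outcomes in non-monetary units,
# e.g. cost per child vaccinated (hypothetical count):
children_vaccinated = 450
print(f"Cost per child vaccinated: {pv_costs / children_vaccinated:.2f}")
```

The last two lines show the difference the slide describes: cost-benefit expresses both sides in money, while cost-effectiveness divides monetary cost by a quantitative but non-monetary outcome.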

30
5- Cost-benefit and cost-effectiveness analysis
  • Problems
  • Fairly technical, requiring adequate financial
    and human resources.
  • Requisite data for cost-benefit calculations may
    not be available, and projected results may be
    highly dependent on assumptions made.
  • Results must be interpreted with care,
    particularly in projects where benefits are
    difficult to quantify.

31
6- Impact Evaluation
  • Impact evaluation is the systematic
    identification of the effects (positive or
    negative, intended or not) on individual
    households, institutions, and the environment
    caused by a given development activity such as a
    program or project.
  • Uses
  • Measuring outcomes and impacts of an activity and
    distinguishing these from the influence of other,
    external factors (see the sketch below).
  • Helping to clarify whether costs for an activity
    are justified.
  • Informing decisions on whether to expand, modify
    or eliminate projects, programs or policies.
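
One standard way to approximate the counterfactual is a difference-in-differences comparison against a similar non-participant group; this minimal sketch uses hypothetical outcome means, not data from the presentation.

```python
# Difference-in-differences sketch: the change in the comparison group
# approximates what would have happened to participants without the
# intervention (the counterfactual). All outcome means are hypothetical.

treat_before, treat_after = 52.0, 67.0  # participants' mean outcome
comp_before,  comp_after  = 50.0, 58.0  # comparison group's mean outcome

treat_change = treat_after - treat_before  # includes external factors
comp_change = comp_after - comp_before     # external factors only

impact = treat_change - comp_change        # change attributed to the program
print(f"Participants changed by:      {treat_change:+.1f}")
print(f"Comparison group changed by:  {comp_change:+.1f}")
print(f"Estimated program impact:     {impact:+.1f}")
```

The subtraction removes the influence of external factors that affected both groups, which is exactly the attribution problem the "Problems" slide below flags as the difficulty of identifying an appropriate counterfactual.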

32
  • Problems
  • Some approaches are very expensive and
    time-consuming.
  • Reduced utility when decision-makers need
    information quickly.
  • Difficulties in identifying an appropriate
    counterfactual.