Title: Day 1 Overview of Evaluation
1. Day 1: Overview of Evaluation
- Richard Krueger
- Professor Emeritus and Senior Fellow
- University of Minnesota
2. Evaluation is used for
- Accountability
- Program Improvement
- Leverage Resources
- Personal Improvement
- Needs Assessment
- Program development / program planning
- Decision making
- Accreditation
- Public awareness / involvement
- Organizational learning
3. Recent Influences on Evaluation
- Shrinking budgets -- limited resources
- Advent of policy analysts
- Shifts in population, values, concerns
- Ideologically based policies / government
4. Definitions of Evaluation
- Evaluation is the systematic determination of the merit or worth of an object (Michael Scriven, 1967)
- The systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming (Michael Patton, 1997)
5. The Personal Factor
- Patton argues that the Personal Factor is important:
- "Where the personal factor emerges, where some individual takes direct, personal responsibility for getting the right information to the right people, evaluations have an impact . . ."
- "Use is not simply determined by some configuration of abstract factors; it is determined in large part by real, live, caring human beings."
- (Patton, 1997, pp. 44-47)
6. Other definitions of Evaluation
- Carol Weiss:
- "The systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy."
- Fitzpatrick, Sanders, and Worthen (p. 5):
- "The identification, clarification, and application of defensible criteria to determine an evaluation object's value (worth or merit) in relation to those criteria."
7. The Role of the Evaluator
- 1. Identifies and publicizes standards or the basis of decisions about value, worth, quality, use, effectiveness, etc.
- 2. Decides if those standards are relative or absolute
- 3. Collects relevant information
- 4. Applies standards to determine value, quality, utility, effectiveness, or significance
- 5. Attends to unintended consequences and raises stakeholder awareness of these consequences
- (Fitzpatrick et al.)
8. Criteria and Standards
- Evaluators sometimes use Criteria and Standards
- Criteria are broad categories
- Standards are specific and measurable levels
within criteria
9. Example of Criteria and Standards
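The original example on this slide is not in the transcript. As a hedged illustration only (the criteria, wording, and cutoffs below are invented, not from the source), a broad criterion can be paired with a specific, measurable standard like this:

```python
# Hypothetical illustration: criteria (broad categories) paired with
# standards (specific, measurable levels within each criterion).
# The criteria names and cutoffs are invented for teaching purposes.
example_criteria = {
    "Participant satisfaction": "At least 80% rate the program 4 or 5 on a 5-point scale",
    "Reach": "At least 200 residents attend a workshop during the program year",
    "Knowledge gain": "Average post-test score exceeds the pre-test score by 15 percentage points",
}

for criterion, standard in example_criteria.items():
    print(f"Criterion: {criterion}\n  Standard: {standard}")
```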
10. Steps in a General Evaluation Process
- Identify the program, product, or process
- Describe boundaries
- Identify the timeline
- Identify stakeholders
- Identify criteria and standards
- Identify weight of criteria
- Identify methods for obtaining data
- Develop methods for gathering data
- Pilot test and revise
- Gather data
- Interpret, analyze and write
- Report findings
11. Comparing Evaluation and Research
- Important distinctions to consider
12. (No transcript available for this slide)
13. Evaluation is judged by
- Utility: the utility standards are intended to ensure that an evaluation will serve the information needs of intended users
- Feasibility: the feasibility standards address the need to conserve resources, materials, personnel, and time to properly answer the evaluation questions
- Propriety: the propriety standards are intended to ensure that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation as well as those affected by its results
- Accuracy: the accuracy standards address the need to yield sound evaluative information and to make logical, data-driven judgments
14. Internal versus External
- INTERNAL evaluation
- Conducted by program employees
- Plus side: more knowledge about the program
- Minus side: potential bias and influence
- EXTERNAL evaluation
- Conducted by outsiders, often for a fee
- Plus side: less visible bias
- Minus side: outsiders have to gain entrée and have less first-hand knowledge of the program
15. Classic distinctions in evaluation
- FORMATIVE evaluation (Scriven 1967)
- Evaluation for program improvement
- A developmental process
- Often done for program developers and implementers
- Not always the same as process evaluation
16. Classic distinctions in evaluation
- SUMMATIVE evaluation
- Typically done at the end of a project
- Often done for other users or for accountability purposes
- Not always the same as outcome or product evaluation
17. Classic distinctions in evaluation
- PROCESS evaluation
- Done during and at end of project
- Seeks information about how the program worked
- How was program implemented and operated?
- What decisions were made in developing the
program?
18. Classic distinctions in evaluation
- IMPACT / OUTCOME evaluation
- Typically done at the end of the project
- Documents the changes that occur to individuals, organizations, and the community
19. Classic distinctions in evaluation
- HOW TO REMEMBER
- F: The cook tastes the soup
- S: The customer tastes the soup
- P: How is the soup made?
- I: What happened as a result of tasting the soup?
20. Evaluation Design Considerations
- Design strategies used by evaluators
21. 1. Baseline Strategy
- Set a baseline at a point in time and measure
again in the future. Find the difference between
the two time periods. Present the case for how
and why the program is responsible for the
difference.
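A minimal sketch of the arithmetic behind this strategy (the indicator names and values below are hypothetical): record each indicator at the baseline point and again at follow-up, then report the difference that the narrative case attributes to the program.

```python
# Sketch of the baseline strategy: measure at two points in time, report the difference.
# Indicator names and values are hypothetical.
baseline  = {"Volunteers recruited": 40, "Partner organizations": 6}
follow_up = {"Volunteers recruited": 95, "Partner organizations": 11}

for indicator in baseline:
    change = follow_up[indicator] - baseline[indicator]
    print(f"{indicator}: {baseline[indicator]} -> {follow_up[indicator]} (change: {change:+d})")
# The evaluator then presents the case for how and why the program produced the change.
```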
22. 2. Comparison or Control Group Strategy
- Find a comparison or control group.
- Control group has randomized participants
- Measure before and after the program
- Describe differences between experimental group and control group
- R O X O (experimental group)
- R O O (control group)
- (R = random assignment, O = observation/measurement, X = the program)
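To make the R O X O layout concrete, here is a minimal sketch with invented scores: compute the average pre-to-post change in the randomized experimental group and in the control group, then contrast the two changes. The data and variable names are assumptions for illustration, not from the source.

```python
# Minimal sketch of the pretest-posttest control-group comparison.
# Scores are invented; in practice they come from the before/after measures (the O's).
experimental = {"pre": [52, 48, 61, 55], "post": [68, 63, 72, 66]}  # received the program (X)
control      = {"pre": [50, 47, 60, 56], "post": [54, 49, 62, 58]}  # no program

def mean(values):
    return sum(values) / len(values)

def average_change(group):
    """Average post-score minus average pre-score for one group."""
    return mean(group["post"]) - mean(group["pre"])

exp_change = average_change(experimental)
ctl_change = average_change(control)

print(f"Experimental group change: {exp_change:.1f}")
print(f"Control group change:      {ctl_change:.1f}")
# The gap between the two changes is the estimate of the program's effect.
print(f"Estimated program effect:  {exp_change - ctl_change:.1f}")
```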
23. 3. Reflective Strategy
- Ask participants and others to reflect back to a baseline level
- Use open-ended questions and ask what is different or what has changed. What's changed in the community? What caused the change?
- Use closed-ended questions with a scale. For example, use a 10-point scale (1 = low and 10 = high) and ask participants to rate the community or situation now versus at a point in time in the past (one year ago). If change occurred, ask what caused the change.
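As a sketch of how the closed-ended version might be tabulated (the ratings below are hypothetical), each respondent gives a "now" rating and a retrospective "one year ago" rating on the 10-point scale, and the evaluator summarizes the perceived change before asking the follow-up question about causes.

```python
# Sketch of tabulating retrospective ("then vs. now") ratings on a 10-point scale.
# Ratings are invented; 1 = low, 10 = high.
responses = [
    {"now": 7, "one_year_ago": 4},
    {"now": 6, "one_year_ago": 6},
    {"now": 8, "one_year_ago": 5},
    {"now": 5, "one_year_ago": 7},
]

changes = [r["now"] - r["one_year_ago"] for r in responses]
improved = sum(1 for c in changes if c > 0)

print(f"Average perceived change: {sum(changes) / len(changes):+.1f} points")
print(f"Respondents reporting improvement: {improved} of {len(responses)}")
# Follow up with the open-ended question: for anyone reporting change, what caused it?
```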
24. 4. Descriptive Strategy
- Describe the outcomes in a narrative manner from the perspectives of the customers and the providers.
- Use stories or mini-cases.
25. 5. Assessment Strategy
- Experts review indicators of outcomes
- Community observers monitor progress toward outcomes
26. 6. Logic Model
- The logic model is developed to show the progression of change
- The logic model is based on theory or established protocol
- The evaluator uses available evidence and then cites the theory in the logic model when evidence is unavailable
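A logic model is conventionally laid out as a chain from inputs through activities and outputs to outcomes. The sketch below shows that progression as a simple data structure an evaluator might annotate with the evidence, or the cited theory, supporting each link; the program content itself is hypothetical.

```python
# Sketch of a logic model as an ordered chain of components.
# The program content is invented; the inputs -> activities -> outputs -> outcomes
# progression is the conventional logic-model layout.
logic_model = [
    ("Inputs",     "Staff time, curriculum, grant funding"),
    ("Activities", "Six leadership workshops for community volunteers"),
    ("Outputs",    "Number of workshops held and participants trained"),
    ("Outcomes",   "Participants take on leadership roles in local organizations"),
]

# Evidence (or cited theory, where evidence is unavailable) supporting each link.
link_support = {
    ("Activities", "Outputs"): "Attendance records (available evidence)",
    ("Outputs", "Outcomes"):   "Adult-learning theory (cited where follow-up data are unavailable)",
}

for stage, description in logic_model:
    print(f"{stage}: {description}")
for (start, end), support in link_support.items():
    print(f"Link {start} -> {end}: {support}")
```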
27. Types of Tests
- Norm-referenced tests
- Compare one person to another; scores are based on how one person does in relation to others
- Criterion-referenced tests
- Measure a specific skill that someone is to achieve
- Levels are created by comparison with a performance standard
- Which should be used for driving examinations?
- Which should be used for Board or Bar exams?
- Which should be used to evaluate employees?
- Which should be used in this class?
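To make the distinction concrete before answering those questions, here is a hedged sketch (the scores and the cut score are invented): a norm-referenced interpretation reports where a score falls relative to other test-takers, while a criterion-referenced interpretation checks the same score against a fixed performance standard.

```python
# Sketch contrasting norm-referenced and criterion-referenced interpretations
# of the same test score. Scores and the cut score are invented.
scores = [55, 62, 70, 74, 81, 88, 93]
my_score = 74
cut_score = 80  # criterion: the performance level someone is expected to achieve

# Norm-referenced: compare my_score with how the other test-takers performed.
percentile = 100 * sum(1 for s in scores if s < my_score) / len(scores)
print(f"Norm-referenced: better than about {percentile:.0f}% of test-takers")

# Criterion-referenced: compare my_score with the fixed standard, ignoring others.
print(f"Criterion-referenced: {'pass' if my_score >= cut_score else 'not yet at standard'}")
```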
28. Identify the Evaluation Design
- Consumer Reports
- American Idol
- American Medical Association
- Legislative Auditor
- College Admissions Officer
- Upjohn Pharmaceuticals
- Blandin Foundation Leadership Cohort