Title: Evaluating Organizational Change: How and Why
Evaluating Organizational Change: How and Why?
- Dr Kate Mackenzie Davey
- Organizational Psychology
- Birkbeck, University of London
- k.mackenzie-davey_at_bbk.ac.uk
Aims
- Examine the arguments for evaluating organizational change
- Consider the limitations of evaluation
- Consider different methods for evaluation
- Consider difficulties of evaluation in practice
- Consider costs and benefits in practice
Arguments for evaluating organizational change
- Sound professional practice
- Basis for organizational learning
- Central to the development of evidence-based practice
- Widespread cynicism about fads and fashions
- To influence social or governmental policy
Research and evaluation
- Research focuses on relations between theory and empirical material (data)
- Theory should provide a base for policy decisions
- Evidence can illuminate and inform theory
- Show what does not work as well as what does
- Highlight areas of uncertainty and confusion
- Demonstrate the complexity of cause-effect relations
- Understand, predict, control
Pragmatic evaluation: what matters is what works
- Why it works may be unclear
- Knowledge increases complexity
- Reflexive monitoring of strategy links to organizational learning (OL) and knowledge management (KM)
- Evidence and cultural context
- May be self-fulfilling
- Tendency to seek support for policy
- Extent of sound evidence unclear
Why is sound evaluation so rare?
- Practice shows that evaluation is an extremely complex, difficult and highly political process in organizations.
- Questions may be 'how many?', not 'what works?'
Evaluation models
- Pre-evaluation
- Goal-based (Tyler, 1950)
- Realistic evaluation (Pawson & Tilley, 1997; Sanderson, 2002)
- Experimental
- Constructivist evaluation (Stake, 1975)
- Contingent evaluation (Legge, 1984)
- Action research (Reason & Bradbury, 2001)
- "A study should be technically sound, administratively convenient and politically defensible." (Alec Rodger)
1.1 Pre-evaluation (Goodman & Dean, 1982): the extent to which it is likely that A has an impact on B
- Scenario planning
- Evidence-based practice
- All current evidence thoroughly reviewed and synthesised
- Meta-analysis (see the sketch after this list)
- Systematic literature review
- Formative v summative (Scriven, 1967)
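A minimal sketch of how meta-analysis pools evidence, assuming a simple fixed-effect, inverse-variance-weighted model; the effect sizes and standard errors below are invented purely for illustration (Python):

    # Fixed-effect meta-analysis by inverse-variance weighting.
    # The (effect size, standard error) pairs below are hypothetical.
    import math

    studies = [(0.40, 0.15), (0.25, 0.10), (0.55, 0.20)]

    weights = [1 / se ** 2 for _, se in studies]   # more precise studies get more weight
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"Pooled effect = {pooled:.2f} (SE = {pooled_se:.2f})")

Larger, more precise studies dominate the pooled estimate; a systematic literature review would supply the studies and justify treating them as comparable.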
1.2 Pre-evaluation issues
- Based on theory and past evidence; not clear it will generalise to the specific case
- Formative: influences planning
- Argument that to understand a system you must intervene (Lewin)
2.1 Goal-based evaluation (Tyler, 1950)
- Objectives used to aid planned change
- Can help clarify models
- Goals from benchmarking, theory or pre-evaluation exercises
- Predict changes
- Measure pre and post intervention
- Identify the interventions
- Were objectives achieved?
2.2 Difficulties with goal-based evaluation
- Who sets the goals? How do you identify the intervention?
- Tendency to managerialism (unitarist)
- Failure to accommodate value pluralism
- Over-commitment to scientific paradigm
- What is measured gets done
- No recognition of unanticipated effects
- Focus on single outcome, not process
3.1 Realistic evaluation: conceptual clarity (Pawson & Tilley, 1997)
- Evidence needs to be based on clear ideas about concepts
- Measures may be derived from theory
- Examine definitions used elsewhere
- Consider specific examples
- Ensure all aspects are covered
3.2 Realistic evaluation: towards a theory. What are you looking for?
- Make assumptions and ideas explicit
- What is your theory of cause and effect?
- What are you expecting to change (outcome)?
- How are you hoping to achieve this change (mechanism)?
- What aspects of the context could be important?
3.3 Realistic evaluation: context-mechanism-outcome
- Context: what environmental aspects may affect the outcome?
- What else may influence the outcomes?
- What other effects may there be?
3.4 Realistic evaluation: context-mechanism-outcome
- Mechanism: what will you do to bring about this outcome?
- How will you intervene (if at all)?
- What will you observe?
- How would you expect groups to differ?
- What mechanisms do you expect to operate?
3.5 Realistic evaluation: context-mechanism-outcome
- Outcome: what effect or outcome do you aim for?
- What evidence could show it worked?
- How could you measure it?
4.1 Experimental evaluation
- Explain, predict and control by identifying causal relationships
- Theory of causality makes predictions about variables, e.g. training increases productivity
- Two randomly assigned, matched groups: experimental and control
- One group experiences the intervention, one does not
- Measure the outcome variable pre-test and post-test (longitudinal)
- Analyse for statistically significant differences between the two groups (see the sketch after this list)
- Outcome linked back to modify theory
- The gold standard
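A minimal sketch of the analysis step, assuming a simple pre-test/post-test design in which change scores for the two groups are compared with an independent-samples t-test; all scores are invented for illustration (Python):

    # Compare pre/post change between an experimental and a control group.
    # All scores are hypothetical illustrative data.
    from scipy import stats

    experimental_pre  = [52, 48, 55, 60, 47, 51, 58, 50]
    experimental_post = [58, 55, 61, 66, 50, 57, 64, 53]
    control_pre  = [50, 53, 49, 57, 46, 52, 55, 48]
    control_post = [51, 54, 48, 58, 47, 53, 56, 49]

    # Change score per person: post-test minus pre-test
    exp_change = [post - pre for pre, post in zip(experimental_pre, experimental_post)]
    ctl_change = [post - pre for pre, post in zip(control_pre, control_post)]

    # Independent-samples t-test on the change scores
    t_stat, p_value = stats.ttest_ind(exp_change, ctl_change)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

A small p-value only indicates that the two groups differ by more than chance would suggest; it does not explain the mechanism, which is why realistic evaluation asks about mechanisms as well as outcomes.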
4.2 Difficulties with experimental evaluation in organizations
- Difficult to achieve in organizations
- Unitarist view
- Leaves out unforeseen effects
- Problems with continuous change processes
- Summative not formative
- Generally at best quasi-experimental
5.1 Constructivist or stakeholder evaluation
- Responsive evaluation (Stake, 1975) or fourth-generation evaluation (Guba & Lincoln, 1989)
- Constructivist, interpretivist, hermeneutic methodology
- Based on stakeholder claims, concerns and issues
- Stakeholders: agents, beneficiaries, victims
5.2 Response to an IT implementation (Brown, 1998)
5.3 Constructivist evaluation issues
- No one right answer
- Demonstrates complexity of issues
- Highlights conflicts of interests
- Interesting for academics
- Difficult for practitioners to resolve
6 A contingent approach to evaluation (Legge, 1984)
- Do you want the proposed change programme to be evaluated? (Stakeholders)
- What functions do you wish its evaluation to serve? (Stakeholders)
- What are the alternative approaches to evaluation? (Researcher)
- Which of the alternatives best matches the requirements? (Discussion)
7 Action research (Reason & Bradbury, 2001): identify good practice
- Responds to practical issues in organizations
- Engages in collaborative relationships
- Draws on diverse evidence
- Value orientation: humanist
- Emergent, developmental
Problems with realist models
- Tendency to managerialise
- Over-commitment to scientific paradigm
- Context stripping
- Over-dependence on measures
- Coerciveness: truth as non-negotiable
- Failure to accommodate value pluralism
- Every act of evaluation is a political act; it is not tenable to claim it is value-free
Problems with the constructionist approach
- Evaluation judged by whom, for whom and in whose interests?
- Identify different views, then what?
- Who has power?
- Leaves decisions open
- May lead to ambiguity
Why not evaluate?
- Expensive in time and resources
- De-motivating for individuals
- Contradiction between scientific evaluation models and supportive, organizational learning models
- Individual identification with activity
- Difficulties in objectifying and maintaining commitment
- External, off-the-shelf evaluation may be inappropriate and unhelpful
Why evaluate? (Legge, 1984)
- Overt
- Aids decision making
- Reduce uncertainty
- Learn
- Control
- Covert
- Rally support/opposition
- Postpone a decision
- Evade responsibility
- Fulfil grant requirements
- Surveillance
Conclusion
- Evaluation is very expensive, demanding and complex
- Evaluation is a political process: there is a need for clarity about why you do it
- Good evaluation always carries the risk of exposing failure
- Therefore evaluation is an emotional process
- Evaluation needs to be acceptable to the organization
Conclusion 2
- Plan and decide which model of evaluation is appropriate
- Identify who will carry out the evaluation and for what purpose
- Do not overload the evaluation process: judgment or development?
- Evaluation can give credibility and enhance learning
- Informal evaluation will take place whether you plan it or not