1
Evaluating the effectiveness of innovation
policies
  • Lessons from the evaluation of Latin American
    Technology Development Funds
  • Micheline Goedhuys
  • goedhuys@merit.unu.edu

2
Structure of presentation
  • 1. Introduction to the policy evaluation
    studies
  • policy background
  • features of TDFs
  • evaluation setup: outcomes to be evaluated, data
    sources
  • 2. Evaluation methodologies
  • the evaluation problem
  • addressing selection bias
  • 3. Results from the Latin American TDF
    evaluations: example of results, summary of
    results, concluding remarks

3
1.A. Introduction: Policy background
  • Constraints to performance in Latin America:
  • S&T falling behind in relative terms: a small and
    declining share of world R&D investment, an
    increasing gap with developed countries, falling
    behind other emerging economies
  • Low participation by the productive sector in R&D
    investment: lack of a skilled workforce with
    technical knowledge, macro volatility, financial
    constraints, weak IPR, low quality of research
    institutes, lack of mobilized government
    resources, a rentier mentality

4
1.A. Introduction: Policy background
  • Policy response: a shift in policy
  • From a focus on the promotion of scientific
    research activities in public research
    institutes, universities and state-owned
    enterprises (SOEs)
  • To (from 1990 on) the needs of the productive
    sector, with instruments that foster the demand
    for knowledge by end users and that support the
    transfer of know-how to firms
  • TDFs emerged as an instrument of S&T policy

5
1.A. Introduction: Policy background
  • The IDB evaluated the impact of a sample of IDB
    S&T programmes and its most frequently used
    instruments:
  • Technology Development Funds (TDFs), to stimulate
    innovation activities in the productive sector
    through R&D subsidies
  • Competitive research grants (CRGs)
  • OVE coordinated and compiled the results of the
    TDF evaluations in Argentina, Brazil, Chile,
    Panama (Colombia)

6
1.B. Introduction: Selected TDFs
Country and Period     Name           Tools
Argentina 1994-2001    FONTAR-TMP I   Targeted Credit
Argentina 2001-2004    FONTAR ANR     Matching Grants
Brazil 1996-2003       ADTEN          Targeted Credit
Brazil 1999-2003       FNDCT          Matching Grants
Chile 1998-2002        FONTEC-line1   Matching Grants
Panama 2000-2003       FOMOTEC        Matching Grants
7
1.B. Introduction: features of TDFs
  • Demand-driven
  • Subsidy
  • Co-financing
  • Competitive allocation of resources
  • Execution by a specialised agency

8
1.C. Introduction: evaluation setup
  • Evaluation of TDFs at the recipient (firm) level
  • Impact on:
  • R&D input additionality
  • Behavioural additionality
  • Innovative output
  • performance: productivity, employment,
    and growth thereof

9

10
Indicators and data sources
  • Input additionality: amount invested by
    beneficiaries in R&D. Data sources: firm balance
    sheets, innovation surveys, industrial surveys
  • Behavioural additionality: product/process
    innovation, linkages with other agents in the
    NIS. Data source: innovation surveys
  • Innovative outputs: patents, sales due to new
    products. Data sources: patent databases,
    innovation surveys
  • Performance: total factor productivity, labor
    productivity, growth in sales, exports and
    employment. Data sources: firm balance sheets,
    innovation surveys, industrial surveys, labor
    surveys
11
2.A. The evaluation problem (in words)
  • To measure the impact of a program, the evaluator
    is interested in the counterfactual question
  • what would have happened to the beneficiaries
  • if they had not had access to the program?
  • This, however, is not observed; it is unknown.
  • We can only observe the performance of
    non-beneficiaries and compare it to the
    performance of beneficiaries.

12
2.A. The evaluation problem (in words)
  • This comparison, however, is not sufficient to
    tell us the impact of the program; it reveals
    correlations, not causality
  • Why not?
  • Because there may be a range of characteristics
    that affect both the possibility of accessing the
    program AND performing well on the performance
    indicators (e.g. R&D intensity, productivity)
  • E.g. firm size, age, exporting status

13
2.A. The evaluation problem (in words)
  • This means that being in the treatment group or
    not is not the result of a random draw; rather,
    there is selection into a specific group, along
    both observable and non-observable
    characteristics
  • The effect of selection has to be taken into
    account if one wants to measure the impact of the
    program on the performance of the firms!
  • More formally:

14
2.A. The evaluation problem
  • Define:
  • Y_T = the average expenses in innovation by a
    firm in a specific year if the firm participates
    in the TDF, and
  • Y_C = the average expenses by the same firm if it
    does not participate in the program.
  • Measuring the program impact requires measuring
    the difference (Y_T − Y_C), which is the effect
    of having participated in the program for firm i.

15
2.A. The evaluation problem
  • Computing (Y_T − Y_C) requires knowledge of the
    counterfactual outcome, which is not empirically
    observable, since a firm cannot be observed
    simultaneously as a participant and as a
    non-participant.

16
2.A. The evaluation problem
  • By comparing data on participating and
    non-participating firms, we can evaluate an
    average effect of program participation,
    E[Y_T | D=1] − E[Y_C | D=0]
  • Subtracting and adding E[Y_C | D=1] splits this
    observed difference into the program impact and a
    selection-bias term, as shown below
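
Written out, that step gives the standard
decomposition (a worked version of the slide's
algebra, in the notation defined on the previous
slides):

```latex
\underbrace{E[Y_T \mid D=1] - E[Y_C \mid D=0]}_{\text{observed difference}}
  = \underbrace{E[Y_T - Y_C \mid D=1]}_{\text{effect on participants (ATT)}}
  + \underbrace{E[Y_C \mid D=1] - E[Y_C \mid D=0]}_{\text{selection bias}}
```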

17
2.A. The evaluation problem
  • Only if there is no selection bias will the
    average effect of program participation give an
    unbiased estimate of the program impact
  • There is no selection bias if participating and
    non-participating firms are similar with respect
    to dimensions that are likely to affect both the
    level of innovation expenditures and TDF
    participation
  • E.g. size, age, exporting, solvency, which affect
    both R&D expenditures and the decision to apply
    for a grant

18
2.B. The evaluation problem avoided
  • Incorporating randomized evaluation in programme
    design
  • Random assignment of treatment (participation in
    the program) would imply that there are no
    pre-existing differences between the treated and
    non-treated firms, so the selection bias is zero
  • Hard to implement for certain types of policy
    instruments

19
2.B. Controlling for selection bias
  • Controlling for observable differences:
  • Develop a statistically robust control group of
    non-beneficiaries
  • identify comparable participating and
    non-participating firms, conditional on a set of
    observable variables X
  • in other words, control for the pre-existing
    observable differences
  • using econometric techniques,
  • e.g. propensity score matching

20
2.B. Propensity score matching (PSM)
  • If there were only one dimension (e.g. size) that
    affects both treatment (participation in a TDF)
    and outcome (R&D intensity), it would be
    relatively simple to find pairs of matching
    firms.
  • When treatment and outcome are determined by a
    multidimensional vector of characteristics (size,
    age, industry, location...), this becomes
    problematic.
  • Solution: find pairs of firms that have an equal
    or similar probability of being treated
    (receiving TDF support)

21
2.B. PSM
  • Using probit or logit analysis on the whole
    sample of beneficiaries and non-beneficiaries, we
    calculate the probability (P), or propensity,
    that a firm participates in a program:
  • P(D=1) = F(X)
  • X = vector of observable characteristics
  • Purpose: to find for each participant (D=1) at
    least one program non-participant with an equal
    or very similar chance of being a participant,
    which is then selected into the control group
    (see the sketch below).
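
As a concrete illustration (not from the original
study), a minimal Python sketch of this step; the
DataFrame and the column names treated, size, age
and exporter are hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical data: one row per firm, a participation dummy and
# observable pre-treatment characteristics (all names and values
# are illustrative assumptions, not data from the TDF studies).
df = pd.DataFrame({
    "treated":  [1, 1, 0, 0, 0, 0],
    "size":     [120, 45, 130, 40, 60, 200],  # employees
    "age":      [10, 5, 12, 4, 6, 20],        # years since founding
    "exporter": [1, 0, 1, 0, 0, 1],           # exporting dummy
})
X = df[["size", "age", "exporter"]]

# Logit model of participation: P(D=1) = F(X)
logit = LogisticRegression().fit(X, df["treated"])
df["pscore"] = logit.predict_proba(X)[:, 1]   # propensity scores
```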

22
2.B. PSM
  • It reduces the multidimensional problem of
    several matching criteria to one single measure
    of distance
  • There are several measures of proximity,
  • e.g. nearest neighbour, predefined range,
    kernel-based matching ...

23
2.B. PSM
  • Estimating the impact (Average effect of
    Treatment on the Treated):
  • ATT = E[E(Y_1 | D=1, p(x)) − E(Y_0 | D=0, p(x)) | D=1]
  • Y is the impact variable
  • D ∈ {0,1} is a dummy variable for participation
    in the program
  • x is a vector of pre-treatment characteristics
  • p(x) is the propensity score (a numerical sketch
    follows below).
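
Continuing the hypothetical sketch from slide 21,
nearest-neighbour matching on the propensity score
and the resulting ATT estimate; the impact variable
y (e.g. R&D intensity) is again invented for
illustration:

```python
import numpy as np

# Hypothetical impact variable (e.g. R&D intensity) per firm.
df["y"] = [1.2, 0.9, 0.6, 0.4, 0.5, 0.8]

treated = df[df["treated"] == 1]
controls = df[df["treated"] == 0]

# Nearest neighbour: for each participant, take the
# non-participant with the closest propensity score.
effects = []
for _, firm in treated.iterrows():
    nearest = controls.iloc[(controls["pscore"] - firm["pscore"]).abs().argmin()]
    effects.append(firm["y"] - nearest["y"])

# ATT = mean outcome gap over the matched pairs.
print(f"ATT estimate: {np.mean(effects):.3f}")
```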

24
2.B. Difference-in-differences (DID)
  • The treated and control groups of firms may also
    differ in non-observable characteristics, e.g.
    management skills.
  • If panel data are available (data for
    pre-treatment and post-treatment time periods),
    the impact of unobservable differences and time
    shocks can be neutralised by taking the
    difference-in-differences of the impact variable,
    as written out below.
  • Important assumption: unobservables do not change
    over time
  • In the case of DID, the impact variable is a
    growth rate.
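
In symbols, a standard statement of the estimator
consistent with the slide, where \bar{Y} is a group
mean and T and C index the treated and control
groups:

```latex
\widehat{\mathrm{DID}}
  = \left( \bar{Y}^{T}_{\text{post}} - \bar{Y}^{T}_{\text{pre}} \right)
  - \left( \bar{Y}^{C}_{\text{post}} - \bar{Y}^{C}_{\text{pre}} \right)
```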

25
3. Example of results
  • Impact of ADTEN (Brazil) on (private) R&D
    intensity
  • Single difference in 2000:
  • (R&D/sales 2000, beneficiaries − R&D/sales 2000,
    control), after PSM
  • 92 observations each
  • Beneficiaries: 1.18
  • Control group: 0.52
  • Difference: 0.66
  • positive and significant impact, net of subsidy

26
3. Example of results
  • Impact of FONTAR-ANR (Argentina)
  • on (public + private) R&D intensity (R&D
    expenditures/sales)
  • Difference-in-differences with PSM
  • 37 observations each
  • (R&D/sales after ANR, beneficiaries − R&D/sales
    before ANR, beneficiaries)
  • − (R&D/sales after ANR, control − R&D/sales
    before ANR, control)
  • Beneficiaries: (0.20 − 0.08) = 0.12
  • Control group: (0.15 − 0.22) = −0.07
  • DID = 0.19 (checked numerically below)
  • positive and significant impact, GROSS of subsidy
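
Plugging the slide's figures into the DID formula
from slide 24 reproduces the reported estimate (a
numerical check, not code from the study):

```python
# R&D intensity before/after ANR, as reported on this slide.
ben_pre, ben_post = 0.08, 0.20   # beneficiaries
ctl_pre, ctl_post = 0.22, 0.15   # matched control group

did = (ben_post - ben_pre) - (ctl_post - ctl_pre)
print(f"DID = {did:.2f}")        # 0.12 - (-0.07) = 0.19
```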

27
3. Results summary
  • The impact of the programs on firm behaviour and
    outcomes becomes weaker and weaker as one gets
    further from the immediate target of the policy
    instrument:
  • There is clear evidence of a positive impact on
    R&D,
  • weaker evidence of some behavioural effects,
  • and almost no evidence of an immediate positive
    impact on new product sales or patents.
  • This may be expected, given the relatively short
    time span over which the impacts were measured.

28
3. Results
  • No clear evidence that the TDFs can significantly
    affect firms' productivity and competitiveness
    within a five-year period, although there is a
    suggestion of positive impacts.
  • However, these outcomes, which are often the
    general objective of the programs, are more
    likely related to the longer-run impact of
    policy.
  • The evaluation does not take into account
    potential positive externalities that may result
    from the TDFs.

29
3. Results
  • The evaluation design should clearly identify:
  • the rationale
  • short-, medium- and long-run expected outcomes
  • periodic collection of primary data on the
    program's beneficiaries and on a group of
    comparable non-beneficiaries
  • repetition of the evaluation on the same sample,
    so that long-run impacts can be clearly
    identified
  • periodic repetition of the impact evaluation on
    new samples, to identify potential needs for
    re-targeting of policy tools.

30
3. Concluding remarks
  • The data needs of this type of evaluation are
    evident
  • Involvement and commitment of statistical offices
    are needed to be able to merge the survey data
    that allow these analyses
  • The merging and accessibility of several data
    sources create unprecedented opportunities for
    the evaluation and monitoring of policy
    instruments
  • Thank you!