1
Econometric Analysis of Panel Data
  • William Greene
  • Department of Economics
  • Stern School of Business

2
(No Transcript)
3
(No Transcript)
4
www.oft.gov.uk/shared_oft/reports/Evaluating-OFTs-work/oft1416.pdf
5
Econometrics
  • Theoretical foundations
  • Microeconometrics and macroeconometrics
  • Behavioral modeling
  • Statistical foundations: econometric methods
  • Mathematical elements: the usual
  • Model building: the econometric model


6
Estimation Platforms
  • Model based
  • Kernels and smoothing methods (nonparametric)
  • Semiparametric analysis
  • Parametric analysis
  • Moments and quantiles (semiparametric)
  • Likelihood and M-estimators (parametric)
  • Methodology based (?)
  • Classical: parametric and semiparametric
  • Bayesian: strongly parametric

7
Trends in Econometrics
  • Small structural models vs. large scale multiple
    equation models
  • Non- and semiparametric methods vs. parametric
  • Robust methods: GMM (paradigm shift? Nobel
    prize)
  • Unit roots, cointegration and macroeconometrics
  • Nonlinear modeling and the role of software
  • Behavioral and structural modeling vs. reduced
    form, covariance analysis
  • Pervasiveness of an econometrics paradigm
  • Identification and causal effects

8
Objectives in Model Building
  • Specification guided by underlying theory
  • Modeling framework
  • Functional forms
  • Estimation: coefficients, partial effects, model
    implications
  • Statistical inference: hypothesis testing
  • Prediction: individual and aggregate
  • Model assessment (fit, adequacy) and evaluation
  • Model extensions
  • Interdependencies, multiple part models
  • Heterogeneity
  • Endogeneity
  • Exploration: estimation and inference methods

9
Regression Basics
  • The MODEL
  • Modeling the conditional mean: regression
  • Other features of interest
  • Modeling quantiles
  • Conditional variances or covariances
  • Modeling probabilities for discrete choice
  • Modeling other features of the population

10
Application Health Care Usage
German Health Care Usage Data: 7,293 individuals, varying numbers of periods.
Data downloaded from the Journal of Applied Econometrics Archive. The data can
be used for regression, count models, binary choice, ordered choice, and
bivariate binary choice. There are 27,326 observations in total. The number of
observations per individual ranges from 1 to 7 (frequencies: 1: 1525, 2: 2158,
3: 825, 4: 926, 5: 1051, 6: 1000, 7: 987).
Variables in the file are
DOCTOR   = 1(Number of doctor visits > 0)
HOSPITAL = 1(Number of hospital visits > 0)
HSAT     = health satisfaction, coded 0 (low) - 10 (high)
DOCVIS   = number of doctor visits in last three months
HOSPVIS  = number of hospital visits in last calendar year
PUBLIC   = insured in public health insurance = 1, otherwise 0
ADDON    = insured by add-on insurance = 1, otherwise 0
HHNINC   = household nominal monthly net income in German marks / 10,000
           (4 observations with income = 0 were dropped)
HHKIDS   = children under age 16 in the household = 1, otherwise 0
EDUC     = years of schooling
AGE      = age in years
MARRIED  = marital status
11
Household Income
Kernel density estimator and histogram
12
Regression Income on Education
----------------------------------------------------------------------
Ordinary least squares regression
LHS = LOGINC     Mean                  =    -.92882
                 Standard deviation    =     .47948
                 Number of observs.    =        887
Model size       Parameters            =          2
                 Degrees of freedom    =        885
Residuals        Sum of squares        =  183.19359
                 Standard error of e   =     .45497
Fit              R-squared             =     .10064
                 Adjusted R-squared    =     .09962
Model test       F[1, 885] (prob)      = 99.0 (.0000)
Diagnostic       Log likelihood        = -559.06527
                 Restricted (b=0)      = -606.10609
                 Chi-sq [1] (prob)     = 94.1 (.0000)
Info criter.     LogAmemiya Prd. Crt.  =   -1.57279
----------------------------------------------------------------------
Variable    Coefficient   Standard Error   b/St.Er.   P[|Z|>z]   Mean of X
Constant      -1.71604       .08057         -21.299     .0000
EDUC           .07176        .00721           9.951     .0000     10.9707
----------------------------------------------------------------------
Note: ***, **, * = Significance at 1%, 5%, 10% level.
----------------------------------------------------------------------
13
Specification and Functional Form
----------------------------------------------------------------------
Ordinary least squares regression
LHS = LOGINC     Mean                  =    -.92882
                 Standard deviation    =     .47948
                 Number of observs.    =        887
Model size       Parameters            =          3
                 Degrees of freedom    =        884
Residuals        Sum of squares        =  183.00347
                 Standard error of e   =     .45499
Fit              R-squared             =     .10157
                 Adjusted R-squared    =     .09954
Model test       F[2, 884] (prob)      = 50.0 (.0000)
Diagnostic       Log likelihood        = -558.60477
                 Restricted (b=0)      = -606.10609
                 Chi-sq [2] (prob)     = 95.0 (.0000)
Info criter.     LogAmemiya Prd. Crt.  =   -1.57158
----------------------------------------------------------------------
Variable    Coefficient   Standard Error   b/St.Er.   P[|Z|>z]   Mean of X
Constant      -1.68303       .08763         -19.207     .0000
EDUC           .06993        .00746           9.375     .0000     10.9707
FEMALE        -.03065        .03199           -.958     .3379      .42277
----------------------------------------------------------------------
14
Interesting Partial Effects
----------------------------------------------------------------------
Ordinary least squares regression
LHS = LOGINC     Mean                  =    -.92882
                 Standard deviation    =     .47948
                 Number of observs.    =        887
Model size       Parameters            =          5
                 Degrees of freedom    =        882
Residuals        Sum of squares        =  171.87964
                 Standard error of e   =     .44145
Fit              R-squared             =     .15618
                 Adjusted R-squared    =     .15235
Model test       F[4, 882] (prob)      = 40.8 (.0000)
Diagnostic       Log likelihood        = -530.79258
                 Restricted (b=0)      = -606.10609
                 Chi-sq [4] (prob)     = 150.6 (.0000)
Info criter.     LogAmemiya Prd. Crt.  =   -1.62978
----------------------------------------------------------------------
Variable    Coefficient   Standard Error   b/St.Er.   P[|Z|>z]   Mean of X
Constant      -5.26676       .56499          -9.322     .0000
EDUC           .06469        .00730           8.860     .0000     10.9707
FEMALE        -.03683        .03134          -1.175     .2399      .42277
AGE            .15567        .02297           6.777     .0000     50.4780
AGE2          -.00161        .00023          -7.014     .0000     2620.79
----------------------------------------------------------------------
15
Log Income as a Function of Age
Partial Effect with Respect to Age
16
A Statistical Relationship
  • A relationship of interest
  • Number of hospital visits: H = 0, 1, 2, ...
  • Covariates: x1 = Age, x2 = Sex, x3 = Income, x4 = Health
  • Causality and covariation
  • Theoretical implications of causation
  • Comovement and association
  • Intervention of omitted or latent variables
  • Temporal relationship: movement of the causal
    variable precedes the effect.

17
(Endogeneity)
  • A relationship of interest
  • Number of hospital visits: H = 0, 1, 2, ...
  • Covariates: x1 = Age, x2 = Sex, x3 = Income, x4 = Health
  • Should Health be endogenous in this model?
  • What do we mean by "endogenous"?
  • What is an appropriate econometric method of
    accommodating endogeneity?

18
Models
  • Conditional mean function: E[y | x]
  • Other conditional characteristics: what is "the model"?
  • Conditional variance function: Var[y | x]
  • Conditional quantiles, e.g., median[y | x]
  • Other conditional moments
  • Conditional probabilities: P(y | x)
  • What is the sense in which y varies with x?

19
Using the Model
  • Understanding the relationship
  • Estimation of quantities of interest such as
    elasticities
  • Prediction of the outcome of interest
  • Control of the path of the outcome of interest

20
Application Doctor Visits
  • German individual health care data, N = 27,326
  • Model for the number of visits to the doctor
  • Poisson regression (fit by maximum likelihood): E[V | Income] = exp(1.412 - .0745×Income)
  • OLS linear regression: g(Income) = 3.917 - .208×Income

21
Conditional Mean and Linear Projection
Most of the data lie in a limited range. Outside the range of the data, notice
the problem with the linear projection: negative predictions.
22
What About the Linear Projection?
  • What we do when we linearly regress a variable on
    a set of variables
  • Assuming there exists a conditional mean
  • There usually exists a linear projection.
    Requires finite variance of y.
  • Approximation to the conditional mean
  • If the conditional mean is linear
  • Linear projection equals the conditional mean

23
Partial Effects
  • What did the model tell us?
  • Covariation and partial effects: how does y vary with x?
  • Marginal effects: effect on what?
  • For continuous variables
  • For dummy variables
  • Elasticities: ε(x) = δ(x)×x / E[y|x]

24
Average Partial Effects
  • When δ(x) ≠ β, APE = Ex[δ(x)]
  • Approximation: is δ(E[x]) equal to Ex[δ(x)]? (No.)
  • Empirically: estimated APE
  • Empirical approximation: Est. APE
  • For the doctor visits model: δ(x) = β exp(α + βx) = -.0745 exp(1.412 - .0745×Income)
  • Sample APE = -.2373
  • Approximation = -.2354
  • Slope of the linear projection = -.2083 (!)
  (A numerical sketch of these comparisons follows below.)

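A minimal numerical sketch of these calculations, in Python with NumPy rather than the software used for the slides; the simulated income values are only illustrative, while the coefficients 1.412 and -.0745 are the ones reported on the slide.

import numpy as np

rng = np.random.default_rng(0)
income = rng.lognormal(mean=-1.2, sigma=0.5, size=10_000)   # hypothetical income data

a, b = 1.412, -0.0745                 # Poisson regression coefficients from the slide
delta = b * np.exp(a + b * income)    # partial effect at each observation

ape        = delta.mean()                        # average partial effect
pe_at_mean = b * np.exp(a + b * income.mean())   # partial effect at the mean of x
print(ape, pe_at_mean)
# With the actual data these are about -.2373 and -.2354; the slope of the
# linear projection (-.2083) is a noticeably poorer summary of the effect.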
25
APE and PE at the Mean
  • Implication: computing the APE by averaging over
    observations (and counting on the LLN and the
    Slutsky theorem) vs. computing partial effects at
    the means of the data.
  • In the earlier example: Sample APE = -.2373,
    Approximation = -.2354

26
The Linear Regression Model
  • y = Xβ + e, N observations, K columns in X,
    including a column of ones.
  • Standard assumptions about X
  • Standard assumptions about e|X
  • E[e|X] = 0, E[e] = 0 and Cov[e, x] = 0
  • Regression?
  • If E[y|X] = Xβ, then Xβ is the projection of y on X

27
Estimation of the Parameters
  • Least squares, LAD, other estimators we will
    focus on least squares
  • Classical vs. Bayesian estimation of β
  • Properties
  • Statistical inference Hypothesis tests
  • Prediction (not this course)

28
Properties of Least Squares
  • Finite sample properties: unbiased, etc. No
    longer interested in these.
  • Asymptotic properties
  • Consistent? Under what assumptions?
  • Efficient?
  • Contemporary work: often not important
  • Efficiency within a class: GMM
  • Asymptotically normal: how is this established?
  • Robust estimation: to be considered later

29
Least Squares Summary
30
Hypothesis Testing
  • Nested vs. nonnested tests
  • y = b1x + e vs. y = b1x + b2z + e: nested
  • y = bx + e vs. y = cz + u: not nested
  • y = bx + e vs. log y = c log x: not nested
  • y = bx + e, e ~ Normal vs. e ~ t: not nested
  • Fixed vs. random effects: not nested
  • Logit vs. probit: not nested
  • x is endogenous: maybe nested. We'll see.
  • Parametric restrictions
  • Linear: Rβ - q = 0, where R is J×K, J < K, with full row rank
  • General: r(β, q) = 0, where r is a vector of J functions and R(β, q) = ∂r(β, q)/∂β′
  • Use r(β, q) = 0 for both the linear and nonlinear cases. (A sketch of a linear restriction test follows below.)

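As an illustration of testing linear restrictions Rβ - q = 0, here is a hedged Python/NumPy sketch; the data, the coefficient values, and the particular hypothesis (the last two slopes are zero) are invented for the example and are not from the course.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, K = 500, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)            # OLS
e = y - X @ b
s2 = e @ e / (n - K)
V = s2 * np.linalg.inv(X.T @ X)                  # est. asy. var of b

# H0: beta_2 = 0 and beta_3 = 0  ->  R b - q = 0 with J = 2 restrictions
R = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
q = np.zeros(2)
d = R @ b - q
W = d @ np.linalg.solve(R @ V @ R.T, d)          # Wald statistic, ~ chi-squared[J]
J = R.shape[0]
print("Wald =", W, "F =", W / J, "p =", 1 - stats.chi2.cdf(W, J))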
31
Example Panel Data on Spanish Dairy Farms
N 247 farms, T 6 years (1993-1998)
Units Mean Std. Dev. Minimum Maximum
OutputMilk Milk production (liters) 131,107 92,584 14,410 727,281
Input Cows of milking cows 22.12 11.27 4.5 82.3
Input Labor man-equivalent units 1.67 0.55 1.0 4.0
Input Land Hectares of land devoted to pasture and crops. 12.99 6.17 2.0 45.1
InputFeed Total amount of feedstuffs fed to dairy cows (Kg) 57,941 47,981 3,924.14 376,732
32
Application
  • y = log output
  • x = Cobb-Douglas production: x = (1, x1, x2, x3, x4), a
    constant and the logs of the 4 inputs (5 terms)
  • z = translog terms: x1², x2², etc., and all cross
    products x1x2, x1x3, x1x4, x2x3, etc. (10 terms)
  • w = (x, z) (all 15 terms)
  • The null hypothesis is Cobb-Douglas; the alternative is
    translog = Cobb-Douglas plus the second order terms.


33
Translog Regression Model
E[y | x, z] = β′x + γ′z
H0: γ = 0 (the translog terms z drop out)
34
Wald Tests
  • Is r(b, q) close to zero?
  • Wald distance function: W = r(b, q)′{Var[r(b, q)]}⁻¹ r(b, q) → χ²[J]
  • Use the delta method to estimate Var[r(b, q)]:
  • Est.Asy.Var[b] = s²(X′X)⁻¹
  • Est.Asy.Var[r(b, q)] = R(b, q) [s²(X′X)⁻¹] R(b, q)′
  • The standard F test is a Wald test: JF → χ²[J]. (A delta-method sketch follows below.)

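A hedged sketch of a delta-method Wald test for a nonlinear restriction, using the invented illustrative hypothesis r(β) = β1×β2 - β3 = 0 on a linear model; the data and coefficients below are simulated and are not from the course materials.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 800
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
beta = np.array([0.2, 0.5, 0.4, 0.2])        # note 0.5*0.4 - 0.2 = 0, so H0 is true
y = X @ beta + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
s2 = e @ e / (n - X.shape[1])
Vb = s2 * np.linalg.inv(X.T @ X)             # Est.Asy.Var[b] = s^2 (X'X)^-1

r = np.array([b[1] * b[2] - b[3]])           # r(b, q), J = 1
R = np.array([[0.0, b[2], b[1], -1.0]])      # R(b, q) = dr/db'
Vr = R @ Vb @ R.T                            # delta-method variance of r(b, q)
W = float(r @ np.linalg.solve(Vr, r))        # Wald distance, ~ chi-squared[1] under H0
print("W =", W, "p =", 1 - stats.chi2.cdf(W, 1))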
35
Close to 0?
36
Likelihood Ratio Test
  • The normality assumption
  • Does it work approximately?
  • For any regression model yi = h(xi, β) + ei with
    ei ~ N[0, σ²] (linear or nonlinear), evaluated at the
    linear (or nonlinear) least squares estimator, however
    computed, with or without restrictions,

This forms the basis for likelihood ratio tests.
37
(No Transcript)
38
Score or LM Test General
  • Maximum Likelihood (ML) Estimation
  • A hypothesis test
  • H0: the restrictions on the parameters are true
  • H1: the restrictions on the parameters are not true
  • Basis for the test: b0 = parameter estimate under
    H0 (i.e., restricted), b1 = unrestricted
  • Derivative results for the likelihood function
    under H1:
  • ∂logL1/∂β evaluated at b1 equals 0 (exactly, by definition)
  • ∂logL1/∂β evaluated at b0 is ≠ 0. Is it close to zero? If so, the
    restrictions look reasonable.

39
Computing the LM Statistic
The derivation on page 60 of Wooldridge's text is
needlessly complex, and the second form of LM is
actually incorrect because the first derivatives
are not heteroscedasticity robust.
40
Application of the Score Test
  • Linear model: y = Xβ + Zδ + e
  • Test H0: δ = 0
  • Restricted estimator is (b, 0)
  • Namelist ; X = a list ; Z = a list ; W = X,Z $
  • Regress ; Lhs = y ; Rhs = X ; Res = e $
  • Matrix ; List ; LM = e'W * <W'[e²]W> * W'e $

41
Restricted regression and derivatives for the LM
Test
42
Tests for Omitted Variables
? Cobb-Douglas model
Namelist ; X = One,x1,x2,x3,x4 $
? Translog second order terms, squares and cross products of logs
Namelist ; Z = x11,x22,x33,x44,x12,x13,x14,x23,x24,x34 $
? Restricted regression. Short. Has only the log terms
Regress  ; Lhs = yit ; Rhs = X ; Res = e $
Calc     ; LoglR = LogL ; RsqR = Rsqrd $
? LM statistic using basic matrix algebra
Namelist ; W = X,Z $
Matrix   ; List ; LM = e'W * <W'[e²]W> * W'e $
? LR statistic uses the full, long regression with all quadratic terms
Regress  ; Lhs = yit ; Rhs = W $
Calc     ; LoglU = LogL ; RsqU = Rsqrd ; List ; LR = 2*(LoglU - LoglR) $
? Wald statistic is just J*F for the translog terms
Calc     ; List ; JF = Col(Z)*((RsqU - RsqR)/Col(Z)/((1 - RsqU)/(n - kreg))) $
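For readers not using that package, a rough Python/NumPy translation of the three statistics (LM, LR under normality, and J times F) might look like the sketch below; the arrays y, X (constant plus logged inputs) and Z (translog terms) are assumed to be constructed already, and the normal-likelihood form of LR is an assumption carried over from the earlier slide.

import numpy as np

def ols(y, M):
    b = np.linalg.solve(M.T @ M, M.T @ y)
    return b, y - M @ b

def loglike(e, n):
    s2 = e @ e / n                                   # ML variance estimate
    return -0.5 * n * (np.log(2 * np.pi) + np.log(s2) + 1)

def omitted_variable_tests(y, X, Z):
    n, J = len(y), Z.shape[1]
    W = np.hstack([X, Z])

    _, e_r = ols(y, X)                               # restricted (Cobb-Douglas)
    _, e_u = ols(y, W)                               # unrestricted (translog)

    # LM = e'W (W'[e^2]W)^-1 W'e, using the restricted residuals
    We = W.T @ e_r
    S = (W * (e_r ** 2)[:, None]).T @ W
    LM = We @ np.linalg.solve(S, We)

    LR = 2 * (loglike(e_u, n) - loglike(e_r, n))     # assumes normal errors

    sst = (y - y.mean()) @ (y - y.mean())
    Rsq_r = 1 - (e_r @ e_r) / sst
    Rsq_u = 1 - (e_u @ e_u) / sst
    JF = J * ((Rsq_u - Rsq_r) / J) / ((1 - Rsq_u) / (n - W.shape[1]))

    return LM, LR, JF    # each is approximately chi-squared[J] under H0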
43
Regression Specifications
44
Model Selection
  • Regression models: fit measure = R²
  • Nested models: log likelihood, GMM criterion
    function (distance function)
  • Nonnested models, nonlinear models
  • Classical:
  • Akaike information criterion = (-2 logL + 2K)/N
  • Bayes (Schwarz) information criterion = (-2 logL + K logN)/N
  • Bayesian: Bayes factor = posterior odds / prior odds
  • (For noninformative priors, BF = ratio of posteriors)
  (A small computational sketch follows below.)

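A minimal sketch of the information criteria in the per-observation form used above; computing logL from the OLS residuals via the concentrated normal likelihood is an assumption on my part, since the slide gives only the criteria themselves.

import numpy as np

def information_criteria(e, K):
    """Per-observation AIC and BIC from residuals e and K parameters,
    using the concentrated normal log likelihood."""
    n = len(e)
    s2 = e @ e / n
    logL = -0.5 * n * (np.log(2 * np.pi) + np.log(s2) + 1)
    aic = (-2 * logL + 2 * K) / n
    bic = (-2 * logL + K * np.log(n)) / n
    return aic, bic      # smaller is better for both criteria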
45
Remaining to Consider for the Linear Regression
Model
  • Failures of standard assumptions
  • Heteroscedasticity
  • Autocorrelation and Spatial Correlation
  • Robust estimation
  • Omitted variables
  • Measurement error

46
Endogeneity
  • y = Xβ + e
  • Definition: E[e|x] ≠ 0
  • Why not?
  • Omitted variables
  • Unobserved heterogeneity (equivalent to omitted
    variables)
  • Measurement error on the RHS (equivalent to
    omitted variables)
  • Structural aspects of the model
  • Endogenous sampling and attrition
  • Simultaneity (?)

47
Instrumental Variable Estimation
  • One problem variable, the last one:
  • yit = β1x1it + β2x2it + ... + βKxKit + eit
  • E[eit | xKit] ≠ 0 (and = 0 for all of the others)
  • There exists a variable zit such that:
  • E[xKit | x1it, x2it, ..., xK-1,it, zit] = g(x1it, x2it, ..., xK-1,it, zit);
    in the presence of the other variables, zit "explains" xKit
  • E[eit | x1it, x2it, ..., xK-1,it, zit] = 0;
    in the presence of the other variables, zit and eit are uncorrelated
  • A projection interpretation: in the projection
    xKit = θ1x1it + θ2x2it + ... + θK-1xK-1,it + θKzit,
    we have θK ≠ 0.

48
The First IV Study: A Natural Experiment
(Snow, J., On the Mode of Communication of Cholera, 1855)
http://www.ph.ucla.edu/epi/snow/snowbook3.html
  • London cholera epidemic, ca. 1853-4
  • Cholera = f(Water Purity, u) + e
  • Causal effect of water purity on cholera?
  • Purity = f(cholera-prone environment: poverty, garbage
    in the streets, rodents, etc.), so regression does not
    work.
  • Two London water companies: Lambeth and Southwark
  • Main sewage discharge: River Thames

Paul Grootendorst, "A Review of Instrumental Variables
Estimation of Treatment Effects,"
http://individual.utoronto.ca/grootendorst/pdf/IV_Paper_Sept6_2007.pdf
49
IV Estimation
  • Cholera = f(Purity, u) + e
  • Z = water company
  • Cov(Cholera, Z) = δ Cov(Purity, Z)
  • Z is randomly mixed in the population (two full
    sets of pipes) and uncorrelated with the behavioral
    unobservables, u
  • Cholera = α + δ×Purity + u + e
  • Purity = mean + random variation (which involves u)
  • Cov(Cholera, Z) = δ Cov(Purity, Z), so δ can be estimated by
    the ratio of the two covariances. (A simulation sketch follows below.)

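A hedged simulation sketch of this idea: with a binary instrument, the IV estimate of δ is the ratio of sample covariances. Everything below (sample size, coefficient values, noise) is invented for illustration and is not from Snow's data.

import numpy as np

rng = np.random.default_rng(3)
n = 20_000
Z = rng.integers(0, 2, size=n)               # water company, randomly mixed
u = rng.normal(size=n)                       # cholera-prone environment (unobserved)
purity = 1.0 + 0.8 * Z - 0.5 * u + rng.normal(size=n)
cholera = 2.0 - 1.5 * purity + 1.0 * u + rng.normal(size=n)   # true delta = -1.5

C = np.cov(purity, cholera)
ols = C[0, 1] / C[0, 0]                                       # biased by u
iv = np.cov(Z, cholera)[0, 1] / np.cov(Z, purity)[0, 1]       # ratio of covariances
print("OLS:", ols, " IV:", iv)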
50
Cornwell and Rupert Data
Cornwell and Rupert Returns to Schooling Data, 595 individuals, 7 years.
Variables in the file are
EXP    = work experience
WKS    = weeks worked
OCC    = occupation, 1 if blue collar
IND    = 1 if manufacturing industry
SOUTH  = 1 if resides in south
SMSA   = 1 if resides in a city (SMSA)
MS     = 1 if married
FEM    = 1 if female
UNION  = 1 if wage set by union contract
ED     = years of education
LWAGE  = log of wage = dependent variable in regressions
These data were analyzed in Cornwell, C. and Rupert, P., "Efficient Estimation
with Panel Data: An Empirical Comparison of Instrumental Variable Estimators,"
Journal of Applied Econometrics, 3, 1988, pp. 149-155. See Baltagi, page 122,
for further analysis. The data were downloaded from the website for Baltagi's
text.
51
Specification Quadratic Effect of Experience
52
The Effect of Education on LWAGE
53
What Influences LWAGE?
54
An Exogenous Influence
55
Instrumental Variables
  • Structure:
  • LWAGE = f(ED, EXP, EXPSQ, WKS, OCC, SOUTH, SMSA, UNION)
  • ED = f(MS, FEM)
  • Reduced form: LWAGE = f[ED(MS, FEM),
    EXP, EXPSQ, WKS, OCC, SOUTH, SMSA, UNION]


56
Two Stage Least Squares Strategy
  • Reduced form: LWAGE = f[ED(MS, FEM, X),
    EXP, EXPSQ, WKS, OCC, SOUTH, SMSA, UNION]
  • Strategy:
  • (1) Purge ED of the influence of everything but
    MS, FEM (and the other variables). Predict ED
    using all exogenous information in the sample (X
    and Z).
  • (2) Regress LWAGE on this prediction of ED and
    everything else.
  • Standard errors must be adjusted for the
    predicted ED. (A sketch of both steps follows below.)

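A hedged Python/NumPy sketch of this two-step strategy, with the standard-error adjustment done the usual 2SLS way (the residuals used for s² are recomputed with the actual, not the predicted, regressors). The arrays are placeholders for the Cornwell-Rupert variables, not the data themselves.

import numpy as np

def two_stage_least_squares(y, X, Z):
    """X: regressors including the endogenous column(s);
    Z: all exogenous regressors plus the instruments.
    Returns 2SLS coefficients and standard errors."""
    n = len(y)
    # Stage 1: predict every column of X from Z (exogenous columns predict themselves)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)             # projection onto Z
    X_hat = Pz @ X
    # Stage 2: regress y on the predictions
    b = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)
    # Correct residuals use the ACTUAL X, not X_hat
    e = y - X @ b
    s2 = e @ e / n
    V = s2 * np.linalg.inv(X_hat.T @ X_hat)
    return b, np.sqrt(np.diag(V))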
57
OLS
58
The weird results for the coefficient on ED
happened because the instruments, MS and FEM, are
dummy variables. There is not enough variation
in these variables.
59
Source of Endogeneity
  • LWAGE = f(ED,
    EXP, EXPSQ, WKS, OCC, SOUTH, SMSA, UNION) + ε
  • ED = f(MS, FEM,
    EXP, EXPSQ, WKS, OCC, SOUTH, SMSA, UNION) + u

60
Remove the Endogeneity
  • LWAGE = f(ED,
    EXP, EXPSQ, WKS, OCC, SOUTH, SMSA, UNION) + u + ε
  • Strategy:
  • Estimate u.
  • Add the estimate of u to the equation. ED is uncorrelated with ε
    when u is in the equation. (A control-function sketch follows below.)

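A hedged sketch of this control-function version: estimate the first-stage residual and add it to the wage equation. Variable names are placeholders for the Cornwell-Rupert columns, and the caveat a few slides later applies: the naive OLS standard errors from step 2 are not correct without an adjustment.

import numpy as np

def control_function(y, ed, X_exog, Z_extra):
    """y: LWAGE; ed: the endogenous regressor; X_exog: the other exogenous
    regressors (with constant); Z_extra: extra instruments such as MS, FEM."""
    # Step 1: auxiliary regression of ED on all exogenous variables and instruments
    W = np.hstack([X_exog, Z_extra])
    g = np.linalg.solve(W.T @ W, W.T @ ed)
    u_hat = ed - W @ g                          # estimated first-stage residual
    # Step 2: add u_hat to the structural equation and use OLS
    X2 = np.hstack([X_exog, ed[:, None], u_hat[:, None]])
    b = np.linalg.solve(X2.T @ X2, X2.T @ y)
    return b        # the coefficient on ED equals the 2SLS estimate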
61
Auxiliary Regression for ED to Obtain Residuals
62
OLS with Residual (Control Function) Added
2SLS
63
A Warning About Control Functions
The sum of squares is not computed correctly because the estimated
residual U is in the regression. This is a general result: control
function estimators usually require a fix to the estimated covariance
matrix of the estimator.
64
The General Problem
65
Instrumental Variables
  • Framework: y = Xβ + ε, K variables in X.
  • There exists a set of K variables Z such that
    plim(Z′X/n) ≠ 0 but plim(Z′ε/n) = 0.
  • The variables in Z are called instrumental
    variables.
  • An alternative (to least squares) estimator of β
    is bIV = (Z′X)⁻¹Z′y.
  • We consider the following:
  • Why use this estimator?
  • What are its properties compared to least
    squares?
  • We will also examine an important application.
    (A minimal sketch follows below.)

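A minimal sketch of the just-identified IV estimator together with a standard textbook form of its estimated asymptotic covariance matrix, s²(Z′X)⁻¹(Z′Z)(X′Z)⁻¹; the slides present these results in image form, so the exact expression used there is not shown here. Python/NumPy, generic arrays.

import numpy as np

def iv_estimator(y, X, Z):
    """Just-identified IV: Z has the same number of columns as X."""
    n = len(y)
    ZX = Z.T @ X
    b_iv = np.linalg.solve(ZX, Z.T @ y)            # (Z'X)^-1 Z'y
    e = y - X @ b_iv                               # residuals use the actual X
    s2 = e @ e / n
    ZX_inv = np.linalg.inv(ZX)
    V = s2 * ZX_inv @ (Z.T @ Z) @ ZX_inv.T         # est. asy. var of b_iv
    return b_iv, V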
66
IV Estimators
  • Consistent:
    bIV = (Z′X)⁻¹Z′y
        = (Z′X/n)⁻¹(Z′X/n)β + (Z′X/n)⁻¹(Z′e/n)
        = β + (Z′X/n)⁻¹(Z′e/n) → β
  • Asymptotically normal (same approach to the proof
    as for OLS)
  • Inefficient: to be shown.
  • By construction, the IV estimator is consistent,
    so we have an estimator that is consistent when
    least squares is not.

67
IV Estimation
  • Why use an IV estimator? Suppose that X and ε
    are not uncorrelated. Then least squares is
    neither unbiased nor consistent.
  • Recall the proof of consistency of least squares:
    b = β + (X′X/n)⁻¹(X′ε/n).
    plim b = β requires plim(X′ε/n) = 0. If this
    does not hold, the estimator is inconsistent.

68
A Popular Misconception
  • If only one variable in X is correlated with ε,
    the other coefficients are consistently
    estimated. False.
  • The problem is smeared over the other
    coefficients.

69
Consistency and Asymptotic Normality of the IV
Estimator
70
Asymptotic Covariance Matrix of bIV
71
Asymptotic Efficiency
  • Asymptotic efficiency of the IV estimator: the
    variance is larger than that of LS. (A large
    sample type of Gauss-Markov result is at work.)
  • (1) It's a moot point. LS is inconsistent.
  • (2) Mean squared error is uncertain:
    MSE[estimator | β] = variance + squared bias.
  • IV may be better or worse; it depends on the data.

72
Two Stage Least Squares
  • How to use an "excess" of instrumental variables:
  • (1) X is K variables. Some (at least one) of the K
    variables in X are correlated with e.
  • (2) Z is M > K variables. Some of the variables in
    Z are also in X; some are not. None of the
    variables in Z are correlated with e.
  • (3) Which K variables should be used to compute Z′X and
    Z′y?

73
Choosing the Instruments
  • Choose K randomly?
  • Choose the included X's and the remainder
    randomly?
  • Use all of them? How?
  • A theorem (Brundy and Jorgenson, ca. 1972): there
    is a most efficient way to construct the IV
    estimator from this subset:
  • (1) For each column (variable) in X, compute the
    prediction of that variable using all the
    columns of Z.
  • (2) Linearly regress y on these K predictions.
  • This is two stage least squares.

74
Algebraic Equivalence
  • Two stage least squares is equivalent to
  • (1) each variable in X that is also in Z is
    replaced by itself.
  • (2) Variables in X that are not in Z are replaced
    by predictions of that X using
  • All other variables in X that are not correlated
    with e
  • All the variables in Z that are not in X.

75
The weird results for the coefficient on ED
happened because the instruments, MS and FEM, are
dummy variables. There is not enough variation
in these variables.
76
2SLS Algebra
77
Asymptotic Covariance Matrix for 2SLS
78
2SLS Has Larger Variance than LS
79
Estimating s2
80
Robust estimation of VC
Predicted X
Actual X
81
2SLS vs. Robust Standard Errors
--------------------------------------------------
Robust Standard Errors
--------------------------------------------------
Variable   Coefficient    Standard Error   b/St.Er.
B_1        45.4842872     4.02597121        11.298
B_2          .05354484     .01264923         4.233
B_3         -.00169664     .00029006        -5.849
B_4          .01294854     .05757179          .225
B_5          .38537223     .07065602         5.454
B_6          .36777247     .06472185         5.682
B_7          .95530115     .08681261        11.000
--------------------------------------------------
2SLS Standard Errors
--------------------------------------------------
Variable   Coefficient    Standard Error   b/St.Er.
B_1        45.4842872      .36908158       123.236
B_2          .05354484     .03139904         1.705
B_3         -.00169664     .00069138        -2.454
B_4          .01294854     .16266435          .080
B_5          .38537223     .17645815         2.184
B_6          .36777247     .17284574         2.128
B_7          .95530115     .20846241         4.583

82
Endogeneity Test? (Hausman)
  • If the regressor is exogenous: OLS is consistent and
    efficient; 2SLS is consistent but inefficient.
    If it is endogenous: OLS is inconsistent; 2SLS is
    consistent.
  • Base a test on d = b2SLS - bOLS.
    Use a Wald statistic, d′[Var(d)]⁻¹d.
  • What to use for the variance
    matrix? Hausman: Var(d) = V2SLS - VOLS.
    (A sketch follows below.)

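A hedged sketch of the Hausman contrast using the variance difference V2SLS - VOLS. In finite samples this difference need not be positive definite, which is one motivation for the Wu and regression-based versions on the following slides; in practice the contrast is often restricted to the coefficients of the suspect regressors.

import numpy as np
from scipy import stats

def hausman_test(b_ols, V_ols, b_2sls, V_2sls):
    """Wald contrast d'[V_2sls - V_ols]^-1 d; df = length of d."""
    d = b_2sls - b_ols
    Vd = V_2sls - V_ols          # may fail to be positive definite in practice
    H = d @ np.linalg.solve(Vd, d)
    df = len(d)
    return H, 1 - stats.chi2.cdf(H, df)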
83
Hausman Test
84
Hausman Test One at a Time?
85
Endogeneity Test Wu
  • Considerable complication in the Hausman test (text,
    pp. 234-237)
  • Simplification: the Wu test.
  • Regress y on X and the estimated prediction of the
    endogenous part of X. Then use an ordinary Wald
    test.

86
Wu Test
Note: .05544 + .54900 = .60444, which is the 2SLS
coefficient on ED.
87
Regression Based Endogeneity Test
88
Testing Endogeneity of WKS
(1) Regress WKS on 1, EXP, EXPSQ, OCC, SOUTH, SMSA, MS.
    U = residual, WKSHAT = prediction.
(2) Regress LWAGE on 1, EXP, EXPSQ, OCC, SOUTH, SMSA, WKS, and U or WKSHAT.
--------------------------------------------------------------------------
Variable   Coefficient     Standard Error   b/St.Er.   P[|Z|>z]   Mean of X
Constant   -9.97734299      .75652186       -13.188     .0000
EXP          .01833440      .00259373         7.069     .0000    19.8537815
EXPSQ       -.799491D-04    .603484D-04      -1.325     .1852    514.405042
OCC         -.28885529      .01222533       -23.628     .0000     .51116447
SOUTH       -.26279891      .01439561       -18.255     .0000     .29027611
SMSA         .03616514      .01369743         2.640     .0083     .65378151
WKS          .35314170      .01638709        21.550     .0000    46.8115246
U           -.34960141      .01642842       -21.280     .0000   -.341879D-14
--------------------------------------------------------------------------
Variable   Coefficient     Standard Error   b/St.Er.   P[|Z|>z]   Mean of X
Constant   -9.97734299      .75652186       -13.188     .0000
EXP          .01833440      .00259373         7.069     .0000    19.8537815
EXPSQ       -.799491D-04    .603484D-04      -1.325     .1852    514.405042
OCC         -.28885529      .01222533       -23.628     .0000     .51116447
SOUTH       -.26279891      .01439561       -18.255     .0000     .29027611
SMSA         .03616514      .01369743         2.640     .0083     .65378151
WKS          .00354028      .00116459         3.040     .0024    46.8115246
WKSHAT       .34960141      .01642842        21.280     .0000    46.8115246
--------------------------------------------------------------------------
89
General Test for Endogeneity
90
Alternative to Hausman's Formula?
  • The H test requires the difference between an
    efficient and an inefficient estimator.
  • Is there any way to compare two competing estimators
    even if neither is efficient?
  • Bootstrap? (Maybe.)

91
(No Transcript)
92
Weak Instruments
  • Symptom: the relevance condition, plim(Z′X/n) not
    zero, is close to being violated.
  • Detection:
  • Standard F test in the regression of xk on Z. F <
    10 suggests a problem.
  • An F statistic based on 2SLS; see text, p. 351.
  • Remedy:
  • Not much; most of the discussion is about the
    condition, not what to do about it.
  • Use LIML? It requires a normality assumption, which is
    probably not too restrictive.
    (A first-stage F sketch follows below.)

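A hedged sketch of the first-stage diagnostic: the F statistic for the excluded instruments in the regression of the problem regressor xk on all of the exogenous variables. Python/NumPy, generic arrays; the rule-of-thumb threshold of 10 is the one quoted on the slide.

import numpy as np

def first_stage_F(xk, X_exog, Z_excl):
    """F test that the excluded instruments Z_excl enter the first-stage
    regression of the endogenous regressor xk on [X_exog, Z_excl]."""
    def ssr(y, M):
        b = np.linalg.solve(M.T @ M, M.T @ y)
        e = y - M @ b
        return e @ e
    W = np.hstack([X_exog, Z_excl])
    n, J = len(xk), Z_excl.shape[1]
    ssr_r = ssr(xk, X_exog)            # instruments excluded
    ssr_u = ssr(xk, W)                 # instruments included
    F = ((ssr_r - ssr_u) / J) / (ssr_u / (n - W.shape[1]))
    return F                           # F < 10 is the usual weak-instrument flag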
93
Weak Instruments (cont.)
94
Weak Instruments
95
A study of moral hazard: Riphahn, Wambach, and Million,
"Incentive Effects in the Demand for Healthcare,"
Journal of Applied Econometrics, 2003.
Did the presence of the ADDON insurance influence the demand
for health care (doctor visits and hospital visits)?
For a simple example, we examine the PUBLIC insurance (89%)
instead of the ADDON insurance (2%).
96
Application Health Care Panel Data
German Health Care Usage Data: 7,293 individuals, varying numbers of periods.
Data downloaded from the Journal of Applied Econometrics Archive. This is an
unbalanced panel with 7,293 individuals. The data can be used for regression,
count models, binary choice, ordered choice, and bivariate binary choice. This
is a large data set: there are 27,326 observations in total. The number of
observations per individual ranges from 1 to 7 (frequencies: 1: 1525, 2: 2158,
3: 825, 4: 926, 5: 1051, 6: 1000, 7: 987). The variable NUMOBS below tells how
many observations there are for each person; this variable is repeated in each
row of the data for that person.
Variables in the file are
DOCTOR   = 1(Number of doctor visits > 0)
HOSPITAL = 1(Number of hospital visits > 0)
HSAT     = health satisfaction, coded 0 (low) - 10 (high)
DOCVIS   = number of doctor visits in last three months
HOSPVIS  = number of hospital visits in last calendar year
PUBLIC   = insured in public health insurance = 1, otherwise 0
ADDON    = insured by add-on insurance = 1, otherwise 0
HHNINC   = household nominal monthly net income in German marks / 10,000
           (4 observations with income = 0 were dropped)
HHKIDS   = children under age 16 in the household = 1, otherwise 0
EDUC     = years of schooling
AGE      = age in years
MARRIED  = marital status
97
Evidence of Moral Hazard?
98
Regression Study
99
Endogenous Dummy Variable
  • Doctor Visits = f(Age, Educ, Health,
    Presence of Insurance,
    Other unobservables)
  • Insurance = f(Expected Doctor Visits,
    Other unobservables)

100
Approaches
  • (Parametric) Control Function Build a structural
    model for the two variables (Heckman)
  • (Semiparametric) Instrumental Variable Create an
    instrumental variable for the dummy variable
    (Barnow/Cain/ Goldberger, Angrist, Current
    generation of researchers)
  • (?) Propensity Score Matching (Heckman et al.,
    Becker/Ichino, Many recent researchers)

101
Heckman's Control Function Approach
  • Y = x′β + δT + E[e|T] + {e - E[e|T]}
  • λ = E[e|T], computed from a model for whether
    T = 0 or 1

The estimated magnitude, 11.1200, is nonsensical in this
context. (A sketch of the construction of λ follows below.)
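A hedged sketch of one standard way to build E[e|T] when T comes from a probit treatment equation: λ is the inverse Mills ratio evaluated at the probit index. That specific construction is the usual textbook one and is an assumption here, since the slide only gives the decomposition; Python, using statsmodels for the probit.

import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

def control_function_treatment(y, T, X, W):
    """y: outcome; T: 0/1 treatment; X: outcome regressors (with constant);
    W: treatment-equation regressors (with constant)."""
    # Probit model for T
    probit_res = sm.Probit(T, W).fit(disp=0)
    idx = W @ probit_res.params                      # estimated probit index
    # Generalized residual / inverse Mills ratio: E[e|T] up to scale
    lam = np.where(T == 1, norm.pdf(idx) / norm.cdf(idx),
                           -norm.pdf(idx) / (1 - norm.cdf(idx)))
    # Outcome equation with T and lambda added
    X_aug = np.column_stack([X, T, lam])
    ols_res = sm.OLS(y, X_aug).fit()
    return ols_res   # last two coefficients: treatment effect and coef. on lambda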
102
Instrumental Variable Approach
  • Construct a prediction for T using only the
    exogenous information
  • Use 2SLS with this instrumental variable.

Magnitude 23.9012 is also nonsensical in this
context.
103
Propensity Score Matching
  • Create a model for T that produces probabilities
    that T = 1: "propensity scores"
  • Find people with the same propensity score, some
    with T = 1 and some with T = 0
  • Compare the number of doctor visits of those with T = 1
    to those with T = 0. (A toy matching sketch follows below.)

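A toy sketch of the matching step: 1-nearest-neighbor matching of treated to untreated observations on the estimated propensity score. This is a simplified illustration, not the Becker/Ichino procedure, and it assumes the score has already been estimated, for example by a probit like the one above.

import numpy as np

def nearest_neighbor_att(y, T, pscore):
    """Average treatment effect on the treated by 1-nearest-neighbor
    matching on the propensity score (with replacement)."""
    treated = np.flatnonzero(T == 1)
    control = np.flatnonzero(T == 0)
    effects = []
    for i in treated:
        j = control[np.argmin(np.abs(pscore[control] - pscore[i]))]
        effects.append(y[i] - y[j])
    return np.mean(effects)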
104
Treatment Effect
  • Earnings and education: the effect of an additional
    year of schooling
  • "Estimating Average and Local Average Treatment
    Effects of Education when Compulsory Schooling
    Laws Really Matter," Philip Oreopoulos, American
    Economic Review, 96(1), 2006, pp. 152-175.

105
Treatment Effects and Natural Experiments
106
How do panel data fit into this?
  • We can use the usual models.
  • We can use far more elaborate models
  • We can study effects through time
  • Observations are surely correlated.
  • The same individual is observed more than once
  • Unobserved heterogeneity that appears in the
    disturbance in a cross section remains persistent
    across observations (on the same unit).
  • Procedures must be adjusted.
  • Dynamic effects are likely to be present.

107
Appendix Computing the LM Statistic
108
LM Test
109
LM Test (Cont.)
110
Appendix Structure and Regression
111
Least Squares Revisited
112
Inference with IV Estimators
113
Comparing OLS and IV
114
Testing for Endogeneity(?)
115
Structure vs. Regression
  • Reduced form vs. structural model
  • Simultaneous equations origin:
    Q(d) = a0 + a1P + a2I + e(d)   (demand)
    Q(s) = b0 + b1P + b2R + e(s)   (supply)
    Q(.) = Q(d) = Q(s)
    What is the effect of a change in I on Q(.)?
    (Not a regression)
  • Reduced form: Q = c0 + c1I + c2R + v.
    (Regression)
  • Modern concepts of structure vs. regression: the
    search for causal effects.

116
Implications
  • The structure is the theory
  • The regression is the conditional mean
  • There is always a conditional mean
  • It may not equal the structure
  • It may be linear in the same variables
  • What is the implication for least squares
    estimation?
  • LS estimates regressions
  • LS does not necessarily estimate structures
  • Structures may not be estimable; they may not be
    identified.

117
Structure and Regression
  • Simultaneity? What if E[e|x] ≠ 0?
  • y = βx + e, x = δy + u, with Cov[x, e] ≠ 0
  • βx is not the regression?
  • What is the regression?
  • Reduced form (assume e and u are uncorrelated):
    y = [β/(1 - βδ)]u + [1/(1 - βδ)]e
    x = [1/(1 - βδ)]u + [δ/(1 - βδ)]e
  • Cov[x, y]/Var[x] = γ
  • The regression is y = γx + v, where E[v|x] = 0.
    (A simulation sketch follows below.)

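A hedged simulation sketch of this point: under simultaneity, least squares estimates the regression slope γ = Cov[x, y]/Var[x], not the structural β. The parameter values below are invented for illustration.

import numpy as np

rng = np.random.default_rng(4)
n = 200_000
beta, delta = 2.0, 0.3                      # structural parameters (illustrative)
u = rng.normal(size=n)
e = rng.normal(size=n)
# Reduced forms implied by y = beta*x + e and x = delta*y + u
y = (beta * u + e) / (1 - beta * delta)
x = (u + delta * e) / (1 - beta * delta)

C = np.cov(x, y)
gamma = C[0, 1] / C[0, 0]                   # what least squares estimates
print("OLS slope ~", gamma, " structural beta =", beta)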
118
Structure vs. Regression
Supply = a + b×Price + c×Capacity
Demand = A + B×Price + C×Income
119
Representing Covariation
  • Conditional mean function: E[y | x] = g(x)
  • Linear approximation to the conditional mean
    function: a linear Taylor series
  • The linear projection (linear regression?)

120
Projection and Regression
  • The linear projection is not the regression, and
    is not the Taylor series.
  • Example

121
For the example: a = 1, β = 2
[Figure comparing the conditional mean, the linear projection, and the Taylor series approximation]