Econometrics Lecture 1 - PowerPoint PPT Presentation

1
Econometrics Lecture 1
  • A Review of OLS, Model Specification and
    Mis-specification Testing

2
OUR OBJECTIVES
  • To establish whether the hypothesised economic
    relationship is supported by empirical evidence
  • To obtain 'good' estimates of the unknown
    parameters β1 and β2
  • To be able to test hypotheses about the unknown
    parameters or to construct confidence intervals
    surrounding our estimates.
  • To use our estimated regression model for other
    purposes, including particularly
  • forecasting
  • policy analysis/simulation modelling.

3
THE REQUIREMENTS OF A GOOD ECONOMETRIC MODEL
  • THE MODEL MUST BE DATA ADMISSIBLE.
  • It must be logically possible for the data to
    have been generated by the model that is being
    assumed to generate the data. This implies that
    the model should impose any constraints that the
    data are required to satisfy (e.g. they must lie
    within the 0-1 interval in a model explaining
    proportions).
  • THE MODEL MUST BE CONSISTENT WITH SOME ECONOMIC
    THEORY.
  • REGRESSORS SHOULD BE (AT LEAST) WEAKLY EXOGENOUS
    WITH RESPECT TO THE PARAMETERS OF INTEREST
  • This is a requirement for valid conditioning in a
    single equation modelling context, to permit
    efficient inference on a set of parameters of
    interest.
  • THE MODEL SHOULD EXHIBIT PARAMETER CONSTANCY.
  • This is essential if parameter estimates are to
    be meaningful, and if forecasting or policy
    analysis is to be valid.
  • THE MODEL SHOULD BE DATA COHERENT.
  • The residuals should be unpredictable from their
    past history. This implies the need for
    exhaustive misspecification testing.
  • THE MODEL SHOULD BE ABLE TO ENCOMPASS A RANGE OF
    ALTERNATIVE MODELS.

4
Model specification
  • Step 1 Specify a statistical model that is
    consistent with the relevant prior theory
  • (i) The choice of the set of variables to include
    in the model.
  • (ii)The choice of functional form of the
    relationship (is it linear in the variables,
    linear in the logarithms of the variables, etc.?)
  • Step 2 Select an estimator (OLS, GLS, GMM, IV,
    etc.)
  • Step 3 Estimate the regression model using the
    chosen estimator
  • Step 4 Test whether the assumptions made are
    valid (in which case the regression model is
    statistically well-specified) and the estimator
    will have the desired properties.
  • Step 5a
  • If these tests show no evidence of
    misspecification in any relevant form, go on to
    conduct statistical inference about the
    parameters.
  • Step 5b
  • If these tests show evidence of misspecification
    in one or more relevant forms, then two possible
    courses of action seem to be implied
  • If you are able to establish the precise form in
    which the model is misspecified, then it may be
    possible to find an alternative estimator which
    is optimal, or has other desirable properties,
    when the regression model is statistically
    misspecified in that particular way.
  • Regard statistical misspecification as a symptom
    of a flawed model. In this case, one should
    search for an alternative, well-specified
    regression model, and so return to Step 1.
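The specify-estimate-test cycle above can be sketched in a few lines. This is a minimal illustration with simulated data (the linear data-generating process, sample size, and the choice of Durbin-Watson as the Step 4 diagnostic are all hypothetical), using plain NumPy rather than any particular econometrics package:

```python
# Steps 1-5 in miniature: specify a linear model, estimate by OLS,
# run one misspecification check, then (if it passes) do inference.
import numpy as np

rng = np.random.default_rng(0)
T = 200
x = rng.normal(size=T)
y = 1.0 + 2.0 * x + rng.normal(size=T)   # Step 1: assumed linear model

X = np.column_stack([np.ones(T), x])     # Steps 2-3: estimate by OLS
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta                         # residuals

# Step 4: one simple diagnostic - the Durbin-Watson statistic,
# which should be near 2 when the residuals are serially uncorrelated
dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
```

If the diagnostics in Step 4 look acceptable, Step 5a proceeds to t and F tests on `beta`; otherwise Step 5b sends us back to Step 1.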

5
CLRM Assumptions
  • The assumptions of the CLRM are
  • The dependent variable is a linear function of
    the set of possibly stochastic, covariance
    stationary regressor variables and a random
    disturbance term. The model specification is
    correct.
  • The set of regressors is not perfectly collinear.
    This means that no regressor variable can be
    obtained as an exact linear combination of any
    subset of the other regressor variables.
  • The error process has zero mean. That is,
    E(ut) = 0 for all t.
  • The error terms, ut, t = 1,...,T, are serially
    uncorrelated. That is, Cov(ut, us) = 0 for all
    s not equal to t.
  • The errors have a constant variance. That is,
    Var(ut) = σ² for all t.
  • Each regressor is asymptotically uncorrelated
    with the equation disturbance, ut.
  • We sometimes wish to make the following
    assumption
  • The equation disturbances are normally
    distributed, for all t.
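The no-perfect-collinearity assumption is easy to demonstrate numerically. The sketch below (simulated data, hypothetical regressors) shows that when one regressor is an exact linear combination of the others, the matrix X'X is rank deficient, so the OLS estimator (X'X)⁻¹X'y does not exist:

```python
# Perfect collinearity makes X'X singular and OLS undefined.
import numpy as np

rng = np.random.default_rng(1)
T = 100
x1 = rng.normal(size=T)
x2 = rng.normal(size=T)

X_ok = np.column_stack([np.ones(T), x1, x2])
X_bad = np.column_stack([np.ones(T), x1, 2.0 * x1 - 3.0])  # x3 = 2*x1 - 3*const

rank_ok = np.linalg.matrix_rank(X_ok.T @ X_ok)    # full rank: 3, OLS exists
rank_bad = np.linalg.matrix_rank(X_bad.T @ X_bad) # deficient: 2, OLS undefined
```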

6
ASSUMPTIONS ABOUT THE SPECIFICATION OF THE
REGRESSION MODEL
7
Testing Restrictions
  • F Test

NOTE: the sample must be the same for the two
regressions
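The F test compares the restricted and unrestricted residual sums of squares over the same sample: F = ((RSS_R − RSS_U)/q) / (RSS_U/(T − k)), where q is the number of restrictions and k the number of unrestricted parameters. A hand-rolled sketch with simulated data (the tested restriction, β on x2 = 0, is hypothetical):

```python
# F test of q linear restrictions via restricted vs unrestricted RSS.
import numpy as np

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    return e @ e

rng = np.random.default_rng(2)
T = 120
x1, x2 = rng.normal(size=T), rng.normal(size=T)
y = 0.5 + 1.5 * x1 + 0.0 * x2 + rng.normal(size=T)  # x2 truly irrelevant

X_u = np.column_stack([np.ones(T), x1, x2])  # unrestricted: k = 3
X_r = np.column_stack([np.ones(T), x1])      # restricted: x2 dropped (q = 1)

q, k = 1, X_u.shape[1]
F = ((rss(y, X_r) - rss(y, X_u)) / q) / (rss(y, X_u) / (T - k))
# Compare F with the F(q, T-k) critical value; both fits use the SAME sample.
```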
8
Functional Form
  • We know y = f(x) but not the correct functional form

LINEAR: y = β1 + β2x
LOGARITHMIC (LOG-LINEAR): ln y = β1 + β2 ln x
LIN-LOG (SEMI-LOG): y = β1 + β2 ln x
LOG-LIN (SEMI-LOG): ln y = β1 + β2x
RECIPROCAL/RATIO FORM: y = β1 + β2(1/x)

POLYNOMIAL: y = β1 + β2x + β3x²
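All of these forms are linear in the parameters after transforming the variables, so ordinary OLS still applies. A sketch for the log-linear (constant-elasticity) case, with simulated data and a hypothetical true elasticity of 0.8:

```python
# Log-linear form: ln y = beta1 + beta2 * ln x, estimated by OLS on the logs.
# beta2 is then the elasticity of y with respect to x.
import numpy as np

rng = np.random.default_rng(3)
T = 150
x = np.exp(rng.normal(size=T))
y = np.exp(0.2 + 0.8 * np.log(x) + 0.1 * rng.normal(size=T))

Z = np.column_stack([np.ones(T), np.log(x)])   # regress ln y on ln x
b = np.linalg.solve(Z.T @ Z, Z.T @ np.log(y))
elasticity = b[1]                              # estimate of beta2
```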
9
Ramsey's RESET test
  • First step: run the regression by OLS and save
    the residuals and fitted values
  • Second step: run the auxiliary regression
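The two steps can be sketched as follows. The auxiliary regression adds powers of the first-step fitted values to the original regressors; a significant F statistic on the added powers signals functional-form misspecification. Simulated data, with a deliberately omitted quadratic term so the test should reject:

```python
# Ramsey's RESET: augment the regression with powers of the fitted values.
import numpy as np

def ols_fit(y, X):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    fitted = X @ beta
    return fitted, y - fitted

rng = np.random.default_rng(4)
T = 200
x = rng.normal(size=T)
y = 1.0 + 0.5 * x + 1.0 * x**2 + rng.normal(size=T)  # true model is quadratic

X = np.column_stack([np.ones(T), x])       # first step: misspecified linear fit
fitted, e = ols_fit(y, X)
rss_r = e @ e

X_aux = np.column_stack([X, fitted**2, fitted**3])   # second step: add powers
_, e_aux = ols_fit(y, X_aux)
rss_u = e_aux @ e_aux

q, k = 2, X_aux.shape[1]
F_reset = ((rss_r - rss_u) / q) / (rss_u / (T - k))  # large F => misspecified
```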

10
Assumptions about Equation Disturbance Term
  • ABSENCE OF DISTURBANCE TERM SERIAL CORRELATION
  • CONSTANCY OF DISTURBANCE TERM VARIANCE
    (HOMOSCEDASTICITY)
  • NORMALITY OF DISTURBANCE TERM

11
ABSENCE OF DISTURBANCE TERM SERIAL CORRELATION
  • THE DURBIN-WATSON (DW) TEST
  • Only first order autocorrelation
  • Only for models without lagged variables
  • DURBIN'S h STATISTIC
  • GODFREY'S LAGRANGE MULTIPLIER (LM) TESTS
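Two of these checks can be computed directly from the OLS residuals: the Durbin-Watson statistic (first-order autocorrelation only, and only valid without lagged dependent variables) and Godfrey's LM test (regress the residuals on X and lagged residuals; LM = T·R² of that auxiliary regression is asymptotically χ² with as many degrees of freedom as lags). A sketch with simulated AR(1) disturbances, ρ = 0.7:

```python
# Durbin-Watson and a one-lag Godfrey LM test on OLS residuals.
import numpy as np

rng = np.random.default_rng(5)
T = 300
x = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):                    # AR(1) disturbances, rho = 0.7
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(T), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta

# DW is near 2 under no autocorrelation, near 2(1 - rho) under AR(1)
dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Godfrey LM: residuals on X plus one lagged residual; LM = T * R-squared
e_lag = np.concatenate([[0.0], e[:-1]])
Z = np.column_stack([X, e_lag])
g = np.linalg.solve(Z.T @ Z, Z.T @ e)
r2 = 1.0 - np.sum((e - Z @ g) ** 2) / np.sum((e - e.mean()) ** 2)
lm = T * r2                              # compare with chi2(1) critical value
```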

12
Responses to presence of Autocorrelation
  • a Type 1 error has occurred (you have incorrectly
    rejected a true null)
  • your chosen regression model is misspecified in
    some way, perhaps because a variable has been
    incorrectly omitted, there are insufficient lags
    in the model, or the functional form is
    incorrect.
  • the disturbance term is genuinely serially
    correlated, in which case GLS estimation is
    appropriate

13
CONSTANCY OF DISTURBANCE TERM VARIANCE
(HOMOSCEDASTICITY)
  • Types of heteroscedasticity
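A standard diagnostic for non-constant variance is a Breusch-Pagan-type LM test: regress the squared OLS residuals on the regressors; under homoscedasticity LM = T·R² of that auxiliary regression is asymptotically χ² with k − 1 degrees of freedom. A sketch with simulated data whose error variance grows with x:

```python
# Breusch-Pagan-type LM test: squared residuals regressed on X.
import numpy as np

rng = np.random.default_rng(6)
T = 400
x = rng.uniform(1.0, 5.0, size=T)
u = rng.normal(size=T) * x               # Var(u_t) grows with x_t
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(T), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta

# Auxiliary regression of e^2 on X; LM = T * R-squared of this regression
g = np.linalg.solve(X.T @ X, X.T @ e**2)
r2 = 1.0 - np.sum((e**2 - X @ g) ** 2) / np.sum((e**2 - (e**2).mean()) ** 2)
lm = T * r2                              # compare with chi2(1) critical value
```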

14
Assumptions about the Parameters of the Model
  • PARAMETER CONSTANCY OVER THE WHOLE SAMPLE PERIOD

Chow 1: applicable if there are enough
observations in the two subsamples
Chow 2 (predictive failure test): used when there
are not enough observations in one subsample
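The first Chow test compares the pooled RSS with the sum of the two subsample RSSs: F = ((RSS_P − RSS_1 − RSS_2)/k) / ((RSS_1 + RSS_2)/(T − 2k)), distributed F(k, T − 2k) under parameter constancy. A sketch with simulated data and a hypothetical slope break at the sample midpoint:

```python
# Chow test for a structural break at a known point (here, the midpoint).
import numpy as np

def rss(y, X):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    return e @ e

rng = np.random.default_rng(7)
T = 200
x = rng.normal(size=T)
y = np.where(np.arange(T) < T // 2,
             1.0 + 1.0 * x,              # first regime
             1.0 + 3.0 * x)              # second regime: slope changes
y = y + rng.normal(size=T)

X = np.column_stack([np.ones(T), x])
k = X.shape[1]
m = T // 2
rss_p = rss(y, X)                        # pooled (restricted) fit
rss_1 = rss(y[:m], X[:m])                # subsample fits (unrestricted)
rss_2 = rss(y[m:], X[m:])

F_chow = ((rss_p - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (T - 2 * k))
# Large F => reject parameter constancy; compare with F(k, T-2k).
```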