WHAT IF SOME OLS ASSUMPTIONS ARE NOT FULFILLED - PowerPoint PPT Presentation
1
WHAT IF SOME OLS ASSUMPTIONS ARE NOT FULFILLED?
2
QUIZ ONCE AGAIN?
  • What are the main OLS assumptions?
  • Right on average (unbiasedness)
  • Linear
  • Predicting variables and error term uncorrelated
  • No serial correlation in the error term
  • Homoscedasticity
  • Normality of the error term

3
QUIZ ONCE AGAIN?
  • Do we know the error term?
  • Do we know the coefficients?
  • How can we know whether all the assumptions are
    fulfilled?
  • Right on average ⇒ ???
  • Linearity ⇒ ???
  • X and ε uncorrelated ⇒ ???
  • ε serially uncorrelated ⇒ ???
  • ε homoscedastic ⇒ ???

4
FUNCTIONAL FORM
  • We assumed a certain functional form
  • What if in reality the relation exists, but has a
    different form?
  • Any function can be approximated by a Taylor
    expansion, so the omitted form can be captured by
    higher-order terms:
    yᵢ = xᵢβ + δ₂xᵢ² + δ₃xᵢ³ + … + εᵢ
  • So what we need to test is that the δs are all
    zero in this expansion

5
FUNCTIONAL FORM cont.
  • This is easy, because we already know how to do
    it
  • But there could be a lot of these terms (time
    consuming)
  • Ramsey came up with the idea that if the δs are
    all zero, then powers of the fitted ys should
    have no explanatory power for the actual ys.
  • So instead of testing all the Taylor-expansion
    terms, we test whether the coefficients on the
    powers of the fitted ys differ from zero
  • RESET TEST
  • Estimate your model
  • Find the fitted ys
  • Run a model with your Xβ and the powers of the
    fitted ys
  • Test the hypothesis that the coefficients on the
    powers of the fitted ys are zero
  • If you reject the null (i.e. that they are all
    zero), you have a functional misspecification
    problem ⇒ you cannot say if your bs are correct
    estimates of the βs.

6
NORMALITY OF THE ERROR
  • We typically assume that the εs have a normal
    distribution N(0, σ²).
  • This helps us to derive the t and F
    distributions.
  • Although we never know the εs, we get the
    residuals es, which should be consistent with the
    εs (so have the same distribution)
  • We can test whether the distribution of the es is
    far from normal
  • Jarque-Bera test
  • Check out the skewness and kurtosis
  • Compare them to the values for the normal
    distribution (0 and 3)
  • The null says that they are alike; if you reject
    the null, you reject normality of the residuals
  • Does that hurt? In large samples not much: the
    Central Limit Theorem comes to the rescue
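A quick sketch of the skewness/kurtosis comparison behind the Jarque-Bera test, assuming statsmodels is available; the two residual series are simulated for illustration:

```python
import numpy as np
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(1)

normal_resid = rng.normal(size=1000)           # what we hope residuals look like
skewed_resid = rng.exponential(size=1000) - 1  # clearly non-normal residuals

jb_n, p_n, skew_n, kurt_n = jarque_bera(normal_resid)
jb_s, p_s, skew_s, kurt_s = jarque_bera(skewed_resid)

# For a normal sample skewness is near 0 and kurtosis near 3;
# the exponential sample deviates strongly, so its p-value collapses
print(f"normal: skew={skew_n:.2f} kurt={kurt_n:.2f} p={p_n:.3f}")
print(f"skewed: skew={skew_s:.2f} kurt={kurt_s:.2f} p={p_s:.3g}")
```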
7
STABILITY OF THE PARAMETERS
  • We typically assume that the βs do not depend on
    the size of the Xs (in other words, the relation
    between the Xs and ys is stable)
  • However, we can actually have many subsamples
    with different relations.
  • Then what? Look at your dots, see if such a break
    occurs, and run the Chow test (just as in LAB)

8
NO AUTOCORRELATION (AT LEAST NOT SERIAL)
  • We also assume that subsequent error terms are
    independent of each other.
  • Assume that this does not hold, so that
    εₜ = ρεₜ₋₁ + uₜ with ρ ≠ 0
  • What then?
  • Our estimators are still unbiased
  • We can also show they are consistent
  • But a problem occurs with the estimators of the
    variance of the estimators (which we need to test
    the significance hypotheses)

9
NO AUTOCORRELATION (AT LEAST NOT SERIAL)
  • Consequently, our estimators of the standard
    errors are incorrect
  • We cannot trust our t-statistics any more!
  • KEEP THAT IN MIND,
  • WE'LL COME BACK TO IT IN A SECOND

10
HOW DO WE GET AUTOCORRELATION?
  • What we need in the error term is white noise

11
HOW DO WE GET AUTOCORRELATION?
  • Positive autocorrelation (rare changes of signs)

12
HOW DO WE GET AUTOCORRELATION?
  • Negative autocorrelation (frequent changes of
    signs)

13
HOW DO WE GET AUTOCORRELATION?
  • Model misspecification can give it to you for
    free
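The plots referred to on the last three slides can be reproduced numerically: with positive autocorrelation the error series changes sign rarely, with negative autocorrelation it zig-zags. A small simulation sketch (numpy assumed; ρ = ±0.9 is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
u = rng.normal(size=T)  # white-noise innovations

def ar1_errors(rho, u):
    """Build eps_t = rho * eps_(t-1) + u_t."""
    eps = np.zeros_like(u)
    for t in range(1, len(u)):
        eps[t] = rho * eps[t - 1] + u[t]
    return eps

def sign_changes(e):
    """Count how often the series crosses zero."""
    return int(np.sum(np.sign(e[1:]) != np.sign(e[:-1])))

pos = ar1_errors(0.9, u)    # positive autocorrelation: long runs, few sign flips
neg = ar1_errors(-0.9, u)   # negative autocorrelation: zig-zag, many sign flips
print(sign_changes(pos), sign_changes(u), sign_changes(neg))
```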

14
NO HETEROSCEDASTICITY
  • We also assume that the spread of the error terms
    does not depend on the size of the Xs
  • Assume that this does not hold, so that
    Var(εᵢ) = σᵢ² differs across observations
  • What then?
  • Our estimators are still unbiased
  • We can also show they are consistent
  • But a problem occurs with the estimators of the
    variance of the estimators (which we need to test
    the significance hypotheses)

15
HOW DO WE GET HETEROSCEDASTICITY?
  • What we need are error terms whose spread is
    independent of the SIZE of X.

16
HOW DOES THE V MATRIX LOOK?
  • We know that under the OLS assumptions
    V = E[εε′] = σ²I
  • This matrix is distorted by both
    heteroscedasticity and autocorrelation.
  • But aren't there any differences?
  • Heteroscedasticity is about the diagonal (the
    values along the diagonal differ, although they
    should all be the same)
  • Autocorrelation is about what happens outside the
    diagonal (the off-diagonal elements should be
    zero and they deviate from that)

17
Testing for hetero
  • Breusch-Pagan approach
  • The alternative hypothesis assumes that
    σᵢ² = σ²f(zᵢ), where f(·) is continuous
  • Run your model yᵢ = xᵢβ + εᵢ
  • Run the regression of the squared residuals e² on
    any set of variables (x, y, whatever)
  • Use n·R² from this regression (it has a χ²
    distribution with p dof, where p is the no. of
    variables in the auxiliary regression)
  • Test
  • H0: no heteroscedasticity of this form (does not
    say NO heteroscedasticity in general!)
  • H1: heteroscedasticity of the assumed form

18
Testing for hetero
  • White approach
  • Heteroscedasticity occurs because some
    interrelations between the xs are not accounted
    for
  • Run your model yᵢ = xᵢβ + εᵢ
  • Run the regression of the squared residuals e² on
    all levels, squares and cross-products of the xs
  • Use n·R² from this regression (it has a χ²
    distribution with dof equal to the number of
    terms in the auxiliary regression)
  • Test
  • H0: no heteroscedasticity of this form (does not
    say NO heteroscedasticity in general! this form
    is rather general though)
  • H1: heteroscedasticity of the assumed form

19
Testing for auto
  • Durbin-Watson approach
  • The alternative hypothesis states that there is
    autocorrelation of order 1 (the two closest es
    are correlated)
  • Run your model yᵢ = xᵢβ + εᵢ
  • Get your residuals e
  • Compute the statistic
    DW = Σ(eₜ - eₜ₋₁)² / Σeₜ² ≈ 2(1 - ρ̂)

20
Testing for auto
  • Durbin-Watson approach continued
  • If there is no (or weak) autocorrelation, ρ̂ is
    (close to) 0, so the whole statistic is (close
    to) 2.
  • IF DW < 2
  • blue region of the chart: positive
    autocorrelation, green: inconclusive,
  • no colour: no autocorrelation
  • IF DW > 2
  • blue: negative autocorrelation, green:
    inconclusive, no colour: no autocorrelation
  • YOU CAN USE IT EVEN ON SMALL SAMPLES

21
Testing for auto
  • Breusch-Godfrey approach
  • There is autocorrelation of order s (the s
    closest es are correlated)
  • Run your model yᵢ = xᵢβ + εᵢ
  • Get your residuals e
  • Run the auxiliary regression on the lagged es in
    the form
    eₜ = xₜβ + ρ₁eₜ₋₁ + ρ₂eₜ₋₂ + … + ρₛeₜ₋ₛ + uₜ
  • Test the hypothesis that your ρs are zero
  • The nice part is that T·R² (where T is the number
    of observations) of this auxiliary regression
    allows you to test this as a combined hypothesis
    with a χ² distribution with s dof, where s is the
    no. of lags you take into account.
  • MUCH NICER THAN DW, BUT REQUIRES BIG SAMPLES!

22
CONCLUSIONS ABOUT AUTO AND HETERO
  • What they both mean is that you can no longer
    trust the estimates of the standard errors
  • You can still trust the estimators of your model,
    but you cannot test whether they are non-zero (no
    valid hypothesis testing)
  • If you have autocorrelation but a very big
    sample, you are asymptotically OK, so you need
    not worry
  • A big sample does not help for heteroscedasticity
    though
  • In a small sample autocorrelation cannot be
    eliminated either
  • What we have as a response is the GENERALISED
    LEAST SQUARES estimator ⇒ GLS gives the same kind
    of estimates as OLS but helps to overcome the
    misestimation of the standard errors.
  • If there are no problems with auto and hetero,
    GLS is less efficient than OLS (do not overuse
    it!)