Title: Autocorrelated Disturbances
1. Autocorrelated Disturbances
- In the Classical Normal Linear Regression Model
2. The Linear Regression Model
- Yt = b0 + b1X1t + b2X2t + ... + bKXKt + et
- t = 1, 2, ..., T
- where
- Yt = dependent variable
- Xkt = explanatory variables (k = 1, ..., K)
- et = stochastic disturbance (unobservable)
- bk = regression parameters (k = 0, 1, ..., K)
3. The Classical Assumptions
- 1. The regression model is linear in the coefficients, is correctly specified, and has an additive error term.
- 2. E(et) = 0.
- 3. All explanatory variables are uncorrelated with the error term.
- 4. Errors corresponding to different observations are uncorrelated with each other.
- 5. The error term has a constant variance.
- 6. No explanatory variable is an exact linear function of any other explanatory variable(s).
4. The Classical Assumptions
- 7. The error term is normally distributed: et ~ N(0, s2).
5. Autocorrelated Disturbances
- Violations of Classical Assumptions 4 and 7.
- The et are not independent (7).
- Errors occurring at time t are related to errors occurring at time t-s (4), i.e.,
- E(et·et-s) ≠ 0 for t > s.
6. First-Order Serial Correlation
- et = r·et-1 + ut, for all t
- where
- |r| < 1
- ut iid N(0, su2), for all t
- E(ut·us) = 0 for all t > s
- E(ut·et-1) = 0 for all t
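The first-order process just defined can be checked numerically. This is a minimal sketch, assuming an illustrative r = 0.8, sample size, and seed (none of these come from the slides):

```python
import numpy as np

# Simulate e_t = r*e_{t-1} + u_t with u_t iid N(0, s_u^2).
# T, r, and the seed are illustrative choices, not from the slides.
def simulate_ar1_errors(T, r, s_u=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, s_u, T)
    e = np.zeros(T)
    for t in range(1, T):
        e[t] = r * e[t - 1] + u[t]
    return e

e = simulate_ar1_errors(10_000, r=0.8)
# The sample lag-1 autocorrelation of e should be close to r.
rho_hat = np.corrcoef(e[1:], e[:-1])[0, 1]
```

With a large sample, rho_hat lands close to the true r, which is the pattern the next two slides describe visually.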
7. Positive Serial Correlation
- If r is positive, the error term tends to have the same sign from one observation to the next.
- A large external shock to the economy in one period may have an effect over several periods.
- Visual inspection: a plot of the residuals from your regression will tend to be positive for a number of observations, then negative for several more, then back again.
8. Negative Serial Correlation
- If r is negative, the error term tends to switch signs from negative to positive and back again in consecutive observations.
- There is some sort of cycle in the distribution of random errors.
- This is sometimes a problem with first differences or growth rates of time-series observations, because changes in a variable often follow a cyclical pattern.
9. (figure slide; no transcript available)
10. Interpretation of r
- It can be shown that the covariance between any two successive error terms, say et and et-1, is equal to r·s2, where s2 = Var(et).
- Therefore, Cov(et, et-1)/Var(et) = r·s2/s2 = r,
- which, by definition, is the coefficient of correlation between et and et-1.
11. Interpretation of r
- Similarly, it can be shown that r^2 is the coefficient of correlation between et and et-2, r^3 is the coefficient of correlation between et and et-3, and so forth.
- Thus, r measures the degree of the relationship between et and et-1.
- -1 < r < 1
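The claim that the correlation between et and et-2 equals r^2 can be verified by simulation. A small sketch, with r = 0.6 and the sample size chosen purely for illustration:

```python
import numpy as np

# For an AR(1) error process, corr(e_t, e_{t-2}) should be close to r^2.
# r = 0.6 and T are illustrative choices, not from the slides.
rng = np.random.default_rng(1)
T, r = 20_000, 0.6
e = np.zeros(T)
for t in range(1, T):
    e[t] = r * e[t - 1] + rng.normal()

rho2_hat = np.corrcoef(e[2:], e[:-2])[0, 1]  # should be near r**2 = 0.36
```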
12. Other Forms of Serial Correlation
- Seasonally based serial correlation:
- et = r·et-4 + ut (quarterly data)
- Second-order serial correlation:
- et = r1·et-1 + r2·et-2 + ut
- General model of serial correlation:
- et = r1·et-1 + r2·et-2 + ... + rp·et-p + ut
13. Consequences of Autocorrelation
- OLS estimates of bk are still unbiased and consistent.
- OLS estimates of bk are no longer efficient, i.e., they no longer have the smallest variance among all linear unbiased estimators.
14. Remedies
- Cochrane-Orcutt Method (unknown r)
- Step 1: Apply OLS to the original model and use the resulting residuals et to calculate an estimate of r (for example, by regressing et on et-1).
15. Cochrane-Orcutt Method
- Step 2: Lagging the model one period and pre-multiplying by r yields
- r·Yt-1 = r·b0 + r·b1X1,t-1 + ... + r·bKXK,t-1 + r·et-1,
- which we can subtract from our original equation to get the rho-differenced model:
- Yt - r·Yt-1 = b0(1-r) + b1(X1t - r·X1,t-1) + ... + bK(XKt - r·XK,t-1) + ut
16. Cochrane-Orcutt Method
- We can then use OLS to estimate the rho-differenced model.
- From this regression, we can use the residuals, ut, to get a new estimate of r, which we can plug into the equation above to get new estimates of the slope coefficients.
- We repeat this procedure, iterating until the slope estimates converge.
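The iteration just described can be sketched in a few lines. This is a minimal version for a single-regressor model on simulated data; the function name, data, and stopping rule are illustrative assumptions, and production implementations add many refinements:

```python
import numpy as np

# A minimal Cochrane-Orcutt iteration for Y = b0 + b1*X + e with AR(1)
# errors. All data and names here are illustrative, not from the slides.
def cochrane_orcutt(y, x, tol=1e-6, max_iter=50):
    X = np.column_stack([np.ones_like(y), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # Step 1: OLS on the original model
    r = 0.0
    for _ in range(max_iter):
        e = y - X @ b                          # residuals from the current fit
        r_new = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])  # regress e_t on e_{t-1}
        # Step 2: estimate the rho-differenced model by OLS
        y_star = y[1:] - r_new * y[:-1]
        X_star = X[1:] - r_new * X[:-1]        # intercept column becomes (1 - r)
        b = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
        if abs(r_new - r) < tol:
            r = r_new
            break
        r = r_new
    return b, r

# Simulated data with true b0 = 1, b1 = 2, r = 0.7 (illustrative):
rng = np.random.default_rng(2)
T = 2_000
x = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

b_hat, r_hat = cochrane_orcutt(y, x)
```

Because the intercept column of the transformed design is already (1 - r), the fitted coefficient on it recovers b0 directly.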
17. Hildreth-Lu Method
- Equivalent to the Cochrane-Orcutt method.
- Use the rho-differenced model, searching over a grid of different values for r between -1 and 1 and selecting the value which minimizes the sum of squared residuals for the transformed model.
- Typical grid:
- r = -0.95, -0.90, -0.85, ..., 0.85, 0.90, 0.95
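The grid search above is straightforward to sketch. This version uses the slide's grid; the simulated single-regressor data and all names are illustrative assumptions:

```python
import numpy as np

# Hildreth-Lu: fit the rho-differenced model at each grid value of r and
# keep the r with the smallest sum of squared residuals.
def hildreth_lu(y, x, grid=np.arange(-0.95, 0.951, 0.05)):
    X = np.column_stack([np.ones_like(y), x])
    best_r, best_b, best_ssr = None, None, np.inf
    for r in grid:
        y_star = y[1:] - r * y[:-1]          # rho-differenced model
        X_star = X[1:] - r * X[:-1]
        b = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
        resid = y_star - X_star @ b
        ssr = resid @ resid                  # sum of squared residuals
        if ssr < best_ssr:
            best_r, best_b, best_ssr = r, b, ssr
    return best_r, best_b

# Simulated data with true b1 = 2 and r = 0.7 (illustrative):
rng = np.random.default_rng(3)
T = 2_000
x = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

r_hat, b_hat = hildreth_lu(y, x)  # r_hat lands on the grid point nearest 0.7
```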
18. Diagnostic Tests
- The most widely used test for the presence of autocorrelation is the Durbin-Watson test.
- H0: r = 0
- HA: r > 0
- Test statistic:
- DW = [sum from t=2 to T of (et - et-1)^2] / [sum from t=1 to T of et^2]
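The DW statistic is a one-liner to compute from a residual series. A sketch on simulated residuals (sample sizes, seed, and r = 0.7 are illustrative):

```python
import numpy as np

# DW = sum((e_t - e_{t-1})^2) / sum(e_t^2), approximately 2(1 - r_hat).
def durbin_watson(e):
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(4)

# Independent residuals: DW should be near 2.
dw_iid = durbin_watson(rng.normal(size=5_000))

# Positively autocorrelated residuals (r = 0.7): DW near 2(1 - 0.7) = 0.6.
e = np.zeros(5_000)
for t in range(1, 5_000):
    e[t] = 0.7 * e[t - 1] + rng.normal()
dw_pos = durbin_watson(e)
```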
19. Durbin-Watson Test
- Decision rule for HA: r > 0 (positive autocorrelation)
- Three possible outcomes:
- 1. Reject H0 if DW < DWL.
- 2. Do not reject H0 if DW > DWU.
- 3. The test is inconclusive if DWL ≤ DW ≤ DWU.
20. Durbin-Watson Test
- Decision rule for HA: r ≠ 0 (two-sided)
- Three possible outcomes:
- 1. Reject H0 if DW < DWL (positive autocorrelation) or if DW > 4 - DWL (negative autocorrelation).
- 2. Do not reject H0 if DWU < DW < 4 - DWU.
- 3. The test is inconclusive if DWL ≤ DW ≤ DWU or if 4 - DWU ≤ DW ≤ 4 - DWL.
21. Example
Dependent Variable: CH. Method: Least Squares. Sample (adjusted): 1952:1 to 1998:4. Included observations: 188 after adjusting endpoints.

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          78.31635      10.51183     7.450307      0.0000
FTBS3      -2.405857     0.390890     -6.154824     0.0000
YPDH       0.853092      0.007402     115.2485      0.0000
NETWRTH    0.013745      0.000887     15.49185      0.0000

R-squared 0.999371. Adjusted R-squared 0.999361. Mean dependent var 2748.335. S.D. dependent var 1187.543. S.E. of regression 30.03095. Sum squared resid 165941.9. Akaike info criterion 9.663381. Schwarz criterion 9.732242. Log likelihood -904.3578. F-statistic 97410.91. Prob(F-statistic) 0.000000. Durbin-Watson stat 0.619060.

(figure: number line from 0 to 4 showing DW = 0.619 in the rejection region for positive serial correlation; DWL = 1.61 and DWU = 1.74 bound the inconclusive region, with the acceptance region of no positive serial correlation beyond DWU)
22. Durbin-Watson Test
- Values of the DW statistic close to 2 will generally lead you to not reject H0.
- Values close to zero (or to 4) will generally lead to rejection of H0.
- Since DW is approximately 2(1 - r), the statistic can also be used to obtain a consistent estimate of r for the Cochrane-Orcutt procedure: r_hat = 1 - DW/2.
23. Lagged Dependent Variables
- The Durbin-Watson statistic is not a valid test statistic when the equation includes a lagged dependent variable as one of the explanatory variables.
- Use instead the Durbin h test statistic, which is normally distributed under H0:
- h = r_hat · sqrt(T / (1 - T·Var(c)))
- where c is the estimated coefficient on the lagged dependent variable.
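A small sketch of the h computation, using r_hat = 1 - DW/2. The numeric inputs at the bottom are hypothetical illustrations, not values from the slides:

```python
import numpy as np

# Durbin h: h = r_hat * sqrt(T / (1 - T * var_c)), where r_hat ~ 1 - DW/2
# and var_c is the estimated variance of the coefficient on the lagged
# dependent variable. Under H0, h is approximately N(0, 1).
def durbin_h(dw, T, var_c):
    r_hat = 1.0 - dw / 2.0
    denom = 1.0 - T * var_c
    if denom <= 0:
        raise ValueError("h is undefined when T * var_c >= 1")
    return r_hat * np.sqrt(T / denom)

# Hypothetical inputs for illustration only:
h = durbin_h(dw=1.80, T=161, var_c=0.0009)  # about 1.37: do not reject H0 at 5%
```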
24. Alternative Test for Autocorrelation
- Durbin's m-test uses the OLS residuals from the original (untransformed) model.
- Estimate the model
- et = a0 + a1X1t + ... + aKXKt + g1·et-1 + g2·et-2 + ... + gp·et-p + ut
- Then use an F-test of
- H0: g1 = g2 = ... = gp = 0
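The auxiliary regression and F-test above can be sketched directly with least squares. The data are simulated and the function name is illustrative:

```python
import numpy as np

# Durbin's m-test: regress the OLS residuals e_t on the original
# regressors plus p lagged residuals, then F-test the lag coefficients.
def durbin_m_fstat(y, x, p=2):
    Z = np.column_stack([np.ones_like(y), x])
    e = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # OLS residuals

    e_t = e[p:]                                        # dependent variable
    lags = np.column_stack([e[p - j:-j] for j in range(1, p + 1)])
    Xr = Z[p:]                                         # restricted: no lags
    Xu = np.column_stack([Xr, lags])                   # unrestricted: add lags

    def ssr(A, b):
        c = np.linalg.lstsq(A, b, rcond=None)[0]
        resid = b - A @ c
        return resid @ resid

    ssr_r, ssr_u = ssr(Xr, e_t), ssr(Xu, e_t)
    n, k = Xu.shape
    return ((ssr_r - ssr_u) / p) / (ssr_u / (n - k))   # F(p, n - k) under H0

# Simulated data with strongly autocorrelated errors (r = 0.7, illustrative):
rng = np.random.default_rng(5)
T = 1_000
x = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

F = durbin_m_fstat(y, x)  # far above any conventional F critical value here
```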
25. Development of the Money Demand Equation in the Colby Quarterly Model of the US Economy
- Md = f(R, Y, P)
26. Dependent Variable: FM2. Method: Least Squares. Sample (adjusted): 1959:1 to 1999:2. Included observations: 162 after adjusting endpoints.

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C            -4443.789     366.9164     -12.11118     0.0000
FTBS3        -80.04056     6.265476     -12.77486     0.0000
LOG(JGDPB)   1179.446      153.5452     7.681426      0.0000
YPDH         0.601081      0.070789     8.491125      0.0000

R-squared 0.979978. Adjusted R-squared 0.979598. Mean dependent var 1782.670. S.D. dependent var 1294.507. S.E. of regression 184.9002. Sum squared resid 5401715. Akaike info criterion 13.30189. Schwarz criterion 13.37813. Log likelihood -1073.453. F-statistic 2577.838. Prob(F-statistic) 0.000000. Durbin-Watson stat 0.117095.

(slide annotation: Interpret this coefficient)
27. (figure slide; no transcript available)
28. Dependent Variable: FM2. Method: Least Squares. Sample (adjusted): 1959:2 to 1999:2. Included observations: 161 after adjusting endpoints. Convergence achieved after 21 iterations.

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C            -1394.462     594.5423     -2.345439     0.0203
FTBS3        -3.447721     1.717735     -2.007131     0.0465
LOG(JGDPB)   254.9655      269.6233     0.945636      0.3458
YPDH         0.081295      0.048020     1.692938      0.0925
AR(1)        1.011525      0.002388     423.5000      0.0000

R-squared 0.999830. Adjusted R-squared 0.999826. Mean dependent var 1791.954. S.D. dependent var 1293.124. S.E. of regression 17.07855. Sum squared resid 45501.59. Akaike info criterion 8.544087. Schwarz criterion 8.639783. Log likelihood -682.7990. F-statistic 229279.2. Prob(F-statistic) 0.000000. Durbin-Watson stat 0.668768.
29. (figure slide; no transcript available)
30. Consider our rho-differenced GLS equation:
- Yt - r·Yt-1 = b0(1-r) + b1(Xt - r·Xt-1) + ut
- What happens when r = 1? The intercept b0(1-r) drops out, and the model reduces to a regression in first differences.
31. Dependent Variable: D(FM2). Method: Least Squares. Sample (adjusted): 1959:3 to 1999:2. Included observations: 160 after adjusting endpoints. Convergence achieved after 7 iterations.

Variable      Coefficient   Std. Error   t-Statistic   Prob.
C             32.17894      6.379597     5.044039      0.0000
D(FTBS3)      -0.575563     1.070836     -0.537490     0.5917
DLOG(JGDPB)   -539.2069     357.1017     -1.509953     0.1331
D(YPDH)       0.031176      0.025151     1.239572      0.2170
AR(1)         0.811573      0.047575     17.05865      0.0000

R-squared 0.653800. Adjusted R-squared 0.644865. Mean dependent var 26.33708. S.D. dependent var 21.27704. S.E. of regression 12.67967. Sum squared resid 24919.96. Akaike info criterion 7.948628. Schwarz criterion 8.044727. Log likelihood -630.8902. F-statistic 73.17937. Prob(F-statistic) 0.000000. Durbin-Watson stat 2.217664.
32. Renormalize the Relationship
- Re-write the demand equation in terms of FTBS3 so that it becomes an interest-rate predicting equation.
- Same theory: money demand is inversely related to interest rates.
33. Dependent Variable: D(FTBS3). Method: Least Squares. Sample (adjusted): 1959:2 to 1999:2. Included observations: 161 after adjusting endpoints.

Variable      Coefficient   Std. Error   t-Statistic   Prob.
C             -0.189541     0.144462     -1.312041     0.1914
D(FM2)        -0.006645     0.002944     -2.257542     0.0254
DLOG(JGDPB)   19.10078      9.698059     1.969547      0.0507
D(YPDH)       0.007948      0.002101     3.782981      0.0002

R-squared 0.107755. Adjusted R-squared 0.090705. Mean dependent var 0.010435. S.D. dependent var 0.820131. S.E. of regression 0.782052. Sum squared resid 96.02193. Akaike info criterion 2.370739. Schwarz criterion 2.447295. Log likelihood -186.8445. F-statistic 6.320184. Prob(F-statistic) 0.000449. Durbin-Watson stat 1.801651.
34. (figure slide; no transcript available)
35. Dependent Variable: FTBS3. Method: Least Squares. Sample (adjusted): 1959:2 to 1999:2. Included observations: 161 after adjusting endpoints.
Estimation equation: FTBS3 = FTBS3(-1) + C(1) + C(2)*D(FM2) + C(3)*DLOG(JGDPB) + C(4)*D(YPDH)

         Coefficient   Std. Error   t-Statistic   Prob.
C(1)     -0.189541     0.144462     -1.312041     0.1914
C(2)     -0.006645     0.002944     -2.257542     0.0254
C(3)     19.10078      9.698059     1.969547      0.0507
C(4)     0.007948      0.002101     3.782981      0.0002

R-squared 0.913762. Adjusted R-squared 0.912114. Mean dependent var 5.957019. S.D. dependent var 2.638003. S.E. of regression 0.782052. Sum squared resid 96.02193. Akaike info criterion 2.370739. Schwarz criterion 2.447295. Log likelihood -186.8445. F-statistic 554.5129. Prob(F-statistic) 0.000000. Durbin-Watson stat 1.801651.
36. (figure slide; no transcript available)