Title: Forecasting with Regression Models
Ka-fu Wong, University of Hong Kong
2. Linear regression models
- Endogenous variable (the left-hand-side variable the model explains)
- Exogenous variables (determined outside the model)
- Explanatory variables (the right-hand-side variables)
- As a rule, rather than an exception, all variables are endogenous.
3. Conditional forecasting
The h-step-ahead forecast of y given some assumed h-step-ahead value of the exogenous variable, $x_{T+h}$.
Because it is based on assumed h-step-ahead values of the exogenous variables, this is also called scenario analysis or contingency analysis.
4. Uncertainty of forecast
- Specification uncertainty / error: our models are only approximations (since no one knows the truth). E.g., we adopt an AR(1) model but the truth is AR(2).
  - Almost impossible to account for via a forecast interval.
- Parameter uncertainty / sampling error: parameters are estimated from a data sample, so the estimate will always differ from the truth. The difference is called sampling error.
  - Can be accounted for via a forecast interval if we do the calculation carefully.
- Innovation uncertainty: errors that cannot be avoided even if we know the true model and true parameters. This is unavoidable.
  - Often accounted for via a forecast interval using standard software.
5. Quantifying the innovation and parameter uncertainty
Consider the very simple case in which x has a zero mean.
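The slide's formulas can be sketched as follows, assuming the simple zero-mean model $y_t = \beta x_t + \varepsilon_t$, $\varepsilon_t \sim N(0, \sigma^2)$ named above:

$$\hat{y}_{T+h} = \hat{\beta} x_{T+h}, \qquad y_{T+h} - \hat{y}_{T+h} = (\beta - \hat{\beta})\, x_{T+h} + \varepsilon_{T+h},$$

so the prediction-error variance combines parameter and innovation uncertainty:

$$\operatorname{var}(y_{T+h} - \hat{y}_{T+h}) = x_{T+h}^2 \operatorname{var}(\hat{\beta}) + \sigma^2 = \sigma^2 \left( 1 + \frac{x_{T+h}^2}{\sum_{t=1}^{T} x_t^2} \right).$$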
6. Density forecast that accounts for parameter uncertainty
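Under the same simple model, a sketch of the resulting density forecast is

$$y_{T+h} \sim N\!\left( \hat{\beta} x_{T+h},\; \hat{\sigma}^2 \left( 1 + \frac{x_{T+h}^2}{\sum_{t=1}^{T} x_t^2} \right) \right).$$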
7. Interval forecasts that do not acknowledge parameter uncertainty
8. Interval forecasts that do acknowledge parameter uncertainty
The closer $x_{T+h}$ is to its mean, the smaller the prediction-error variance.
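A sketch of the two 95% intervals under the same simple model: ignoring parameter uncertainty,

$$\hat{\beta} x_{T+h} \pm 1.96\, \hat{\sigma},$$

and acknowledging it,

$$\hat{\beta} x_{T+h} \pm 1.96\, \hat{\sigma} \sqrt{1 + \frac{x_{T+h}^2}{\sum_{t=1}^{T} x_t^2}}.$$

The second formula makes the point explicit: the variance grows with $x_{T+h}^2$, so it is smallest when $x_{T+h}$ is at its (zero) mean.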
9. Unconditional forecasting models
Forecast based on some other model of x as well, say, by assuming x follows an AR(1) process.
10. h-step-ahead forecast without modeling x explicitly (based on unconditional forecasting models)
- Standing at time T, with observations $(x_1,y_1), (x_2,y_2), \ldots, (x_T,y_T)$
- 1-step-ahead:
  - $y_t = \beta_0 + \beta_1 x_{t-1} + \varepsilon_t$
  - $y_{T+1} = \beta_0 + \beta_1 x_T + \varepsilon_{T+1}$
- 2-step-ahead:
  - $y_t = \beta_0 + \beta_1 x_{t-2} + \varepsilon_t$
  - $y_{T+2} = \beta_0 + \beta_1 x_T + \varepsilon_{T+2}$
- ...
- h-step-ahead (see the sketch after this list):
  - $y_t = \beta_0 + \beta_1 x_{t-h} + \varepsilon_t$
  - $y_{T+h} = \beta_0 + \beta_1 x_T + \varepsilon_{T+h}$
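A minimal sketch of this idea in Python (the slides use EViews; the data here are simulated and all coefficient values are illustrative): for an h-step-ahead forecast, regress $y_t$ on $x_{t-h}$ and plug in the last observed $x_T$.

```python
import numpy as np

rng = np.random.default_rng(1)
T, h = 200, 2                       # sample size and forecast horizon
x = rng.normal(size=T)
y = np.full(T, np.nan)
y[h:] = 1.0 + 0.8 * x[:-h] + rng.normal(scale=0.5, size=T - h)  # y_t = b0 + b1*x_{t-h} + e_t

# OLS of y_t on x_{t-h}, using the observations where both are available
X = np.column_stack([np.ones(T - h), x[:-h]])
b, *_ = np.linalg.lstsq(X, y[h:], rcond=None)

# h-step-ahead forecast: plug the last observed x_T into the fitted equation
y_hat = b[0] + b[1] * x[-1]
print(f"estimates: b0={b[0]:.3f}, b1={b[1]:.3f}; forecast y(T+h)={y_hat:.3f}")
```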
11. h-step-ahead forecast without modeling x explicitly (based on unconditional forecasting models)
- Special cases:
  - The model contains only time trends and seasonal components.
  - Because these components are perfectly predictable, no separate model of x is needed.
12. Distributed lags
y depends on distributed lags of past x's:
$$y_t = \beta_0 + \sum_{i=1}^{N_x} \delta_i x_{t-i} + \varepsilon_t$$
Parameters to be estimated: $\beta_0, \delta_1, \ldots, \delta_{N_x}$
13. Polynomial distributed lags
The lag weights are constrained to lie on a low-order polynomial in the lag, e.g. $\delta_i = a + b\,i + c\,i^2$.
Parameters to be estimated: $\beta_0, a, b, c$
14. Rational distributed lags
Example: $A(L) = a_0 + a_1 L$ and $B(L) = b_0 + b_1 L$, so $B(L) y_t = A(L) x_t + B(L) \varepsilon_t$ becomes
$$b_0 y_t + b_1 y_{t-1} = a_0 x_t + a_1 x_{t-1} + b_0 \varepsilon_t + b_1 \varepsilon_{t-1}$$
$$y_t = \left( -b_1 y_{t-1} + a_0 x_t + a_1 x_{t-1} + b_0 \varepsilon_t + b_1 \varepsilon_{t-1} \right) / b_0$$
$$y_t = -\frac{b_1}{b_0} y_{t-1} + \frac{a_0}{b_0} x_t + \frac{a_1}{b_0} x_{t-1} + \varepsilon_t + \frac{b_1}{b_0} \varepsilon_{t-1}$$
15. Regression model with AR(1) disturbance
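A standard formulation of this model is

$$y_t = \beta_0 + \beta_1 x_t + \varepsilon_t, \qquad \varepsilon_t = \varphi\, \varepsilon_{t-1} + v_t, \qquad v_t \sim WN(0, \sigma^2).$$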
16. ARMA(p,q) models: equivalent to a model with only a constant regressor and ARMA(p,q) disturbances.
17. Transfer function models
A transfer function is a mathematical
representation of the relation between the input
and output of a system.
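A common transfer-function specification, combining a rational distributed lag on the input with an ARMA disturbance, is

$$y_t = \frac{A(L)}{B(L)}\, x_t + \frac{C(L)}{D(L)}\, \varepsilon_t.$$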
18. Vector autoregressions, VAR(p): allows cross-variable dynamics
VAR(1) of two variables:
- The variable vector consists of two elements.
- The innovations are allowed to be correlated.
- Regressors consist of the variable vector lagged one period only.
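Written out (in the $\varphi_{12}$ notation that the predictive-causality slides below rely on), the bivariate VAR(1) is

$$y_{1,t} = \varphi_{11} y_{1,t-1} + \varphi_{12} y_{2,t-1} + e_{1,t}$$
$$y_{2,t} = \varphi_{21} y_{1,t-1} + \varphi_{22} y_{2,t-1} + e_{2,t}$$

with $\operatorname{cov}(e_{1,t}, e_{2,t})$ allowed to be nonzero.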
19. Estimation of vector autoregressions
Run OLS regressions equation by equation (see the sketch below).
OLS estimation turns out to have very good statistical properties when each equation has the same regressors, as in standard VARs.
Otherwise, a more complicated estimation procedure called seemingly unrelated regression, which explicitly accounts for correlation across equation disturbances, would be needed to obtain estimates with good statistical properties.
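A minimal Python sketch of equation-by-equation OLS for a standard bivariate VAR(1) (simulated data; the coefficient matrix is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])          # hypothetical VAR(1) coefficient matrix
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=2)

# Every equation has the same regressors: a constant and both lagged
# variables, so equation-by-equation OLS suffices (no need for SUR).
X = np.column_stack([np.ones(T - 1), y[:-1]])
for i in range(2):
    beta, *_ = np.linalg.lstsq(X, y[1:, i], rcond=None)
    print(f"equation {i + 1}: [const, y1(-1), y2(-1)] =", np.round(beta, 3))
```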
20. Estimation of vector autoregressions: the choice of order
Use AIC and SIC.
21. Forecasting with estimated vector autoregressions
Given the parameters, or parameter estimates, forecasts are built up recursively:
$$(y_{1,T},\, y_{2,T}) \rightarrow (\hat{y}_{1,T+1},\, \hat{y}_{2,T+1}) \rightarrow (\hat{y}_{1,T+2},\, \hat{y}_{2,T+2}) \rightarrow (\hat{y}_{1,T+3},\, \hat{y}_{2,T+3}) \rightarrow \cdots$$
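A minimal sketch of the recursion in Python (assumed coefficients; future innovations are forecast at their zero mean):

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.6]])          # hypothetical VAR(1) coefficient matrix
y_T = np.array([1.0, -0.5])         # last observed values (y_{1,T}, y_{2,T})

y_hat = y_T
for h in (1, 2, 3):
    # Chain rule of forecasting: feed each forecast back in as data
    y_hat = A @ y_hat
    print(f"h={h}: y1_hat={y_hat[0]:.3f}, y2_hat={y_hat[1]:.3f}")
```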
22. Predictive causality
- Two principles:
  - Cause should occur before effect.
  - A causal series should contain information useful for forecasting that is not available in the other series.
- Predictive causality in a VAR:
  - $y_2$ does not cause $y_1$ if $\varphi_{12} = 0$.
  - In a bivariate VAR, noncausality in the 1-step-ahead forecast implies noncausality in h-step-ahead forecasts.
23. Predictive causality
- In VARs of higher dimension, noncausality in 1-step-ahead forecasts need not imply noncausality in h-step-ahead forecasts.
- Example:
  - Variable i may 1-step-cause variable j.
  - Variable j may 1-step-cause variable k.
  - Then variable i 2-step-causes variable k even though it does not 1-step-cause variable k.
24. Impulse response functions
All univariate ARMA(p,q) processes can be written in moving-average form.
We can always normalize the innovations with a constant m.
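A standard form of this representation, consistent with the next slide, is

$$y_t = B(L)\, \varepsilon_t = \sum_{i=0}^{\infty} b_i\, \varepsilon_{t-i}, \qquad b_0 = 1,$$

where rescaling by the constant $m = \sigma$ (the innovation standard deviation), $\varepsilon_t = \sigma v_t$ with $\operatorname{var}(v_t) = 1$, makes a one-unit shock to the normalized innovation a one-standard-deviation shock to $\varepsilon_t$.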
25. Impulse response functions
Impact of $\varepsilon_t$ on $y_t$:
- After normalization, a 1-unit increase in the normalized innovation is equivalent to a one-standard-deviation increase in $\varepsilon_t$.
- A 1-unit increase in $\varepsilon_t$ has a $b_0$ impact on $y_t$.
- A one-standard-deviation increase in $\varepsilon_t$ has a $b_0 \sigma$ impact on $y_t$, a $b_1 \sigma$ impact on $y_{t+1}$, etc.
26. AR(1)
27. VAR(1)
28. Normalizing the VAR by the Cholesky factor
If y1 is ordered first:
Example: y1 = GDP, y2 = price level.
An innovation to GDP has effects on current GDP and the current price level. An innovation to the price level has effects only on the current price level, not on current GDP.
29. Features of the Cholesky decomposition
- The innovations of the transformed system are in standard-deviation units.
- The current innovations in the normalized representation can have non-unit coefficients.
- The first equation has only one current innovation, $e_{1,t}$. The second equation has both current innovations.
- The normalization yields a zero covariance between the innovations (see the numerical sketch after this list).
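A minimal numerical sketch of these features in Python (the covariance matrix below is an assumption for illustration):

```python
import numpy as np

Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])      # hypothetical innovation covariance matrix
P = np.linalg.cholesky(Sigma)       # lower triangular, Sigma = P @ P.T

# Normalized innovations v_t = P^{-1} e_t: unit variances, zero covariance.
P_inv = np.linalg.inv(P)
print("Cholesky factor P:\n", np.round(P, 3))
print("cov of normalized innovations:\n", np.round(P_inv @ Sigma @ P_inv.T, 3))
# Because P is lower triangular, the first equation loads only on v_{1,t},
# while the second equation loads on both v_{1,t} and v_{2,t}.
```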
30. Normalizing the VAR by the Cholesky factor
If y2 is ordered first:
Example: y1 = GDP, y2 = price level.
An innovation to the price level has effects on current GDP and the current price level. An innovation to GDP has effects only on current GDP, not on the current price level.
31. Impulse response functions
- With a bivariate autoregression, we can compute four sets of impulse-response functions:
  - $y_1$ innovations ($e_{1,t}$) on $y_1$
  - $y_1$ innovations ($e_{1,t}$) on $y_2$
  - $y_2$ innovations ($e_{2,t}$) on $y_1$
  - $y_2$ innovations ($e_{2,t}$) on $y_2$
32. Variance decomposition
- How much of the h-step-ahead forecast-error variance of variable i is explained by innovations to variable j, for h = 1, 2, ...?
- With a bivariate autoregression, we can compute four sets of variance decompositions (see the sketch after this list):
  - $y_1$ innovations ($e_{1,t}$) on $y_1$
  - $y_1$ innovations ($e_{1,t}$) on $y_2$
  - $y_2$ innovations ($e_{2,t}$) on $y_1$
  - $y_2$ innovations ($e_{2,t}$) on $y_2$
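A minimal Python sketch using statsmodels on simulated data (not the housing series below); it produces both the four impulse-response sets and the four variance-decomposition sets for a bivariate VAR:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T = 300
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])          # hypothetical coefficients
cov = [[1.0, 0.3], [0.3, 1.0]]      # correlated innovations
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.multivariate_normal([0, 0], cov)

res = VAR(y).fit(1)
irf = res.irf(12)                   # orthogonalized (Cholesky) impulse responses
fevd = res.fevd(12)                 # forecast-error variance decompositions
print(irf.orth_irfs.shape)          # horizon x responding variable x shock
fevd.summary()
```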
33. Example: y1 = housing starts, y2 = housing completions (1968:01 to 1996:06)
group fig112 starts comps
freeze(Figure112) fig112.line(d)
Observation 1: Seasonal pattern.
Observation 2: Highly cyclical, moving with the business cycle.
Observation 3: Completions lag starts.
34. Correlogram and Ljung-Box statistics of housing starts (1968:01 to 1991:12)
freeze(Table112) starts.correl(24)
35. Correlogram and Ljung-Box statistics of housing starts (1968:01 to 1991:12), continued
36. Correlogram and Ljung-Box statistics of housing completions (1968:01 to 1991:12)
freeze(Table113) comps.correl(24)
37. Correlogram and Ljung-Box statistics of housing completions (1968:01 to 1991:12), continued
38. Starts and completions: sample cross-correlations
freeze(Figure115) fig112.cross(24)
(The group fig112 contains starts and comps, as defined on slide 33.)
39. VAR regression by OLS (1)
equation Table114.ls starts c starts(-1) starts(-2) starts(-3) starts(-4) comps(-1) comps(-2) comps(-3) comps(-4)
40-42. VAR regression by OLS (1), continued (estimation output tables)
43. VAR regression by OLS (2)
equation Table116.ls comps c starts(-1) starts(-2) starts(-3) starts(-4) comps(-1) comps(-2) comps(-3) comps(-4)
44-46. VAR regression by OLS (2), continued (estimation output tables)
47. Predictive causality test
group tbl108 comps starts
freeze(Table118) tbl108.cause(4)
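A minimal Python sketch paralleling the cause(4) test above, using statsmodels on simulated data (not the actual starts/comps series) in which x genuinely helps predict y:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
T = 300
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

# Tests whether the second column Granger-causes the first, at lags 1..4
data = np.column_stack([y, x])
grangercausalitytests(data, maxlag=4)
```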
48. Impulse response functions (response to one-standard-deviation innovations)
var fig1110.ls 1 4 starts comps
freeze(Figure1110) fig1110.impulse(36,m)
49. Variance decomposition
freeze(Figure1111) fig1110.decomp(36,m)
50. Starts: history (1968:01 to 1991:12) and forecast (1992:01 to 1996:06)
51. Starts: history (1968:01 to 1991:12) and forecast (1992:01 to 1996:06), continued
52. Completions: history (1968:01 to 1991:12) and forecast (1992:01 to 1996:06)
53. Completions: history (1968:01 to 1991:12), forecast, and realization (1992:01 to 1996:06)
54. End