Title: Introduction to Time Series Analysis
1. Introduction to Time Series Analysis
2. Stationarity in Time Series
- In time series analysis, we attempt to predict the future path of a variable based on information on its past behavior, meaning that the variable exhibits some regularities
- A valuable way to identify such regularities is through the concept of stationarity
- We say that a time series variable Yt is stationary if
  - The variable has a constant mean at all points in time
  - The variable has a constant variance at all points in time
  - The correlation between Yt and Yt-k depends on the length of the lag (k) but not on any other variable
3. Stationarity in Time Series
- What type of time series variable exhibits this behavior?
- A variable that moves occasionally away from its mean (due to a random shock) but eventually returns to its mean (exhibits mean reversion)
- A shock to the variable in the current period will be reflected in the value of the variable in future periods, but the impact diminishes as we move away from the current period
- Example: The stock returns of Boeing exhibit the properties of stationarity
4. Boeing's Monthly Stock Returns (1984-2003)
5. Stationarity in Time Series
- A variable that does not meet one or more of the properties of stationarity is a nonstationary variable
- What is the implication of nonstationarity for the behavior of the time series variable?
- A shock to the variable in the current period never dies away and causes a permanent deviation in the variable's time path
- Calculating the mean and variance of such a variable, we find that the mean is not well defined and the variance grows without bound over time
- Example: The S&P 500 index (as opposed to the returns on the S&P 500 index, which exhibit stationarity)
6. The S&P 500 Index Exhibits Nonstationarity
7. The Returns on the S&P 500 Exhibit Stationarity
8. The Impact of Nonstationarity on Regression Analysis
- The major impact of nonstationarity on regression analysis is spurious regression
- If the dependent and explanatory variables are nonstationary, we will obtain a high R-squared and significant t-statistics, implying that our model is doing a good job of explaining the data
- The true reason for the good model fit is that the variables share a common trend
- A simple correction for nonstationarity is to take the first differences of the variables (Yt - Yt-1), which creates a stationary variable
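The spurious-regression effect can be illustrated with a small simulation (the series, trend coefficients, and sample size below are made-up inputs, not from the slides): two unrelated variables that share a common trend produce a high R-squared in levels, while the same regression run on first differences does not.

```python
import numpy as np

def ols_r_squared(y, x):
    """R-squared from an OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
t = np.arange(500)

# Two unrelated variables that both trend upward over time
y = 0.5 * t + rng.normal(size=500)
x = 0.3 * t + rng.normal(size=500)

r2_levels = ols_r_squared(y, x)                    # spuriously high: common trend
r2_diffs = ols_r_squared(np.diff(y), np.diff(x))   # near zero: no true relation

print(f"R2 in levels: {r2_levels:.3f}, in first differences: {r2_diffs:.3f}")
```

First-differencing removes the shared trend, so the fit of the differenced regression reflects the (nonexistent) relation between the shocks themselves.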
9. Testing for Nonstationarity
- A common way to detect nonstationarity is to perform a Dickey-Fuller test (unit root test)
- The test estimates the following model
  ΔYt = α + γ1·Yt-1 + et
- and tests the following one-sided hypothesis
  H0: γ1 = 0 (Y is nonstationary)
  H1: γ1 < 0 (Y is stationary)
10. Testing for Nonstationarity
- If the estimate of γ1 is significantly less than zero, then we reject the null hypothesis of nonstationarity (meaning that variable Y is stationary)
- Note: The critical values of the t-statistic for the Dickey-Fuller test are considerably higher (in absolute value) than those in the tables of the t distribution
- Example: For n > 120, the critical t-statistic from the tables is near 2.3, while the corresponding value from the Dickey-Fuller tables is 3.43
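A minimal sketch of the Dickey-Fuller regression, implemented directly with NumPy (assuming the specification with a constant; the simulated series and sample size are made-up inputs). The t-statistic on the lagged level is compared with the Dickey-Fuller critical value of -3.43 rather than the usual t-table value:

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic on Y(t-1) in the regression dY(t) = a + g1*Y(t-1) + e(t)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
e = rng.normal(size=500)

random_walk = np.cumsum(e)       # nonstationary: shocks never die out
stationary = np.zeros(500)       # mean-reverting AR(1) with coefficient 0.5
for t in range(1, 500):
    stationary[t] = 0.5 * stationary[t - 1] + e[t]

# Reject nonstationarity only when t < -3.43 (the Dickey-Fuller critical value)
print(dickey_fuller_t(random_walk), dickey_fuller_t(stationary))
```

In practice one would use a packaged implementation (for example `statsmodels.tsa.stattools.adfuller`), which also handles augmented lags and reports the proper critical values.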
11. Characterizing Time Series Variables: The Autocorrelation Function (ACF)
- The ACF is a very useful tool because it provides a description of the underlying process of a time series variable
- The ACF tells us how much correlation there is between neighboring points of a time series variable Yt
- The ACF at lag k is the correlation coefficient between Yt and Yt-k over all such pairs in the data set
12. Characterizing Time Series Variables: The Autocorrelation Function (ACF)
- In practice, we use the sample ACF (based on our sample of observations from the time series variable) to estimate the ACF of the process that describes the variable
- The sample autocorrelations of a time series variable can be presented in a graph called the correlogram
- Examination of the correlogram provides very useful information that allows us to understand the structure of a time series
13. Characterizing Time Series Variables: The Autocorrelation Function (ACF)
- Example: Does the ACF of a stationary series exhibit a certain pattern that can be detected by studying the correlogram?
- For a stationary series, the autocorrelations between two points in time, t and t-k, become smaller as k increases
- In other words, the ACF falls off rather quickly as k increases
- For a nonstationary series, this is usually not the case, as the ACF remains large as k increases
14. Correlogram and ACF of the S&P 500 Index
- Note that as the number of lags (k) increases, the ACF declines, but at a very slow rate
- This is an indicator of a nonstationary variable
- Compare this result with the graph of the level of the S&P 500 Index shown previously
15. Correlogram and ACF of Returns on the S&P 500 Index
- An examination of the correlogram of the returns on the S&P 500 index shows that this variable exhibits stationarity
- The ACF declines very rapidly, meaning that there is very low correlation between observations in periods t and t-k as k increases
16. Characterizing Time Series Variables: The Autocorrelation Function (ACF)
- To evaluate the quality of information from the correlogram, we assess the magnitudes of the sample autocorrelations by comparing them with some boundaries
- We can show that the sample autocorrelations are approximately normally distributed with a standard deviation of 1/√n
- In this case, we would expect only 5% of the sample autocorrelations to lie outside a confidence interval of ±2 standard deviations
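A short sketch of how these bounds are used in practice (the white-noise series and lag count are made-up inputs): compute the sample autocorrelations and flag any that fall outside ±2/√n.

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocorrelations r_1 .. r_max_lag of the series y."""
    d = y - y.mean()
    denom = d @ d
    return np.array([d[:-k] @ d[k:] / denom for k in range(1, max_lag + 1)])

rng = np.random.default_rng(2)
y = rng.normal(size=400)        # white noise: true autocorrelations are all 0

acf = sample_acf(y, 20)
bound = 2 / np.sqrt(len(y))     # +/- 2 standard deviations, sd = 1/sqrt(n)
outside = np.abs(acf) > bound   # expect roughly 5% of lags to be flagged

print(f"bound = +/-{bound:.3f}, lags outside: {outside.sum()} of 20")
```

For white noise almost all sample autocorrelations should stay inside the band; systematic excursions beyond it signal genuine serial correlation.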
17. Characterizing Time Series Variables: The Autocorrelation Function (ACF)
- Given that the correlogram shows values of autocorrelations, these values cannot lie outside the interval ±1
- As the number of time series observations increases above 40-50, the confidence interval given by ±2 standard deviations becomes narrower
- In practical terms, if the sample autocorrelations lie outside the confidence intervals shown on the correlogram, then the sample autocorrelations are different from zero at the corresponding significance level
18. Correlograms and Confidence Intervals for Sample Autocorrelations
19. From Sample Data to Inference About a Time Series Generating Model
Sample Data → Sample Autocorrelations → Population Autocorrelations → Generating Model
20. Linear Time Series Models
- In time series analysis, the goal is to develop a model that provides a reasonably close approximation of the underlying process that generates the time series data
- This model can then be used to predict future values of the time series variable
- An influential framework for this analysis is the use of the class of models known as Autoregressive Integrated Moving Average (ARIMA) models, developed by Box and Jenkins (1970)
21. Autoregressive (AR) Models
- In an AR model, the dependent variable is a function of its past values
- A simple AR model is
  Yt = α + φ1·Yt-1 + et
- This is an example of an autoregressive model of order 1, or an AR(1) model
- In general, an autoregressive model of order p, or AR(p) model, will include p lags of the dependent variable as explanatory variables
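As a sketch, an AR(1) model can be estimated by ordinary least squares, regressing Yt on a constant and Yt-1 (the true coefficient 0.7 and the sample size below are made-up simulation inputs, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR(1) process: Y(t) = 0.2 + 0.7 * Y(t-1) + e(t)
n, alpha, phi = 1000, 0.2, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = alpha + phi * y[t - 1] + rng.normal()

# Estimate by OLS: regress Y(t) on a constant and Y(t-1)
X = np.column_stack([np.ones(n - 1), y[:-1]])
alpha_hat, phi_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]

print(f"estimated AR(1) coefficient: {phi_hat:.3f}")  # close to the true 0.7
```

An AR(p) model extends this by adding columns for Yt-2, ..., Yt-p to the regressor matrix.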
22. Autoregressive (AR) Models
- Is it possible to conclude that a time series follows an AR(p) model by looking at the correlogram?
- Example: Suppose that a series follows an AR(1) model
- The ACF of the AR(1) model begins with the value of 1 (at lag 0) and then declines exponentially
- The implication of this fact is that the current value of the time series variable depends on all past values, although the magnitude of this dependence declines with time
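For a stationary AR(1) with coefficient φ, the ACF at lag k is φ^k, which is the exponential decline described above. A quick simulation checks this (the coefficient 0.8 and sample size are made-up inputs):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a stationary AR(1): Y(t) = 0.8 * Y(t-1) + e(t)
n, phi = 2000, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

# Sample autocorrelations at lags 1..6
d = y - y.mean()
acf = np.array([d[:-k] @ d[k:] / (d @ d) for k in range(1, 7)])

# Theoretical ACF of an AR(1) at lag k is phi**k: geometric decay
for k, r in enumerate(acf, start=1):
    print(f"lag {k}: sample {r:.3f}  theory {phi**k:.3f}")
```

The sample values track φ^k closely, which is why a smoothly decaying correlogram is the classic signature of an AR(1) process.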