Evaluating Forecasting Models - PowerPoint PPT Presentation

Provided by: DavidL247
Transcript and Presenter's Notes

1
Evaluating Forecasting Models
2
Evaluating Forecasting Models
  • In-sample versus out-of-sample forecasts
  • Ex Post versus Ex Ante Forecasts
  • Forecast error from forecasts of independent
    variables

3
7 Error Measures (Diebold, pp. 260-263)
  • Mean (Average) Error (ME) - the average of the
    error for each period
  • Mean Absolute Deviation (MAD) - the average of
    the absolute values of the error for each period
  • Mean Squared Error (MSE) - the average of the
    squared error for each period
  • Standard Deviation (SD) - the square root of the
    MSE
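
The four definitions above can be sketched directly in a few lines. This is an illustrative Python sketch: the `actual` and `forecast` series are hypothetical, and error is taken as actual minus forecast.

```python
import math

def error_measures(actual, forecast):
    """Return ME, MAD, MSE, and SD (RMSE) for paired series."""
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    me = sum(errors) / n                   # Mean Error: shows direction, but errors cancel
    mad = sum(abs(e) for e in errors) / n  # Mean Absolute Deviation: magnitude only
    mse = sum(e * e for e in errors) / n   # Mean Squared Error: penalizes large errors
    sd = math.sqrt(mse)                    # SD (RMSE): back in the original units
    return me, mad, mse, sd

actual = [100, 110, 120, 130]
forecast = [102, 108, 125, 128]
me, mad, mse, sd = error_measures(actual, forecast)
```

Note how the ME (-0.75) is small even though individual errors are larger, because positive and negative errors offset one another, while the MAD (2.75) keeps their magnitudes.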

4
7 Error Measures (cont'd)
  • Signed Squared Error (SSE) - the average of the
    squared errors, with the sign of each error
    retained to show the direction of the error
  • Mean Percentage Error (MPE) the average of the
    percentage error for each period. The percentage
    is calculated by dividing the error by the actual
    sales.
  • Mean Absolute Percentage Error (MAPE) the
    average of the absolute value of the percentage
    error for each period. The percentage is
    calculated by dividing the error by the actual
    sales.
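
These three measures can be sketched the same way. This is an illustrative Python sketch with hypothetical series; error is again taken as actual minus forecast.

```python
import math

def signed_percentage_measures(actual, forecast):
    """Return SSE, MPE, and MAPE for paired series."""
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    # SSE: square each error but keep its original sign
    sse = sum(math.copysign(e * e, e) for e in errors) / n
    # Percentage error divides each error by the actual value
    mpe = sum(e / a for e, a in zip(errors, actual)) / n
    mape = sum(abs(e / a) for e, a in zip(errors, actual)) / n
    return sse, mpe, mape

sse, mpe, mape = signed_percentage_measures(
    [100, 110, 120, 130], [102, 108, 125, 128])
```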

5
Error Measure Example
  • Whiteboard

6
Mean Error (ME)
  • shows direction of error
  • does not penalize extreme errors
  • errors cancel out (no idea of how much)
  • in original units

7
Mean Error (ME)
  • Average Error - this measure averages the errors
    for all periods forecast.

8
Mean Absolute Deviation (MAD)
  • shows magnitude of overall error
  • does not penalize extreme errors
  • errors do not cancel out
  • no idea of direction of error
  • in original units

9
Mean Absolute Deviation (MAD)
  • Mean Absolute Deviation - this measure averages
    the absolute values of the errors for all periods
    forecast.

10
Mean Squared Error (MSE)
  • penalizes extreme errors
  • errors do not offset one another
  • not in original units
  • does not show direction of error

11
Mean Squared Error (MSE)
  • This measure averages the squares of the error
    for all periods forecast.

12
Standard Deviation (SD) or (RMSE)
  • penalizes extreme errors
  • errors do not offset one another
  • in original units
  • does not show direction of error

13
Standard Deviation (SD) or (RMSE)
  • This measure is the square root of the mean
    squared error (MSE).
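
The "penalizes extreme errors" property is easy to see by comparing MAD and RMSE on two hypothetical error patterns with the same total absolute error (illustrative Python; the numbers are made up):

```python
import math

def mad(errors):
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

steady  = [2, -2, 2, -2]  # error spread evenly across periods
extreme = [8, 0, 0, 0]    # same total absolute error, in one period

# MAD is identical for both patterns; RMSE is larger for the
# pattern containing the single extreme error.
```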

14
Signed Squared Error (SSE)
  • penalizes extreme errors
  • errors can offset one another
  • shows direction of error
  • not in original units

15
Signed Squared Error (SSE)
  • This measure is similar to the mean squared error
    except that the original sign of the error is
    kept for each period's squared error.

16
Mean Percentage Error (MPE)
  • takes percentage of actual sales
  • does not penalize extreme error
  • errors can offset one another
  • shows direction of error
  • assumes more sales can absorb more error in units

17
Mean Percentage Error (MPE)
  • This measure divides the error by the actual
    value. An average is taken for all periods
    forecast.

18
Mean Absolute Percentage Error (MAPE)
  • takes percentage of actual sales
  • does not penalize extreme deviations
  • does not cancel offsetting errors
  • assumes more sales can absorb more error in units
  • does not show direction of error

19
Mean Absolute Percentage Error (MAPE)
  • This measure sums the absolute values of the
    error divided by the actual value. An average is
    taken for all periods forecast.

20
Theil's U-Statistic
  • This measure takes the ratio of the forecast
    error of the technique being evaluated to the
    forecast error of the naïve method of letting the
    last actual value be the next forecast.
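
Textbooks vary in which error measure goes into the ratio; a common formulation uses the root mean squared error of each method. A minimal sketch under that assumption, with hypothetical series:

```python
import math

def theil_u(actual, forecast):
    """Ratio of the model's RMSE to the RMSE of the naive
    forecast (last actual value), over periods 2..n."""
    model_sq = sum((a - f) ** 2 for a, f in zip(actual[1:], forecast[1:]))
    naive_sq = sum((a - p) ** 2 for a, p in zip(actual[1:], actual[:-1]))
    return math.sqrt(model_sq / naive_sq)

actual = [100, 110, 120, 130]
naive = [actual[0]] + actual[:-1]  # last actual value as next forecast
```

A forecast identical to the naïve method gives U = 1, and a perfect forecast gives U = 0, matching the interpretation on the next slide.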

21
Theil's U-Statistic
  • If U = 1, the naïve method is as good as the
    forecasting technique being evaluated.
  • If U < 1, the forecasting technique being used is
    better than the naïve method. The smaller the
    U-statistic, the better the forecasting technique
    relative to the naïve method.
  • If U > 1, there is no point in using a formal
    forecasting method, since using a naïve method
    will produce better results.

22
McLaughlin's Batting Average
  • This measure is a conversion of Theil's
    U-Statistic to something more intuitive. If the
    batting average is over .300, the forecasting
    method being evaluated is doing better than a
    naïve technique.
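
A common statement of McLaughlin's conversion is batting average = (4 - U) x 100, so U = 1 maps to .300 and a perfect forecast (U = 0) maps to .400. This sketch assumes that formula:

```python
def batting_average(u):
    """Convert Theil's U to McLaughlin's batting average.
    Assumes the common formula (4 - u) * 100, so u = 1 -> 300."""
    return (4 - u) * 100
```

Under that assumption, any U below 1 lands the batting average above .300, consistent with the ".300 means better than naïve" rule above.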