Forecasting Mesoscale Uncertainty: Short-Range Ensemble Forecast Error Predictability

Transcript and Presenter's Notes



1
Forecasting Mesoscale Uncertainty: Short-Range
Ensemble Forecast Error Predictability
  • Eric P. Grimit and Clifford F. Mass
  • University of Washington

Supported by the ONR Multi-Disciplinary University
Research Initiative (MURI) and a Consortium of
Federal and Local Agencies
2
Overview
  • This talk deals with
  • Forecasting forecast errors
  • Of surface weather variables, esp. wind and
    temperature
  • In both a deterministic and probabilistic sense
  • Using short-range ensemble forecast guidance
    (MM5, 0-48 h)
  • At the mesoscale, O(10 km) to O(1000 km)
  • Skill assessment of the forecast error
    predictions
  • Through verification with mesoscale analyses
    (20-km RUC, NCEP)
  • By comparison with idealized statistical model
    results

3
Traditional Forecast Error Prediction
  • Based on the premise that ensemble spread should
    provide an approximation to the true forecast
    uncertainty
  • agreement → smaller forecast errors
  • disagreement → larger forecast errors
  • A linear relationship between ensemble spread and
    forecast accuracy was expected -- the
    spread-skill relationship
  • Assumes that
  • forecast uncertainty and forecast error are
    equivalent
  • forecast error can be skillfully predicted using
    a deterministic methodology

4
A Disappointing Relationship
  • Highly scattered relationships, thus low
    correlations
  • Often less than 0.4
  • Some have concluded that categorical measures of
    forecast spread are more skillful predictors of
    forecast accuracy
  • (Toth et al. 2001, Ziehmann 2001)
  • e.g. statistical entropy, mode population

5
A Disappointing Relationship
  • More recent studies show that domain-averaged
    spread-error correlations can be as high as
    0.6-0.7
  • (Grimit and Mass 2002, Stensrud and Yussouf 2003)
  • Potentially higher correlations can be achieved
    by considering only cases with extreme spread

6
The Real Deal
  • In theory, for a perfect ensemble of infinite
    size
  • The strength of the correlation between ensemble
    spread (s) and the ensemble mean forecast error
    (e_EM) is limited by the case-to-case spread
    variability (β).
  • (Houtekamer, 1993)
  • Even with infinite spread variability, spread and
    accuracy do not have a linear relationship
    (ρ < 0.8).

7
An Inherently Deterministic Approach
  • Only the mean absolute forecast error is
    estimated in the regression
  • Therefore, only an unsigned, deterministic error
    forecast can be generated (see the sketch below)

Idealized, statistical ensemble forecasts: N = 2000, M = 50, β = 0.5
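A minimal sketch of this regression approach in Python/numpy, using synthetic spread-error pairs in place of the idealized ensemble output (the lognormal toy inputs and all variable names are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for ensemble spread (s) and the absolute error of
# the ensemble mean (|e_EM|) over many past cases (assumed inputs).
spread = rng.lognormal(mean=0.0, sigma=0.5, size=2000)
abs_err = np.abs(rng.normal(0.0, spread))  # error std varies case to case

# Linear regression of mean absolute error on spread: |e_EM| ~ a + b*s.
b, a = np.polyfit(spread, abs_err, 1)

# Only an unsigned, deterministic error estimate comes out:
predicted_abs_error = a + b * spread
print("spread-|error| correlation:", np.corrcoef(spread, abs_err)[0, 1])
```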
8
A Simple Stochastic Model of Spread-Skill
  • An extension of the Houtekamer (1993) model of
    spread-skill
  • For details, please see the conference manuscript
  • PURPOSES
  • To establish practical limits of forecast error
    predictability that could be expected given
    perfect ensemble forecasts of finite size.
  • To address the user-dependent nature of forecast
    error estimation by employing a variety of
    predictors and error metrics.
  • To extend forecast error prediction to a
    probabilistic framework.

9
A Probabilistic Viewpoint
  • Uncertainty is different from error.
  • Uncertainty is the probability distribution of
    error.
  • Necessitates a probabilistic viewpoint
  • Any observed error is only a SINGLE, random draw
    from this unknown (and coveted) error
    distribution
  • Cannot infer the error distribution from only one
    sample.
  • One solution is to group together observed errors
    from different cases that share some common
    characteristic

10
The Conditional Error Climatology (CEC) Method
  • Use historical errors, conditioned by spread
    category, as probabilistic forecast error
    predictions
  • Evaluate skill by cross-validation
  • Use the CRPS and its associated skill score
    (CRPSS), relative to an unconditioned error
    climatology forecast
  • Tradeoff between the number of bins and the
    number of samples per bin
  • Variance-based conditional error climatology
    method: VAR-CEC (sketched below)

Idealized, statistical ensemble forecasts: N = 2000, M = 50, β = 0.5
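A minimal sketch of the variance-based CEC lookup described above, assuming per-case arrays of historical spread and errors; the function name, the equally populated bins, and the bin count are illustrative choices, and the manuscript's exact binning and cross-validation procedure is not reproduced here:

```python
import numpy as np

def var_cec(train_spread, train_errors, test_spread, n_bins=5):
    """Return, for each test case, the pooled historical errors whose
    ensemble spread fell in the same category (the conditional error
    climatology used as the probabilistic error forecast)."""
    # Equally populated spread categories from the training sample.
    edges = np.quantile(train_spread, np.linspace(0.0, 1.0, n_bins + 1))
    train_bin = np.clip(np.digitize(train_spread, edges[1:-1]), 0, n_bins - 1)
    test_bin = np.clip(np.digitize(test_spread, edges[1:-1]), 0, n_bins - 1)
    return [train_errors[train_bin == b] for b in test_bin]
```

Each pooled sample is then scored with the CRPS and compared, via the CRPSS, against the unconditioned error climatology, with held-out cases providing the cross-validation.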
11
Idealized Probabilistic Error Forecast Skill
(continuous case)
  • May use the ensemble variance directly to get a
    probabilistic error forecast
  • ENS-PDF
  • Most skillful approach if the PDF is well
    forecast
  • ENS-PDF CRPSS = 0.060
  • VAR-CEC CRPSS = 0.031
  • VAR-CEC best among spread-based CEC methods when
    using a continuous verification
  • Predictability highest for extreme spread cases
  • Validates empirical results

Idealized, statistical ensemble forecasts: N = 10001, M = 50, β = 0.5
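For scoring, the CRPS of a Gaussian forecast has a closed form (Gneiting et al. 2005), which is what an ENS-PDF-style forecast reduces to if the ensemble mean and standard deviation are taken as the PDF parameters; treating them that way here is an assumption of this sketch:

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, obs):
    """Closed-form CRPS of a N(mu, sigma^2) forecast vs. an observation."""
    z = (obs - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

# CRPSS = 1 - CRPS_forecast / CRPS_climatology (positive means skillful).
```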
12
UW SREF System Summary

EF Name | # of Members | Type | Initial Conditions                    | Forecast Model(s) | Forecast Cycle | Domain
ACME    | 17           | SMMA | 8 Ind. Analyses, Centroid, 8 Mirrors  | Standard MM5      | 00Z            | 36km, 12km
UWME    | 8            | SMMA | 8 Independent Analyses                | Standard MM5      | 00Z            | 36km, 12km
UWME+   | 8            | PMMA | 8 Independent Analyses                | 8 MM5 variations  | 00Z            | 36km, 12km
PME     | 8            | MMMA | 8 native analyses                     | large-scale models| 00Z, 12Z       | 36km

Homegrown: ACME, UWME, UWME+   Imported: PME

ACME = Analysis-Centroid Mirroring Ensemble; PME = Poor-Man's Ensemble;
SMMA = Single-Model Multi-Analysis; PMMA = Perturbed-Model Multi-Analysis;
MMMA = Multi-Model Multi-Analysis
13
Mesoscale SREF and Verification Data
  • Mesoscale SREF Data
  • 129 cases (31 OCT 2002 - 28 MAR 2003)
  • 48-h forecasts initialized at 0000 UTC
  • Parameters of Focus
  • 12-km Domain: Temperature at 2 m (T2), Wind Speed
    and Direction at 10 m (WSPD10, WDIR10)
  • Short-term mean bias correction (sketched below)
  • Separately applied to each ensemble member,
    location, and forecast lead time
  • Training window chosen to be 14 days
  • Verification Data
  • 12-km Domain: RUC20 analysis
  • (NCEP 20-km mesoscale analysis)
  • observations
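A minimal sketch of the 14-day running-mean bias correction, assuming gridded arrays indexed (day, member, location, lead); the array layout and function name are illustrative assumptions:

```python
import numpy as np

def bias_correct(fcst, obs, window=14):
    """Subtract the mean bias of the preceding `window` days, separately
    for each ensemble member, location, and forecast lead time.

    fcst: (day, member, location, lead); obs: (day, location, lead).
    Returns corrected forecasts for days >= window."""
    corrected = np.empty_like(fcst[window:])
    for d in range(window, fcst.shape[0]):
        # Training-window mean bias per (member, location, lead).
        bias = (fcst[d - window:d] - obs[d - window:d, None]).mean(axis=0)
        corrected[d - window] = fcst[d] - bias
    return corrected
```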

14
Spatial Distribution of Local Spread-Error
Correlation
UWME
(no bias correction)
Maximum local STD-AEM correlation = 0.54
Domain-averaged STD-AEM correlation = 0.62
15
Real Probabilistic Error Forecast Skill
UWME
(no bias correction)
  • VAR-CEC beats ENS-PDF handily
  • VAR-CEC skill is generally small, but positive
    over 40-70% of the grid points through F24

16
Real Probabilistic Error Forecast Skill
UWME+
(no bias correction)
  • UWME+ exhibits larger spread-error correlations
  • Larger VAR-CEC skill (positive CRPSS into day 2
    over 40-50% of the grid points)
  • ENS-PDF improves (better raw PDF from UWME+)

17
Effect of Post-Processing
UWME
(14-day grid point bias correction)
  • Bias correction reduces spread-error correlations
    and effectiveness of the VAR-CEC approach
  • Temporal spread variability decreases
  • ENS-PDF closes the gap in performance, but is
    still below the baseline

18
Conclusions
  • Traditional spread-error correlation is not the
    best way to describe the spread-skill
    relationship, nor does it provide an adequate
    framework for making skillful forecast error
    predictions.
  • Probabilistic forecast error prediction is a good
    alternative.
  • If the true PDF is not well forecast, a
    spread-based CEC method provides a viable
    methodology.
  • Continuous (categorical) measures of ensemble
    spread are most appropriate as forecast error
    predictors for end users with a continuous
    (categorical) utility function. (see conference
    manuscript)
  • Forecast error predictability is higher for cases
    with extreme spread
  • The local spread-skill relationship appears to be
    much weaker than the domain-averaged spread-skill
    relationship.
  • A simple bias correction improves ensemble
    forecast skill (Eckel 2003), but may also degrade
    forecast error predictability via the
    spread-based traditional and CEC methods.

19
Contact Information
  • Eric P. Grimit, Ph.C.
  • University of Washington, Dept. of Atmospheric
    Sciences
  • Box 351640 Seattle, WA 98195
  • E-mail: epgrimit@atmos.washington.edu
  • Ph. (206) 543-1456

20
EXTRA SLIDES
21
Idealized Probabilistic Error Forecast Skill
(categorical case)
  • End-users may not have a continuous utility
    function.
  • Divide ensemble forecasts into categories
  • Climatologically equally likely bins
  • Fixed-width bins
  • Bins associated with critical thresholds
  • (e.g., WSPD10 > 34 kt, T2 < 0 °C)
  • Stratify errors by (see the sketch below)
  • statistical entropy (ENT-CEC)
  • modal frequency (MOD-CEC)
  • MOD-CEC is the slight winner among spread-based
    CEC methods when using a categorical verification
  • success/failure verification, scored with the BSS

Idealized, statistical ensemble forecasts: N = 10000, M = 50, β = 0.5
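A sketch of the two categorical spread measures named above, computed for a single ensemble; the climatologically equally likely bin edges are assumed to be supplied, and the function name is illustrative:

```python
import numpy as np

def categorical_spread(members, bin_edges):
    """Statistical entropy and modal frequency of the ensemble's bin
    populations (members sorted into climatological categories)."""
    counts, _ = np.histogram(members, bins=bin_edges)
    p = counts / counts.sum()
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))  # ENT-CEC predictor
    modal_freq = counts.max() / counts.sum()        # MOD-CEC predictor
    return entropy, modal_freq
```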
22
Future Work
  • Will more sophisticated post-processing methods
    improve forecast error predictability?
  • Likely, via the direct ENS-PDF approach.
  • Unlikely, via the spread-based CEC approaches.
  • Can temporal ensemble spread be incorporated into
    the probabilistic error forecasting framework?
  • Are the probabilistic error forecast results
    similar for point locations using real
    observations?
  • Observation-based verification and analysis

23
Domain-Averaged Spread-Error Correlation
(no bias correction)
UWME vs. UWME+
  • The benefit of including model physics
    variability is apparent.
  • Domain-averaging produces correlations much
    higher than expected. Correlations of averages
    are referred to as ecological correlations in
    statistics.

24
Domain-Averaged Spread-Error Correlation
(14-day bias correction)
UWME vs. UWME+
  • Bias correction reduces case-to-case spread
    variability, resulting in poorer spread-error
    correlations overall.

25
A Simple Stochastic Model of Spread-Skill
  • Draw today's forecast uncertainty from a
    log-normal distribution (Houtekamer 1993 model):
    ln(s) ~ N( ln(s_f), β² )
  • Create synthetic ensemble forecasts by drawing M
    values from the true distribution:
    F_i ~ N( Z, s² ),  i = 1, 2, ..., M
  • Draw the verifying observation from the same
    true distribution (statistical consistency):
    V ~ N( Z, s² )
  • Stochastically simulated ensemble forecasts at a
    single, arbitrary observing location or
    model-grid box, with 50,000 realizations (cases)
  • Assumed
  • Gaussian statistics
  • statistically consistent (perfectly reliable)
    ensemble forecasts
  • Varied
  • temporal spread variability (β)
  • finite ensemble size (M)
  • spread and skill metrics (continuous and
    categorical)
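The three-step recipe above translates directly into a short simulation; a sketch in Python/numpy, assuming a climatological-mean spread s_f = 1 and Z = 0 without loss of generality (Z cancels in the error statistics):

```python
import numpy as np

rng = np.random.default_rng(42)
N, M, beta = 50_000, 50, 0.5   # cases, ensemble size, spread variability

# 1) Today's true forecast uncertainty: ln(s) ~ N(ln(1), beta^2).
s = np.exp(rng.normal(0.0, beta, size=N))

# 2) M-member ensemble and 3) verification, both drawn from N(Z, s^2).
F = rng.normal(0.0, s[:, None], size=(N, M))
V = rng.normal(0.0, s)

std = F.std(axis=1, ddof=1)        # ensemble spread (STD)
aem = np.abs(F.mean(axis=1) - V)   # absolute error of the ensemble mean

print("STD-AEM correlation:", np.corrcoef(std, aem)[0, 1])
```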

26
Simple Model Spread-Error Correlations
STD-AEM correlation
STD-RMS correlation
STD = standard deviation (spread); RMS = root-mean-square
error; AEM = absolute error of the ensemble mean
β = 0.5, M = 50
27
Value of Forecast Error Prediction
  • Operational forecasters require explicit
    prediction of this flow-dependent forecast
    uncertainty
  • Helps to decide how much to trust model forecast
    guidance
  • Current uncertainty knowledge is partial, and
    largely subjective
  • End users could greatly benefit from knowing the
    expected forecast error
  • Allows sophisticated users to make optimal
    decisions in the face of uncertainty (economic
    cost-loss or utility)
  • Common users of weather forecasts: a confidence
    index

Take protective action if P(E_T2m > 2 °C) > cost/loss
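As a worked illustration of this decision rule (all numbers hypothetical):

```python
# Protective action costs C; an unprotected temperature-error bust
# (error > 2 C) incurs loss L. Act whenever the forecast probability
# of the bust exceeds the cost/loss ratio C/L.
cost, loss = 10.0, 50.0
p_big_error = 0.3   # from a probabilistic error forecast, e.g. VAR-CEC

if p_big_error > cost / loss:   # 0.3 > 0.2 -> act
    print("Take protective action")
```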