Transcript and Presenter's Notes

Title: WG4 Activities Priority project


1
WG4 Activities
Priority project "Advanced interpretation and verification of very high resolution models"
Pierre Eckert, MeteoSwiss, Geneva, WG4 coordinator
2
Gust diagnostics
  • Jan-Peter Schulz
  • Deutscher Wetterdienst

3
Diagnosing turbulent gusts
In the COSMO model, the maximum gusts at 10 m above the ground are estimated from the absolute speed of the near-surface mean wind Vm and its standard deviation σ, following Panofsky and Dutton (1984).
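The formula itself was a graphic on the slide and did not survive the transcript; from the quantities named here and the tuning constant α = 3 quoted on the next slide, the Panofsky and Dutton (1984) form is presumably

\[ v_{\mathrm{gust}} = v_m + \alpha\,\sigma_v, \qquad \sigma_v \approx 2.4\,u_* , \]

with u_* the friction velocity; this is a reconstruction, not a quote from the slide.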
4
Diagnosing turbulent gusts
In the original version of the COSMO model as of 1999 (35 levels, lowest one about 30 m above the ground), the absolute mean wind speed at the lowest level was taken for Vm in the formula above. When the 40 vertical levels were introduced in 2005 (lowest one about 10 m above the ground), the formulation was adapted, in order to keep the tuning, by interpolating Vm at 30 m from the two lowest model levels (while computing the friction velocity, by definition, from the speed at the lowest model level). This formulation leads to an overestimation of the gusts. In the new version, the wind speed at 10 m above the ground is taken for Vm, leading to more realistic gust estimates. α is kept at a value of 3.
5
Verification period: 10-25 Jan. 2007
[Figure: maps of mean gust (observed, m/s), mean gust / mean wind (observed), and mean wind (m/s); legend: x = observed, old vs. new formulation]
6
Gust diagnostics
  • Recommendation
  • WG4 recommends that the formulation of wind gusts in the COSMO reference version be adapted so that the gusts are reduced.
  • Could be affected by the choice of the vertical discretisation
  • → Poster

7
Thunderstorm Prediction with Boosting
Verification and Implementation of a new Base Classifier
André Walser (MeteoSwiss), Martin Kohli (ETH Zürich, Semester Thesis)
8
Output of the learning process
  • M base classifiers
  • each a threshold classifier

9
AdaBoost Algorithm
[Figure: the boosting iteration and the combined classifier]
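The algorithm itself is only shown as a figure. As an illustration, here is a minimal sketch of standard AdaBoost with threshold ("decision stump") base classifiers, consistent with the "M base classifiers" output described on the previous slide; the function names and the brute-force stump search are assumptions, not the MeteoSwiss implementation.

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the single-feature threshold classifier with the lowest
    weighted error; y must be in {-1, +1}, w are the sample weights."""
    best = (0, 0.0, 1, np.inf)                     # (feature, threshold, polarity, error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] > thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, M=50):
    """Return M weighted threshold classifiers (feature, threshold, polarity, alpha)."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(M):
        j, thr, pol, err = train_stump(X, y, w)
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))  # classifier weight
        pred = pol * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)             # up-weight misclassified cases
        w /= w.sum()
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def predict(ensemble, X):
    """Combined classifier: sign of the alpha-weighted vote of all M stumps."""
    votes = sum(a * p * np.where(X[:, j] > t, 1, -1) for j, t, p, a in ensemble)
    return np.sign(votes)
```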
10
C_TSTORM maps
[Figure: C_TSTORM maps at 17, 18 and 19 UTC]
11
[Figure: verification for July 2006, 7 events, compared against a random forecast]
12
The COSMO-LEPS system: getting close to the 5-year milestone
Andrea Montani, Chiara Marsigli and Tiziana Paccagnella
ARPA-SIM, Hydrometeorological Service of Emilia-Romagna, Italy
IX General COSMO meeting, Athens, 18-21 September 2007
13
The new COSMO-LEPS suite @ ECMWF since February 2006
16 Representative Members driving the 16 COSMO-model integrations (weighted according to the cluster populations), employing either the Tiedtke or the Kain-Fritsch convection scheme (randomly chosen)
3 levels (500, 700, 850 hPa), 4 variables (Z, U, V, Q)
[Figure: schematic of the cluster analysis and RM identification over the clustering period (2 time steps), selecting representative members from the older and younger EPS for forecast days d-1 to d5; European area, Complete Linkage]
  • suite running as a time-critical application managed by ARPA-SIM
  • Δx ≈ 10 km, 40 ML
  • COSMO-LM 3.20 since Nov. 2006
  • fc length 132 h
  • Computer time (4.3 million BU for 2007) provided by the COSMO partners which are ECMWF member states.

[Figure: COSMO-LEPS integration domain and clustering area]
14
Dissemination
  • probabilistic products
  • deterministic products (individual COSMO-LEPS runs)
  • derived probability products (EM, ES)
  • meteograms over station points
  • products delivered at about 01 UTC to the COSMO weather services, to Hungary (case studies) and to the MAP D-PHASE and COPS communities (field campaigns).

15
Time series of Brier Skill Score
  • BSS is written as 1 - BS/BSref. The sample climate is the reference system. A forecast system is useful if BSS > 0.
  • BS measures the mean squared difference between forecast and observation in probability space.
  • Equivalent to the MSE for a deterministic forecast (a computation sketch follows the list).

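A minimal sketch of the BS and BSS computation as defined above, with the sample climatology as the reference; names are illustrative:

```python
import numpy as np

def brier_score(p_fc, obs):
    """BS: mean squared difference between the forecast probability and
    the binary observation (1 = event occurred, 0 = it did not)."""
    return np.mean((np.asarray(p_fc, float) - np.asarray(obs, float)) ** 2)

def brier_skill_score(p_fc, obs):
    """BSS = 1 - BS/BSref with the sample climate (mean observed frequency)
    as the reference system; the forecast is useful if BSS > 0."""
    obs = np.asarray(obs, float)
    bs_ref = brier_score(np.full_like(obs, obs.mean()), obs)
    return 1.0 - brier_score(p_fc, obs) / bs_ref
```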
[Figure: BSS time series for fc step 30-42 h]
  • improvement of performance detectable for all thresholds over the years
  • still problems with high thresholds, but a good trend in 2007.

16
Main results
  • The COSMO-LEPS system has run on a daily basis since November 2002 (6 failures in almost 5 years of activity) and has become a member-state time-critical application at ECMWF (→ ECMWF operators involved in the suite monitoring).
  • COSMO-LEPS products are used in EC projects (e.g. PREVIEW), field campaigns (e.g. COPS, MAP D-PHASE) and met-ops rooms across the COSMO community.

Time series of scores cannot easily disentangle improvements related to COSMO-LEPS itself from those due to better boundaries from the ECMWF EPS.
  • Nevertheless, positive trends can be identified:
  • increase in ROC area scores and reduction in outlier percentages
  • positive impact of increasing the population from 5 to 10 members (June 2004)
  • although some deficiencies in the skill of the system were identified after the upgrades of February 2006 (from 10 to 16 members, from 32 to 40 model levels, EPS upgrade!), scores are encouraging throughout 2007.
  • 2 more features:
  • marked semi-diurnal cycle in COSMO-LEPS scores (better skill for night-time forecasts)
  • better scores over the Alpine area than over the full domain (to be confirmed).

17
Improving COSMO-LEPS forecasts of extreme events with reforecasts
F. Fundel, A. Walser, M. Liniger, C. Appenzeller
→ Poster
18
Why can reforecasts help to improve
meteorological warnings?
[Figure: model vs. obs; 25. Jun., -14 d]
19
Spatial variation of model bias
Difference of the CDFs of observations and COSMO-LEPS, 24 h total precipitation, 10/2003-12/2006
Model too wet, worse in southern Switzerland
20
COSMO-LEPS Model Climatology
  • Setup
  • Reforecasts over a period of 30 years (1971-2000)
  • Deterministic run of COSMO-LEPS (1 member), Tiedtke convection scheme
  • ERA-40 reanalysis as initial/boundary conditions
  • 42 h lead time, 12:00 UTC initial time
  • Calculated on hpce at ECMWF
  • Archived in MARS at ECMWF (surface: 30 parameters; 4 pressure levels: 8 parameters; 3 h step)
  • Post-processing at CSCS

21
Calibrating an EPS
[Figure: model climate (x) and ensemble forecast]
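The slide content is a figure; judging from the PRP slides that follow, the calibration idea is presumably to interpret each ensemble member relative to the model's own climate rather than in absolute (biased) terms. A minimal sketch under that assumption, with illustrative names:

```python
import numpy as np

def climate_quantiles(ens_fc, model_clim):
    """Express each ensemble member as its empirical quantile within the
    reforecast-based model climate, so that forecasts are read relative to
    the model's own climatology instead of absolute, biased values."""
    clim = np.sort(np.asarray(model_clim))
    return np.searchsorted(clim, np.asarray(ens_fc), side="right") / clim.size
```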
22
New index
  • Probability of Return Period exceedance (PRP)
  • Dependent on the climatology used to calculate return levels/periods
  • Here, a monthly subset of the climatology is used (e.g. only data from the Septembers of 1971-2000)
  • PRP1: an event that happens once per September
  • PRP100: an event that happens in one out of 100 Septembers (a computation sketch follows the list)

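A minimal empirical sketch of the PRP index as defined above; the monthly climatology subset and the function names are illustrative (the slides later replace the empirical return-level estimate with a GPD fit):

```python
import numpy as np

def return_level(clim_values, n_months, T):
    """Empirical return level for return period T (in months of one calendar
    month): the value exceeded on average once every T such months.
    clim_values: e.g. all 24 h precipitation sums from the Septembers 1971-2000."""
    k = max(int(round(n_months / T)), 1)           # expected number of exceedances
    return np.sort(np.asarray(clim_values))[-k]    # k-th largest climatological value

def prp(ens_fc, clim_values, n_months, T):
    """PRP-T: fraction of ensemble members exceeding the T-month return level."""
    level = return_level(clim_values, n_months, T)
    return np.mean(np.asarray(ens_fc) >= level)
```

With the 30-year climatology, prp(members, september_values, 30, 1) would give PRP1; for PRP100 the empirical estimate saturates at the sample maximum, which is why the extreme-value extrapolation of the following slides is needed.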
23
Probability of Return Period exceedance
[Figure: maps of COSMO-PRP1/2, COSMO-PRP1, COSMO-PRP2 and COSMO-PRP6]
24
PRP-based warngrams
  • twice per September (15.8 mm/24h)
  • once per September (21 mm/24h)
  • once in 3 Septembers (26.3 mm/24h)
  • once in 6 Septembers (34.8 mm/24h)
25
PRP with Extreme Value Analysis
The underlying distribution function of the extreme values y = x - u above a threshold u is the Generalized Pareto Distribution (GPD), a special case of the GEV:
H(y) = 1 - (1 + ξ y / σ)^(-1/ξ)
with scale parameter σ and shape parameter ξ.
C. Frei, Introduction to EVA
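For long return periods the empirical return level saturates at the sample maximum, which is where the GPD fit comes in. A minimal peaks-over-threshold sketch using scipy; the threshold choice and fitting details are assumptions, not the method's exact setup:

```python
import numpy as np
from scipy.stats import genpareto

def gpd_return_level(data, u, obs_per_month, T):
    """Return level for a T-month return period from a GPD fitted to the
    exceedances y = x - u above the threshold u (peaks over threshold)."""
    data = np.asarray(data)
    y = data[data > u] - u
    xi, _, sigma = genpareto.fit(y, floc=0)   # shape xi and scale sigma
    p_u = y.size / data.size                  # P(X > u) per observation
    p = 1.0 / (T * obs_per_month)             # target exceedance probability
    # P(X > u + y) = p_u * (1 - H(y))  =>  invert the GPD at 1 - p/p_u
    return u + genpareto.ppf(1.0 - p / p_u, xi, loc=0, scale=sigma)
```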
26
PRP with Extreme Value Analysis
[Figure: maps of COSMO-PRP60 (GPD) and COSMO-PRP12 (GPD)]
27
Priority project "Verification of very high resolution models"
  • Slides from
  • Felix Ament → Poster
  • Ulrich Damrath
  • Carlo Cacciamani
  • Pirmin Kaufmann → Poster

28
Motivation for new scores
  • Which rain forecast would you rather use?

29
Fine-scale verification: fuzzy methods
Do not evaluate a point-by-point match!
General recipe:
  • (Choose a threshold to define event and
    non-event)
  • define scales of interest
  • consider statistics at these scales for
    verification

[Figure: event grids for forecast and observation; box statistics are evaluated over windows of increasing spatial scale and for increasing intensity thresholds]
→ the score depends on spatial scale and intensity
30
A Fuzzy Verification Toolbox
Fuzzy method: decision model for a useful forecast
  • Upscaling (Zepeda-Arce et al. 2000; Weygandt et al. 2004): resembles obs when averaged to coarser scales
  • Anywhere in window (Damrath 2004), 50% coverage: predicts event over a minimum fraction of the region
  • Fuzzy logic (Damrath 2004), Joint probability (Ebert 2002): more correct than incorrect
  • Multi-event contingency table (Atger 2001): predicts at least one event close to the observed event
  • Intensity-scale (Casati et al. 2004): lower error than a random arrangement of obs
  • Fractions skill score (Roberts and Lean 2005): similar frequency of forecast and observed events
  • Practically perfect hindcast (Brooks et al. 1998): resembles a forecast based on perfect knowledge of the observations
  • Pragmatic (Theis et al. 2005): can distinguish events and non-events
  • CSRR (Germann and Zawadzki 2004): high probability of matching the observed value
  • Area-related RMSE (Rezacova et al. 2005): similar intensity distribution as observed
Ebert, E.E., 2007: Fuzzy verification of high resolution gridded forecasts: a review and proposed framework. Meteorol. Appl., submitted.
Toolbox available at http://www.bom.gov.au/bmrc/wefor/staff/eee/fuzzy_verification.zip
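One concrete entry of the table above, the Fractions Skill Score of Roberts and Lean (2005), in a minimal sketch: event fractions are computed in neighbourhoods of a given scale and compared, instead of requiring a point-by-point match (the scipy-based neighbourhood averaging is an illustrative choice):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fcst, obs, threshold, scale):
    """Fractions Skill Score: 1 - MSE(fractions) / worst-case MSE."""
    pf = uniform_filter((fcst >= threshold).astype(float), size=scale, mode="constant")
    po = uniform_filter((obs >= threshold).astype(float), size=scale, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)  # no-overlap reference
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```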
31
A Fuzzy Verification testbed
Virtual truth (radar data, model data, or a synthetic field)
[Figure: a 9 x 9 example truth field (values 0.3-1.0) is fed through a perturbation generator to produce realizations of virtual erroneous model forecasts; the Fuzzy Verification Toolbox and an analyzer turn these into realizations of verification results]
  • Assessment of
  • sensitivity (mean)
  • reliability (STD)
  • Two ingredients:
  • Reference fields: hourly radar-derived rain fields, August 2005 flood event, 19 time stamps (Frei et al., 2005)
  • Perturbations → next slide

32
Perturbations
Perturbation (type of forecast error): algorithm
  • PERFECT (no error, a perfect forecast!): no perturbation
  • XSHIFT (horizontal translation): horizontal translation by 10 grid points
  • BROWNIAN (no small-scale skill): random exchange of neighbouring points (Brownian motion)
  • LS_NOISE (wrong large-scale forcing): multiplication with a disturbance factor generated by large-scale 2-d Gaussian kernels
  • SMOOTH (high horizontal diffusion, or a coarse-scale model): moving-window arithmetic average
  • DRIZZLE (overestimation of low-intensity precipitation): moving-window filter setting each point below the window average to the mean value
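For illustration, minimal versions of three of these perturbations; parameters and boundary handling are assumptions, not the testbed's code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)

def xshift(field, npts=10):
    """XSHIFT: horizontal translation by npts grid points (wrap-around)."""
    return np.roll(field, npts, axis=1)

def brownian(field, nswap=100000):
    """BROWNIAN: destroy small-scale skill by randomly exchanging
    neighbouring grid points."""
    f = field.copy()
    ny, nx = f.shape
    for _ in range(nswap):
        i = rng.integers(1, ny - 1)
        j = rng.integers(1, nx - 1)
        di, dj = rng.integers(-1, 2, size=2)
        f[i, j], f[i + di, j + dj] = f[i + di, j + dj], f[i, j]
    return f

def smooth(field, size=5):
    """SMOOTH: moving-window arithmetic average (coarse-model surrogate)."""
    return uniform_filter(field, size=size)
```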
33
Perfect forecast
All scores should equal 1!
  • But, in fact, 5 out of 12 do not!

34
Expected response to perturbations
[Figure: for each perturbation (XSHIFT, BROWNIAN, LS_NOISE, SMOOTH, DRIZZLE), the scale-intensity cells (coarse to fine, low to high intensity) where sensitivity is expected (0.0) or not expected (1.0)]
Summary in terms of contrast:
Contrast = mean(scores where sensitivity is expected) - mean(scores where it is not)
35
Summary: real cases
[Figure: contrast (higher is better) and STD (lower is better) for each score under the XSHIFT/BROWNIAN/SMOOTH and XSHIFT/LS_NOISE/DRIZZLE perturbation sets; "leaking" scores marked. Scores: Upscaling, Anywhere in window, 50% coverage, Fuzzy logic, Joint probability, Multi-event contingency table, Intensity-scale, Fractions skill score, Pragmatic approach, Practically perfect hindcast, CSRR, Area-related RMSE]
  • Leaking scores show an overall poor performance
  • Intensity-scale and Practically Perfect Hindcast perform well in general, but...
  • many scores have problems detecting large-scale noise (LS_NOISE); Upscaling and 50% coverage are beneficial in this respect

36
August 2005 flood event
Precipitation sum 18.8.-23.8.2005
(Hourly radar data calibrated using rain gauges (Frei et al., 2005))
[Figure: four precipitation-sum panels with means of 106.2, 73.1, 62.8 and 43.2 mm]
37
Fuzzy Verification of August 2005 flood
Based on 3-hourly accumulations during the August 2005 flood period (18.8.-23.8.2005)
[Figure: fuzzy scores for COSMO-2 and COSMO-7 as a function of scale (7 km grid points) and intensity threshold (mm/3h), from good to bad]
38
Fuzzy Verification of August 2005 flood
Difference of fuzzy scores
[Figure: COSMO-2 better / neutral / COSMO-7 better, by scale (7 km grid points) and intensity threshold (mm/3h)]
39
D-PHASE, August 2007: Intensity-scale score (preliminary), 3 h accumulation
[Figure: panels for COSMO-2, COSMO-7, COSMO-DE and COSMO-EU]
40
Fuzzy-type verification for 12 h forecasts (vv 06 till vv 18) starting at 00 UTC, August 2007 (fractions skill score)
41
Averaging in boxes of different sizes (what is the best size?) or over alert/warning areas (Emilia-Romagna)
First simple approach: averaging the QPF (a minimal sketch follows)
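A minimal sketch of this first approach, averaging the QPF field over square boxes before verification; the block shape and the trimming of incomplete blocks are assumptions:

```python
import numpy as np

def box_average(qpf, box):
    """Average a 2-d QPF field over (box x box) grid-point blocks,
    trading spatial detail for robustness to localisation errors."""
    ny, nx = qpf.shape
    ny2, nx2 = ny // box * box, nx // box * box        # trim to whole blocks
    blocks = qpf[:ny2, :nx2].reshape(ny2 // box, box, nx2 // box, box)
    return blocks.mean(axis=(1, 3))
```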
42
Sensitivity to box size and precipitation
threshold
The positive impact of a larger box is more visible at higher precipitation thresholds
43
Sensitivity to box size and precipitation
threshold
  • Best result: box of 0.5 deg ≈ 7 x 7 grid points

44
Sensitivity to box size and precipitation
threshold
  • Best result: box of 0.5 deg ≈ 7 x 7 grid points

45
Some preliminary conclusions
  • QPF spatial averaging over boxes or alert areas produces a more usable QPF field for applications; space-time localisation errors are minimised
  • Boxes or alert areas with a size of 5-6 times the grid resolution give the best results
  • The positive impact of a larger box is more visible at higher precipitation thresholds
  • The gain of high-resolution LAMs with respect to GCMs is greater for high thresholds and for precipitation maxima
  • Better results when increasing the time averaging (problems with a 6-hour accumulation period, much better with a 24-hour accumulation period!)

46
1999-10-25 (Case L): temporal radius
[Figure: obs vs. forecasts with temporal radii rt = 1, 3 and 6]
47
1999-10-25 (Case L): spatial radius
[Figure: obs vs. forecasts with spatial radii rxy = 5, 10 and 15]
48
Italian COSMO model implementations: cross-verifications

49
Comparison between COSMO-ME and COSMO-IT (with
upscaling)
50
Verification of very high resolution models (precipitation)
  • "Optimal" scale: 0.5 deg ≈ 50 km; 5 x grid spacing (7 km) = 35 km; 30 x 2.2 km ≈ 70 km
  • Some signals that the 2 km models are better than the 7 km ones
  • I would like to generate smoothed products
  • Material starts to be collected: MAP D-PHASE, 2 km models
  • Work has to continue
  • Exchange of experience with other consortia
51
Verification of COSMO-LEPS and coupling with a hydrologic model
  • André Walser (1) and Simon Jaun (2)
  • (1) MeteoSwiss
  • (2) Institute for Atmospheric and Climate Science, ETH

52
Data flow for MAP D-PHASE
Main partner: WSL, Swiss Federal Institute for Forest, Snow and Landscape Research
53
Comparison of different models
  • August 2007 event: Linth at Mollis, initial time 2007-08-07