Title: GOES-R AWG Product Validation Tool Development
GOES-R AWG Product Validation Tool Development
- Winds Team Products
- Derived Motion Winds
- Hurricane Intensity
- Jaime Daniels (STAR), Wayne Bresky (IMSG, Inc.), Steve Wanzong (CIMSS), Chris Velden (CIMSS), Andy Bailey (IMSG)
- Tim Olander (CIMSS), Chris Velden (CIMSS)
OUTLINE
- Example Product Output
- Validation Strategies
- Routine Validation Tools
- Deep-Dive Validation Tools
Derived Motion Winds
Example Output: Long-wave IR Cloud-drift Winds
Cloud-drift winds derived from a Full Disk Meteosat-8 SEVIRI 10.8 µm image triplet centered at 1200 UTC 01 February 2007
Low-Level: >700 mb
Mid-Level: 400-700 mb
High-Level: 100-400 mb
Example Output: Visible Cloud-drift Winds
Cloud-drift winds derived from a Full Disk Meteosat-8 SEVIRI 0.60 µm image triplet centered at 1200 UTC 01 February 2007
Low-Level: >700 mb
Validation Strategies
- Routinely generate the Derived Motion Wind (DMW) product in real time using available ABI proxy data
- Acquire reference/ground truth data and collocate with the DMW product
  - Radiosondes, GFS analysis, wind profilers, CALIPSO
- Analyze and visualize data (imagery, GFS model, L2 products, intermediate outputs, reference/ground truth) using available and developed (customized) tools
- Measure performance
- Modify L2 product algorithm(s), as necessary
Validation Strategies
[Flow diagram: MET-9 SEVIRI Full Disk imagery, Clear-Sky Mask and cloud products, and GFS forecast files (GRIB2) feed routine generation of the winds product; the DMW product is collocated with reference/ground truth data (radiosondes, GFS analyses, CALIPSO); analysis/visualization, computation of comparison statistics, display of product and ground truth data, re-retrieval of single DMWs, searches for outliers, and case study analyses then feed updates to the L2 product algorithm(s), as necessary.]
Routine Validation Tools: Product Visualization
McIDAS-V
McIDAS-X
- Heavy reliance on McIDAS to visualize DMW products, intermediate outputs, diagnostic data, ancillary datasets, and reference/ground-truth data
Routine Validation Tools: Product Visualization
Java-based program written to display satellite wind vectors over a false-color image
Routine Validation Tools: Collocation Tools
- Collocation software (DMW and reference/ground truth winds)
  - Radiosondes
  - GFS analysis
- Customized code (built on top of McIDAS) to perform the routine daily collocation of Level-2 products with their associated reference (truth) observations
- Creation of comprehensive collocation databases that contain information enabling comparisons and error analyses
  - Satellite/Raob winds
  - Satellite/GFS winds
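Below is a minimal sketch of how such a collocation step can work: each DMW is paired with the nearest radiosonde wind inside distance, pressure, and time windows. The thresholds (150 km, 25 hPa, 90 minutes) and the function and field names are illustrative assumptions, not the settings of the team's McIDAS-based code.

```python
# Minimal collocation sketch (illustrative only, not the operational McIDAS-based code).
# Assumed match windows: 150 km, 25 hPa, and 90 minutes around the radiosonde report.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Wind:
    lat: float    # latitude (deg)
    lon: float    # longitude (deg)
    p: float      # pressure (hPa)
    spd: float    # speed (m/s)
    direc: float  # direction (deg)
    t: float      # observation time (minutes since 00 UTC)

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula), in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2.0 * 6371.0 * asin(sqrt(a))

def collocate(dmws, raobs, max_km=150.0, max_dp=25.0, max_dt=90.0):
    """Pair each DMW with the closest radiosonde wind that meets all match criteria."""
    pairs = []
    for w in dmws:
        candidates = [
            r for r in raobs
            if abs(r.p - w.p) <= max_dp
            and abs(r.t - w.t) <= max_dt
            and distance_km(w.lat, w.lon, r.lat, r.lon) <= max_km
        ]
        if candidates:
            best = min(candidates, key=lambda r: distance_km(w.lat, w.lon, r.lat, r.lon))
            pairs.append((w, best))  # one record in the collocation database
    return pairs
```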
Routine Validation Tools: Comparison Statistics
GOES-13 CD WIND RAOB MATCH ERROR STATISTICS
PRESSURE RANGE: 100 - 1000 mb
LATITUDE RANGE: -90 - 90

                          SAT     GUESS    RAOB
RMS DIFFERENCE (m/s)      6.68     6.11
NORMALIZED RMS            0.34     0.31
AVG DIFFERENCE (m/s)      5.51     5.02
STD DEVIATION (m/s)       3.78     3.48
SPEED BIAS (m/s)         -0.97    -1.32
DIRECTION DIF (deg)      14.85    15.06
SPEED (m/s)              18.55    18.20    19.52
SAMPLE SIZE              87100
- Customized codes that enable the generation and visualization of comparison statistics
  - Text reports
- Creation of a database of statistics enabling time series of comparison statistics to be generated
- Use of the PGPLOT Graphics Subroutine Library
  - Fortran- or C-callable, device-independent graphics package for making various scientific graphs
  - Visualize contents of collocated databases
- McIDAS is also used
- Satellite DMW vs. Raob wind, or Satellite DMW vs. GFS analysis wind
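For illustration, here is a short sketch of how the match statistics reported above (RMS vector difference, normalized RMS, mean vector difference, standard deviation, speed bias, direction difference) can be computed from collocated satellite and reference winds. The operational codes are Fortran/C with PGPLOT; this Python version only spells out the usual definitions, and the exact operational formulations may differ.

```python
# Illustrative computation of DMW-vs-reference match statistics (definitions only;
# not the operational Fortran/C statistics code).
import numpy as np

def wind_components(spd, direction):
    """Speed/direction (meteorological convention, degrees) to u/v components (m/s)."""
    spd = np.asarray(spd, dtype=float)
    rad = np.deg2rad(np.asarray(direction, dtype=float))
    return -spd * np.sin(rad), -spd * np.cos(rad)

def match_statistics(sat_spd, sat_dir, ref_spd, ref_dir):
    """Error statistics for collocated satellite and reference (raob or GFS) winds."""
    u_s, v_s = wind_components(sat_spd, sat_dir)
    u_r, v_r = wind_components(ref_spd, ref_dir)
    vdiff = np.hypot(u_s - u_r, v_s - v_r)              # vector difference per match
    mean_ref_spd = float(np.mean(np.asarray(ref_spd, dtype=float)))
    rms = float(np.sqrt(np.mean(vdiff ** 2)))
    ddir = (np.asarray(sat_dir, float) - np.asarray(ref_dir, float) + 180.0) % 360.0 - 180.0
    return {
        "RMS DIFFERENCE (m/s)": rms,
        "NORMALIZED RMS": rms / mean_ref_spd,            # RMS scaled by mean reference speed
        "AVG DIFFERENCE (m/s)": float(np.mean(vdiff)),   # mean vector difference (MVD)
        "STD DEVIATION (m/s)": float(np.std(vdiff)),
        "SPEED BIAS (m/s)": float(np.mean(np.asarray(sat_spd, float) - np.asarray(ref_spd, float))),
        "DIRECTION DIF (deg)": float(np.mean(np.abs(ddir))),
        "SPEED (m/s)": float(np.mean(np.asarray(sat_spd, float))),
        "SAMPLE SIZE": int(vdiff.size),
    }
```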
Routine Validation Tools: Comparison Statistics
Retrieved Winds (100-400 mb) vs Radiosonde Winds
Retrieved Winds (400-700 mb) vs Radiosonde Winds
Example Scatter Plot Generated with PGPLOT
Version 3 vs. Version 4 Performance
LWIR cloud-drift winds, August 2006, Meteosat-8 Band 9
Axes: Radiosonde Wind Speed (m/s) vs. Sat Wind Speed (m/s)
- Black (Version 3 Algorithm): RMS 7.78 m/s, MVD 6.14 m/s, Spd Bias -2.00 m/s, Speed 17.68 m/s, Sample 17,362
- Light Blue (Version 4 Algorithm, Nested Tracking): RMS 6.89 m/s, MVD 5.46 m/s, Spd Bias -0.18 m/s, Speed 17.91 m/s, Sample 17,428
Validation Strategies
[Flow diagram repeated from the earlier Validation Strategies slide, leading into the deep-dive validation steps (display of product and ground truth data, re-retrieval of a single DMW, search for outliers, case study analysis).]
Deep-Dive Validation Tools
- Stand-alone re-retrieval visualization tool that enables the generation of a single derived motion wind vector for a single target scene and allows for visualization of the wind solution, tracking diagnostics, and target scene characteristics. PGPLOT library used.
Plot axes: line displacement vs. element displacement
- Largest cluster measuring motion of the front
- Second cluster measuring motion along the front (matches the raob)
Deep-Dive Validation Tools
- Stand-alone re-retrieval visualization tool that enables the generation of a single derived motion wind vector for a single target scene and allows for visualization of the wind solution, tracking diagnostics, and target scene characteristics. PGPLOT library used.
Target Scene Characteristics
Feature Tracking Diagnostics
Correlation Surface Plots
Spatial Coherence Plots
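As a rough illustration of what lies behind the correlation surface plots, the sketch below computes a normalized cross-correlation surface by sliding a target chip over a search region in the later image; the peak of the surface gives the line/element displacement for a single wind. This is a conceptual sketch only, not the nested-tracking algorithm itself, and the function name and arguments are assumptions.

```python
# Conceptual correlation-surface sketch (not the operational nested-tracking code).
import numpy as np

def correlation_surface(target, search):
    """Normalized cross-correlation of a target chip at every offset within a search array."""
    tl, te = target.shape          # target chip size (lines, elements)
    sl, se = search.shape          # search region size (must be >= target)
    t = target - target.mean()
    surface = np.zeros((sl - tl + 1, se - te + 1))
    for i in range(surface.shape[0]):
        for j in range(surface.shape[1]):
            s = search[i:i + tl, j:j + te]
            s = s - s.mean()
            denom = np.sqrt((t ** 2).sum() * (s ** 2).sum())
            surface[i, j] = (t * s).sum() / denom if denom > 0 else 0.0
    return surface  # argmax of the surface gives the best-fit line/element displacement
```

The displacement at the correlation peak becomes a wind once scaled by the pixel size and the time separation between images; the full surface is what the re-retrieval tool visualizes.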
Deep-Dive Validation Tools
Using CALIPSO/CloudSat Data to Validate Satellite
Wind Height Assignments
- Winds team continues to work closely with the cloud team on the cloud height problem (case studies, most recently)
- Leverages the unprecedented cloud information offered by CALIPSO and CloudSat measurements
- Enables improved error characterization of satellite wind height assignments
- Enables feedback for potential improvements to satellite wind height assignments
- Improvements to the overall accuracy of satellite-derived winds
GOES-12 cloud-drift wind heights overlaid on CALIPSO total attenuated backscatter image at 532 nm (work in progress)
Deep-Dive Validation Tools
- Visualization of reference/ground truth data (radiosondes) using McIDAS-V
Deep-Dive Validation Tools
At what height does the satellite wind best fit?
Deep-Dive Validation Tools
The search for outliers
- Panels: 100-250 hPa, 251-350 hPa, 351-500 hPa
- Vector difference > 20 m/s
- Large wind barbs are GFS analysis winds at 200 hPa.
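A minimal sketch of this outlier search: flag collocated matches whose satellite-minus-reference vector difference exceeds a threshold (20 m/s, as on the slide) so they can be pulled into a case-study display. Function and argument names are illustrative.

```python
# Illustrative outlier flagging for collocated wind matches (threshold from the slide).
import numpy as np

def find_outliers(u_sat, v_sat, u_ref, v_ref, threshold=20.0):
    """Return indices of matches whose vector difference exceeds `threshold` (m/s)."""
    vec_diff = np.hypot(np.asarray(u_sat, float) - np.asarray(u_ref, float),
                        np.asarray(v_sat, float) - np.asarray(v_ref, float))
    return np.where(vec_diff > threshold)[0], vec_diff
```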
Come see our Derived Motion Winds Posters
- GOES-R AWG Winds Team Current Validation Activities (Steve Wanzong is manning this poster)
- New Methods for Minimizing the Slow Speed Bias Associated with Atmospheric Motion Vectors (AMVs) (Wayne Bresky is manning this poster)
Hurricane Intensity
Hurricane Intensity Product
- HIE algorithm output is purely textual (specifically, it consists of the current TC intensity in terms of wind speed in m/s). No product displays are required. Examples of output Tailored Products are provided.
Validation Strategies
- HIE intensity estimates (stored in HIE history files) can be validated against two different ground truth data sets, either in real time or post-storm, depending on the data set used in the process.
- In situ aircraft reconnaissance measurements of maximum wind speed
  - May not be available for part or all of the storm lifetime, depending on where the storm track is located.
- Working and Final Best Track storm intensity history
  - Available for the entire storm lifetime, but may not be based entirely on in situ data.
  - Working Best Track is available in real time during the storm lifetime. It may not be accurate due to bad observational data, inaccurate Dvorak estimates, or TC forecaster error.
  - Final Best Track intensities are made available after extensive analysis of all in situ observations, estimates from remote sensing methods/applications, and TC forecast methodology have been examined.
- Ground truth data can be easily obtained via NOAA Family of Services or FTP sites (such as NOAA/NHC).
Routine Validation Tools
- Datasets will include the HIE history file output for each storm being analyzed. The history files will be compared directly to the in situ aircraft reconnaissance measurements of TC intensity or the Best Track intensity for the storm in question.
- The HIE validation suite will produce statistical comparisons of the HIE intensity estimates and the validation data. The statistical analysis will be provided in terms of wind speed (in m/s) precision and accuracy metrics as well as additional error metrics utilized at operational NOAA TC forecasting and analysis centers.
- The HIE validation analysis suite has already been used by an operational TC forecast center (NOAA/SAB) to verify the ADT/HIE, so it is already familiar to organizations who wish to validate the HIE.
- Output products are ASCII text files derived using a series of C programs and shell scripts. No proprietary software is currently used.
Routine Validation Tools
INTENSITY ERRORS (wind speed, m/s)
            bias   rmse   aae    stdv   cnt
ADT07L      2.04   7.33   5.61   7.04    23

ADT-Best Track Intensity Differences
dCAT     ALL   TD   TS   H12   H35
<-20       0    0    0     0     0
-20        0    0    0     0     0
-15        1    0    0     0     1
-10        1    0    0     0     1
-5         4    0    1     1     2
0          9    0    1     6     2
5          4    0    0     1     3
10         2    0    0     0     2
15         2    0    0     0     2
20         0    0    0     0     0
>20        0    0    0     0     0
Total     23    0    2     8    13

dCAT     ALL    TD    TS     H12    H35
<2.5     39.1   0.0   50.0   75.0   15.4
<7.5     73.9   0.0   100.0  100.0  53.8
>10.0    17.4   0.0   0.0    0.0    30.8
- Current intensity validation statistical output example
- Intensity statistical error analysis versus ground truth (either reconnaissance and/or NHC Best Track information)
- Accuracy and precision measurements are displayed for the storm in question
- Categorical differences of the ADT from ground truth can provide a quick overview of any intensity estimate biases
- Output layout mirrors output parameters as utilized in operations by NOAA/SAB
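For reference, here is a small sketch of how the intensity error metrics in the table above (bias, rmse, aae, stdv, cnt) can be computed from matched HIE and ground-truth wind speeds. The operational validation suite uses C programs and shell scripts; this Python version only illustrates the standard definitions.

```python
# Illustrative intensity error metrics (definitions only; not the operational C/shell suite).
import numpy as np

def intensity_errors(hie_wspd, truth_wspd):
    """Compare matched HIE and ground-truth (recon or Best Track) max wind speeds (m/s)."""
    diff = np.asarray(hie_wspd, float) - np.asarray(truth_wspd, float)
    return {
        "bias": float(diff.mean()),                  # mean signed error
        "rmse": float(np.sqrt((diff ** 2).mean())),  # root-mean-square error
        "aae":  float(np.abs(diff).mean()),          # average absolute error
        "stdv": float(diff.std()),                   # standard deviation of errors
        "cnt":  int(diff.size),                      # number of matched estimates
    }
```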
Deep-Dive Validation Tools
- Analysis of HIE automated storm center determination algorithm
  - Proper determination of the storm center plays a large role in intensity accuracy
  - Graphical displays can easily show the impact of an improved position over the forecast
  - Statistical analysis can provide the accuracy of each different location method
- Further statistical analysis comparisons will be derived versus various operational tropical cyclone forecast center intensity estimates
  - Manual/subjective Dvorak Technique estimates will also be compared to automated HIE estimates to note/define biases and/or derive a baseline accuracy threshold for Dvorak-based methodologies
  - NOAA/Satellite Analysis Branch (SAB) and NOAA/NHC/Tropical Analysis and Forecast Branch (TAFB), as well as the Joint Typhoon Warning Center (JTWC), currently perform manual Dvorak TC intensity estimates
  - Methods to obtain real-time ground truth measurements of intensity and current operational Dvorak estimates would need to be outlined
- Output will be created using any graphical software package since the data are based upon simple ASCII data files
Deep-Dive Validation Tools
- Graphical timeline example of HIE analysis versus observational data and/or TC forecast center Dvorak estimates
- Allows for quick analysis of the accuracy of the HIE performance versus subjective Dvorak estimates and/or ground truth
- Plots can be provided in real time or in post-storm analysis mode
- SAB Dvorak estimates and NHC Best Track are displayed here
Timeline of HIE and NESDIS/Satellite Analysis Branch (SAB) intensity estimates versus NHC Best Track
Deep-Dive Validation Tools
- Histogram of HIE and operational center intensity estimate differences from ground truth
- Provides an easy display of errors between the two methodologies
- Can easily identify any biases in intensity differences in either set of estimates
- HIE versus SAB Dvorak intensity differences from NHC Best Track are shown in the graph to the right
Histograms of HIE and NESDIS/SAB intensity estimate differences versus NHC Best Track
Deep-Dive Validation Tools
- Display HIE automated storm center position versus ground truth and forecast interpolation positions
- Provides a visual method to determine the accuracy of the automated storm center selection position
- Can be used to assess the accuracy of the current storm forecast from the issuing TCFC
- Can be compared to aircraft reconnaissance, if available
Example of an image displaying storm center location information.
Deep-Dive Validation Tools
Storm center positioning errors
- Current position validation statistical output example
- Storm center positioning error analysis versus ground truth (either reconnaissance and/or NHC Best Track information)
- Accuracy and precision measurements are displayed for the storm in question or for an entire ocean basin and season
- Comparisons with manual positions from the TCFC can be output, if available
- Output layout mirrors output parameters as utilized in current operations by NOAA/SAB
POSITIONING ERRORS (distance in nmi), OVERALL
            bias    rmse   aae     stdv   cnt
SABLAT      0.00    0.17   0.12    0.17   118
SABLON     -0.07    0.24   0.17    0.23   118
SABDIST                    13.86          118
ADTLAT      0.04    0.22   0.17    0.22   118
ADTLON     -0.05    0.31   0.22    0.31   118
ADTDIST                    18.16          118

Estimated Position Error (nmi) by Fix Method
Method      Num (%)     ADT    SAB
FORECAST     77 (65)    20.1   14.9
SPIRAL       29 (24)    16.8   13.9
COMBO        12 (10)     8.9    7.2
EXTRAP        0 ( 0)     0.0    0.0
OVERALL     118         18.2   13.9
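A brief sketch of the positioning-error calculation implied by the table above: the great-circle distance, in nautical miles, between an automated (ADT/HIE) or SAB center fix and the ground-truth center. This is an assumed formulation for illustration; the operational C/shell code and its exact conventions are not reproduced here.

```python
# Illustrative center-fix distance error in nautical miles (assumed formulation).
from math import radians, sin, cos, asin, sqrt

def center_error_nmi(fix_lat, fix_lon, truth_lat, truth_lon):
    """Great-circle distance between an automated center fix and the ground-truth center (nmi)."""
    dlat = radians(truth_lat - fix_lat)
    dlon = radians(truth_lon - fix_lon)
    a = sin(dlat / 2) ** 2 + cos(radians(fix_lat)) * cos(radians(truth_lat)) * sin(dlon / 2) ** 2
    return 2.0 * 3440.065 * asin(sqrt(a))  # mean Earth radius of ~3440.065 nmi
```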
Come see our Hurricane Intensity Validation Poster
- The GOES-R Hurricane Intensity Estimation (HIE) Algorithm: Overview of Validation Activity and Methodology (Tim Olander is manning this poster)