Title: Radar Calibration; Radar Snowfall Estimation: Automated Precipitation Detection, Amount and Typing
1 Radar Calibration; Radar Snowfall Estimation: Automated Precipitation Detection, Amount and Typing
- Paul Joe, Brian Sheppard, Nick Kouwen
- Isztar Zawadzki, Norman Donaldson
2 Outline
- General Comments: Wx Radar/Precip Measurements
- Radar Calibration Workshop
- Automated Snowfall Amounts/Typing
3 Conclusions
- Radar for QPE is not there yet
- Can work well in certain situations
- Long-range and large-area problems are being solved
- But not demonstrated to the hydrology community
4 General Comments
- Since 1948/52
- Z related to drop size distributions (DSD)
  - Z ∝ nD^6, R ∝ nD^4.5
  - Z = 200 R^1.6 (Marshall-Palmer)
    - valid for rain and snow
    - many more since then (+/- 3 dB)
- Z related to R in ideal situations
  - short range
- Z related to snow water equivalent
  - DSD is only one of the factors
  - long range
- Joss and Waldvogel 1990
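The Marshall-Palmer relation quoted above, Z = 200 R^1.6, can be inverted to turn measured reflectivity into a rain-rate estimate. A minimal sketch (the dBZ values are arbitrary examples, not data from this talk):

```python
def dbz_to_rate(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b (Z in mm^6/m^3, R in mm/h).

    dbz is reflectivity in dBZ; linear Z = 10**(dbz/10).
    Default a, b are the Marshall-Palmer coefficients from the slide.
    """
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

# illustrative values: 20, 30, 40 dBZ span light to heavy rain
rates = [dbz_to_rate(v) for v in (20.0, 30.0, 40.0)]
```

Note that the "many more since then (+/- 3 dB)" point on the slide is exactly why the coefficients a and b are arguments here rather than constants.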
5Snow/Rainfall Estimation Factors
- Space-time sampling
- Ground Clutter/Anomalous Propagation
- Vertical Profile Correction/Bright Band
- Calibration
- Attenuation
- Partial Beamfilling
- DSD, Z-R / rain classification and type
  - convective vs. stratiform
  - wet-bulb technique (rain vs. snow)
- Drop size distribution
  - Z-R convective, Z-R stratiform, Z-A
- Wind drift problem for snow (application dependent)
6 Ideal Situation, Close Range: Difference due to DSD (Joss)
7 Snowfall
- 3 seasons of very good comparisons
  - well-calibrated radar
  - well-sited gauge
  - consistent snow type
  - low wind speed
- better than rain results
- effective climatological Z-S relationships
- low snow gradients
8 Low-Level Scanning
- 0° elevation
- Ground clutter (GC) removal
9 (No transcript)
10 Network Processing (QA)
Using two radars goes a long way toward filling gaps
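The gap-filling point above can be sketched as a maximum-value composite of two overlapping radars on a common grid. This is an illustrative minimum only (operational composites also weight by beam height and distance); NaN marks cells a radar cannot see:

```python
import numpy as np

def composite(field_a, field_b):
    """Max-value composite of two radar reflectivity grids (dBZ).

    NaN marks cells a radar cannot see (blockage, range limit);
    where both radars see a cell, keep the larger echo.
    """
    stacked = np.stack([field_a, field_b])
    # nanmax ignores the missing radar; all-NaN cells stay NaN
    return np.nanmax(stacked, axis=0)

# invented 2x2 grids: each radar has one blind cell
a = np.array([[10.0, np.nan], [25.0, 5.0]])
b = np.array([[12.0, 30.0], [np.nan, 2.0]])
comp = composite(a, b)   # each radar's gap filled by the other
```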
11 Vertical Profile of Reflectivity and Range
- Convective, stratiform, snow
12 Vertical Profile of Reflectivity and Range
13 The Swiss experience
14 The Swiss experience
15 Polarization
16 Radar Calibration
17 Radar Calibration Workshop Summary
- Complex system
- Stable electronic calibration, but is it consistent?
- Rainfall as a success criterion
- Expansive definition of calibration
  - various criteria of success
  - calibration, performance measurements, validation, adjustment
- Inter-radar/network comparisons
  - cross-radar comparisons including TRMM (GPM)
- Validation
  - problematic standards
  - trend to use basins
- Successful absolute external-target calibrations
- Prevalent use of the Sun
18 Calibration and Validation (Andy White, NEXRAD ROC)
- Calibration is the process of quantitatively defining system responses to known, controlled signal inputs.
- Validation is the process of assessing by independent means the quality of the data products derived from system inputs.
- Validation is a natural adjunct to calibration.
- Calibration and validation are inseparable.
Mean reflectivity error of -1.47 dB, s = 1.34 dB, before solar cal
19 Calibration with a DSD Gauge (POSS), Using the Z-R Relationship for Each Day or by Event (Zawadzki, MRO; Sheppard and Joe, MSC)
By event and Z-R coefficients: error reduced from 34% to 7.5%
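The per-event Z-R idea can be sketched as a log-log least-squares fit to disdrometer-derived (Z, R) pairs; the numbers below are synthetic, not POSS output:

```python
import math

def fit_zr(z_lin, r):
    """Fit Z = a * R**b by linear regression in log-log space.

    z_lin: reflectivities in mm^6/m^3, r: rates in mm/h, both
    from a disdrometer-type gauge for a single day or event.
    """
    xs = [math.log10(v) for v in r]
    ys = [math.log10(v) for v in z_lin]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = 10.0 ** (my - b * mx)
    return a, b

# synthetic noise-free event drawn from Z = 200 R^1.6
rates = [0.5, 1.0, 2.0, 5.0, 10.0]
zs = [200.0 * r ** 1.6 for r in rates]
a, b = fit_zr(zs, rates)   # recovers a ≈ 200, b ≈ 1.6
```

Refitting a and b per event, rather than using one climatological pair, is what drives the error reduction quoted on the slide.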
20 Range Correction with Raingauges: Approach to Resolve a Heterogeneous Network (Daniel Michelson, SMHI)
Range correction; see also USBR studies
21 KDP/ZDR Self-Consistency for Calibration (Vivek, NCAR)
- Procedure
  - Select radial radar data in rain that are free from any ice and ground clutter.
  - Compute KDP at each range gate using Z and ZDR.
  - Sum the KDP along the radial to obtain the estimated ΦDP.
  - Compute the reflectivity bias from estimated ΦDP / measured ΦDP.
  - Process a sufficient number of rays to obtain a mean value of the bias.
3.13 dBZ bias
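The procedure above can be sketched numerically. The KDP(Z, ZDR) power law below is a placeholder (real coefficients depend on wavelength and DSD assumptions), and the ray data are synthetic:

```python
import math

def kdp_from_z_zdr(dbz, zdr_db):
    """Illustrative estimator: KDP (deg/km) from Z and ZDR.

    The coefficients are assumptions for this sketch,
    not a published relation.
    """
    z_lin = 10.0 ** (dbz / 10.0)
    zdr_lin = 10.0 ** (zdr_db / 10.0)
    return 3.0e-5 * z_lin * zdr_lin ** -1.5

def reflectivity_bias_db(dbz_ray, zdr_ray, phidp_measured,
                         gate_km=0.25):
    """Self-consistency bias: rebuild ΦDP from (Z, ZDR) along a
    rain-only ray and compare it with the measured ΦDP."""
    # ΦDP = 2 * integral of KDP along the ray (two-way propagation)
    phidp_est = 2.0 * gate_km * sum(
        kdp_from_z_zdr(z, zdr) for z, zdr in zip(dbz_ray, zdr_ray))
    # KDP above scales linearly with linear Z, so the ΦDP ratio
    # converts to a Z bias in dB directly
    return 10.0 * math.log10(phidp_measured / phidp_est)

dbz = [35.0] * 80          # synthetic rain ray, 80 gates
zdr = [1.0] * 80
phi_true = 2.0 * 0.25 * sum(kdp_from_z_zdr(z, d)
                            for z, d in zip(dbz, zdr))
bias = reflectivity_bias_db(dbz, zdr, phi_true * 2.0)  # Z ran 3 dB low
```

In practice this is averaged over many qualifying rays, as the last step of the slide's procedure says, before applying a single calibration offset.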
22 Calibration Measurements
- Transmitter
  - Frequency, PRF, average RF power
- Antenna
  - Gain, beamwidths
- Receiver
  - Output vs. RF input, IF filter loss, range
- Uncertainty analysis
23 Inter-Radar Comparisons
TRMM vs. WSR-88D (Meneghini, NASA)
- TRMM PR stability: 0.5 dB; a consistent device
- Compare with WSR-88D
Inter-radar comparisons (Asko Huuskonen, FMI)
24 Needs
- Consistency of calibration
- Redundant measurements
- Error estimates
- Use small basins to integrate the measurements (USWRP QPE workshop)
  - overcome scale and time sampling (Uijlenhoet, Princeton)
25 Z-S
26 High Temporal Snow Rate Measurements
- Experiment to assess our ability to provide 1-minute information on precipitation type and amount.
27 Analysis
- Compare snow amounts by event
  - integrate 1-minute results
- No standard for 1-minute rate measurements
- Analysis is necessary but not sufficient
28 Sensors
- Vaisala FD12P: fwd scattering, capacitance
- Biral VPF730: fwd/back scattering, pulse detection of terminal velocity
- POSS: microwave CW Doppler
- WiVis: IR scintillation, spectral analysis of fluctuations, fwd scattering
- Fischer-Porter, manual obs
29 (No transcript)
30 Fischer-Porter vs. Manual Observations
31 POSS vs. Manual Obs
32 POSS Snow Amounts by Temperature
- Bright band
33 POSS and Snowfall
- 1-minute data
34 Z-S: Crystal Type
35 Automated Precipitation Typing
36 Reference Data Set
- Regular manual observations
  - hourlies and specials
- Special clinical observations (not done)
  - minutely manual observations
  - usually made during precipitation events
  - false alarms not computable, since observations are not made
  - expensive
  - however, optimal if done all the time!
37 Window Matching vs. Minutely Matching
- Minutely matching: too many ways to mismatch
- Develop window matching to try to mimic how an observation is made
  - sensor observation time window is defined
    - to match the uncertainty in the manual observation period
  - percentage of window is defined
  - predominant type is determined
    - to match the persistence criteria
  - observation window determined by an observer report
  - one event, one observation
  - no report: no event, no comparison (even if the sensor reports)
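A minimal sketch of the window-matching idea described above; the window length, fraction threshold, and type codes are invented for illustration:

```python
from collections import Counter

def window_match(sensor_reports, obs_time, obs_type,
                 window_min=10, min_fraction=0.3):
    """Compare one manual observation against a sensor's minutely types.

    sensor_reports: {minute: type_code} for the event.
    The predominant sensor type in the window ending at obs_time must
    cover at least min_fraction of the window (persistence criterion).
    Returns 'hit', 'miss', or None (no report -> no comparison).
    """
    window = [sensor_reports.get(t) for t in
              range(obs_time - window_min, obs_time + 1)]
    reported = [t for t in window if t is not None]
    if not reported:
        return None                      # no report: no comparison
    predominant, count = Counter(reported).most_common(1)[0]
    if count < min_fraction * len(window):
        return None                      # persistence criterion not met
    return "hit" if predominant == obs_type else "miss"

reports = {t: "S" for t in range(0, 11)}   # snow for minutes 0-10
result = window_match(reports, obs_time=10, obs_type="S")
```

Scoring one manual observation against a whole window, rather than the single matching minute, is what absorbs the timing uncertainty the slide describes.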
38 Performance Metric: Heidke Skill Score
HSS = 1: perfect; HSS = 0: no skill; HSS = -1: always wrong
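For a yes/no detection contingency table (hits a, false alarms b, misses c, correct negatives d), the Heidke Skill Score has a closed form; a small helper:

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negs):
    """HSS = 2(ad - bc) / [(a+c)(c+d) + (a+b)(b+d)],
    with a = hits, b = false alarms, c = misses,
    d = correct negatives."""
    a, b, c, d = hits, false_alarms, misses, correct_negs
    return (2.0 * (a * d - b * c) /
            ((a + c) * (c + d) + (a + b) * (b + d)))

perfect = heidke_skill_score(50, 0, 0, 50)     # -> 1.0
chance = heidke_skill_score(25, 25, 25, 25)    # -> 0.0
```

The anchor points match the slide: 1 for a perfect forecast, 0 for no skill over chance, negative when systematically wrong.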
39 Optimal Detection Results/Setup
40 Identification: Heidke Skill Score
- All show skill!
41 Snow vs. Drizzle
- Doppler spectra mode analysis
42 Conclusions
- All sensors show skill in detection and typing
- Not all sensors report the same types
- Main sensor differences are in light and very light conditions, which dominate the statistical results
- Window analysis presented to better represent the capability of the sensor (10-15% improvement)
- Analysis is affected by instrument detectability/sensitivity
- Microwave estimates of snowfall water-equivalent amounts are superior to optical; however, the bright-band effect experienced by radar near 0 °C causes large overestimation of amounts
- Room for improvement; must determine the requirement and criteria for success
- Mixed types not studied; they require algorithm development.
43 How are we doing with the various steps? (Zawadzki)
- Removing non-meteorological data: filtering at signal processing; morphology of reflectivity and Doppler; polarization diversity
- VPR correction for beam height and shadows: morphology of reflectivity; adjustments with gauges (N-N, optimization); probability matching
- Correction for attenuation: networks of radars? polarization diversity?
- Z-R relations: polarization diversity? morphology of reflectivity?
- Validation: not enough
44 Summary
- Progress on many fronts
- user success criteria
- network consistency needed
- radar adjustments
- calibration
- automated high temporal measurements
- Much more to do
- complete correction system needed
- need to demonstrate
- Need a standard for high-temporal-rate measurements
  - necessary but not sufficient demonstration
45 Then ID the target
r = 120 km (h = 2.6 km)