Title: Statistics of Seismicity and Uncertainties in Earthquake Catalogs: Forecasting Based on Data Assimilation
Slide 1: Statistics of Seismicity and Uncertainties in Earthquake Catalogs: Forecasting Based on Data Assimilation
- Maximilian J. Werner, Swiss Seismological Service, ETHZ
- Collaborators: Didier Sornette (ETHZ), David Jackson (UCLA), Kayo Ide (UCLA), Stefan Wiemer (ETHZ)
Slide 2: Statistical Seismology
- stochastic and clustered earthquakes
- uncertain representations of earthquakes in catalogs
- scientific hypotheses, models, forecasts
Slide 3: Magnitude Fluctuations
[Figure: Frequency-magnitude distribution of the relocated Hauksson catalog (1984-2002), following the Gutenberg-Richter law with b close to 1]
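The Gutenberg-Richter law states that the number of earthquakes of magnitude at least m falls off as 10^(-b(m - m_min)), with b typically close to 1. A minimal sketch (parameter values are illustrative, not fit to the Hauksson catalog) of drawing magnitudes from this law and recovering b with the Aki maximum-likelihood estimator:

```python
import random
import math

def sample_gr_magnitudes(n, b=1.0, m_min=2.0, seed=42):
    """Draw magnitudes from the Gutenberg-Richter law:
    N(>= m) ~ 10^(-b*(m - m_min)), i.e. an exponential in magnitude."""
    rng = random.Random(seed)
    beta = b * math.log(10.0)
    return [m_min + rng.expovariate(beta) for _ in range(n)]

def aki_b_estimate(mags, m_min=2.0):
    """Aki (1965) maximum-likelihood estimate of the b-value."""
    mean_excess = sum(m - m_min for m in mags) / len(mags)
    return 1.0 / (mean_excess * math.log(10.0))

mags = sample_gr_magnitudes(100_000)
b_hat = aki_b_estimate(mags)
```

With 10^5 samples the estimate recovers b = 1 to within a few parts per thousand, since the standard error of the Aki estimator scales as b divided by the square root of the sample size.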
Slide 4: Rate Fluctuations
[Figure: Earthquake rate of triggered events vs. days since mainshock, and magnitude vs. time, for the relocated Hauksson catalog (1984-2002); the aftershock sequences of the 1992 M7.3 Landers, 1999 M7.1 Hector Mine, 1994 M6.4 Northridge and 1987 M6.6 Superstition Hills earthquakes illustrate the Omori-Utsu law and the productivity law]
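The two laws on this slide can be sketched directly: the Omori-Utsu law gives the decay of aftershock rate with time since the mainshock, n(t) = K / (t + c)^p, and the productivity law scales the expected number of triggered events exponentially with mainshock magnitude. The parameter values below (K, c, p, k, alpha) are assumed for illustration:

```python
def omori_utsu_rate(t, K=100.0, c=0.01, p=1.1):
    """Omori-Utsu law: aftershock rate n(t) = K / (t + c)^p, t in days."""
    return K / (t + c) ** p

def productivity(m, m_min=3.0, k=0.1, alpha=0.8):
    """Productivity law: expected number of direct aftershocks of a
    magnitude-m mainshock, rho(m) = k * 10^(alpha * (m - m_min))."""
    return k * 10 ** (alpha * (m - m_min))

rate_day1 = omori_utsu_rate(1.0)    # rate one day after the mainshock
rate_day10 = omori_utsu_rate(10.0)  # roughly a factor 10^p smaller
```

For p near 1, each tenfold increase in elapsed time reduces the rate by roughly a factor of ten, which is why aftershock sequences remain visible for years after large mainshocks.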
Slide 5: Spatial Fluctuations
[Figure: Epicenter map of the relocated Hauksson catalog (1984-2002), showing spatial clustering around the 1992 M7.3 Landers, 1999 M7.1 Hector Mine, 1994 M6.4 Northridge and 1986 M5.4 Oceanside earthquakes]
Slide 6: Seismicity Models
From simple to complex:
- Time-independent random (Poisson process)
- Time-dependent, no clustering (renewal process)
- Time-dependent, simple clustering (Poisson cluster models)
- Time-dependent, linear cascades of clusters (epidemic-type earthquake sequences): the current gold-standard null hypothesis
- Non-linear cascades of clusters
Slide 7: A Strong Null Hypothesis
The Epidemic-Type Aftershock Sequence (ETAS) model, Ogata (1988, 1998), combines:
- Gutenberg-Richter law
- Omori-Utsu law
- Productivity law
- Time-independent spontaneous events
- Every earthquake independently triggers events (of any size)
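Combining these ingredients, the ETAS conditional intensity is a constant background rate plus, for every past event, an Omori-Utsu kernel scaled by the productivity law. A minimal sketch (parameter values are illustrative, not Ogata's estimates):

```python
import math

def etas_intensity(t, history, mu=0.2, K=0.02, alpha=0.8, c=0.01, p=1.2, m0=3.0):
    """ETAS conditional intensity (Ogata 1988):
    lambda(t) = mu + sum_i K * 10^(alpha*(m_i - m0)) / (t - t_i + c)^p
    where `history` is a list of (t_i, m_i) pairs of past events."""
    rate = mu  # time-independent spontaneous (background) events
    for t_i, m_i in history:
        if t_i < t:
            # each past event independently triggers its own Omori-Utsu
            # aftershock sequence, scaled by the productivity law
            rate += K * 10 ** (alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

history = [(0.0, 6.5), (2.0, 4.1)]  # (time in days, magnitude)
lam = etas_intensity(3.0, history)
```

Because every event, aftershock or not, spawns its own triggering kernel, the model produces the linear cascades of clusters referred to on the previous slide.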
Slide 8: Earthquake Forecasts
Experimental forecasts for California based on the ETAS model.
Slide 9: Effects of Undetected Quakes on Observable Seismicity
- why small earthquakes matter
- why undetected quakes, absent from catalogs, matter
- using a model to simulate their effects
- implications of neglecting them
Sornette and Werner (2005a, 2005b), J. Geophys. Res.
Slide 10: Magnitude Uncertainties Impact Seismic Rate Estimates, Forecasts and Predictability Experiments
Outline:
- quantify magnitude uncertainties
- analyze their impact on forecasts in short-term models
- how are noisy forecasts evaluated in current tests?
- how to improve the tests and the forecasts
Werner and Sornette (2007), in revision in J. Geophys. Res.
Slide 11: Earthquakes, Catalogs and Models
[Diagram: Earthquakes pass through a measurement process to produce an earthquake catalog; the catalog and model parameters calibrate a seismicity model; the calibrated model plus new catalog data produce forecasts, which are evaluated for consistency. The measurement noise is neglected: the noisy forecasts are treated as exact.]
Slide 12: Magnitude Noise and Daily Forecasts of Clustering Models
- Collaboratory for the Study of Earthquake Predictability (CSEP)
- Regional Earthquake Likelihood Models (RELM)
- Daily earthquake forecast competition
I will focus on random magnitude errors and short-term clustering models.
Slide 13: Moment Magnitude Uncertainties: CMT vs. USGS
[Figure: Distribution of magnitude estimate differences between the CMT and USGS catalogs, well described by a Laplace distribution; Hill plot of the scale parameter]
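If the magnitude differences follow a Laplace distribution, its scale parameter has a simple maximum-likelihood estimator: the mean absolute deviation from the sample median. A sketch on synthetic differences (the scale value 0.07 below is an assumed example, not the measured CMT/USGS value):

```python
import random
import statistics

def laplace_scale_mle(diffs):
    """Maximum-likelihood estimate of the Laplace scale parameter:
    the mean absolute deviation from the sample median."""
    med = statistics.median(diffs)
    return sum(abs(d - med) for d in diffs) / len(diffs)

# Synthetic CMT-vs-USGS style magnitude differences: a symmetric
# Laplace variate is a random sign times an exponential variate.
rng = random.Random(1)
nu = 0.07  # assumed scale parameter for illustration
diffs = [rng.choice((-1, 1)) * rng.expovariate(1.0 / nu) for _ in range(50_000)]
nu_hat = laplace_scale_mle(diffs)
```

The same estimator applied to real catalog differences gives the scale parameter whose stability is checked with the Hill plot on this slide.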
Slide 14: Short-Term Clustering Models
- Productivity law
- Omori-Utsu law
- Gutenberg-Richter law
These three laws are used in models by Vere-Jones (1970), Kagan and Knopoff (1987), Ogata (1988), Reasenberg and Jones (1989), Gerstenberger et al. (2005), Zhuang et al. (2005), Helmstetter et al. (2006), Console et al. (2007), ...
Slide 15: A Simple Cluster Model
[Figure: Earthquake rate built from mainshocks (cluster centers) and aftershocks (clusters), with noisy magnitudes for both centers and aftershocks]
What are the fluctuations of the deviations?
Slide 16: Distributions of Perturbed Rates
[Figure: Probability density functions of the perturbed rates]
Slide 17: Heavy Tails of Perturbed Rates
[Figure: Survivor functions of the perturbed rates, showing power-law tails with an exponent set by the productivity law of aftershocks and the noise scale parameter]
The fluctuations result from a combination of:
- power-law tails
- the catalog realization
- averaging according to the Levy or Gauss regime
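The mechanism behind the heavy tails can be sketched in a few lines: Laplace-distributed magnitude noise, pushed through the exponential productivity law, yields multiplicative rate perturbations with power-law tails, with a tail exponent set jointly by the productivity exponent and the noise scale. A toy simulation (alpha and nu are assumed values):

```python
import random
import math

rng = random.Random(7)
alpha, nu = 0.8, 0.1  # assumed productivity exponent and Laplace noise scale

def rate_perturbation():
    """Multiplicative perturbation of the aftershock rate caused by a
    Laplace magnitude error passed through the productivity law 10^(alpha*dm)."""
    noise = rng.choice((-1, 1)) * rng.expovariate(1.0 / nu)
    return 10 ** (alpha * noise)

perturb = [rate_perturbation() for _ in range(200_000)]

# For x > 1, P(perturbation > x) = 0.5 * x^(-tau) with tail exponent:
tau = 1.0 / (alpha * nu * math.log(10))
```

Even though the magnitude noise itself has light (exponential) tails, the exponential dependence of productivity on magnitude converts it into a power law, which is why the survivor functions on this slide are heavy-tailed.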
Slide 18: Evaluating Noisy Forecasts
How important are the fluctuations in the evaluation of forecasts? Conduct a numerical experiment:
- Simulate earthquake reality according to our simple cluster model
- Make reality noisy
- Generate forecasts from the noisy data
- Submit the forecasts to a mock CSEP/RELM test center
- Test the noisy forecasts on reality using the currently proposed consistency tests
- Reject models if the test confidence is 90% (i.e. expect 1 in 10 rejected wrongfully)
- Calibrate the parameters of the experiment to mimic California
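The count-based consistency check in this experiment can be sketched as an N-test of the kind used in RELM: reject a forecast at 90% confidence when the observed number of events falls in either 5% tail of the Poisson distribution implied by the forecast mean. This is a simplified stand-in for the actual CSEP/RELM test suite:

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam), by direct summation."""
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def n_test_rejects(observed, forecast_mean, confidence=0.90):
    """Reject the forecast if the observed count lies in either tail
    (each of mass (1 - confidence)/2) of the implied Poisson distribution."""
    lower_tail = poisson_cdf(observed, forecast_mean)
    upper_tail = 1.0 - (poisson_cdf(observed - 1, forecast_mean)
                        if observed > 0 else 0.0)
    alpha = (1.0 - confidence) / 2.0
    return lower_tail < alpha or upper_tail < alpha
```

Because the test assumes Poisson variability, any extra variance injected by magnitude noise inflates the rejection rate beyond the nominal 10%, which is exactly what the next slide quantifies.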
Slide 19: Numerical Experiment Results
In order of increasing level of noise:

Number of rejected models | Violates assumed 90% confidence bounds
0/10                      | no
10/60                     | probably
9/10                      | yes
7/10                      | yes
10/10                     | yes
Slide 20: Implications
- Forecasts are noisy and not an exact expression of the model's underlying scientific hypothesis.
- The variability of observations consistent with a model is non-Poissonian when accounting for uncertainties.
- The particular idiosyncrasies of each model also cannot be captured by a Poisson distribution.
- But the consistency tests assume Poissonian variability!
- Models themselves should generate the full distribution.
- Complex noise propagation can be simulated.
- Two approaches:
  - Simple bootstrap: sample from past data distributions to generate many forecasts.
  - Data assimilation: correct observations by prior knowledge in the form of a model forecast.
Slide 21: Earthquake Forecasting Based on Data Assimilation
Outline:
- current methods for accounting for uncertainties
- introduction to data assimilation
- how data assimilation can help
- Bayesian data assimilation (DA)
- sequential Monte Carlo methods for Bayesian DA
- demonstration of use for a noisy renewal process
Werner, Ide and Sornette (2008), in preparation.
Slide 22: Existing Methods in Earthquake Forecasting
- The Benchmark
  - Ignore uncertainties
  - Current strategy of operational forecasts (e.g. cluster models)
- The Bootstrap
  - Sample from plausible observations to generate an average forecast
  - Renewal processes with noisy occurrence times
  - Paleoseismological studies (Rhoades et al., 1994; Ogata, 2002)
- The Static Bayesian
  - Consider the entire data set and correct observations by the model forecast
  - Renewal processes with noisy occurrence times
  - Paleoseismological studies (Ogata, 1999)
- Generalize to multi-dimensional, marked point processes
- Use the Bayesian framework for optimal use of information
- Provide sequential forecasts and updates
Slide 23: Data Assimilation
- Talagrand (1997): "The purpose of data assimilation is to determine as accurately as possible the state of the atmospheric (or oceanic) flow, using all available information."
- Statistical combination of observations and short-range forecasts produces the initial conditions used in the model to forecast (Bayes' theorem).
- Advantages:
  - General conceptual framework for uncertainties
  - Constrain unknown initial conditions
  - Account for observational noise, system noise and parameter uncertainties
  - Deal with missing observations
  - Best possible recursive forecast given all information
  - Include different types of data
Slide 24: Data Assimilation
Slide 25: Bayesian Data Assimilation
Unobserved states, noisy observations. Starting from an initial condition, sequentially obtain the posterior using Bayes' theorem: the model forecast (prediction step) is combined with the data likelihood (update step).
- This is a conceptual solution only.
- An analytical solution is only available under additional assumptions:
  - Kalman filter: Gaussian distributions, linear model
- Approximations:
  - local Gaussian: extended Kalman filter
  - ensembles of local Gaussians: ensemble Kalman filter
  - particle filters: non-linear model, arbitrary evolving distributions
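In symbols, with x_t the unobserved state and y_{1:t} the noisy observations up to time t, the prediction and update steps are the standard Bayesian filtering recursion:

```latex
% Prediction: propagate the previous posterior through the model dynamics
p(x_t \mid y_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, \mathrm{d}x_{t-1}
% Update: Bayes' theorem with the data likelihood of the new observation
p(x_t \mid y_{1:t}) = \frac{p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})}
                           {\int p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})\, \mathrm{d}x_t}
```

The Kalman filter evaluates both integrals in closed form under the linear-Gaussian assumptions; the approximations listed above replace them with local Gaussians, ensembles, or weighted particles.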
Slide 26: Sequential Monte Carlo Methods
- flexible set of simulation-based techniques for estimating posterior distributions
- no applications yet to point-process models (or seismology)
[Figure: posterior distribution approximated by particles and their weights]
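A bootstrap (sequential importance resampling) particle filter can be sketched on a toy one-dimensional random-walk state observed with Gaussian noise. This illustrates the predict-weight-resample cycle only; it is not the point-process implementation, and all parameter values are assumed:

```python
import random
import math

rng = random.Random(3)

def particle_filter_step(particles, y, sigma_model=0.5, sigma_obs=0.5):
    """One predict-weight-resample cycle of a bootstrap particle filter
    for the toy model x_t = x_{t-1} + N(0, sigma_model^2),
    observed as y_t = x_t + N(0, sigma_obs^2)."""
    # Predict: propagate each particle through the model dynamics
    predicted = [x + rng.gauss(0.0, sigma_model) for x in particles]
    # Weight: Gaussian data likelihood of the observation y
    weights = [math.exp(-0.5 * ((y - x) / sigma_obs) ** 2) for x in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw particles with probability proportional to weight
    return rng.choices(predicted, weights=weights, k=len(predicted))

particles = [rng.gauss(0.0, 2.0) for _ in range(2000)]  # diffuse prior
for y_obs in [1.0, 1.2, 0.9, 1.1]:
    particles = particle_filter_step(particles, y_obs)
estimate = sum(particles) / len(particles)  # posterior mean estimate
```

After a few observations near 1, the particle cloud concentrates around the true state; the same cycle carries over to non-linear, non-Gaussian models where the Kalman filter does not apply.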
Slide 27: Temporal Renewal Processes
[Figure: Renewal process with observation noise: the model forecast, the (observation) likelihood, and the analysis/posterior]
Werner, Ide and Sornette (2007), in prep.
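As a sketch of this setup, consider a lognormal renewal process whose occurrence times are observed with additive Gaussian error; the distribution choices and parameter values here are assumed for illustration, not taken from the paper:

```python
import random

rng = random.Random(11)

def simulate_noisy_renewal(n_events, mu=1.0, sigma=0.5, obs_noise=0.1):
    """Simulate a renewal process with LogNormal(mu, sigma) inter-event
    times and return (true_times, noisy_times), where each occurrence
    time is observed with additive Gaussian error of scale obs_noise."""
    times, t = [], 0.0
    for _ in range(n_events):
        t += rng.lognormvariate(mu, sigma)  # draw the next interval
        times.append(t)
    noisy = [ti + rng.gauss(0.0, obs_noise) for ti in times]
    return times, noisy

true_times, noisy_times = simulate_noisy_renewal(1000)
```

The data assimilation task is then to recover the posterior over the true occurrence times (and hence the forecast of the next event) from the noisy times alone, e.g. with the particle filter sketched on the previous slide.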
Slide 28: Numerical Experiment
[Figure: Model, noisy observations, and parameters of the numerical experiment]
Slide 29: Step 1
Slide 30: Step 2
Slide 31: Step 5
Slide 32: Outlook
- Data assimilation of more complex point processes and operational implementation (non-linear, non-Gaussian DA)
- Including parameter estimation
- Estimating and testing (forecasting) the corner magnitude
  - based on geophysics, EVT
  - including uncertainties (Bayesian?)
- Spatio-temporal dependencies of seismicity?
- Estimating extreme ground motions (shaking)
- Interest in better spatio-temporal characterization of seismicity (spatial, fractal clustering)
- Improved likelihood estimation of parameters in clustering models
- (scaling laws in seismicity, critical phenomena and earthquakes)