Title: Generalized Minimum Bias Models
Generalized Minimum Bias Models
- By
- Luyang Fu, Ph.D.
- Cheng-sheng Peter Wu, FCAS, ASA, MAAA
Agenda
- History and Overview of Minimum Bias Method
- Generalized Minimum Bias Models
- Conclusions
- Mildenhall's Discussion and Our Responses
- Q&A
History of Minimum Bias
- A technique with long history for actuaries
- Bailey and Simon (1960)
- Bailey (1963)
- Brown (1988)
- Feldblum and Brosius (2002)
- Included on the CAS Exam 9 syllabus
- Concepts
- Derive multivariate class plan parameters by minimizing a specified bias function
- Use an iterative method to find the parameters
History of Minimum Bias
- Various bias functions proposed in the past for minimization
- Examples of multiplicative bias functions proposed in the past:
History of Minimum Bias
- Then, how do we determine the class plan parameters by minimizing the bias function?
- One simple way is the commonly used iterative method for root finding (see the sketch below):
- Start with a random guess for the values of xi and yj
- Calculate the next set of values for xi and yj using the root-finding formula for the bias function
- Repeat the steps until the values converge
- Easy to understand and can be programmed in almost any tool
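A minimal Python sketch of this iterative scheme, using the balanced (zero-bias) equations for a multiplicative plan as the example bias function; the function name and defaults are illustrative, not from the paper:

```python
import numpy as np

def balanced_multiplicative(r, w, tol=1e-8, max_iter=100):
    """Iterative root finding for the balanced bias equations of a
    multiplicative class plan r_ij ~ x_i * y_j.

    r : (m, n) array of observed relativities
    w : (m, n) array of exposures
    """
    m, n = r.shape
    x, y = np.ones(m), np.ones(n)          # simple starting guess
    for _ in range(max_iter):
        # zero-bias condition per row i: sum_j w_ij (r_ij - x_i y_j) = 0
        x_new = (w * r).sum(axis=1) / (w @ y)
        # same condition per column j, using the freshly updated x
        y_new = (w * r).sum(axis=0) / (w.T @ x_new)
        if max(abs(x_new - x).max(), abs(y_new - y).max()) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y
```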
History of Minimum Bias
- For example, using the balanced bias functions for the multiplicative model:
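The balance equations themselves are not preserved in the extracted text; the standard form (consistent with Bailey 1963 and Mildenhall 1999) is:

$$\sum_j w_{ij}\,(r_{ij} - x_i y_j) = 0 \;\Longrightarrow\; x_i = \frac{\sum_j w_{ij}\, r_{ij}}{\sum_j w_{ij}\, y_j}, \qquad y_j = \frac{\sum_i w_{ij}\, r_{ij}}{\sum_i w_{ij}\, x_i}$$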
History of Minimum Bias
- Past minimum bias models with the iterative method
Issues with the Iterative Method
- Two questions regarding the iterative method
- How do we know that it will converge?
- How fast/efficiently will it converge?
- Answers
- Numerical Analysis or Optimization textbooks
- Mildenhall (1999)
- Efficiency is less of an issue given modern computing power
Other Issues with Minimum Bias
- What is the statistical meaning behind these models?
- More models to try?
- Which models to choose?
Summary of Minimum Bias
- A non-statistical approach
- Best answers when bias functions are minimized
- Uses the iterative root-finding method to determine parameters
- Easy to understand and can be programmed in many tools
Minimum Bias and Statistical Models
- Brown (1988)
- Show that some minimum bias functions can be derived by maximizing the likelihood functions of corresponding distributions
- Propose several more minimum bias models
- Mildenhall (1999)
- Prove that minimum bias models with linear bias functions are essentially the same as those from Generalized Linear Models (GLM)
- Propose two more minimum bias models
Minimum Bias and Statistical Models
- Past minimum bias models and their corresponding statistical models
Statistical Models - GLM
- Advantages include
- Commercial software and built-in procedures are available
- Characteristics are well determined, such as confidence levels
- Computational efficiency compared to the iterative procedure
- Issues include
- Requires more advanced statistical knowledge for GLM models
- Lack of flexibility
- Relies on commercial software or built-in procedures
- Assumes distributions from the exponential family
- Limited distribution selections in popular statistical software
- Difficult to program yourself
Motivations for Generalized Minimum Bias Models
- Can we unify all the past minimum bias models?
- Can we completely represent the wide range of GLM and statistical models using minimum bias models?
- Can we expand the model selection options that go beyond all the currently used GLM and minimum bias models?
- Can we improve the efficiency of the iterative method?
Generalized Minimum Bias Models
- Starting with the basic multiplicative formula
- The alternative estimates of x and y
- The next question is how to roll up Xi,j to Xi, and Yj,i to Yj
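The formulas for this slide are lost in extraction; a reconstruction consistent with the Xi,j / Yj,i notation (r_ij the observed relativity, w_ij the exposure) is:

$$r_{ij} = x_i\, y_j \quad\Longrightarrow\quad X_{i,j} = \frac{r_{ij}}{y_j}, \qquad Y_{j,i} = \frac{r_{ij}}{x_i}$$

so each cell (i, j) supplies one candidate estimate X_{i,j} of x_i, and the question is how to average them into a single X_i.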
Possible Weighting Functions
- The first and most obvious option: a straight average to roll up
- Using the straight average results in the exponential model by Brown (1988)
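In the notation above, the straight-average roll-up is:

$$X_i = \frac{1}{n}\sum_{j=1}^{n} X_{i,j} = \frac{1}{n}\sum_{j=1}^{n}\frac{r_{ij}}{y_j}$$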
Possible Weighting Functions
- Another option is to use the relativity-adjusted exposure as the weighting function
- This is the Bailey (1963) model, or the Poisson model by Brown (1988)
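With weight w_ij y_j, the roll-up reduces to the balanced bias equation seen earlier:

$$X_i = \frac{\sum_j w_{ij}\, y_j\, X_{i,j}}{\sum_j w_{ij}\, y_j} = \frac{\sum_j w_{ij}\, r_{ij}}{\sum_j w_{ij}\, y_j}$$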
Possible Weighting Functions
- Another option: use the square of the relativity-adjusted exposure
- This is the normal model by Brown (1988)
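With weight (w_ij y_j)^2:

$$X_i = \frac{\sum_j (w_{ij} y_j)^2\, X_{i,j}}{\sum_j (w_{ij} y_j)^2} = \frac{\sum_j w_{ij}^2\, y_j\, r_{ij}}{\sum_j w_{ij}^2\, y_j^2}$$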
Possible Weighting Functions
- Another option: use the relativity-square-adjusted exposure
- This is the least-squares model by Brown (1988)
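With weight w_ij y_j^2:

$$X_i = \frac{\sum_j w_{ij}\, y_j^2\, X_{i,j}}{\sum_j w_{ij}\, y_j^2} = \frac{\sum_j w_{ij}\, y_j\, r_{ij}}{\sum_j w_{ij}\, y_j^2}$$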
Generalized Minimum Bias Models
- So, the key to generalization is to apply different weighting functions to roll up Xi,j to Xi and Yj,i to Yj
- Propose a general weighting function of two factors, exposure and relativity: W^p X^q and W^p Y^q
- Almost all minimum bias models published to date are special cases of GMBM(p,q)
- There are also more modeling options to choose from, since in theory there is no limitation on the (p,q) values to try in fitting data: comprehensive and flexible
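In formula form (a reconstruction consistent with the special cases on the preceding slides, with w_ij the exposure):

$$X_i = \frac{\sum_j w_{ij}^{\,p}\, y_j^{\,q}\, X_{i,j}}{\sum_j w_{ij}^{\,p}\, y_j^{\,q}}, \qquad Y_j = \frac{\sum_i w_{ij}^{\,p}\, x_i^{\,q}\, Y_{j,i}}{\sum_i w_{ij}^{\,p}\, x_i^{\,q}}$$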
2-parameter GMBM
- The 2-parameter GMBM updating formulas, with the exposure- and relativity-adjusted weighting function, are sketched below
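A minimal Python sketch of the 2-parameter iteration, assuming the weighting form reconstructed above (illustrative code, not from the paper):

```python
import numpy as np

def gmbm(r, w, p=1.0, q=1.0, tol=1e-8, max_iter=100):
    """2-parameter GMBM for a multiplicative plan r_ij ~ x_i * y_j.

    Each x_i is the weighted average of the cell estimates
    X_ij = r_ij / y_j with weights w_ij**p * y_j**q, and symmetrically
    for y_j.  (p, q) = (1, 1) reproduces Bailey (1963) / Poisson.
    """
    m, n = r.shape
    x, y = np.ones(m), np.ones(n)
    for _ in range(max_iter):
        wx = w**p * y**q                      # weights for the x roll-up
        x_new = (wx * (r / y)).sum(axis=1) / wx.sum(axis=1)
        wy = (w**p).T * x_new**q              # weights for the y roll-up
        y_new = (wy * (r.T / x_new)).sum(axis=1) / wy.sum(axis=1)
        if max(abs(x_new - x).max(), abs(y_new - y).max()) < tol:
            return x_new, y_new
        x, y = x_new, y_new                   # latest relativities feed the next step
    return x, y
```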
2-parameter GMBM vs. GLM
p q GLM
1 -1 Inverse Gaussian
1 0 Gamma
1 1 Poisson
1 2 Normal
2-parameter GMBM and GLM
- GMBM with p = 1 is the same as a GLM with variance function V(µ) = µ^(2-q)
- Additional special models:
- 0 < q < 1: the distribution is Tweedie, for pure premium models
- 1 < q < 2: not an exponential family
- -1 < q < 0: the distribution is between gamma and inverse Gaussian
- After years of technical development in GLM and minimum bias, at the end of the day, all of these models are connected through the game of weighted averages.
3-parameter GMBM
- One model published to date is not covered by the 2-parameter GMBM: the chi-squared model by Bailey and Simon (1960)
- Further generalization uses a concept similar to the link function in GLM: f(x) and f(y)
- Estimate f(x) and f(y) through the iterative method
- Calculate x and y by inverting f(x) and f(y)
3-parameter GMBM
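The formulas for this slide are lost; following the description on the previous slide, the roll-up is performed in f-space and then inverted (a reconstruction, not the paper's exact display):

$$f(X_i) = \frac{\sum_j w_{ij}^{\,p}\, y_j^{\,q}\, f(X_{i,j})}{\sum_j w_{ij}^{\,p}\, y_j^{\,q}}, \qquad X_i = f^{-1}\!\left(\frac{\sum_j w_{ij}^{\,p}\, y_j^{\,q}\, f(X_{i,j})}{\sum_j w_{ij}^{\,p}\, y_j^{\,q}}\right)$$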
3-parameter GMBM
- Propose a 3-parameter GMBM by using the power link function f(x) = x^k
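With the power link, the x roll-up becomes (consistent with the general form above; k = 1 recovers the 2-parameter GMBM):

$$X_i = \left(\frac{\sum_j w_{ij}^{\,p}\, y_j^{\,q}\, X_{i,j}^{\,k}}{\sum_j w_{ij}^{\,p}\, y_j^{\,q}}\right)^{1/k}$$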
3-parameter GMBM
- When k = 2, p = 1, and q = 1:
- This is the chi-square model by Bailey and Simon (1960)
- The underlying assumption of the chi-square model is that r^2 follows a Tweedie distribution with variance function V(µ) = µ^1.5
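Substituting k = 2, p = 1, q = 1 into the power-link roll-up:

$$X_i = \left(\frac{\sum_j w_{ij}\, y_j\, (r_{ij}/y_j)^2}{\sum_j w_{ij}\, y_j}\right)^{1/2} = \left(\frac{\sum_j w_{ij}\, r_{ij}^2 / y_j}{\sum_j w_{ij}\, y_j}\right)^{1/2}$$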
Further Generalization of GMBM
- In theory, there is no limitation in selecting the weighting functions; another possible generalization is to select the weighting functions separately and differently for x and y
- For example, suppose the x factors are stable and the y factors are volatile. We may want to use only x in the weighting function for y, and not use y in the weighting function for x.
- Such a generalization is beyond the GLM framework.
Numerical Methodology for the Iterative Method
- Use the mean of the response variable as the base
- Starting points
- Use the latest relativities in the iterations
- All the reported GMBMs converge within 8 steps
A Severity Case Study
- Data: the severity data for private passenger auto collision given in Mildenhall (1999) and McCullagh and Nelder (1989)
- Testing goodness of fit:
- Absolute Bias
- Absolute Percentage Bias
- Pearson Chi-square Statistic
- Fit hundreds of combinations of k, p, and q: k from 0.5 to 3, p from 0 to 2, and q from -2.5 to 4
A Severity Case Study
- Model Evaluation Criteria
- Weighted Absolute Bias (Bailey and Simon 1960)
- Weighted Absolute Percentage Bias
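The statistics themselves are not preserved in the extracted text; one common form (the paper's exact normalization may differ) is:

$$\mathrm{wab} = \frac{\sum_{i,j} w_{ij}\, \lvert r_{ij} - x_i y_j \rvert}{\sum_{i,j} w_{ij}}, \qquad \mathrm{wapb} = \frac{\sum_{i,j} w_{ij}\, \left\lvert \frac{r_{ij} - x_i y_j}{x_i y_j} \right\rvert}{\sum_{i,j} w_{ij}}$$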
A Severity Case Study
- Model Evaluation Criteria
- Pearson Chi-square Statistic (Bailey and Simon 1960)
- A combined criterion: Absolute Bias and Pearson Chi-square
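The chi-square statistic of Bailey and Simon (1960) for a multiplicative plan takes the form (the slide's display is lost; this is a reconstruction):

$$\chi^2 = \sum_{i,j} \frac{w_{ij}\, (r_{ij} - x_i y_j)^2}{x_i\, y_j}$$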
A Severity Case Study
Criterion p q k
wab 2 0 3
wapb 2 0 3
Chi-square 1 1 2
combined 1 -0.5 2.5
Conclusions
- 2- and 3-parameter GMBM can completely represent GLM models with power variance functions
- All minimum bias models published to date are special cases of GMBM
- GMBM provides additional model options for data fitting
- Easy to understand and does not require advanced statistical knowledge
- Can be programmed in many different tools
- Calculation efficiency is not an issue because of modern computing power.
Mildenhall's Discussion
- Statistical models are always better than non-statistical models
- GMBM don't go beyond GLM:
- GMBM(k,p,q) can be replicated by transformed GLMs with r^k as the response variable, w^p as the weight, and variance function V(µ) = µ^(2-q/k)
- When it is not an exponential family (1 < q < 2), the GLM numerical algorithm (iteratively re-weighted least squares) can still apply
- Iteratively re-weighted least squares is extremely fast
- In theory, we agree with Mildenhall; in practice, subject to discussion
Our Responses to Mildenhall's Discussion
- Are statistical models always better in practice?
- They require at least an intermediate level of statistical knowledge
- Statistical model results can only be produced by statistical software. For example, GLM is very difficult to implement in Excel without additional software
- Popular statistical software provides limited distribution selections
Our Responses to Mildenhall's Discussion
- Are statistical models always better in practice?
- Few software packages provide solutions for distributions with other power variance functions, such as Tweedie and non-exponential-family distributions
- Programming the above distributions with the iteratively re-weighted least squares algorithm requires advanced statistical and programming knowledge
- There are costs involved in acquiring software and knowledge
Our Responses to Mildenhall's Discussion
- Calculation Efficiency
- The iteratively re-weighted least squares algorithm converges in fewer iterations
- GMBM also converges fast with actuarial data; in our experience, it generally converges within 20 iterations
- The cost of the additional iterations is small, and the timing difference between GMBM and GLM is negligible with modern powerful computers
Q & A