Title: Stochastic Processes
1. Stochastic Processes
2. Markov Jump Processes
3. Markov Jump Processes - Overview
- Treatment analogous to Markov chains.
- One-step transition probabilities are replaced by transition rates.
- Transition graph still useful.
- Rows of the transition rate (generator) matrix now sum to zero.
- Chapman-Kolmogorov equations hold.
- We now work with differential/integral equations, so closed-form solutions may be difficult to find; numerical methods are used, in particular, for solving the Chapman-Kolmogorov equations.
4. Markov Jump Processes - Overview
- Standard example: sickness model
- Draw transition graph.
- Define transition rates.
- Derive differential equations for P_HS(s,t) and P_HH(s,t).
- Give transition matrix.
5. Overview - Note (by definition)
6. Overview: Properties of Transition Rates
- λ_ij(t) ≥ 0 if i ≠ j
- λ_ii(t) ≤ 0, and
- differentiating Σ_j P_ij(s,t) = 1 with respect to t at t = s gives λ_ii(s) = -Σ_{j≠i} λ_ij(s).
7. Markov Jump Processes
- Definition: A continuous-time Markov process with a discrete state space is called a Markov jump process.
- Its transition probabilities P_ij(s,t) obey the Chapman-Kolmogorov equations
- P_ij(s,t) = Σ_k P_ik(s,u) P_kj(u,t) for all u with s < u < t.
- Proof: as in the Markov chain case.
8. Some Technical Assumptions
- We assume the P_ij(s,t) are continuously differentiable (C^1) in s and t.
- Note that
- P_ij(s,s) = 0 if i ≠ j
- P_ij(s,s) = 1 if i = j
- This implies that the following (the transition rate) is well-defined:
- λ_ij(s) = ∂/∂t P_ij(s,t) |_{t=s} = lim_{h→0} [P_ij(s,s+h) - δ_ij] / h,
- where δ_ij is the Kronecker delta (i.e., δ_ij = 0 if i ≠ j, δ_ij = 1 if i = j).
9. Some Technical Assumptions
- Equivalently, as h → 0, we can write
- P_ij(s,s+h) = δ_ij + λ_ij(s) h + o(h).
- Note that this gives a one-to-one relationship between transition probabilities over a small interval and transition rates.
- All the information of the process is captured in the transition rates; they fully characterise the process.
10. Kolmogorov's Forward Equations
- Consider the Chapman-Kolmogorov equations
- P_ij(s,t) = Σ_k P_ik(s,u) P_kj(u,t).
- Differentiate with respect to t, then set u = t to obtain
- ∂/∂t P_ij(s,t) = Σ_k P_ik(s,t) λ_kj(t),
- which gives us Kolmogorov's forward equations (a short derivation sketch follows below).
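A derivation sketch (not on the original slide), using the Chapman-Kolmogorov equation with u = t and the small-interval expansion from slide 9:

\[
P_{ij}(s,t+h) = \sum_k P_{ik}(s,t)\,P_{kj}(t,t+h)
             = \sum_k P_{ik}(s,t)\,\bigl[\delta_{kj} + \lambda_{kj}(t)\,h + o(h)\bigr],
\]
\[
\frac{P_{ij}(s,t+h) - P_{ij}(s,t)}{h}
   = \sum_k P_{ik}(s,t)\,\lambda_{kj}(t) + \frac{o(h)}{h}
   \;\longrightarrow\; \sum_k P_{ik}(s,t)\,\lambda_{kj}(t) \quad (h \to 0).
\]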
11. Kolmogorov's Backward Equations
- Consider the Chapman-Kolmogorov equations
- P_ij(s,t) = Σ_k P_ik(s,u) P_kj(u,t).
- Differentiate with respect to s, then set u = s to obtain
- ∂/∂s P_ij(s,t) = -Σ_k λ_ik(s) P_kj(s,t),
- which gives us Kolmogorov's backward equations.
12. Note
- In general the forward and backward systems of equations are wholly equivalent, and this can be shown formally under suitable regularity conditions on the transition rates.
- If they are not equivalent, then use the backward equations.
13. Time-homogeneous Markov Jump Process
- If the process is time-homogeneous then P_ij(s,t) = P_ij(0,t-s), so write it simply as P_ij(t-s).
- This implies that the transition rate matrix A has constant elements.
- By Kolmogorov's forward equations, P'(t) = P(t) A,
- with initial condition P(0) = I.
- Solution: P(t) = e^{tA}, where e^X is defined to be Σ_{k≥0} X^k / k! (a computational sketch follows below).
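A minimal computational sketch (not from the slides) of P(t) = e^{tA}, assuming a hypothetical two-state generator; scipy.linalg.expm evaluates the matrix exponential:

import numpy as np
from scipy.linalg import expm

# Hypothetical generator for a two-state process (rows sum to zero):
# state 0 jumps to state 1 at rate 2.0; state 1 jumps back at rate 1.0.
A = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

t = 0.5
P_t = expm(t * A)          # P(t) = e^{tA}, solution of P'(t) = P(t) A, P(0) = I
print(P_t)
print(P_t.sum(axis=1))     # each row of P(t) sums to 1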
14. Example
- Consider the time-homogeneous Markov jump process with two states {0, 1} and a given 2x2 transition rate matrix A.
- What is the transition probability matrix P(t)?
15. Answer
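The slide's specific matrix is not reproduced here; as an illustration, assuming a generic two-state generator with jump rates α (state 0 to 1) and β (state 1 to 0), the standard closed form of P(t) = e^{tA} is:

\[
A = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}, \qquad
P(t) = \frac{1}{\alpha+\beta}
\begin{pmatrix}
\beta + \alpha e^{-(\alpha+\beta)t} & \alpha\bigl(1 - e^{-(\alpha+\beta)t}\bigr) \\
\beta\bigl(1 - e^{-(\alpha+\beta)t}\bigr) & \alpha + \beta e^{-(\alpha+\beta)t}
\end{pmatrix}.
\]

One can check directly that P(0) = I and P'(t) = P(t)A.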
16. Where we're going
- We can solve the time-homogeneous MJP completely; we would like to do the same for the general time-inhomogeneous case.
- First, we revisit the simple Poisson process and understand its properties, especially the distribution of its holding times.
- Then we show that the time-homogeneous MJP has the same distribution of its holding times as the Poisson process.
- Similarly, the holding time distribution of the general time-inhomogeneous case is shown to be similar.
- So, we know when the process will jump. It is straightforward to see the probability that it will jump into any given state. Hence, we have fully characterised the MJP: when it jumps and into what state.
17. Where we're going
- This allows us to look at any given MJP and write down intimidating-looking equations (the integrated form of Kolmogorov's backward and forward equations) for the transition probabilities.
- I indicate how to solve these equations numerically.
- And I conclude with some words on how to estimate the parameters of an MJP from data.
18. Poisson Process (from before)
- A Poisson process with rate λ is a continuous-time process {N_t, t ≥ 0} such that
- N_0 = 0,
- N_t has independent increments,
- N_t has Poisson distributed increments, i.e., N_t - N_s ~ Poisson(λ(t-s)) for s < t.
19. 3 Properties of Poisson Processes
- Prop. 1 (Superposition): The 46A bus and the 10 bus arrive at a particular bus stop in the manner of independent Poisson processes with parameters λ and μ respectively. Then the arrivals of either numbered bus form a Poisson process with parameter (intensity) λ + μ.
- Prop. 2 (Thinning): Several buses stop at a particular bus stop in the manner of a Poisson process with intensity λ, and the probability p that a bus is a number 10 is independent of all the other buses. Then the arrivals of the 10 bus form a Poisson process with intensity λp.
20. 3 Properties of Poisson Processes
- Definition: The first holding time is T_0 such that T_0 = inf{t : X_t ≠ X_0}. In general, the i-th holding time T_i is the time from the i-th jump until the next jump. Holding times are also known as inter-event times.
- Prop. 3 (Holding Times): The probability distribution of each holding time T_j in a Poisson process with intensity λ is exponential with parameter λ.
- Note: recall that the exponential distribution has the memoryless property, i.e., P(T > t+u | T > t) = P(T > u).
21. Poisson Process (equivalent formulations)
- Theorem: Let X_t be an increasing integer-valued process with X_0 = 0, which is right continuous. Let λ > 0. Then X_t is a Poisson process if any of the following hold:
- X_t has stationary, independent increments and, for each t, X_t has a Poisson distribution with parameter λt;
- X_t is a Markov jump process with independent increments and transition rates given by λ_ij(t) = λ if j = i+1, otherwise λ_ij(t) = 0 for i ≠ j;
- The holding times T_0, T_1, ... of X_t are independent exponential with parameter λ and X_{T_0 + ... + T_{n-1}} = n.
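A small simulation sketch (not from the slides) of the third formulation: build the process from independent exponential holding times and check that the count at time t is approximately Poisson(λt); the rate, horizon, and seed below are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
lam, t, n_paths = 1.5, 10.0, 20_000

# For each path, draw exponential holding times and count jumps before t.
counts = np.empty(n_paths, dtype=int)
for k in range(n_paths):
    total, n = 0.0, 0
    while True:
        total += rng.exponential(1.0 / lam)   # holding time ~ Exp(lam)
        if total > t:
            break
        n += 1
    counts[k] = n

print(counts.mean(), lam * t)   # sample mean vs. Poisson mean lam*t
print(counts.var(),  lam * t)   # sample variance vs. Poisson variance lam*t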
22. Structure of Markov Jump Processes
- Look at the simpler time-homogeneous case; we show that some insights from the Poisson process carry through to the time-homogeneous case and then to general Markov processes.
- Result 1: The first holding time of a time-homogeneous Markov jump process with transition rates λ_ij(t) = λ_ij is exponentially distributed with parameter -λ_ii, i.e.,
- P(T_0 > t | X_0 = i) = e^{-(-λ_ii) t}.
- Proof: on board (a sketch follows below).
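The board proof is not reproduced; one standard heuristic sketch splits [0, t] into n intervals of length h = t/n and uses the small-interval expansion P_ii(h) = 1 + λ_ii h + o(h) together with the Markov property:

\[
P(T_0 > t \mid X_0 = i) \approx \bigl(P_{ii}(h)\bigr)^{n}
 = \Bigl(1 + \lambda_{ii}\,\tfrac{t}{n} + o\bigl(\tfrac{1}{n}\bigr)\Bigr)^{n}
 \;\longrightarrow\; e^{\lambda_{ii} t} = e^{-(-\lambda_{ii})\,t} \quad (n \to \infty).
\]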
23. Time-homogeneous Markov Jump Processes
- But we must also characterise into which state the process jumps, and this has a straightforward form:
- P(X_{T_0} = j | X_0 = i) = λ_ij / (-λ_ii) for i ≠ j.
- Also, X_{T_0} is independent of T_0.
- Note that the mean holding time in state j is 1/(-λ_jj), and this is often used to estimate transition rates (a simulation sketch follows below).
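A minimal simulation sketch (not from the slides) of a time-homogeneous MJP using exactly this structure: exponential holding times with parameter -λ_ii and jump probabilities λ_ij / (-λ_ii). The three-state generator is a hypothetical example.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state generator (rows sum to zero).
A = np.array([[-1.0,  0.7,  0.3],
              [ 0.2, -0.5,  0.3],
              [ 0.1,  0.4, -0.5]])

def simulate_mjp(A, state, t_end):
    """Simulate one path: (jump times, states visited)."""
    t, times, states = 0.0, [0.0], [state]
    while True:
        rate = -A[state, state]                  # holding time ~ Exp(-lambda_ii)
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            return times, states
        probs = A[state].copy()
        probs[state] = 0.0
        probs /= rate                            # jump probs lambda_ij / (-lambda_ii)
        state = rng.choice(len(probs), p=probs)
        times.append(t)
        states.append(state)

times, states = simulate_mjp(A, state=0, t_end=20.0)
print(list(zip(times, states))[:5])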
24. The Time-Inhomogeneous Case
- The time-homogeneous case gives valuable insights.
- But applicable models tend to be time-inhomogeneous:
- E.g., the survival model is age dependent.
- E.g., the sickness model is again age dependent.
- Give overview of the standard survival model and its solution, and then generalise.
25. The Time-Inhomogeneous Case
- Definition: Let X_t be a general Markov jump process; then the residual holding time, denoted R_s, is the random variable that describes the amount of time between s and the next jump, i.e.,
- {R_s > w, X_s = i} = {X_u = i, s ≤ u ≤ s + w}.
- Put X'_s = X_{s+R_s}; then it can be shown that
- P(X'_s = j | X_s = i, R_s = w) = λ_ij(s+w) / (-λ_ii(s+w)).
- This way of looking at general Markov processes is a powerful computational tool.
26. Continuation of Markov Jump Processes
27. The Integrated Form of Kolmogorov's Backward Equations
- P_ij(s,t) = δ_ij exp(∫_s^t λ_ii(u) du) + ∫_s^t exp(∫_s^w λ_ii(u) du) Σ_{k≠i} λ_ik(w) P_kj(w,t) dw,
- that is, one conditions on the first jump of the process out of i after time s.
28. The Integrated Form of Kolmogorov's Forward Equations
- P_ij(s,t) = δ_ij exp(∫_s^t λ_jj(u) du) + ∫_s^t Σ_{k≠j} P_ik(s,w) λ_kj(w) exp(∫_w^t λ_jj(u) du) dw,
- that is, one conditions on the last jump of the process into j before time t.
29. Applications
- The sickness-death model
- Sickness-death with duration dependence
- Marriage
- Finally, numerical methods to help us solve some of the nasty equations we encounter.
30. Numerical Methods
- We have used our previous insights into MJPs to write down equations describing the process in two ways:
- The differential form, i.e., Kolmogorov's forward or backward equations.
- The integrated (or integral) form, i.e., the integrated form of Kolmogorov's forward or backward equations.
- Now, as we have seen, it is a straightforward matter to solve explicitly the time-homogeneous case.
- In general, though, it is not possible to find an explicit (closed form) solution in terms of well-known functions. We must use numerical methods to approximate the solution.
31. One Numerical Method to Solve the Differential Form: Euler's Method
- The differential form is given by Kolmogorov's forward or backward equations, e.g., the forward system
- ∂/∂t P_ij(s,t) = Σ_k P_ik(s,t) λ_kj(t),
- or in matrix form ∂/∂t P(s,t) = P(s,t) A(t),
- with initial condition P(s,s) = I.
32. One Numerical Method to Solve the Differential Form: Euler's Method
- Let P(s,s) = I.
- Then, for a given step size h,
- P(s, s+mh) = P(s, s+(m-1)h) + h · P(s, s+(m-1)h) · A(s+(m-1)h).
- So, recursively, we estimate P(s,t) at (t-s)/h equally spaced mesh points between s and t. Other points can be estimated by linear interpolation.
- Errors in this method are proportional to (of the order of) h.
- With a little more effort we could find a numerical procedure where the approximation errors are of the order of h^4 (e.g., the 4th-order Runge-Kutta method). A sketch of the Euler recursion follows below.
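A minimal sketch of the Euler recursion above, assuming a hypothetical time-dependent generator A(t) for a two-state process:

import numpy as np

def A(t):
    # Hypothetical time-dependent generator (rows sum to zero).
    return np.array([[-(0.5 + 0.1 * t),  0.5 + 0.1 * t],
                     [ 0.3,             -0.3          ]])

def euler_transition_matrix(s, t, h):
    """Approximate P(s,t) via P <- P + h * P * A at each mesh point."""
    n_steps = int(round((t - s) / h))
    P = np.eye(2)                      # P(s,s) = I
    for m in range(n_steps):
        u = s + m * h
        P = P + h * P @ A(u)           # Euler step of dP/dt = P A(t)
    return P

P = euler_transition_matrix(s=0.0, t=5.0, h=0.001)
print(P)
print(P.sum(axis=1))   # rows should be close to 1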
33. One Numerical Method to Solve the Integrated Form
- Consider the integrated form of Kolmogorov's forward equations.
- Apply the following recursive approximation procedure:
- Let
- And
34. One Numerical Method to Solve the Integrated Form (cont.)
- Now, to evaluate the latter expression we need to approximate an integral. Use some high-order numerical approximation procedure, e.g., Simpson's rule (a sketch follows below):
- ∫_a^b f(x) dx ≈ (h/3) [f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + ... + 4 f(x_{m-1}) + f(x_m)],
- where
- h = (b-a)/m (m even) and x_k = a + kh,
- and [x] = integer part of x.
- Recall that Simpson's rule has error of the order of h^4.
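A minimal sketch of composite Simpson's rule (not from the slides), with a quick check that the error behaves like h^4 (halving h reduces the error by roughly a factor of 16); the test integrand is an illustrative choice.

import numpy as np

def simpson(f, a, b, m):
    """Composite Simpson's rule with m subintervals (m must be even)."""
    if m % 2:
        raise ValueError("m must be even")
    h = (b - a) / m
    x = a + h * np.arange(m + 1)
    w = np.ones(m + 1)
    w[1:-1:2] = 4.0        # odd interior points
    w[2:-1:2] = 2.0        # even interior points
    return h / 3.0 * np.dot(w, f(x))

exact = 1.0 - np.cos(1.0)              # integral of sin(x) on [0, 1]
for m in (4, 8, 16):
    print(m, abs(simpson(np.sin, 0.0, 1.0, m) - exact))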
35. Final Words on Testing Markov Models / Estimation of Parameters
- So we have a real process that we think can adequately be modelled using a Markov process. Two key problems:
- How do I estimate parameters from gathered data?
- How do I check that the model is adequate for my purpose?
36. Markov Chain Estimation
- Suppose we have x_1, x_2, ..., x_N observations of our process.
- For a time-homogeneous Markov chain, the transition probabilities can be estimated as p̂_ij = n_ij / n_i,
- where n_i is the number of times t, 1 ≤ t ≤ N-1, such that x_t = i,
- and n_ij is the number of times t, 1 ≤ t ≤ N-1, such that x_t = i and x_{t+1} = j.
- Clearly, n_ij ~ Binomial(n_i, p_ij), so we can calculate confidence intervals for our estimates (a sketch follows below).
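A minimal sketch of this estimator (not from the slides); the observed path below is a placeholder generated at random, and the interval uses the usual binomial normal approximation.

import numpy as np

def estimate_transition_matrix(x, n_states):
    """Estimate p_ij = n_ij / n_i from one observed path x_1, ..., x_N."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(x[:-1], x[1:]):
        counts[a, b] += 1
    n_i = counts.sum(axis=1, keepdims=True)
    return counts / np.where(n_i > 0, n_i, 1), counts

# Placeholder observed path over states {0, 1, 2}.
rng = np.random.default_rng(2)
x = rng.integers(0, 3, size=1000)
P_hat, counts = estimate_transition_matrix(x, 3)
print(P_hat)

# Approximate 95% interval for one entry via the binomial normal approximation.
i, j = 0, 1
n_i = counts[i].sum()
p = P_hat[i, j]
half_width = 1.96 * np.sqrt(p * (1 - p) / n_i)
print(p - half_width, p + half_width)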
37. Markov Chain Evaluation
- The key property assumed in the model, and to be tested against the data, is the Markov property.
- An effective test statistic is the chi-square goodness-of-fit statistic, checking that the transition probabilities of successive triplets depend only on the final transition probability.
- This statistic has sr - q - 1 degrees of freedom, where
- s is the number of states visited before time N (i.e., n_i > 0),
- q is the number of pairs (i,j) with n_ij > 0,
- r is the number of triplets (i,j,k) with n_ij n_jk > 0.
38. Markov Jump Process Estimation
- We can estimate the parameters in the time-homogeneous case as
- λ̂_ii = -1 / (average duration in state i over completed visits), and
- λ̂_ij = p̂_ij (-λ̂_ii) for i ≠ j, where p̂_ij is the estimated probability that a jump out of i goes to j (a sketch follows below).
39. Markov Jump Process Evaluation
- Does our model adequately capture the underlying reality?
- Possible tests:
- Times in a state are exponentially distributed: use a chi-square goodness-of-fit test.
- Does the Markov property hold: test for no memory in triples, as in the Markov chain case.
- Use graphs to make other tests visually, e.g., that the state into which the process jumps is independent of the time spent in the previous state. No pattern should be evident.
40. Estimation and Evaluation of Time-Inhomogeneous Markov Processes
- An order of magnitude more complicated.
- Special techniques are used (see other actuarial courses) to estimate and test decrements in, say, the simple survival model (mortality statistics), the sickness model, etc.
41. This Completes Markov Jump Processes
42. Stochastic Processes