Stochastic Processes

1
Stochastic Processes
  • Shane Whelan
  • L527

2
Chapter 2: Markov Chains
3
Markov Chain - definition
  • Recall a Markov chain is a discrete time Markov
    process with an at most countable state space,
    i.e.,
  • A Markov process is a sequence of rvs, X_0, X_1, …,
    such that
  • P(X_n = j | X_0 = a, X_1 = b, …, X_m = i) = P(X_n = j | X_m = i)
  • where m < n.

4
Overview Example
  • Markov chains are often displayed by a transition
    graph: states linked by arrows wherever there is
    positive probability of transition in that
    direction, generally with the transition
    probabilities shown alongside, e.g.

[Transition graph for the example: states 0 to 6; the transition probabilities shown include 1, 2/3, 1/3, 3/5 and 1/5.]
5
Overview Example
  • Starting from 0, show that the prob. of hitting 6
    is ¼.
  • Starting from 1, show that the prob. of hitting 3
    is 1.
  • Starting from 1, show that it takes on average 3
    steps to hit 3.
  • Starting from 1, show that the long-run
    proportion of time spent in 2 is 3/8.
  • As the number of steps increases without bound,
    show that the transition prob. from state 0 to
    state 1 tends to 9/32.
  • As the number of steps increases without bound,
    show that the transition prob. from state 0 to
    state 4 has no limit (it is not defined).
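The hitting-probability exercises above are instances of first-step analysis: condition on the first move to obtain linear equations for the hitting probabilities. Below is a minimal Python sketch of that method; since the graph above did not survive transcription intact, the matrix here is a hypothetical stand-in (a small gambler's-ruin chain), not the slide's example.

    import numpy as np

    def hit_prob(P, transient, target):
        # First-step analysis: with `target` absorbing, the hitting
        # probabilities h from the transient states solve
        # (I - Q) h = b, where Q is P restricted to the transient
        # states and b holds the one-step probabilities into `target`.
        Q = P[np.ix_(transient, transient)]
        b = P[transient, target]
        return np.linalg.solve(np.eye(len(transient)) - Q, b)

    # Hypothetical chain on {0, 1, 2, 3}: 0 and 3 absorbing,
    # fair +1/-1 steps from the interior states.
    P = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.5, 0.0],
                  [0.0, 0.5, 0.0, 0.5],
                  [0.0, 0.0, 0.0, 1.0]])
    print(hit_prob(P, transient=[1, 2], target=3))  # [1/3, 2/3]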

6
Transition Probabilities
  • Transition probabilities are denoted
  • p_ij(m, n) = P(X_n = j | X_m = i),
  • i.e., the prob. of being in state j at time n, given
    that at time m the process is in state i.
  • And a one-step transition probability is p_ij(n, n+1).

7
Consequences
  • The distribution of a Markov chain is fully
    specified once the following are given:
  • The initial probability distribution, q_k = P(X_0 = k)
  • The one-step transition probabilities, p_ij(n, n+1)
  • Whence the prob. of any path, P(X_0 = a, X_1 = b, …, X_n = i),
    is readily deduced.
  • Whence, with time, we can answer most questions.

8
The Chapman-Kolmogorov Equations
  • Proposition: The transition probabilities of a
    Markov chain obey the Chapman-Kolmogorov
    equations, i.e., for any l with m ≤ l ≤ n,
  • p_ij(m, n) = Σ_k p_ik(m, l) p_kj(l, n)
  • Proof: A trivial consequence of the Theorem of
    Total Probability.
  • Whence the term "chain": can you see the link?

9
Eureka!
  • Markov chain problems are largely ones of matrix
    multiplication, by the Chapman-Kolmogorov equations.
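By Chapman-Kolmogorov, the n-step transition matrix is the n-th power of the one-step matrix, so n-step probabilities come from repeated matrix multiplication. A minimal numpy sketch (the two-state matrix is a hypothetical example, not one from the slides):

    import numpy as np
    from numpy.linalg import matrix_power

    # Hypothetical one-step matrix; rows are "from", columns "to".
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])

    # The 5-step transition probabilities are the entries of P^5.
    print(matrix_power(P, 5)[0, 1])  # p_01(5)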

10
Time-homogeneous Markov chains
  • Definition: A Markov chain is said to be
    time-homogeneous if
  • p_ij(n, n+1) = p_ij for all n,
  • i.e., the transition probabilities are
    independent of time, so knowing what state the
    process is in uniquely identifies the transition
    probabilities.

11
Exercise
  • Let X_n be a time-homogeneous Markov chain. Show
    that
  • p_ij(k) = (P^k)_ij
  • That is, the k-step transition probability from
    i to j is just the ij-entry of the one-step
    transition matrix P = (p_ij) taken to the power k
    (for a time-homogeneous Markov chain).

12
Time-homogeneous Markov chains
For a time-homogeneous chain, p_ij(m, n) depends only
on the difference n-m; it is written p_ij(n-m) and is
called the (n-m)-step transition probability. This
simplifies the Chapman-Kolmogorov equations to
p_ij(m+n) = Σ_k p_ik(m) p_kj(n).
13
Time-homogeneous Markov chains
  • Define the transition matrix P by
  • (P)_ij = p_ij (an N×N matrix, where N is the
    cardinality of the state space); then the k-step
    transition probability is given by p_ij(k) = (P^k)_ij.
  • Clearly, we have p_ij ≥ 0 and Σ_j p_ij = 1 for every i.
  • Matrices with this latter property are known as
    stochastic matrices.

14
Boardwork
  • Look at the general solution to the two-state
    Markov chain.
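For reference, a standard closed form for that boardwork (a sketch in notation chosen here, writing a = p_01 and b = p_10 with 0 < a + b < 2; the lecture's own notation may differ):

    P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix},
    \qquad
    P^n = \frac{1}{a+b}\begin{pmatrix} b & a \\ b & a \end{pmatrix}
        + \frac{(1-a-b)^n}{a+b}\begin{pmatrix} a & -a \\ -b & b \end{pmatrix}

The second term dies away geometrically, so P^n converges and the stationary distribution is π = (b/(a+b), a/(a+b)).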

15
Models based on Markov Chains
  • Model 1: The No Claims Discount (NCD) system is
    one where the motor insurance premium depends on
    the driver's claims record. It is a simple example
    of a Markov chain.
  • Instance: Three states: 0% discount, 25%
    discount and 50% discount. A claim-free year
    results in a transition to a higher discount (or
    remaining at the highest). A claim moves the driver
    to the next lower discount level (or remaining at 0%).

16
Model 2
  • Consider the 4-state NCD model given by
  • State 0: 0% discount
  • State 1: 25% discount
  • State 2: 40% discount
  • State 3: 60% discount
  • Here the transition rules are: move up one
    discount level (or stay at the maximum) if no claim
    in the previous year; move down one level if a claim
    in the previous year but not the year before; move
    down 2 levels if claims in the two immediately
    preceding years.
  • For concreteness, let the prob. of a claim-free
    year be, say, 75%.

17
Model 2
  • This is not a Markov chain:
  • P(X_{n+1} = 0 | X_n = 2, X_{n-1} = 1) ≠ P(X_{n+1} = 0 | X_n = 2, X_{n-1} = 3)
  • but
  • We can simply construct a Markov chain from
    Model 2. Consider the 5-state model with states
    0, 1, 3 as before but define
  • State 2+: 40% discount and no claim in the previous
    year.
  • State 2-: 40% discount and a claim in the previous
    year.
  • Now the transition matrix is given by

18
Model NCD 2
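The matrix image did not survive the transcript. Reconstructing it from the stated rules (claim-free probability 3/4), with states ordered (0, 1, 2+, 2-, 3), gives the following, which agrees with the stationary distribution quoted on slide 32:

          0     1     2+    2-    3
    0    1/4   3/4    0     0     0
    1    1/4    0    3/4    0     0
    2+    0    1/4    0     0    3/4
    2-   1/4    0     0     0    3/4
    3     0     0     0    1/4   3/4

For example, from state 2+ (no claim last year) a claim moves the driver down one level to state 1, while from state 2- (a claim last year) a second claim moves the driver down two levels to state 0.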
19
More Complicated NCD Models
  • Two possible enhancements to the models:
  • Make the accident rate dependent on the state (this
    is of course the notion behind this updating
    risk-assessment system)
  • Make the transition probabilities time-dependent
    (a time-inhomogeneous Markov chain) to reflect,
    say, faster motorbikes and younger drivers.

20
Simple Random Walk
  • As before, we have X_n = Σ_{i=1}^n Z_i, where
    P(Z_i = 1) = p, P(Z_i = -1) = 1-p.
  • The process has independent increments and hence is
    Markovian.
  • The transition graph and transition matrix are
    infinite.

21
Simple Random Walk
  • The transition matrix is given by p_{i,i+1} = p,
    p_{i,i-1} = 1-p, and all other entries zero.
  • The n-step probabilities are calculated as
  • p_ij(n) = C(n, (n+j-i)/2) p^{(n+j-i)/2} (1-p)^{(n-j+i)/2}
    when n+j-i is even and |j-i| ≤ n, and 0 otherwise.

22
Simple Random Walk
  • A simple random walk is not just
    time-homogeneous, it is also space-homogeneous,
    i.e.,
  • p_{i+k, j+k}(n) = p_ij(n)
  • for all k.
  • The only parameters affecting the n-step transition
    probabilities in a random walk are the overall
    distance covered (j-i) and the no. of steps (n).

23
Simple Random Walk with Boundary Conditions
  • Basic model as before but this time with added
    boundary conditions:
  • Reflecting boundary at 0: P(X_{n+1} = 1 | X_n = 0) = 1
  • Absorbing boundary at 0: P(X_{n+1} = 0 | X_n = 0) = 1
  • Mixed boundary at 0: P(X_{n+1} = 0 | X_n = 0) = α and
    P(X_{n+1} = 1 | X_n = 0) = 1 - α.
  • One can, of course, have upper boundaries as well
    as lower ones, and both in one model.
  • Practical applications: prob. of ruin for a
    gambler or, with different Z_i, a general
    insurance company.

24
Simple Random Walk with Boundary Conditions
  • The transition matrix with mixed boundary
    conditions, upper and lower, is given on the slide
    (not reproduced in the transcript).
  • Take α = 1 for a lower absorbing barrier (at 0),
    and α = 0 for a lower reflecting barrier (at 0).
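As a sketch of the shape of that matrix: on states {0, 1, …, N} with up-probability p, writing α for the mixed parameter at the lower barrier and β for the one at the upper barrier (the symbol β is an assumption here, mirroring the α and β mentioned on slide 29), the rows are

    row 0:            (α, 1-α, 0, …, 0)
    row i, 0 < i < N: 1-p in column i-1, p in column i+1, 0 elsewhere
    row N:            (0, …, 0, 1-β, β)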

25
A Model of Accident Proneness
  • Let us say that only one accident can occur in
    the unit time period, so that Y_i is a Bernoulli
    trial (Y_i = 1 or 0 only).
  • Now it seems reasonable to put
  • P(Y_{n+1} = 1 | X_n = x_n) = f(x_n)/g(n),
  • i.e., the prob. of an accident at time n+1 is a
    function of the past number of claims.
  • Also, f(.) and g(.) are increasing functions with
    0 ≤ f(m) ≤ g(m) for all m.
  • Clearly the Y_i's are not Markovian, but the
    cumulative number of accidents X_n = Σ_{i=1}^n Y_i
    is a Markov chain with state space {0, 1, 2, 3, …}.

26
A Model of Accident Proneness
  • Does it make sense to have a time-independent
    accident proneness model (i.e., g(n) a constant)?

27
Exercise: Accident Proneness Model
  • Let f(x_n) = 0.5 + x_n and g(n) = n + 1.
  • Hence, P(Y_{n+1} = 1 | X_n = x_n) = (0.5 + x_n)/(n + 1).
  • What is the prob. of a driver with no accident in the
    first year not having an accident in the second year too?
  • What is the prob. of an accident in the 11th year given an
    accident in each of the previous ten years?
  • What is the ij-th entry in the one-step transition
    matrix of the Markov chain X_n?
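A quick numerical check of the first two questions, assuming (as read above) f(x_n) = 0.5 + x_n and g(n) = n + 1:

    def p_accident(x_n, n):
        # P(Y_{n+1} = 1 | X_n = x_n) = (0.5 + x_n) / (n + 1)
        return (0.5 + x_n) / (n + 1)

    # No accident in year 1 (X_1 = 0): prob. of no accident in year 2.
    print(1 - p_accident(0, 1))   # 0.75

    # Accidents in each of the first ten years (X_10 = 10):
    # prob. of an accident in year 11.
    print(p_accident(10, 10))     # 10.5/11, about 0.955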

28
The Long-Term Distribution of a Markov Chain
  • Definition: We say that π_j, j ∈ S, is a stationary
    probability for a Markov chain with transition
    matrix P if
  • π = πP, where π = (π_1, π_2, …, π_N), N = |S|,
  • or, equivalently,
  • π_j = Σ_i π_i p_ij for each j ∈ S.

29
The Long-Term Distribution of a Markov Chain
  • So if the Markov chain comes across a stationary
    prob. distribution in its evolution then, from
    then on, the distributions of the X_n are
    invariant: the chain becomes a stationary
    process from then on.
  • In general a Markov chain need not have a stationary
    distribution and, if it does have one, it can have
    more than one.
  • The simple random walk does not have a stationary
    distribution.
  • The simple random walk with upper and lower
    boundary conditions has at least one stationary
    distribution, with uniqueness depending on the
    values of α and β.

30
The Long-Term Distribution of a Markov Chain
  • Theorem: A Markov chain with a finite state
    space has at least one stationary probability
    distribution.
  • Proof: NOT ON COURSE

31
Example 1
  • Consider a chain with only two states and a
    transition matrix given on the slide (not
    reproduced in the transcript). Find its
    stationary distribution.
  • Answer: (2/5, 3/5).

32
Example 2
  • Compute the stationary distribution of NCD Model 2.
    Recall the transition matrix given on slide 18.
  • Answer: (13/169, 12/169, 9/169, 27/169, 108/169)
  • = (1/169)(13, 12, 9, 27, 108)
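A quick numpy check that the quoted answer is stationary for the matrix reconstructed on slide 18 (state order 0, 1, 2+, 2-, 3):

    import numpy as np

    P = np.array([[1/4, 3/4,   0,   0,   0],
                  [1/4,   0, 3/4,   0,   0],
                  [  0, 1/4,   0,   0, 3/4],
                  [1/4,   0,   0,   0, 3/4],
                  [  0,   0,   0, 1/4, 3/4]])
    pi = np.array([13, 12, 9, 27, 108]) / 169

    print(np.allclose(pi @ P, pi))  # True: pi P = pi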

33
Pointers in Solving for Stationary Distributions
  • The n equations π = πP are not all independent, as
    the rows of the matrix sum to unity. Equivalently,
    this can be seen from the normalisation (or scaling)
    requirement Σ_j π_j = 1.
  • Hence one can delete one equation without losing
    information.
  • Hence solve first in terms of one of the π_i and
    then apply the normalisation.
  • The general solving technique is Gaussian
    elimination.
  • The discarded equation gives a check on the solution.
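The same recipe in code: drop one of the dependent equations of π(P - I) = 0, substitute the normalisation Σ_j π_j = 1 in its place, and solve. A minimal numpy sketch; the two-state matrix is a hypothetical example, and the dropped equation can be checked afterwards.

    import numpy as np

    def stationary(P):
        # Rows of A are the equations pi (P - I) = 0; replace the
        # last one by the normalisation sum(pi) = 1.
        n = P.shape[0]
        A = (P - np.eye(n)).T
        A[-1, :] = 1.0
        b = np.zeros(n)
        b[-1] = 1.0
        return np.linalg.solve(A, b)

    P = np.array([[0.25, 0.75],
                  [0.50, 0.50]])  # hypothetical two-state chain
    print(stationary(P))          # [0.4, 0.6]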

34
When is Solution Unique?
  • Definition: A Markov chain is irreducible if for
    any i, j, p_ij(n) > 0 for some n. That is, any
    state j can be reached in a finite number of steps
    from any other state i.
  • The best way to determine irreducibility is to
    draw the transition graph.
  • Examples: the NCD models (1 and 2) and the simple
    random walk model without boundary conditions are
    all irreducible. The random walk with an absorbing
    barrier is not irreducible.

35
When is Solution Unique?
  • Theorem: An irreducible Markov chain with a
    finite state space has a unique stationary
    probability distribution.
  • Proof: NOT ON COURSE

36
Exercise
  • Is the process with the transition matrix shown on
    the slide (not reproduced in the transcript)
    irreducible?
  • What is/are the stationary distribution(s) of the
    process?

37
The Long-Term Behaviour of Markov Chains
  • Definition: A state i is said to be periodic with
    period d > 1 if p_ii(n) = 0 unless n ≡ 0 (mod d).
    If a state is not periodic then it is called
    aperiodic.
  • If a state is periodic then lim p_ii(n) does not
    exist as n → ∞.
  • Interesting fact: an irreducible Markov chain is
    either aperiodic or all its states have the same
    period.

38
Theorem
  • Theorem: Let p_ij(n) be the n-step transition
    probability of an irreducible aperiodic Markov
    chain on a finite state space. Then for every
    i, j,
  • p_ij(n) → π_j as n → ∞,
  • where π is the stationary probability
    distribution.
  • Proof: NOT ON COURSE
  • Importance: no matter what the initial state, the
    chain converges to the (unique) stationary
    probability distribution, i.e., in the long run the
    Markov chain will be arbitrarily close to the
    stationary probability distribution.
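The convergence can be watched numerically: powers of an irreducible aperiodic matrix approach a matrix whose every row is π. A sketch with a hypothetical two-state chain:

    import numpy as np
    from numpy.linalg import matrix_power

    P = np.array([[0.25, 0.75],
                  [0.50, 0.50]])  # hypothetical: irreducible, aperiodic
    print(matrix_power(P, 50))
    # Both rows are (0.4, 0.6): the limit does not depend on the
    # starting state and equals the stationary distribution.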

39
Example
  • Consider the time-homogeneous Markov chain on state
    space {0, 1} that flips state at every step, i.e.,
    with p_01 = p_10 = 1 (the slide's matrix is not
    reproduced in the transcript, but this is the only
    irreducible two-state chain with a periodic state).
  • We know (as it has a finite state space and is
    irreducible) that it has a unique stationary
    distribution.
  • However, the process will never reach it unless
    (trivially) it starts in the stationary
    distribution,
  • because at least one state is periodic.

40
Example
  • Consider the time-homogeneous Markov chain on
    state space {0, 1} with transition matrix P given
    by p_ij = ½ for all i, j (the slide's matrix is not
    reproduced, but the properties below force this
    choice).
  • Now P^n = P for all n.
  • So it has a finite state space, is irreducible and
    is aperiodic.
  • Hence lim p_ij(n) exists.
  • Here lim p_ij(n) = ½ as n → ∞.

41
Exercise
  • Is the Markov chain with the transition matrix
    shown on the slide (not reproduced in the
    transcript) irreducible? What is/are its
    stationary distribution(s)?

42
End of Markov Chains
43
Stochastic Processes