Title: DSGE Models and Optimal Monetary Policy
1. DSGE Models and Optimal Monetary Policy
2. A framework of analysis
- Typified by Woodford's Interest and Prices
- Sometimes called DSGE models
- Also known as NNS (New Neoclassical Synthesis) models
- Strongly micro-founded models
- Prominent role for monetary policy
- Optimising agents and policymakers
3. What do we assume?
- Model is stochastic, linear, time invariant
- Objective function can be approximated very well by a quadratic
- That the solutions are certainty equivalent
- Not always clear that they are
- Agents (when they form them) have rational expectations or fixed-coefficient extrapolative expectations
4. Linear stochastic model
- We consider a model in state space form
- u is a vector of control instruments, s a vector of endogenous variables, e is a shock vector
- The model coefficients are in A, B and C
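The state-space equation itself does not survive in this text. A standard form consistent with the notation on this slide (a sketch, not necessarily the exact one used) is:

```latex
s_{t+1} = A s_t + B u_t + C e_{t+1}
```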
5. Quadratic objective function
- Assume the following objective function
- Q and R are positive (semi-)definite symmetric matrices of weights
- 0 < β < 1 is the discount factor
- We take the initial time to be 0
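The objective function itself is missing here; a conventional quadratic loss consistent with the bullets (weights Q and R, discount factor β, initial time 0) is, as a sketch:

```latex
\min_{\{u_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t} \left( s_t' Q s_t + u_t' R u_t \right), \qquad 0 < \beta < 1
```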
6. How do we solve for the optimal policy?
- We have two options
- Dynamic programming
- Pontryagin's minimum principle
- Both are equivalent with non-anticipatory behaviour
- Very different with rational expectations
- We will require both to analyse optimal policy
7. Dynamic programming
- Approach due to Bellman (1957)
- Formulated the value function
- Recognised that it must have the structure
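The structure in question, quadratic in the state plus a constant, can be sketched as follows (assuming the LQ setup above; z absorbs the state-independent contribution of the shocks):

```latex
V(s_t) = s_t' S s_t + z
```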
8. Optimal policy rule
- First order condition (FOC) for u
- Use to solve for policy rule
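A sketch of the missing algebra, assuming the standard LQ model s_{t+1} = A s_t + B u_t + C e_{t+1} and the value function V(s) = s'Ss + z:

```latex
\begin{aligned}
\text{FOC for } u_t:\quad & (R + \beta B' S B)\, u_t + \beta B' S A\, s_t = 0 \\
\text{Policy rule:}\quad & u_t = -F s_t, \qquad F = \left(R + \beta B' S B\right)^{-1} \beta B' S A
\end{aligned}
```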
9. The Riccati equation
- Leaves us with an unknown in S
- Collect terms from the value function
- Drop z
10. Riccati equation (cont.)
- If we substitute in for F we can obtain
- Complicated matrix quadratic in S
- Solved backwards by iteration, perhaps by
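As a numerical illustration of iterating the Riccati equation backwards, here is a minimal sketch (the function `solve_riccati` and the scalar example are ours, not from the slides; the recursion assumes the standard discounted LQ form):

```python
import numpy as np

def solve_riccati(A, B, Q, R, beta, tol=1e-10, max_iter=10_000):
    """Iterate the discounted Riccati equation to a fixed point:

        S = Q + beta*A'SA - beta^2 * A'SB (R + beta*B'SB)^{-1} B'SA

    Returns S and the feedback matrix F of the rule u = -F s.
    Illustrative sketch; notation follows the slides (A, B, Q, R, beta).
    """
    S = Q.copy()
    for _ in range(max_iter):
        gain = np.linalg.solve(R + beta * B.T @ S @ B, B.T @ S @ A)
        S_new = Q + beta * A.T @ S @ A - beta**2 * A.T @ S @ B @ gain
        if np.max(np.abs(S_new - S)) < tol:
            S = S_new
            break
        S = S_new
    F = beta * np.linalg.solve(R + beta * B.T @ S @ B, B.T @ S @ A)
    return S, F

# Toy scalar example: s' = 0.9 s + u + e, weights Q = 1, R = 0.1
A = np.array([[0.9]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[0.1]])
S, F = solve_riccati(A, B, Q, R, beta=0.95)
```

With certainty equivalence the same F is optimal whatever the shock variance; only the constant z in the value function changes.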
11. Properties of the solution
- Principle of optimality
- The optimal policy depends on the unknown S
- S must satisfy the Riccati equation
- Once you solve for S you can define the policy rule and evaluate the welfare loss
- S does not depend on s or u, only on the model and the objective function
- The initial values do not affect the optimal control
12. Lagrange multipliers
- Due to Pontryagin (1957)
- Formulated a system using constraints as
- λ is a vector of Lagrange multipliers
- The constrained objective function is
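The constrained objective referred to is missing from this text; a standard Lagrangian consistent with the setup (a sketch; the timing of the multiplier λ_{t+1} on the period-t constraint is an assumption) is:

```latex
\mathcal{L} = \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t} \Big[\, s_t' Q s_t + u_t' R u_t
 + 2 \lambda_{t+1}' \left( A s_t + B u_t + C e_{t+1} - s_{t+1} \right) \Big]
```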
13. FOCs
- Differentiate with respect to the three sets of variables
14. Hamiltonian system
- Use the FOCs to yield the Hamiltonian system
- This system is saddlepath stable
- Need to eliminate the co-states to determine the solution
- NB Now in the form of a (singular) rational expectations model (discussed later)
15. Solutions are equivalent
- Assume that the solution to the saddlepath problem is
- Substitute into the FOCs to give
16. Equivalence (cont.)
- We can combine these with the model and eliminate s to give
- Same solution for S that we had before
- Pontryagin and Bellman give the same answer
- Norman (1974, IER) showed them to be stochastically equivalent
- Kalman (1961) developed certainty equivalence
17. What happens with RE?
- Modify the model to
- Now we have z as predetermined variables and x as jump variables
- Model has a saddlepath structure on its own
- Solved using Blanchard-Kahn etc.
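The Blanchard-Kahn condition invoked above is easy to check numerically: the number of eigenvalues of the transition matrix outside the unit circle must equal the number of jump variables. A minimal sketch (the helper `bk_condition` and the 2x2 example matrix are illustrative, not from the slides):

```python
import numpy as np

def bk_condition(A, n_jump):
    """Blanchard-Kahn counting condition: a unique stable (saddlepath)
    solution requires as many eigenvalues of A outside the unit circle
    as there are jump variables."""
    n_unstable = int(np.sum(np.abs(np.linalg.eigvals(A)) > 1.0))
    return n_unstable == n_jump

# Hypothetical system: one predetermined variable z, one jump variable x
A = np.array([[0.9, 0.1],
              [0.0, 1.5]])  # eigenvalues 0.9 (stable) and 1.5 (unstable)
print(bk_condition(A, n_jump=1))  # -> True: saddlepath structure
```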
18. Bellman's dedication
- At the beginning of Bellman's book Dynamic Programming he dedicates it thus
- To Betty-Jo
- Whose decision processes defy analysis
19. Control with RE
- How do rational expectations affect the optimal policy?
- Somewhat unbelievably - no change
- Best policy characterised by the same algebra
- However, we need to be careful about the jump variables, and Betty-Jo
- We now obtain pre-determined values for the co-states λ
- Why?
20. Pre-determined co-states
- Look at the value function
- Remember the reaction function is
- So the cost can be written as
- We can minimise the cost by choosing some co-states and letting x jump
21. Pre-determined co-states (cont.)
- At time 0 this is minimised by
- We can rearrange the reaction function to
- Where
etc
22. Pre-determined co-states (cont.)
- Alternatively the value function can be written in terms of the xs and the zs as
- The loss is
23. Cost-to-go
- At time 0, z0 is predetermined
- x0 is not, and can be any value
- In fact it is a function of z0 (and implicitly u)
- We can choose the value of λx at time 0 to minimise cost
- We choose it to be 0
- This minimises the cost-to-go in period 0
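In symbols the argument is roughly the following sketch (the blocks S_1, S_2 of the cost matrix are hypothetical, assuming a form in which the cross terms between z_0 and the co-states vanish):

```latex
W_0 = z_0' S_1 z_0 + \lambda_{x,0}' S_2 \lambda_{x,0}
\quad\Longrightarrow\quad
\frac{\partial W_0}{\partial \lambda_{x,0}} = 2 S_2 \lambda_{x,0} = 0
\quad\Longrightarrow\quad \lambda_{x,0} = 0
```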
24. Time inconsistency
- This is true at time 0
- Time passes, maybe just one period
- Time 1 becomes time 0
- Same optimality conditions apply
- We should reset the co-states to 0
- The optimal policy is time inconsistent
25. Different to non-RE
- We established before that the non-RE solution did not depend on the initial conditions (or any z)
- Now it directly does
- Can we use the same solution methods?
- DP or LM?
- Yes, as long as we re-assign the co-states
- However, we are implicitly using the LM solution: as it is open-loop, the policy depends directly on the initial conditions
26. Where does this fit in?
- Originally established in the 1980s
- Clearest statement: Currie and Levine (1993)
- Re-discovered in recent US literature
- Ljungqvist and Sargent, Recursive Macroeconomic Theory (2000, and new edition)
- Compare with Stokey and Lucas
27. How do we deal with time inconsistency?
- Why not use the principle of optimality?
- Start at the end and work back
- How do we incorporate this into the RE control problem?
- Assume expectations about the future are fixed in some way
- Optimise subject to these expectations
28. A rule for future expectations
- Assume that
- If we substitute this into the model we get
29. A rule for future expectations (cont.)
- The pre-determined model is
- Using the reaction function for x we get
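The substitution can be sketched with a partitioned model (the blocks A_{ij}, B_i and the conjectured rule x_{t+1} = K z_{t+1} are assumptions standing in for the missing equations):

```latex
\begin{aligned}
z_{t+1} &= A_{11} z_t + A_{12} x_t + B_1 u_t, \\
\mathbb{E}_t x_{t+1} &= A_{21} z_t + A_{22} x_t + B_2 u_t, \\
\mathbb{E}_t x_{t+1} = K\,\mathbb{E}_t z_{t+1} \;\Rightarrow\;
x_t &= (A_{22} - K A_{12})^{-1} \left[ (K A_{11} - A_{21})\, z_t + (K B_1 - B_2)\, u_t \right]
\end{aligned}
```

The matrix multiplying u_t here is the channel through which the instrument moves the jump variables, which is where the leadership assumption on the next slide bites.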
30. Dynamic programming solution
- To calculate the best policy we need to make assumptions about leadership
- What is the effect on x of changes in u?
- If we assume no leadership it is zero
- Otherwise it is K, need to use
31. Dynamic programming (cont.)
- FOC for u for leadership
- where
- This policy must be time consistent
- Only uses intra-period leadership
32. Dynamic programming (cont.)
- This is known in the dynamic game literature as feedback Stackelberg
- Also need to solve for S
- Substitute in using relations above
- Can also assume that x is unaffected by u
- Feedback Nash equilibrium
- Developed by Oudiz and Sachs (1985)
33. Dynamic programming (cont.)
- Key assumption: that we condition on a rule for expectations
- Could condition on a time path (LM)
- Time consistent by construction
- Principle of optimality
- Many other policies have similar properties
- Stochastic properties now matter
34. Time consistency
- Not the only time consistent solutions
- Could use Lagrange multipliers
- DP is not only time consistent, it is subgame perfect
- Much stronger requirement
- See Blake (2004) for discussion
35. What's new with DSGE models?
- Woodford and others have derived welfare loss functions that are quadratic and depend only on the variances of inflation and output
- These are approximations to the true social utility functions
- Can apply LQ control as above to these models
- Parameters of the model appear in the loss function and vice versa (e.g. discount factor)
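As a toy illustration of such a loss function, here is a sketch (the weight `lam` is a placeholder; in the theory it is itself a function of model parameters such as the discount factor):

```python
import numpy as np

def welfare_loss(pi, gap, lam=0.5):
    """Woodford-style approximate welfare loss: a weighted sum of the
    variances of inflation and the output gap. The weight lam is a
    stand-in for the model-determined coefficient."""
    return np.var(pi) + lam * np.var(gap)

pi = np.array([0.0, 2.0])             # variance 1.0
gap = np.array([0.0, 0.0, 4.0, 4.0])  # variance 4.0
loss = welfare_loss(pi, gap, lam=0.5) # 1.0 + 0.5 * 4.0 = 3.0
```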
36. DSGE models in WinSolve
- Can set up micro-founded models
- Can set up micro-founded loss functions
- Can explore optimal monetary policy
- Time inconsistent
- Time consistent
- Taylor-type approximations
- Let's do it!