- Overview
Your DP model (called MyModel here) will be derived from the Bellman class or from one of the built-in derived classes. Think of these classes as a template for each point in your state space \(\Theta\). You pick one of the templates to start with and then customize it to match your model.
Each Bellman class embodies a specification of the DP model, especially the iid continuous state vector, denoted \(\zeta\), which smooths choice probabilities. This customization is fundamental because the form of \(\zeta\) (or the lack thereof) determines the calculations required to iterate on Bellman's equation at each point \(\theta.\) Each derived class of Bellman substitutes customized routines ("methods") to carry out these tasks.
Thus, the choice of parent class for MyModel depends on the action value equation:
$$v(\alpha;\zeta,\theta)\quad \equiv\quad U(\alpha;\theta) + \zeta_\alpha + \delta EV(\theta^{\,\prime}).$$
The state value function \(V(\theta)\) must integrate over \(\zeta.\) This is carried out internally by the virtual thetaEMax() or its replacement. It does not have to be coded by the user. The default method, thetaEMax(), assumes there is no \(\zeta.\) Thus, the default does no integration.
Solution methods are coded separately from Bellman. They are derived from the Method class and described in Methods. Some methods may only operate if the user's model is derived from a compatible class of Bellman or has other required characteristics. For example, one case of Bellman specialization is whether MyModel involves solving for reservation values. This is a different kind of continuous shock than \(\zeta\) and requires different calculations for Bellman's equation. In this case, the parent class for MyModel must derive from the OneDimensionalChoice class because reservation-value models allow only a single action variable.
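For instance, a minimal skeleton of such a model might look like the following; the class name MySearch and the method names Uz() and EUtility() follow the OneDimensionalChoice examples and should be treated as assumptions here, not as part of this page:
#import "DDP"
// Sketch only: a reservation-value model derives from OneDimensionalChoice
// rather than Bellman and replaces its virtual methods.
class MySearch : OneDimensionalChoice {
    EUtility();    // expected utility given the reservation value(s)
    Uz(z);         // utility vector evaluated at a candidate cutoff z
    }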
- The Minimal Template
#import "DDP"
class MyModel : Bellman {
    // declare static data members
    Utility();
    // Optional methods to replace built-in versions
    FeasibleActions();
    Reachable();
    ThetaUtility();
    OutcomesGivenEpsilon();
    }
- User-Contributed Elements of MyModel and MyCode
- Utility()
MyModel must supply a replacement for Utility(). Since utility depends on the current state, the method must be automatic (not static). Here is an example with one state variable and one action, showing how they might determine utility.
#import "DDP"
struct MyModel : Bellman {
    static decl d, s; // One decision and one state variable
    Utility();
    }
MyModel::Utility() {
    return CV(s)*CV(d);
    }
So this is a model where \(\alpha = (d)\) and \(\theta = (s)\) and \(U(\alpha;\theta)=sd.\)
As explained elsewhere, if s contains a state variable object, its "value" is not the object itself; likewise for the action variable d. Their current values are retrieved by sending them to CV(). Also, note that \(U()\) at a state is always treated as a vector-valued function in DDP, so CV(d) is a column vector. As a state variable, s is a scalar at \(\theta\).
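For completeness, here is a sketch of how this model might be set up and solved. The clock, the numbers of values for d and s, the SimpleJump process, the ValueIteration solver, and the static Run() method are all illustrative assumptions, not part of the model above:
#import "DDP"
main() { MyModel::Run(); }

MyModel::Run() {
    Initialize(new MyModel());                        // inherited from Bellman
    SetClock(NormalAging, 10);                        // assumed 10-period finite horizon
    Actions( d = new ActionVariable("d", 2) );        // d takes on values 0 and 1
    EndogenousStates( s = new SimpleJump("s", 5) );   // assumed IID jump process on 0..4
    CreateSpaces();
    decl mth = new ValueIteration();
    mth -> Solve();                                   // iterate on Bellman's equation
    }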
- Reachable States
A state is unreachable if it cannot occur given initial conditions. For example, a person cannot have 20 years of labour market experience at age 18. Including unreachable states in the state space wastes computation and storage but does not cause any errors.
MyModel can optionally provide a replacement for the virtual Reachable() method. The built-in version of Reachable returns TRUE, meaning all states are marked as reachable. The user can provide a replacement that returns an indicator for whether the current state is reachable or not.
- Example.
- Mark as unreachable all states at which \(x\) and \(y\) add up to a value greater than 5:
MyModel::Reachable() {
    return !(CV(x)+CV(y) > 5);
    }
StateVariables defined in DDP have their own Reachable methods, which are called when the state space is created, before MyModel::Reachable() is called. This means that in many cases the user does not need to code Reachable. For example, in the case of too much experience at a given age, the ActionCounter state variable will automatically prune states from a finite-horizon model based on this condition, as sketched below.
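A sketch of that automatic pruning, assuming an action variable work and an ActionCounter tracking up to 20 years of experience (the constructor arguments here are from memory and should be checked against the ActionCounter documentation):
// exp counts periods in which work equals 1, up to 20 values.
// In a finite-horizon model ActionCounter's own Reachable() prunes
// states where experience exceeds what is possible at the current age.
EndogenousStates( exp = new ActionCounter("exp", 20, work) );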
- Restricted Feasible Action Spaces / Matrices
MyModel can optionally provide a replacement for the virtual FeasibleActions() method to make the feasible choice set vary with the endogenous state \(\theta\). That is, the action space \(A\) is really \(A(\theta)\). Again, the default is that all values constructed from the action variables added to the model are feasible.
- Example.
- Only action vectors with d less than or equal to the value of state variable s are feasible.
MyModel::FeasibleActions() {
    return CV(d) .<= CV(s);
    }
The dot operator .<= is the element-by-element less-than-or-equal operator in Ox. So this returns a vector of 0s and 1s with length equal to the number of values d takes on. When setting up the spaces, DDP will call FeasibleActions at each point in the state space and then create a list of the distinct feasible sets. Each point \(\theta\) contains an index into this list to ensure that only feasible action values are returned by CV(d) when the model is being solved/used.
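As a concrete illustration (the numbers are assumptions): if d takes on the values 0 through 3 and the current value of s is 2, the comparison is carried out element by element:
decl dvals = <0;1;2;3>;   // what CV(d) returns when d has 4 values
println( dvals .<= 2 );   // a column of 1s and 0s; only d=3 is infeasible when s=2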
- Important: feasibility must be static. That is, the conditions returned by FeasibleActions must be determined at the creation of the spaces and cannot depend on changing elements of the model. For example, suppose p is the price of units of an action d that the agent takes as given, and suppose s is the agent's income at the current state. Then one might be tempted to impose the budget constraint like this:
MyModel::FeasibleActions() {
    return CV(p) * CV(d) .<= CV(s);
    }
However, if p is changing due to an equilibrium calculation this is incorrect, because FeasibleActions is only called once, inside DP::CreateSpaces(), so it cannot be used for a dynamic condition like this. Instead, Utility must impose the condition. Ox understands \(-\infty\), so you can assign an infeasible choice that value to ensure that it will not be optimal (and will be given 0 probability):
MyModel::Utility() {
    decl dv = CV(d);
    return (CV(p)*dv .<= CV(s)) .? dv .: -.Inf;
    }
The .? … .: … operation is an inline if-statement that checks the element-by-element condition at the start and assigns one of the two listed values depending on whether each element is TRUE or FALSE. In this case, if the value of p changes and a value of d is no longer affordable, that choice dynamically gets utility equal to \(-\infty\).
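Here is a standalone sketch of the element-by-element ternary with illustrative numbers (not tied to any model):
decl dv = <0;1;2;3>, p = 2, s = 5;
println( (p*dv .<= s) .? dv .: -.Inf );   // last element is -.Inf: d=3 costs 6 > 5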
- ThetaUtility
Suppose the utility for your model has the form
$$U() = f\left( \alpha; g\left(\theta\right),\eta,\epsilon\right).$$
That is, there is a function \(g()\) of the endogenous state variables that is common for all values of the IID (exogenous and semi-exogenous) state variables. If MyModel only provides the required Utility() function, then \(g(\theta)\) is recomputed for each value of the IID shocks.
This inefficiency can be eliminated by providing ThetaUtility(), which is called for each \(\theta\) immediately before looping over the IID exogenous state values and calling Utility(). A simple example:
$$U = a(xb + e - d) + d.$$
Here \(\theta=(x)\), \(\alpha=(a)\) is a binary choice, and \(\epsilon=(e)\) is an IID shock to the value of \(a=1\). \(b\) and \(d\) are parameters. So \(g() = xb\), which is not expensive to recompute unnecessarily. However, in some models this \(\theta\)-constant component of utility is very involved whereas the IID contributions are simple.
struct MyModel : Bellman {
    ⋮
    static decl a, x, b, d, xb, e;
    ⋮
    ThetaUtility();
    ⋮
    }
MyModel::ThetaUtility() {
    xb = CV(x)*b;
    }
MyModel::Utility() {
    return CV(a)*(xb+AV(e)-d) + d;
    }
ThetaUtility stores the computed value in a static member of the model, xb. If xb were not declared static, an additional location in memory would be created for each point \(\theta.\) It can be static even though the value of the state variable \(x\) depends on \(\theta\): as DDP moves through the state space, the value of xb is updated with the current value before Utility() is called for the current values of \(\theta\) and \(\epsilon.\) In complicated models, there may be many calculations that depend on endogenous states and estimated parameters. Using ThetaUtility() not only eliminates redundant computation, it does so without additional storage that grows with the state space.
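For comparison, without ThetaUtility() the same model would rebuild the \(\theta\)-constant term inside Utility(), repeating the multiplication for every IID value of e:
MyModel::Utility() {
    return CV(a)*(CV(x)*b + AV(e) - d) + d;   // CV(x)*b recomputed at each IID draw
    }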
- Hooks and Update Time
MyModel can use Add() to have a static method/function called at different points in solution methods. MyModel can also use SetUpdateTime() to set when solution methods should update transition probabilities and the utility of actions. This allows transitions and utility to depend on fixed and random effect variables; when they do not, wasted computation can be avoided by updating higher up in the process.
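A sketch of both calls; the hook tag PreUpdate, the update time AfterFixed, and the static function MyModel::Prices are assumptions to be checked against the Hooks and UpdateTimes documentation:
Hooks::Add(PreUpdate, MyModel::Prices);   // assumed hook: call Prices() at each update
SetUpdateTime(AfterFixed);                // assumed constant: quantities depend on fixed
                                          // effects but not random effects, so update
                                          // once per fixed-effect group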
- Auxiliary Variables
MyModel can add AuxiliaryValues for simulating outcomes and accounting for partial observability of the state. MyCode must sandwich the commands that add actions and states to the model between calls to DPparent::Initialize(…) and DPparent::CreateSpaces(…). MyModel can supply its own version of these two methods, but then they must call the parent versions. If MyModel does not have its own versions, then the prefix DPparent:: is not needed, because a reference to Initialize() will refer to the parent's version.
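Here is a sketch of the override pattern, reusing the illustrative d and s from the earlier example:
MyModel::Initialize() {
    DPparent::Initialize(new MyModel());
    // actions and states are added inside the Initialize/CreateSpaces sandwich
    Actions( d = new ActionVariable("d", 2) );
    EndogenousStates( s = new SimpleJump("s", 5) );
    }
MyModel::CreateSpaces() {
    DPparent::CreateSpaces();
    }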
| Data Member | Description |
| --- | --- |
| Aind | Index into CList; determines \(A(\theta)\). |
| EV | \(EV(\theta)\). |
| Nxt | TransStore × η-array of feasible endogenous state indices and transitions \(P(\theta^{\,\prime};\alpha,\eta,\theta)\). |
| pandv | \(v(\alpha;\epsilon,\eta,\theta)\) and \(P^\star()\). |
| Type | Integer code to classify the state (InSubSample, LastT, Terminal). |
| Method | Virtual/Static | Description |
| --- | --- | --- |
| ActVal | virtual | Compute \(v(\alpha;\theta)\) for all values of \(\epsilon\) and \(\eta\). |
| AutoAuxiliaryValues | virtual | |
| Bellman | | Creator function for a new point in \(\Theta\), initializing the automatic (non-static) members. |
| CreateSpaces | static | Calls the DP version. |
| Delete | static | Delete the current DP model and reset. |
| ExogExpectedV | virtual | Completes \(v(\alpha;\cdots,\eta,\theta)\) by adding discounted expected value to utilities for a given \(\eta\). |
| FeasibleActions | virtual | Default \(A(\theta)\): all actions are feasible at all states, except terminal states. |
| GetPandV | | |
| IgnoreExogenous | virtual | Return FALSE: elements of the exogenous vector are looped over when computing U and EV. |
| Initialize | static | Base Initialize function. |
| InSS | virtual | Return TRUE if full iteration over exogenous values and transitions is to be carried out at this point (in subsample). |
| KernelCCP | virtual | |
| MedianActVal | | KeaneWolpin: computes v() and V for out-of-sample states. |
| MyopicActVal | | For a myopic agent, a StaticClock, or the initial period only: v=U, so there is no need to evaluate EV. |
| OnlyFeasible | | Extract and return the rows of a matrix that correspond to feasible actions at the current state. |
| OutcomesGivenEpsilon | virtual | Default / virtual routine. |
| Reachable | virtual | Return TRUE: default indicator for whether the current state is reachable from initial conditions. |
| SetTheta | virtual | Sets up a single point \(\theta\) in the state space. |
| Smooth | virtual | Default choice probabilities: no smoothing. |
| StateToStatePrediction | | Compute \(P(\theta^{\,\prime};\theta)\). |
| thetaEMax | virtual | Default Emax operator at \(\theta\); not called by user code directly. Derived classes provide a replacement. |
| ThetaTransition | | Computes the endogenous transition given \(\eta\), looping over all state variables to compute \(P(\theta^{\,\prime};\alpha,\eta)\); not called by user code directly. |
| ThetaUtility | virtual | Default; to be replaced by the user if necessary. |
| UpdatePtrans | | Compute the endogenous state-to-state transition \(P^\star(\theta^{\,\prime}|\theta)\) for the current state \(\theta\). |
| UseEps | | Return TRUE if Utility depends on the exogenous vector at this state. |
| Utility | virtual | Default: \(U() \equiv 0\). |