Your DP model (called MyModel
here) will be derived from the Bellman class or from one of its built-in derived classes. Think of these classes as templates for each point in your state space Θ. You pick one of the templates to start with, then customize it to match your model.
Each Bellman class embodies a specification of the DP model, especially the IID continuous state vector, denoted ζ, which smooths choice probabilities. This customization is fundamental because the form of ζ (or the lack thereof) determines the calculations required to iterate on Bellman's equation at each point θ. Each derived class of Bellman substitutes customized routines ("methods") to carry out these tasks.
Thus, the choice of parent class for MyModel
depends on the action value equation:
v(α;ζ,θ)≡U(α;θ)+ζα+δEV(θ′).
The state value function V(θ) must integrate over ζ. This is carried out internally by the virtual thetaEMax() or its replacement; it does not have to be coded by the user. The default method, thetaEMax(), assumes there is no ζ, so the default does no integration.
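To see what the default (no ζ) versus a smoothed specification implies for the integration step, here is a small Python sketch (niqlow itself is Ox; the function names and numbers here are hypothetical). With additive extreme-value ζ, the integral over ζ has a closed log-sum-exp form, up to an additive Euler-constant term omitted here:

```python
import math

def emax_no_zeta(v):
    # Default (no ζ): no integration, the state value is just the max over actions.
    return max(v)

def emax_extreme_value(v, rho=1.0):
    # With additive extreme-value ζ (smoothing parameter ρ), the integral over ζ
    # has the closed form (1/ρ) log Σ_a e^{ρ v_a} (Euler-constant term omitted).
    return (1.0 / rho) * math.log(sum(math.exp(rho * va) for va in v))

v = [1.0, 0.5, -0.2]
print(emax_no_zeta(v))        # 1.0
print(emax_extreme_value(v))  # slightly above the max: smoothing adds option value
```

As ρ grows the smoothed value converges to the unsmoothed max, which is one way to see that ζ only perturbs, not replaces, the Bellman maximization.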
Solution methods are coded separately from Bellman. They are derived from the Method class and described in Methods. Some methods operate only if the user's model is derived from a compatible class of Bellman or has other required characteristics. For example, one case of Bellman specialization is whether MyModel involves solving for reservation values. This is a different kind of continuous shock than ζ and requires different calculations for Bellman's equation. In this case, the parent class for MyModel must derive from the OneDimensionalChoice class, because reservation-value models allow only a single action variable.
```ox
#import "DDP"
class MyModel : Bellman {
    // declare static data members
    Utility();
    // Optional methods to replace built-in versions
    FeasibleActions();
    Reachable();
    ThetaUtility();
    OutcomesGivenEpsilon();
    }
```
MyModel and MyCode
MyModel must supply a replacement for Utility(). Since utility depends on the current state, the method must be automatic (not static). Here is an example with one state variable and one action variable, showing how they might determine utility.
```ox
#import "DDP"
struct MyModel : Bellman {
    static decl d, s;   // one decision and one state variable
    Utility();
    }
MyModel::Utility() {
    return CV(s)*CV(d);
    }
```
As explained elsewhere, if s contains a state variable, its "value" is not simply the variable itself; likewise for the action variable d. Their current values are retrieved by sending them to CV(). Also note that U() at a state is always treated as a vector-valued function in DDP, so CV(d) is a column vector, while the state variable s is a scalar at θ.
A state is unreachable if it cannot occur given initial conditions. For example, a person cannot have 20 years of labour market experience at age 18. Including unreachable states in the state space wastes computation and storage but does not cause any errors.
MyModel can optionally provide a replacement for the virtual Reachable() method. The built-in version of Reachable returns TRUE, meaning all states are marked as reachable. The user can provide a replacement which returns an indicator for whether the current state is reachable or not.
```ox
MyModel::Reachable() {
    return !(CV(x)+CV(y) > 5);
    }
```
StateVariables defined in DDP have their own Reachable methods, which are called when creating the state space, before MyModel::Reachable() is called. This means that in many cases the user does not need to code Reachable. For example, in the case of too much experience at a given age, the ActionCounter state variable will automatically prune states from a finite-horizon model based on that condition.
MyModel can optionally provide a replacement for the virtual FeasibleActions() method to make the feasible choice set vary with the endogenous state θ. That is, the action space A is really A(θ). Again, the default is that all values constructed from the action variables added to the model are feasible.
Here is an example in which only values of d less than or equal to the value of state variable s are feasible.
```ox
MyModel::FeasibleActions() {
    return CV(d) .<= CV(s);
    }
```
The dot operator .<= is the element-by-element less-than-or-equal operator in Ox, so this returns a vector of 0s and 1s of length equal to the number of values d takes on. When setting up spaces, DDP calls FeasibleActions at each point in the state space and then creates a list of the distinct feasible sets. Each point θ contains an index into this list to ensure only feasible action values are returned by CV(d) when the model is being solved/used.
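The bookkeeping described above can be sketched in a few lines of Python (illustration only — niqlow is Ox, and `clist`/`aind` are hypothetical stand-ins for the internal list of feasible sets and the per-state index):

```python
# Sketch: evaluate the 0/1 feasibility vector at each state, keep a list of
# *distinct* feasible sets, and store only an index at each state.

def feasible_actions(d_vals, s):          # mimics CV(d) .<= CV(s)
    return tuple(int(d <= s) for d in d_vals)

d_vals = [0, 1, 2, 3]
states = [0, 1, 2, 3, 4, 5]

clist, aind = [], {}                      # distinct feasible sets; state -> index
for s in states:
    mask = feasible_actions(d_vals, s)
    if mask not in clist:
        clist.append(mask)
    aind[s] = clist.index(mask)

print(clist)   # only the distinct masks are stored
print(aind)    # each state carries just an index into clist
```

States 3, 4, and 5 all share the same feasible set, so storage grows with the number of distinct sets, not with the number of states.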
FeasibleActions must be determined at the creation of the spaces and cannot depend on changing elements of the model. For example, suppose p is the price of units of an action d that the agent takes as given, and suppose s is the agent's income at the current state. Then one might be tempted to impose the budget constraint like this:
```ox
MyModel::FeasibleActions() {
    return CV(p) * CV(d) .<= CV(s);
    }
```
However, if p is changing due to an equilibrium calculation this is incorrect, because FeasibleActions is only called once, inside DP::CreateSpaces(), so it cannot be used for a dynamic condition like this. Instead, Utility must impose the condition. Ox understands −∞, so you can assign an infeasible choice that value to ensure that it will not be optimal (and will be given 0 probability):
```ox
MyModel::Utility() {
    decl dv = CV(d);
    return CV(p)*dv .<= CV(s) .? dv .: -.Inf;
    }
```
The .? … .: … operation is an inline if-statement that checks the element-by-element condition and assigns one of the two listed values depending on whether each element is TRUE or FALSE. In this case, if the value of p changes and a value of d is no longer affordable, that choice dynamically gets utility equal to −∞.
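Why −∞ guarantees zero probability: under logit-style smoothing, P(a) ∝ e^{ρv(a)}, and e^{−∞} = 0. A minimal Python illustration (not niqlow code; the names are made up):

```python
import math

def choice_probs(v, rho=1.0):
    # Logit smoothing: P(a) ∝ e^{ρ v_a}. A utility of -inf gets weight
    # e^{-inf} = 0, so the infeasible choice receives exactly zero probability.
    w = [0.0 if va == -math.inf else math.exp(rho * va) for va in v]
    tot = sum(w)
    return [wi / tot for wi in w]

p = choice_probs([0.3, -math.inf, 1.1])
print(p)  # middle entry is exactly 0.0
```

So the infeasible action drops out of the choice probabilities without any special-case handling in the solution method.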
Suppose the utility for your model has the form
U()=f(α;g(θ),η,ϵ).
That is, there is a function g() of the endogenous state variables that is common across all values of the IID (exogenous and semi-exogenous) state variables. If MyModel only provides the required Utility() function, then g(θ) is recomputed for each value of the IID shocks.
This inefficiency can be eliminated by providing ThetaUtility(), which is called for each θ immediately before looping over the IID exogenous state values and calling Utility(). A simple example:
U=a(xb+e−d)+d.
Here θ=(x), α=(a) is a binary choice, and ϵ=(e) is an IID shock to the value of a=1; b and d are parameters. So g()=xb, which is not expensive to recompute unnecessarily. In some models, however, this θ-constant component of utility is very involved while the IID contributions are simple.
```ox
struct MyModel : Bellman {
    ⋮
    static decl a, x, xb, e;
    ⋮
    ThetaUtility();
    ⋮
    }
MyModel::ThetaUtility() {
    xb = CV(x)*b;
    }
MyModel::Utility() {
    return CV(a)*(xb+AV(e)-d) + d;
    }
```
ThetaUtility stores the computed value in a static member of the model, xb. If xb were not declared static, an additional location in memory would be created for each point θ. It can be static even though the value of the state variable x depends on θ: as DDP moves through the state space, xb is updated with the current value before Utility() is called for the current values of θ and ϵ. In complicated models there may be many calculations that depend on endogenous states and estimated parameters. Using ThetaUtility() not only eliminates redundant computation, it does so without additional storage that grows with the state space.
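The saving can be sketched in Python (illustrative only; the utility U = a(xb+e−d)+d is the example from the text, with made-up parameter values):

```python
# Sketch of the ThetaUtility idea: compute the θ-constant part g(θ) = x*b once
# per θ, then reuse it for every IID draw e instead of recomputing it.

b, d = 0.7, 0.25
eps_vals = [-1.0, 0.0, 1.0]          # the IID ϵ grid

def utility(a, xb, e):               # U = a(xb + e - d) + d
    return a * (xb + e - d) + d

naive, cached = [], []
for x in range(3):                   # span the endogenous states θ = (x)
    xb = x * b                       # "ThetaUtility": computed once per θ
    for e in eps_vals:               # inner IID loop reuses xb
        cached.append(utility(1, xb, e))
        naive.append(utility(1, x * b, e))   # recomputes x*b every time

print(cached == naive)  # True: same values, fewer multiplications
```

With one scalar this is trivial, but when g(θ) involves matrix products or parameter transformations the once-per-θ evaluation is the difference that matters.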
MyModel can use Add() to have a static method/function called at different points in solution methods. MyModel can also use SetUpdateTime() to set when solution methods should update transition probabilities and the utility of actions. This allows transitions and utility to depend on fixed- and random-effect variables; if they do not, wasted computation can be avoided by updating earlier in the process.
MyModel
can add AuxiliaryValues for simulating outcomes and accounting for partial observability of the state. MyCode
must sandwich the commands that add actions and states to the model between calls to DPparent::Initialize(…)
and DPparent::CreateSpaces(…)
. MyModel
can supply its own version of these two methods, but then they must call the parent versions. If MyModel
does not have its own versions, then the prefix DPparent::
is not needed because a reference to Initialize()
will refer to the parent's version.
The DPDebug class is the base for output routines and other tasks that are related to debugging and reporting.
Most classes in niqlow have a Volume
member that determines how much output is produced during execution. In particular, Volume controls how much output about the dynamic program is produced during and after a solution method. You can get more output by turning up the Volume. See NoiseLevels. For example, DP::Volume = NOISY; will produce the most output and DP::Volume = SILENT; the least. The default setting for all Volume variables is QUIET, one level above SILENT.
When you call Initialize() it opens a timestamped log file. Output that is expected to be very large, like dumps of the value function or state transitions, is sent there instead of to the screen. Other parts of niqlow write to their own timestamped log files.
MyModel
is derived from Bellman or from a class
derived from Bellman.
Public fields
- index into CList, determines A(θ).
- EV(θ)
- TransStore x η-Array of feasible endogenous state indices and transitions P(θ′;α,η,θ).
- v(α;ϵ,η,θ) and P∗().
- Integer code to classify state (InSubSample, LastT, Terminal).
Public methods
- virtual — Compute v(α;θ) for all values of ϵ and η.
- Creator function for a new point in Θ, initializing the automatic (non-static) members.
- static — Calls the DP version.
- static — Delete the current DP model and reset.
- virtual — Completes v(α;⋯,η,θ) by adding discounted expected value to utilities for a given η.
- virtual — Default A(θ): all actions are feasible at all states, except for terminal states.
- virtual — Return FALSE: elements of the exogenous vector are looped over when computing U and EV.
- static — Base Initialize function.
- virtual — Return TRUE if full iteration over exogenous values and transitions is to be carried out at this point (in subsample).
- KeaneWolpin: Computes v() and V for out-of-sample states.
- For a myopic agent, a StaticClock, or just Initial: v=U and there is no need to evaluate EV.
- Extract and return rows of a matrix that correspond to feasible actions at the current state.
- virtual — Default / virtual routine.
- virtual — Return TRUE: default indicator for whether the current state is reachable from initial conditions or not.
- virtual — Sets up a single point θ in the state space.
- virtual — Default choice probabilities: no smoothing.
- Compute P(θ′;θ).
- virtual — Default Emax operator at θ. Not called by user code directly; derived classes provide a replacement.
- Computes the endogenous transition given η. Not called by user code directly; loops over all state variables to compute P(θ′;α,η).
- virtual — Default, to be replaced by user if necessary.
- Compute endogenous state-to-state transition P⋆(θ′|θ) for the current state θ.
- Return TRUE if Utility depends on the exogenous vector at this state.
- virtual — Default U() ≡ 0.
Public methods
- Task to construct the θ transition, the endogenous state vector.
- The inner work at θ when computing transition probabilities.
- Update dynamically changing components of the program at the time chosen by the user.
Public fields
- static — The smoothing method to use.
- static — Smoothing parameter ρ (logit or normal method).

Public methods
- static — Set up the ex-post smoothing state space.
- static — Initialize an ex-post smoothing model.
- Extreme value ex-post choice probability smoothing.
- static — Set the smoothing method (kernel) and parameter.
ζ: vector of IID errors for each feasible α
F(zα)=e−e−zα/ρ
v(α;ϵ,η,θ) = exp{ρ(U + δ∑θ′ P(θ′;α,η,θ)EV(θ′))}
V(ϵ,η,θ) = log(∑α v(α;ϵ,η,θ))
EV(θ) = ∑ϵ,η V(ϵ,η)P(ϵ)P(η)
Ρ*(α;ε,η,γ) = v(α;ϵ,η,θ) / ∑α∈A(θ) v(α;ϵ,η,θ)
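This recursion can be checked numerically. Below is a hedged Python sketch (niqlow itself is Ox; the values of U, EU, ρ, and δ are made-up numbers for illustration):

```python
import math

# Numerical check of the extreme-value recursion, with ρ the smoothing parameter:
#   v(α) stored in "exp form" exp{ρ(U + δ Σ P·EV)},  V = log Σ v,  P*(α) = v(α)/Σ v.
rho, delta = 1.0, 0.95
U  = [0.2, 1.0]                  # utility of each action (hypothetical)
EU = [0.5, 0.8]                  # expected continuation values Σ P(θ′)·EV(θ′)

v  = [math.exp(rho * (u + delta * eu)) for u, eu in zip(U, EU)]
V  = math.log(sum(v))            # V(ϵ,η,θ) = log Σ_α v
P  = [vi / sum(v) for vi in v]   # P*(α): normalized, equals e^{ρ(v-V)}

print(V, P)
```

Note that V always exceeds the largest smoothed action value, and the choice probabilities sum to one by construction.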
Public fields
- static — Hotz-Miller estimation task.
- static — Current value of ρ.
- static — Choice probability smoothing parameter ρ.

Public methods
- static — Calls the Bellman version; no special code.
- static — Initialize DP with extreme-value smoothing.
- static — Set the smoothing parameter ρ.
- virtual — Extreme value ex-ante choice probability smoothing.
- virtual — Iterate on Bellman's equation at θ using Rust-style additive extreme-value errors.
A discrete approximation to ζ enters the state vector if the decision is to accept (d>0).
Public fields
- static — Discrete state variable of the kept ζ.

Public methods
- static — Create spaces for a KeepZ model.
- static — Initialize a KeepZ model.
- static — Set the dynamically kept continuous state variable.
This is the base class for a static discrete model with extreme value shocks added to the value of actions.
Public fields
- static — The decision variable.

Public methods
- static — Create the state space for McFadden models.
- static — Initialize a McFadden model (one-shot, one-dimensional choice, extreme-value additive error with ρ=1.0).
Public methods
- static — Create spaces and set up quadrature for integration over the IID normal errors.
- Complete v(α;⋯,η,θ) by integrating over IID normal additive value shocks.
- static — Initialize a normal Gauss-Hermite integration over independent choice-specific errors.
- static — Update the vector of standard deviations for the normal components.
Public fields
- static — Current variance matrix.
- static — Array of GHK objects.
- static — Number of replications for GHK.

Public methods
- static — Create spaces and set up GHK integration over non-IID errors.
- Use GHK to integrate over correlated value shocks.
- static — Initialize the GHK correlated-normal solution.
- static — Initialize the integration parameters.
- static — Update the Cholesky matrix for the correlated value shocks.
ζ: vector of normal shocks
Note: a user should base MyModel on either NIID or NnotIID, which are derived from this base.

Public fields
- static — User-supplied Cholesky matrix.
- static — Current Cholesky matrix for shocks (over all feasible actions).

Public methods
- virtual — Compute v(α;θ) for all values of ϵ and η.
- static — Calls the Bellman version and initializes Chol.
- static — Initialize the normal-smoothed model.
This is the base class required for models solved by finding cut-offs (reservation values) in a continuous error using the ReservationValues method. The user's model provides the required information about the distribution of ζ and must be derived from OneDimensionalChoice.
Uz(z): the utility matrix at a given vector of cut-offs z. Uz(z) should return a d.N × d.N-1 matrix equal to the utility of each value d=i at ζ=zj. In the case of a binary choice there is just one cut-off, and Uz(z) returns a column vector of the utilities of the two choices at z. Internally, the difference between adjacent values of d is computed from this matrix.
EUtility(): an array of size d.N that returns the expected utility of d=j for values of z in the interval (z*j-1, z*j) and the corresponding probabilities Ρ[z ∈ (z*j-1, z*j)]. EUtility() gets z* from the data member zstar.

Public fields
- static — The single action variable.
- static — Scratch space for E[U] in the z* intervals.
- static — Space for the current Prob(z) in the z* intervals.
- TRUE: solve for z* at this state.
- N::Aind-1 × 1 array of reservation-value vectors.
Public methods
- virtual — The default indicator of whether a continuous choice is made at θ.
- static — Create spaces and check that α has only one element.
- Returns z* at the current state θ.
- Dynamically update whether reservation values should be computed at this state.
- static — Create a one-dimensional choice model.
- virtual — Sets z* to z.
- virtual — Smoothing in one-dimensional models.
- virtual — Default one-dimensional utility; returns 0.
A model where there is:
The user simply supplies a (required) static utility function, which is called from the built-in version here.
Public fields
- static — Contains U() sent by the user's code.

Public methods
- static — Short-cut for a model with a single state, so the user need not (but can) create a Bellman-derived class.
- virtual — Built-in Utility that calls the user-supplied function.
This is a base class for a multi-sector static discrete model with normally correlated shocks.
Public fields
- static — The sector-decision variable.
- static — Vector-valued prices of sectors.

Public methods
- static — Call NnotIID.
- static — Initialize a Roy model: static, one-dimensional choice with correlated normal errors.
- virtual — Built-in utility for Roy models.
Public fields
- static — The binary decision variable.

Public methods
- static — Currently this just calls the ExtremeValue version; no special code.
- static — Initialize a Rust model (ergodic, binary choice, extreme-value additive error with ρ=1.0).
state | state vector |
picked | TRUE: in the subsample of states for the full solution; FALSE: will be approximated. This is called in CreateSpaces() for each clone of |
User code must call CreateSpaces for the parent class that MyModel
is derived from.
It will ultimately call this routine.
Since static variables are used, only one DP model can be stored at a time. The primary use of this routine is to enable testing programs to run different problems. User code would call this only if it will set up a different DP model.
The same model with different solution methods and different parameters can be solved using the same structure.
Delete allows the user to start from scratch with a different model (horizons, actions, and states).
The key output from the model must be saved or used prior to deleting it.
The columns that are updated are indexed as elo : ehi. The element of the transition used is η = all[onlysemiexog].
```ox
decl et = I::all[onlysemiexog];
pandv[][I::elo : I::ehi] += I::CVdelta
        * sumr( Nxt[Qrho][et] .* N::VV[I::later][Nxt[Qit][et]] );
```
This is a virtual method. MyModel provides its own replacement if some actions are infeasible at some states. Suppose MyModel has a binary action d for which d=1 is feasible only if t < 10; otherwise, only actions with d=0 are feasible. The following will impose that restriction on feasible actions:
```ox
MyModel::FeasibleActions() {
    return CV(d) .== 0 .|| I::t < 10;
    }
```
The user can replace this virtual method to skip iterating over the exogenous state vector at different points in the endogenous state space θ.
userState | a Bellman-derived object that represents one point θ in the user's endogenous state space Θ.
User code must call the |
Type ≥ INSUBSAMPLE && Type != LASTT.
myU | A × m matrix |
```ox
static const decl Uv = <0.1; 0.5; 0.7; -2.5>;
    ⋮
MyModel::Utility() {
    return OnlyFeasible(Uv);
    }
```
The built-in version returns TRUE. The user can provide a replacement for this virtual method to trim the state space at the point of creating the state space. Many state variables will trim the state space automatically with a non-stationary clock assuming initial values are 0.
inV | expected value integrating over endogenous and semi-endogenous states. |
Smooth() is called for each point in the state space during value-function iteration, but only on the last iteration (when deterministic aging or the fixed-point tolerance has been reached). It uses EV, which should be set to the current value of the current state by thetaEMax().
Emax operator at θ. Not called by user code directly; derived classes provide a replacement.
If setPstar is TRUE, then Ρ*(α) is computed using the virtual Smooth().
Loops over all state variables to compute P(θ′;α,η). For StateBlocks the root of the block is called to compute the joint transition.
Accounts for the (vector of) feasible choices A(θ) and the semi-exogenous states in η, which can affect transitions of endogenous states but are themselves exogenous.
Stores results in the Nxt array of feasible indices of next-period states and a conforming matrix of probabilities.
This updates either Ptrans or NKptrans. It is called in PostEMax, and only if setPstar is TRUE and the clock is Ergodic or NKstep is TRUE.
MyModel provides a replacement. The default simply returns a vector of 0s of length equal to the number of feasible action vectors: U(A(θ)) ≡ 0.
This task loops over η and θ to compute, at each θ, P(θ′;α,η,θ). It is stored at each point in θ as a matrix of transition probabilities. When this task is run is determined by UpdateTime. If a state variable can be placed in ϵ instead of η or θ, it reduces computation and storage significantly.
First the PreUpdate hooks are called (see Hooks), then the Distribution() of random effects is computed and Update() sets their actual values. Then the η and Θ spaces are spanned to compute and store endogenous transitions at each point.
Method | the SmoothingMethods, default = NoSmoothing |
rho | the smoothing parameter. Default value is 1.0. |
userState | a Bellman-derived object that represents one point θ in the user's endogenous state space Θ. |
Sets pandv equal to P⋆(α;θ) = e^{ρ(v(α;θ)−V)} / ∑α∈A(θ) e^{ρ(v(α;θ)−V)}.
Method | the SmoothingMethods, default = NoSmoothing |
smparam | the smoothing parameter ρ, an AV-compatible object. If it evaluates to less than 0 when called, no smoothing occurs. |
rho | AV-compatible, the smoothing parameter ρ. If CV(rho) < 0, ρ is set to DBL_MAX_E_EXP (i.e. no smoothing). |
userState | a Bellman-derived object that represents one point θ in the user's endogenous state space Θ. With ρ = 0, choice probabilities are completely smoothed: each feasible choice becomes equally likely. |
rho | AV compatible object. If it evaluates to less than 0 when called no smoothing occurs. |
userState | a Bellman-derived object that represents one point θ in the user's endogenous state space Θ. |
d | a binary ActionVariable not already added to the model, or an integer giving the number of options (an action variable is created) [default = 2] |
N | integer, number of points for approximation |
held | object that determines if z is retained. |
Calls the ExtremeValue version, no special code.
Nchoices | integer, number of options. |
userState | a Bellman-derived object that represents one point \theta in the user's endogenous state space Θ. |
userState | a Bellman-derived object that represents one point θ in the user's endogenous state space Θ. |
GQLevel | integer[default=7], depth of Gauss-Hermite integration |
StDev | integer [default]: set all action standard deviations to 1/√2; or a CV-compatible A×1 vector of standard deviations of action-specific errors. |
AV(Chol) is called for each γ, so σ can include random effects.
userState | a Bellman-derived object that represents one point θ in the user's endogenous state space Θ. |
R | integer, number of replications [default=1] |
iseed | integer, seed for random numbers [default=0] |
AChol | CV compatible vector of lower triangle of Cholesky matrix for full Action vector [default equals lower triangle of I] |
userState | a Bellman-derived object that represents one point θ in the user's endogenous state space Θ. |
The user-supplied replacement is called for each θ during ReservationValue solving. That means whether a choice is made at θ can depend on fixed values. The answer is stored in solvez.
Method | SmoothingMethods, default = NoSmoothing |
smparam | the smoothing parameter (e.g. ρ or σ). Default value is 1.0. |
userState | a Bellman-derived object that represents one point θ in the user's endogenous state space Θ. |
d | an ActionVariable not already added to the model, or an integer giving the number of options (an action variable is created) [default = 2] |
z. |
BorU | either a Bellman-derived object or a static function that simply returns utility. |
Method | ex-post choice probability SmoothingMethods [default=NoSmoothing] |
… | ActionVariables to add to the model. Because StaticProgram calls both Initialize() and CreateSpaces(), it is impossible to add any state variables to the model. |
NorVLabels | integer [default=2], number of options/sectors, or an array of Labels. |
P_or_UserState | 0 [default]: initialize sector prices to 0; a CV()-compatible vector of prices (the first two options use the built-in Utility that simply returns CV(p)); or a Roy-derived object (allowing for custom Utility). |
userState | a Bellman-derived object that represents one point θ in the user's endogenous state space Θ. |