
 Bellman.ox

Bellman is the class derived from DP that contains dynamic (across states) members and methods; this explains how the user constructs Utility, Reachable States and other aspects of a state \(\theta\).


  1. Overview

    Your DP model (called MyModel here) will be derived from the Bellman class or from one of the built-in derived classes. Think of these classes as a template for each point in your state space \(\Theta\). You pick one of the templates to start with and then customize it to match your model.

    Each Bellman class embodies a specification of the DP model, especially the iid continuous state vector, denoted \(\zeta\), which smooths choice probabilities. This customization is fundamental because the form of \(\zeta\) (or the lack thereof) determines the calculations required to iterate on Bellman's equation at each point \(\theta.\) Each derived class of Bellman substitutes customized routines ("methods") to carry out these tasks.

    Thus, the choice of parent class for MyModel depends on the action value equation: $$v(\alpha;\zeta,\theta)\quad \equiv\quad U(\alpha;\theta) + \zeta_\alpha + \delta EV(\theta^{\,\prime}).$$ The state value function \(V(\theta)\) must integrate over \(\zeta.\) This is carried out internally by the virtual thetaEMax() or its replacement. It does not have to be coded by the user. The default method, thetaEMax(), assumes there is no \(\zeta.\) Thus, the default does no integration.

    Solution methods are coded separately from Bellman. They are derived from the Method class and described in Methods. Some methods may only operate if the user's model is derived from a compatible class of Bellman or has other required characteristics. For example, a case of Bellman specialization is whether MyModel involves solving for reservation values. This is a different kind of continuous shock than \(\zeta\) and requires different calculations for Bellman's equation. In this case, the parent class for MyModel must derive from the OneDimensionalChoice class because reservation value models can only allow a single action variable.

  2. The Minimal Template
    #import "DDP"

    class MyModel : Bellman {
        // declare static data members
        Utility();
        // Optional methods to replace built-in versions
        FeasibleActions();
        Reachable();
        ThetaUtility();
        OutcomesGivenEpsilon();
        }
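
    A minimal driver sketch to accompany the template. The main() body below is illustrative, not part of Bellman.ox: an actual model also adds a clock, action variables, and state variables between the two calls.

    #import "DDP"
    main() {
        Bellman::Initialize(new MyModel());   // Initialize() of the parent class, passing one instance of MyModel
        // add the clock, action variables and state variables here (SetClock, Actions, EndogenousStates)
        Bellman::CreateSpaces();              // CreateSpaces() of the parent class, called after everything is added
        }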

  3. User-Contributed Elements of MyModel and MyCode
    1. Utility()
      MyModel must supply a replacement for Utility(). Since utility depends on the current state, the method must be automatic (not static). Here is an example with one state variable and one action variable, showing how they might determine utility.
      #import "DDP"

      struct MyModel : Bellman {
          static decl d, s;       // One decision and one state variable
          Utility();
          }
      MyModel::Utility() { return CV(s)*CV(d); }

      So this is a model where \(\alpha = (d)\) and \(\theta = (s)\) and \(U(\alpha;\theta)=sd.\)

      As explained elsewhere, if s contains a state variable its "value" is not simply the variable itself; the same holds for the action variable d. Their current values are retrieved by sending them to CV(). Also note that \(U()\) at a state is always treated as a vector-valued function in DDP, so CV(d) is a column vector. Because s is a state variable it is a scalar at \(\theta\).
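
      To make the vector form concrete, suppose d is a binary action and the current value of s is 3 (values chosen only for illustration). Then inside Utility():

      // Illustration only: with a binary d and CV(s) = 3,
      //   CV(d)       = <0;1>    (a 2x1 column vector, one row per feasible action vector)
      //   CV(s)*CV(d) = <0;3>    (the utility of each feasible action vector)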

    2. Reachable States

      A state is unreachable if it cannot occur given initial conditions. For example, a person cannot have 20 years of labour market experience at age 18. Including unreachable states in the state space wastes computation and storage but does not cause any errors.

      MyModel can optionally provide a replacement for the virtual Reachable() method. The built-in version of Reachable returns TRUE, meaning all states are marked as reachable. The user can provide a replacement that returns an indicator for whether the current state is reachable or not.

      Example.
      Mark as unreachable all states at which \(x\) and \(y\) add up to a value greater than 5:
      MyModel::Reachable() {
          return ! (CV(x)+CV(y)> 5);
          }
      

      State variables defined in DDP have their own Reachable methods, which are called when creating the state space and before MyModel::Reachable() is called. This means that in many cases the user does not need to code Reachable. For example, in the case of too much experience at a given age, the ActionCounter state variable will automatically prune states from a finite horizon model based on that condition.
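
      For instance, if MyModel tracked labour market experience in a state variable exper and used a finite-horizon clock, a hand-coded version of the age/experience restriction might look like this sketch (exper is a hypothetical member; I::t is the current value of the clock):
      MyModel::Reachable() {
          return CV(exper) <= I::t;     // experience cannot exceed elapsed model time
          }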

    3. Restricted Feasible Action Spaces / Matrices

      MyModel can optionally provide a replacement for the virtual FeasibleActions() method to make the feasible choice set vary with the endogenous state \(\theta\). That is, the action space \(A\) is really \(A(\theta)\). Again, the default is that all values constructed from the action variables added to the model are feasible.

      Example.
      Only action vectors with d less than or equal to the value of state variable s are feasible.
      MyModel::FeasibleActions() {
          return CV(d) .<= CV(s);
          }
      
      The dot operator .<= is the element-by-element less-than-or-equal operator in Ox. So this returns a vector of 0s and 1s whose length equals the number of values d takes on. When setting up spaces, DDP calls FeasibleActions at each point in the state space. It then creates a list of the different feasible sets. Each point \(\theta\) contains an index into this list to ensure that only feasible action values are returned by CV(d) when the model is being solved/used.

      Important: feasibility must be static. That is, the conditions returned by FeasibleActions must be determined at the creation of the spaces and cannot depend on changing elements of the model. For example, suppose p is the price of units of an action d that the agent takes as given, and suppose s is the agent's income at the current state. Then one might be tempted to impose the budget constraint like this:
      MyModel::FeasibleActions() {
          return CV(p) * CV(d) .<= CV(s);
          }
      
      However, if p is changing due to an equilibrium calculation this is incorrect, because FeasibleActions is only called once, inside DP::CreateSpaces(), so it cannot be used for a dynamic condition like this. Instead, Utility must impose the condition. Ox understands \(-\infty\), so you can assign an infeasible choice that value to ensure that it will not be optimal (and will be given 0 probability):
      MyModel::Utility() {
          decl dv = CV(d);
          return  CV(p)*dv .<= CV(s)  .?  dv .: -.Inf;
          }
      
      The .? … .: … operation is an inline if-statement that checks the element-by-element condition and assigns one of the two listed values depending on whether each element is TRUE or FALSE. In this case, if the value of p changes so that a value of d is no longer affordable, that choice dynamically gets utility equal to \(-\infty\).

    4. ThetaUtility

      Suppose the utility for your model has the form $$U() = f\left( \alpha; g\left(\theta\right),\eta,\epsilon\right).$$ That is, there is a function \(g()\) of the endogenous state variables that is common for all values of the IID (exogenous and semi-exogenous) state variables. If MyModel only provides the required Utility() function then \(g(\theta)\) is recomputed for each value of the IID shocks.

      This inefficiency can be eliminated by providing ThetaUtility(), which is called at each \(\theta\) immediately before looping over the IID exogenous state values and calling Utility(). A simple example: $$U = a(xb + e - d) + d.$$ Here \(\theta=(x),\) \(\alpha=(a)\) is a binary choice, and \(\epsilon=(e)\) is an IID shock to the value of \(a=1\). \(b\) and \(d\) are parameters. So \(g() = xb\), which is not expensive to recompute unnecessarily. However, in some models this \(\theta\)-constant component of utility is very involved whereas the IID contributions are simple.

      struct MyModel : Bellman {
          ⋮
          static decl a, x, xb, e;
          ⋮
          ThetaUtility();
          ⋮
          }

      MyModel::ThetaUtility() { xb = CV(x)*b; }
      MyModel::Utility()      { return CV(a)*(xb+AV(e)-d) + d; }

      ThetaUtility stores the computed value in a static member of the model, xb. If xb were not declared static, an additional location in memory would be created for each point \(\theta.\) It can be static even though the value of the state variable \(x\) depends on \(\theta\): as DDP moves through the state space, the value of xb is updated with the current value before Utility() is called for the current \(\theta\) and \(\epsilon.\) In complicated models there may be many calculations that depend on endogenous states and estimated parameters. Using ThetaUtility() not only eliminates redundant computation, it does so without additional storage that grows with the state space.

    5. Hooks and Update Time

      MyModel can use Add() to have a static method/function called at different points in solution methods. MyModel can also use SetUpdateTime() to set when solution methods should update transition probabilities and the utility of actions. This allows transitions and utility to depend on fixed and random effect variables; if they do not, wasted computation can be avoided by updating earlier in the process. A sketch of both calls follows.
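
      A minimal sketch of both calls, placed between Initialize() and CreateSpaces(). The hook routine ReadPrices, the call signature shown for Hooks::Add(), and the AfterRandom code passed to SetUpdateTime() are illustrative assumptions; see Hooks, UpdateTimes and SetUpdateTime() for the actual codes and signatures.

      main() {
          Bellman::Initialize(new MyModel());
          // ... add the clock, action variables and state variables ...
          Hooks::Add(PreUpdate, MyModel::ReadPrices);   // assumed signature: (hook time, static routine to call)
          DP::SetUpdateTime(AfterRandom);               // AfterRandom stands in for a code from UpdateTimes
          Bellman::CreateSpaces();
          }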

    6. Auxiliary Variables

      MyModel can add AuxiliaryValues for simulating outcomes and accounting for partial observability of the state. MyCode must sandwich the commands that add actions and states to the model between calls to DPparent::Initialize(…) and DPparent::CreateSpaces(…). MyModel can supply its own version of these two methods, but then they must call the parent versions. If MyModel does not have its own versions, then the prefix DPparent:: is not needed because a reference to Initialize() will refer to the parent's version.

  4. Debug Output and Options

    The DPDebug class is the base for output routines and other tasks that are related to debugging and reporting.

    Most classes in niqlow have a Volume member which will determine how much output is produced during execution. In particular Volume controls how much output about the dynamic program is put out during and after a solution method. You can get more output by turning up the Volume. See NoiseLevels. For example, DP::Volume = NOISY; will produce the most output and DP::Volume = SILENT; the least. The default setting for all Volume variables is QUIET, one level above SILENT.

    When you call Initialize() it opens a timestamped log file. Output that is expected to be very large, like dumps of the value function or state transitions, is sent there instead of to the screen. Other parts of niqlow will write to other timestamped log files.

    Author:
    © 2011-2023 Christopher Ferrall

    Documentation of Items Defined in Bellman.ox

     Bellman : DP

    Base class for any DP problem and each point \(\theta\) in the endogenous state space.

    Models based on this class have no continuous shocks \(\zeta\) and no ex-post smoothing.

    That is, action value is utility plus discounted expected future value:
    $$v\left( A(\theta) ;\cdots \right) = U(A) + \delta EV(\theta^\prime).$$

    CCPs are discussed in OODP 2.1.3.
    Models based on this class have choice probabilities of the form CCP1 (equation 6).

    Static members of the class are inherited from the base DP class.

    MyModel is derived from Bellman or from a class derived from Bellman.
    Public fields
     Aind index into CList, determines \(A(\theta)\).
     EV EV(θ)
     Nxt TransStore x η-Array of feasible endogenous state indices and transitions \(P(\theta^\prime;\alpha,\eta,\theta)\).
     pandv \(v(\alpha;\epsilon,\eta,\theta)\) and \(P*()\).
     Type Integer code to classify state (InSubSample,LastT,Terminal).
    Public methods
     ActVal virtual Compute \(v(\alpha;\theta)\) for all values of \(\epsilon\) and \(\eta\).
     AutoAuxiliaryValues virtual
     Bellman Creator function for a new point in \(\Theta\), initializing the automatic (non-static) members.
     CreateSpaces static Calls the DP version.
     Delete static Delete the current DP model and reset.
     ExogExpectedV virtual Completes \(v(\alpha;\cdots,\eta,\theta)\) by adding discounted expected value to utilities for a given \(\eta\).
     FeasibleActions virtual Default \(A(\theta)\): all actions are feasible at all states, except for terminal states.
     GetPandV
     IgnoreExogenous virtual Return FALSE: Elements of the exogenous vector are looped over when computing U and EV.
     Initialize static Base Initialize function.
     InSS virtual Return TRUE if full iteration over exogenous values and transitions to be carried out at this point (in subsample).
     KernelCCP virtual
     MedianActVal KeaneWolpin: Computes v() and V for out-of-sample states.
     MyopicActVal For Myopic agent or StaticClock or just Initial, v=U and no need to evaluate EV.
     OnlyFeasible Extract and return rows of a matrix that correspond to feasible actions at the current state.
     OutcomesGivenEpsilon virtual Default / virtual routine.
     Reachable virtual Return TRUE: Default indicator function for whether the current state is reachable from initial conditions or not.
     SetTheta virtual Sets up a single point θ in the state space.
     Smooth virtual Default Choice Probabilities: no smoothing.
     StateToStatePrediction Compute \(P(\theta^\prime;\theta)\).
     thetaEMax virtual Default Emax operator at \(\theta.\) Not called by User code directly. Derived classes provide a replacement.
     ThetaTransition Computes the endogenous transition given \(\eta.\) Not called by User code directly. Loops over all state variables to compute \(P(\theta^\prime ; \alpha, \eta )\).
     ThetaUtility virtual Default to be replaced by user if necessary.
     UpdatePtrans Compute endogenous state-to-state transition \(P^\star(\theta'|\theta)\) for the current state \(\theta\).
     UseEps Return TRUE if Utility depends on exogenous vector at this state.
     Utility virtual Default U() ≡ 0.

    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     EndogTrans

    Public methods
     EndogTrans Task to construct \(\theta\) transition, the endogenous state vector.
     Run The inner work at \(\theta\) when computing transition probabilities.
     Transitions Update dynamically changing components of the program at the time chosen by the user.

     ExPostSmoothing : Bellman : DP

    Base class for DP problem when choice probabilities are smoothed ex-post.

    Utility() has no continuous shock \(\zeta\). So action values are the same form as in Bellman.

    After \(V(\theta)\) is computed, choice probabilities can take different forms:
    left unsmoothed (same as Bellman)
    smoothed ex post with a kernel in values according to one of the SmoothingMethods sent as argument to CreateSpaces().
    CCPs are discussed in OODP 2.1.3.
    Models based on this class have choice probabilities of the form CCP3 (equation 9).
    Public fields
     Method static The smoothing method to use.
     rho static Smoothing parameter \(\rho\) (logit or normal method)
    Public methods
     CreateSpaces static Set up the ex-post smoothing state space.
     Initialize static Initialize an ex post smoothing model.
     Logistic Extreme Value Ex Post Choice Probability Smoothing.
     SetSmoothing static Set the smoothing method (kernel) and parameter.
    Inherited methods from Bellman:
    ActVal, AutoAuxiliaryValues, Bellman, Delete, ExogExpectedV, FeasibleActions, GetPandV, IgnoreExogenous, InSS, KernelCCP, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, SetTheta, StateToStatePrediction, thetaEMax, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps, Utility
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     ExtremeValue : Bellman : DP

    The base class for models that include an additive extreme value error in action value.

    Specification
    $$v(\alpha,\cdots) = Utility(\alpha,\cdots) + \zeta_\alpha$$

    \(\zeta\): vector of IID errors for each feasible \(\alpha\)

    $$F(z_\alpha) = e^{ -e^{-z_\alpha/\rho} }$$

    Bellman Equation Iteration.

    $$\eqalign{ v(\alpha ; \epsilon,\eta,\theta) &= \exp\{ \rho( U + \delta \sum_{\theta^\prime} P(\theta^\prime;\alpha,\eta,\theta) EV(\theta^\prime) ) \}\cr V(\epsilon,\eta,\theta) &= \log \left(\sum_{\alpha} v(\alpha ; \epsilon,\eta,\theta) \right)\cr EV(\theta) &= \sum_{\epsilon,\eta} V(\epsilon,\eta)P(\epsilon)P(\eta) \cr }$$

    Choice Probabilities
    Once EV() has converged, the smoothed choice probabilities follow from the exponentiated values defined above:
    $$P^{\star}(\alpha;\epsilon,\eta,\gamma) = { v(\alpha;\epsilon,\eta,\theta) \over \sum_{a\in A(\theta)} v(a;\epsilon,\eta,\theta) }.$$

    CCPs are discussed in OODP 2.1.3.
    Models based on this class have choice probabilities of the form CCP2 with Extreme Value shocks (equation 8).
    Public fields
     hib static const
     HMQ static Hotz-Miller estimation task.
     lowb static const
     rh static current value of rho .
     rho static Choice prob smoothing ρ.
    Public methods
     CreateSpaces static calls the Bellman version, no special code.
     Initialize static Initialize DP with extreme-value smoothing.
     KernelCCP virtual
     SetRho static Set the smoothing parameter \(\rho\).
     Smooth virtual Extreme Value Ex Ante Choice Probability Smoothing.
     thetaEMax virtual Iterate on Bellman's equation at θ using Rust-styled additive extreme value errors.
    Inherited methods from Bellman:
    ActVal, AutoAuxiliaryValues, Bellman, Delete, ExogExpectedV, FeasibleActions, GetPandV, IgnoreExogenous, InSS, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, SetTheta, StateToStatePrediction, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps, Utility
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     KeepZ : OneDimensionalChoice : ExPostSmoothing : Bellman : DP

    A OneDimensionalChoice model with discretized approximation to "accepted" past \(\zeta\).

    A discrete approximation to \(\zeta\) enters the state vector if the decision is to accept (d>0).
    Public fields
     keptz static Discrete state variable of kept ζ.
     myios static
    Public methods
     CreateSpaces static Create spaces for a KeepZ model.
     DynamicTransit virtual
     Initialize static Initialize a KeepZ model.
     SetKeep static Set the dynamically kept continuous state variable.

    Inherited methods from OneDimensionalChoice:
    AutoAuxiliaryValues, Continuous, EUtility, Getz, HasChoice, Setz, Smooth, SysSolve, Utility, Uz
    Inherited methods from ExPostSmoothing:
    Logistic, SetSmoothing
    Inherited methods from Bellman:
    Bellman, Delete, ExogExpectedV, FeasibleActions, GetPandV, IgnoreExogenous, InSS, KernelCCP, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, StateToStatePrediction, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from OneDimensionalChoice:
    d, EUstar, pstar, solvez, zstar
    Inherited fields from ExPostSmoothing:
    Method, rho
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     McFadden : ExtremeValue : Bellman : DP

    Myopic choice problem (\(\delta=0.0\)) with standard Extreme Value \(\zeta\).

    This is the base class for a static discrete model with extreme value shocks added to the value of actions.
    Public fields
     d static The decision variable.
    Public methods
     ActVal
     CreateSpaces static Create state space for McFadden models.
     Initialize static Initialize a McFadden model (one-shot, one-dimensional choice, extreme value additive error with ρ=1.0).

    Inherited methods from ExtremeValue:
    KernelCCP, SetRho, Smooth, thetaEMax
    Inherited methods from Bellman:
    AutoAuxiliaryValues, Bellman, Delete, ExogExpectedV, FeasibleActions, GetPandV, IgnoreExogenous, InSS, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, SetTheta, StateToStatePrediction, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps, Utility
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from ExtremeValue:
    hib, HMQ, lowb, rh, rho
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     NIID : Normal : Bellman : DP

    Class for adding IID normal smoothing shocks to action value.

    \(\zeta \sim N(0,\Sigma)\), with \(\Sigma\) diagonal.

    User provides the vectorized Choleski decomposition of \(\Sigma\) (the vector of action-specific standard deviations).

    CCPs are discussed in OODP 2.1.3.
    Models based on this class have choice probabilities of the form CCP2 (equation 7).
    Choice probabilities are computed by Gauss-Hermite quadrature over the IID normal shocks.
    See also:
    SetIntegration
    Public fields
     GQLevel static
     GQNODES static
     MM static
    Public methods
     CreateSpaces static Create spaces and set up quadrature for integration over the IID normal errors.
     ExogExpectedV Complete \(v(\alpha;\cdots,\eta,\theta)\) by integrating over IID normal additive value shocks.
     Initialize static Initialize a normal Gauss-Hermite integration over independent choice-specific errors.
     SetIntegration static Initialize a normal Gauss-Hermite integration over independent choice-specific errors.
     UpdateChol static Update vector of standard deviations for normal components.
    Inherited methods from Bellman:
    AutoAuxiliaryValues, Bellman, Delete, FeasibleActions, GetPandV, IgnoreExogenous, InSS, KernelCCP, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, SetTheta, StateToStatePrediction, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps, Utility
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from Normal:
    AChol, Chol
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     NnotIID : Normal : Bellman : DP

    Class for adding correlated normal smoothing shocks to action value.

    \(\zeta \sim N(0,\Sigma)\)

    User provides the vectorized Choleski decomposition of \(\Sigma\)

    CCPs are discussed in OODP 2.1.3.
    Models based on this class have choice probabilities of the form CCP2 (equation 7).
    Choice probabilities are computed using GHK smooth simulation.
    See also:
    SetIntegration
    Public fields
     BigSigma static Current variance matrix.
     ghk static array of GHK objects
     R static replications for GHK
    Public methods
     CreateSpaces static Create spaces and set up GHK integration over non-iid errors.
     ExogExpectedV Use GHK to integrate over correlated value shocks.
     Initialize static Initialize GHK correlated normal solution.
     SetIntegration static Initialize the integration parameters.
     UpdateChol static Update the Cholesky matrix for the correlated value shocks.
    Inherited methods from Bellman:
    AutoAuxiliaryValues, Bellman, Delete, FeasibleActions, GetPandV, IgnoreExogenous, InSS, KernelCCP, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, SetTheta, StateToStatePrediction, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps, Utility
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from Normal:
    AChol, Chol
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     Normal : Bellman : DP

    The container class for models that include additive normal smoothing shocks.

    Specification
    $$v(\alpha,\cdots) = Utility(\alpha,\cdots) + \zeta_\alpha$$

    \(\zeta\): vector of normal shocks

    Note: a user should base MyModel on either NIID or NnotIID which are derived from this base.

    CCPs are discussed in OODP 2.1.3.
    Models based on this class have choice probabilities of the form CCP2 (equation 7).
    Public fields
     AChol static User-supplied Choleski.
     Chol static Current Choleski matrix for shocks (over all feasible actions)
    Public methods
     ActVal virtual Compute \(v(\alpha;\theta)\) for all values of \(\epsilon\) and \(\eta\).
     CreateSpaces static Calls the Bellman version and initialize Chol.
     Initialize static Initialize the normal-smoothed model.
    Inherited methods from Bellman:
    AutoAuxiliaryValues, Bellman, Delete, ExogExpectedV, FeasibleActions, GetPandV, IgnoreExogenous, InSS, KernelCCP, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, SetTheta, StateToStatePrediction, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps, Utility
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     OneDimensionalChoice : ExPostSmoothing : Bellman : DP

    One-dimensional action models with user defined distribution of \(\zeta\).

    This is the base class required for models solved by finding cutoffs (reservation values) in a continuous error using the ReservationValues method.

    The user's model provides the required information about the distribution of \(\zeta\).

    The reservation value solution works only under certain restrictions, including that the model contain a single action variable (see ReservationValues for the full list of restrictions).

    The restrictions above do not apply if other solution methods are applied to a OneDimensionalChoice.

    The user provides methods that return the information needed to solve for z*: utility as a function of the candidate value z (Uz) and expected utility within each z* interval (EUtility).
    Public fields
     d static single action variable.
     EUstar static scratch space for E[U] in z* intervals.
     pstar static space for current Prob(z) in z* intervals.
     solvez TRUE: solve for z* at this state.
     zstar N::Aind-1 x 1 of reservation value vectors.
    Public methods
     AutoAuxiliaryValues virtual
     Continuous virtual The default indicator whether a continuous choice is made at \(\theta\).
     CreateSpaces static Create spaces and check that α has only one element.
     EUtility virtual
     Getz Returns z* at the current state \(\theta\).
     HasChoice Dynamically update whether reservation values should be computed at this state.
     Initialize static Create a one dimensional choice model.
     Setz virtual Sets z* to z.
     Smooth virtual Smoothing in 1d models.
     SysSolve
     Utility virtual Default 1-d utility, returns 0.
     Uz virtual
    Inherited methods from ExPostSmoothing:
    Logistic, SetSmoothing
    Inherited methods from Bellman:
    Bellman, Delete, ExogExpectedV, FeasibleActions, GetPandV, IgnoreExogenous, InSS, KernelCCP, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, StateToStatePrediction, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from ExPostSmoothing:
    Method, rho
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     OneStateModel : ExPostSmoothing : Bellman : DP

    Base class for a model with a single state.

    A model where there is a single state: the clock is StaticProgram and no state variables are added.

    The user simply supplies a (required) static utility function which is called from the built-in version here.
    Public fields
     U static contains \(U()\) sent by the user's code.
    Public methods
     Initialize static Short-cut for a model with a single state so the user need not (but can) create a Bellman-derived class.
     Utility virtual Built-in Utility that calls user-supplied function.

    Inherited methods from ExPostSmoothing:
    CreateSpaces, Logistic, SetSmoothing
    Inherited methods from Bellman:
    ActVal, AutoAuxiliaryValues, Bellman, Delete, ExogExpectedV, FeasibleActions, GetPandV, IgnoreExogenous, InSS, KernelCCP, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, SetTheta, StateToStatePrediction, thetaEMax, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from ExPostSmoothing:
    Method, rho
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     Roy : NnotIID : Normal : Bellman : DP

    Myopic choice problem (\(\delta=0.0\)) over \(J\) sectors with correlated Normal \(\zeta\).

    This is a base class for a multi-sector static discrete model with normally correlated shocks.
    Public fields
     d static The sector-decision variable.
     Prices static Vector-valued prices of sectors.
    Public methods
     CreateSpaces static Call NnotIID.
     Initialize static Initialize a Roy model: static, one-dimensional choice with correlated normal error.
     Utility virtual Built-in utility for Roy models.

    Inherited methods from NnotIID:
    ExogExpectedV, SetIntegration, UpdateChol
    Inherited methods from Bellman:
    AutoAuxiliaryValues, Bellman, Delete, FeasibleActions, GetPandV, IgnoreExogenous, InSS, KernelCCP, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, SetTheta, StateToStatePrediction, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from NnotIID:
    BigSigma, ghk, R
    Inherited fields from Normal:
    AChol, Chol
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     Rust : ExtremeValue : Bellman : DP

    Special case of Extreme value.
    Public fields
     d static The binary decision variable.
    Public methods
     CreateSpaces static Currently this just calls the ExtremeValue version, no special code.
     Initialize static Initialize a Rust model (Ergodic, binary choice, extreme value additive error with ρ=1.0).
    Inherited methods from ExtremeValue:
    KernelCCP, SetRho, Smooth, thetaEMax
    Inherited methods from Bellman:
    ActVal, AutoAuxiliaryValues, Bellman, Delete, ExogExpectedV, FeasibleActions, GetPandV, IgnoreExogenous, InSS, MedianActVal, MyopicActVal, OnlyFeasible, OutcomesGivenEpsilon, Reachable, SetTheta, StateToStatePrediction, ThetaTransition, ThetaUtility, UpdatePtrans, UseEps, Utility
    Inherited methods from DP:
    Actions, AuxiliaryOutcomes, DrawFsamp, DrawGroup, DrawOneExogenous, EndogenousStates, ExogenousStates, GetAind, GetPinf, GetPstar, GetTrackTrans, GetUseEps, GroupVariables, Indicators, Interactions, KLaggedAction, KLaggedState, MakeGroups, MultiInteractions, onlyDryRun, RecomputeTrans, SemiExogenousStates, SetClock, SetDelta, Settheta, SetUpdateTime, SetVersion, StorePalpha, SubSampleStates, SyncAct, UpdateDistribution, ValuesCounters
    Inherited fields from ExtremeValue:
    hib, HMQ, lowb, rh, rho
    Inherited fields from Bellman:
    Aind, EV, Nxt, pandv, Type
    Inherited fields from DP:
    Blocks, Chi, ClockType, counter, curREdensity, delta, EOoE, EStoS, ETT, gdist, IOE, L, logf, lognm, MyVersion, NKptrans, NKvindex, NxtExog, parents, S, SS, States, SubVectors, tod, tom, userState, Volume, XUT

     Bellman

     ActVal

    virtual Bellman :: ActVal ( )
    Compute \(v(\alpha;\theta)\) for all values of \(\epsilon\) and \(\eta\). Not called by User code directly

     Aind

    decl Aind [public]
    index into CList, determines \(A(\theta)\).

     AutoAuxiliaryValues

    virtual Bellman :: AutoAuxiliaryValues ( )

     Bellman

    Bellman :: Bellman ( state , picked )
    Creator function for a new point in \(\Theta\), initializing the automatic (non-static) members. Not called by User code directly
    Parameters:
    state state vector
    picked TRUE: in sub sample of states for full solution.
    FALSE: will be approximated

    This is called in CreateSpaces() for each clone of MyModel. It:
    1. Determines if the state is terminal
    2. Sets \(A(\theta)\).

    See also:
    AddA

     CreateSpaces

    static Bellman :: CreateSpaces ( )
    Calls the DP version.

    User code must call CreateSpaces for the parent class that MyModel is derived from.

    It will ultimately call this routine.


     Delete

    static Bellman :: Delete ( )
    Delete the current DP model and reset.

    Since static variables are used, only one DP model can be stored at one time. The primary use of this routine is to enable testing programs to run different problems. User code would call this only if it will set up a different DP model.

    The same model with different solution methods and different parameters can be solved using the same structure.

    Delete allows the user to start from scratch with a different model (horizons, actions, and states).

    The key output from the model must be saved or used prior to deleting it.


     EV

    decl EV [public]
    EV(θ)

     ExogExpectedV

    virtual Bellman :: ExogExpectedV ( )
    Completes \(v(\alpha;\cdots,\eta,\theta)\) by adding discounted expected value to utilities for a given \(\eta\). Not called by User code directly

    The columns that are updated are indexed as elo : ehi. The element of the transition used is \(\eta = \) all[onlysemiexog].

    decl et =I::all[onlysemiexog];
    pandv[][I::elo : I::ehi] += I::CVdelta*sumr(Nxt[Qrho][et].*N::VV[I::later][Nxt[Qit][et]]);
    

     FeasibleActions

    virtual Bellman :: FeasibleActions ( )
    Default \(A(\theta)\): all actions are feasible at all states, except for terminal states. Not called by User code directly

    This is a virtual method. MyModel provides its own to replace it if some actions are infeasible at some states.

    Returns:
    Mx1 indicator column vector
    1=row is feasible at current state
    0 otherwise.
    Example:
    Suppose MyModel has a binary action d for which d=1 is feasible only if t < 10. Otherwise, only actions with d=0 are feasible. The following will impose that restriction on feasible actions:
    MyModel::FeasibleActions() {
        return  CV(d).==0 .|| I::t < 10;
    }
    
    Comments:
    default is to return a vector of ones, making all actions feasible at all states. This is not called at unreachable or terminal states.
    See also:
    Alpha, Actions, ActionVariable

     GetPandV

    Bellman :: GetPandV ( col )

     IgnoreExogenous

    virtual Bellman :: IgnoreExogenous ( )
    Return FALSE: Elements of the exogenous vector are looped over when computing U and EV.

    The user can replace this virtual method to skip iterating over the exogenous state vector at different points in the the endogenous state space \(\theta.\)


     Initialize

    static Bellman :: Initialize ( userState )
    Base Initialize function.
    Parameters:
    userState a Bellman-derived object that represents one point θ in the user's endogenous state space Θ.

    User code must call the Initialize() of the parent class that MyModel is derived from. That routine will ensure this is called.


     InSS

    virtual Bellman :: InSS ( )
    Return TRUE if full iteration over exogenous values and transitions to be carried out at this point (in subsample).
    Returns:
    Type ≥ INSUBSAMPLE && Type!=LASTT

     KernelCCP


     MedianActVal

    Bellman :: MedianActVal ( )
    KeaneWolpin: Computes v() and V for out-of-sample states. Not called by User code directly

     MyopicActVal

    Bellman :: MyopicActVal ( )
    For Myopic agent or StaticClock or just Initial, v=U and no need to evaluate EV. Not called by User code directly



     Nxt

    decl Nxt [public]
    TransStore x η-Array of feasible endogenous state indices and transitions \(P(\theta^\prime;\alpha,\eta,\theta)\).

     OnlyFeasible

    Bellman :: OnlyFeasible ( myU )
    Extract and return rows of a matrix that correspond to feasible actions at the current state.
    Parameters:
    myU A × m matrix

    Returns:
    selectifr(myU,Alpha::Sets[Aind])
    Example:
    A model has four possible actions and constant utility, but not all actions are feasible at each state.
    static const decl Uv = <0.1; 0.5; 0.7; -2.5>;
    ⋮
    MyModel::Utility() {
        return OnlyFeasible(Uv);
        }
    
    See also:
    FeasibleActions, Sets, Aind

     OutcomesGivenEpsilon

    virtual Bellman :: OutcomesGivenEpsilon ( )
    Default / virtual routine. Called when computing predictions. Not called by User code directly

     pandv

    decl pandv [public]
    \(v(\alpha;\epsilon,\eta,\theta)\) and \(P*()\).

     Reachable

    virtual Bellman :: Reachable ( )
    Return TRUE: Default indicator function for whether the current state is reachable from initial conditions or not.

    The built-in version returns TRUE. The user can provide a replacement for this virtual method to trim the state space at the point of creating the state space. Many state variables will trim the state space automatically with a non-stationary clock assuming initial values are 0.


     SetTheta

    virtual Bellman :: SetTheta ( state , picked )
    Sets up a single point θ in the state space. This is the default of the virtual routine. It calls the creator for Bellman. The users replacement for this must call this or the parent version.

     Smooth

    virtual Bellman :: Smooth ( )
    Default Choice Probabilities: no smoothing. Not called by User code directly
    Parameters:
    inV expected value integrating over exogenous and semi-exogenous states.

    Smooth is called for each point in the state space during value function iteration, but only on the last iteration (deterministic aging or the fixed point tolerance has been reached). It uses EV, which should be set to the value of the current state by thetaEMax().

    Comments:
    This is virtual, so the user's model can provide a replacement to do tasks at each θ during iteration.
    See also:
    pandv

     StateToStatePrediction

    Bellman :: StateToStatePrediction ( intod )
    Compute \(P(\theta^\prime;\theta)\). Not called by User code directly

     thetaEMax

    virtual Bellman :: thetaEMax ( )
    Default Emax operator at \(\theta.\) Not called by User code directly

    Derived classes provide a replacement.

    Computes
    $$\eqalign{ V(\epsilon,\eta,\theta) &= \max_{\alpha\in A(\theta)} v(\alpha;\epsilon,\eta,\theta)\cr Emax(\theta) &= \sum_\epsilon \sum_\eta P(\epsilon)P(\eta)V(\epsilon,\eta,\theta).\cr}$$

    If setPstar then Ρ*(α) is computed using virtual Smooth()

    Comments:
    Derived DP problems replace this routine to account for \(\zeta\) or alternatives to Bellman iteration.

     ThetaTransition

    Bellman :: ThetaTransition ( )
    Computes the endogenous transition given \(\eta.\) Not called by User code directly

    Loops over all state variables to compute \(P(\theta^\prime ; \alpha, \eta )\). For StateBlocks the root of the block is called to compute the joint transition.

    Accounts for the (vector) of feasible choices \(A(\theta)\) and the semi-exogenous states in \(\eta\). that can affect transitions of endogenous states but are themselves exogenous.

    Stores results in Nxt array of feasible indices of next period states and conforming matrix of probabilities.

    See also:
    ExogenousTransition

     ThetaUtility

    virtual Bellman :: ThetaUtility ( )
    Default to be replaced by user if necessary. This function is called before looping over \(\epsilon\) and \(\eta\). The user can place code in the replacement to avoid duplicate calculations. This virtual function must be replaced when using the KeaneWolpin solution method.
    Returns:
    NaN, so that the default is not used accidentally when the user has not provided a replacement.

     Type

    decl Type [public]
    Integer code to classify state (InSubSample,LastT,Terminal). This avoids multiple integer values at each point in the state space. Defined in StateTypes. Set in CreateSpaces() and SubSampleStates()
    See also:
    MakeTerminal, Last, StateTypes

     UpdatePtrans

    Bellman :: UpdatePtrans ( )
    Compute endogenous state-to-state transition \(P^\star(\theta'|\theta)\) for the current state \(\theta\). Not called by User code directly

    This updates either Ptrans or NKptrans. This is called in PostEMax and only if setPstar is TRUE and the clock is Ergodic or NKstep is true.

    If StorePA is also true then Palpha is also updated.


     UseEps

    Bellman :: UseEps ( )
    Return TRUE if Utility depends on exogenous vector at this state.

     Utility

    virtual Bellman :: Utility ( )
    Default U() ≡ 0. This is a virtual method. MyModel provides a replacement. This simply returns a vector of 0s of length equal to the number of feasible action vectors $$U(A(\theta)) = \overrightarrow{0}$$

     EndogTrans

     EndogTrans

    EndogTrans :: EndogTrans ( )
    Task to construct \(\theta\) transition, the endogenous state vector. Not called by User code directly

    This task loops over \(\eta\) and \(\theta\) to compute at each \(\theta\)

    $$P(\theta^\prime ; \alpha,\eta, \theta)$$ It is stored at each point in \(\Theta\) as a matrix of transition probabilities.

    When this task is run is determined by UpdateTime

    Comments:
    The endogenous transition must be computed and stored at each point in the endogenous state space Θ.

    If a state variable can be placed in \(\epsilon\) instead of \(\eta\) and \(\theta\), it reduces computation and storage significantly.

    See also:
    SetUpdateTime

     Run

    EndogTrans :: Run ( )
    The inner work at \(\theta\) when computing transition probabilities. Not called by User code directly
    1. Do Hooks scheduled for AtThetaTrans
    2. Compute and store the endogenous \(P(\theta^\prime;\alpha,\theta)\) by calling ThetaTransition

     Transitions

    EndogTrans :: Transitions ( instate )
    Update dynamically changing components of the program at the time chosen by the user. Not called by User code directly

    Things to do before spanning the \(\eta\) and \(\Theta\) state spaces
    1. Call PreUpdate hooks (see Hooks).
    2. Loop over all States: call its Update() method and Distribution() of random effects.
    3. Loop over actions to call their Update() methods and set the actual values.
    4. Compute the exogenous transitions, \(P(\epsilon^\prime)\) and \(P(\eta^\prime)\).

    Then span the \(\eta\) and \(\Theta\) spaces to compute and store endogenous transitions at each point.

    See also:
    SetUpdateTime, UpdateTimes

     ExPostSmoothing

     CreateSpaces

    static ExPostSmoothing :: CreateSpaces ( Method , rho )
    Set up the ex-post smoothing state space.
    Parameters:
    Method the SmoothingMethods, default = NoSmoothing
    rho the smoothing parameter
    Default value is 1.0.

     Initialize

    static ExPostSmoothing :: Initialize ( userState )
    Initialize an ex post smoothing model.
    Parameters:
    userState a Bellman-derived object that represents one point θ in the user's endogenous state space Θ.

     Logistic

    ExPostSmoothing :: Logistic ( )
    Extreme Value Ex Post Choice Probability Smoothing.

    Sets pandv equal to $$P^{\star}\left(\alpha;\theta\right) = {e^{\rho(v(\alpha;\theta)-V)}\over {\sum}_{\alpha\in A(\theta)} e^{\rho( v(\alpha;\theta)-V) }}.$$

    See also:
    RowLogit

     Method

    static decl Method [public]
    The smoothing method to use.
    See also:
    SmoothingMethods

     rho

    static decl rho [public]
    Smoothing parameter \(\rho\) (logit or normal method)

     SetSmoothing

    static ExPostSmoothing :: SetSmoothing ( Method , smparam )
    Set the smoothing method (kernel) and parameter.
    Parameters:
    Method the SmoothingMethods, default = NoSmoothing
    smparam the smoothing parameter ρ, an AV-compatible object. If it evaluates to less than 0 when called, no smoothing occurs.
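
    For example (LogitKernel is assumed here to be one of the SmoothingMethods codes; the value 1.5 is arbitrary):
    ExPostSmoothing::SetSmoothing(LogitKernel, 1.5);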

     ExtremeValue

     CreateSpaces

    static ExtremeValue :: CreateSpaces ( )
    calls the Bellman version, no special code.

     hib

    static const decl hib [public]

     HMQ

    static decl HMQ [public]
    Hotz-Miller estimation task.

     Initialize

    static ExtremeValue :: Initialize ( rho , userState )
    Initialize DP with extreme-value smoothing.
    Parameters:
    rho AV compatible, the smoothing parameter ρ.
    CV(rho) < 0, sets ρ = DBL_MAX_E_EXP (i.e. no smoothing).
    userState a Bellman-derived object that represents one point θ in the user's endogenous state space Θ.

    With ρ = 0 choice probabilities are completely smoothed: each feasible choice becomes equally likely.


     KernelCCP


     lowb

    static const decl lowb [public]

     rh

    static decl rh [public]
    current value of rho .

     rho

    static decl rho [public]
    Choice prob smoothing ρ.

     SetRho

    static ExtremeValue :: SetRho ( rho )
    Set the smoothing parameter \(\rho\).
    Parameters:
    rho AV compatible object. If it evaluates to less than 0 when called no smoothing occurs.

     Smooth

    virtual ExtremeValue :: Smooth ( )
    Extreme Value Ex Ante Choice Probability Smoothing. Not called by User code directly

     thetaEMax

    virtual ExtremeValue :: thetaEMax ( )
    Iterate on Bellman's equation at θ using Rust-styled additive extreme value errors. Not called by User code directly

     KeepZ

     CreateSpaces

    static KeepZ :: CreateSpaces ( )
    Create spaces for a KeepZ model.

     DynamicTransit


     Initialize

    static KeepZ :: Initialize ( userState , d )
    Initialize a KeepZ model.
    Parameters:
    userState a Bellman-derived object that represents one point θ in the user's endogenous state space Θ.
    d Binary ActionVariable not already added to the model
    integer, number of options (action variable created) [default = 2]

     keptz

    static decl keptz [public]
    Discrete state variable of kept ζ.

     myios

    static decl myios [public]

     SetKeep

    static KeepZ :: SetKeep ( N , held )
    Set the dynamically kept continuous state variable.
    Parameters:
    N integer, number of points for approximation
    held object that determines if z is retained.

     McFadden

     ActVal


     CreateSpaces

    static McFadden :: CreateSpaces ( )
    Create state space for McFadden models.

    Calls the ExtremeValue version, no special code.


     d

    static decl d [public]
    The decision variable.

     Initialize

    static McFadden :: Initialize ( Nchoices , userState )
    Initialize a McFadden model (one-shot, one-dimensional choice, extreme value additive error with ρ=1.0).
    Parameters:
    Nchoices integer, number of options.
    userState a Bellman-derived object that represents one point \(\theta\) in the user's endogenous state space Θ.
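
    A usage sketch, assuming MyModel is derived from McFadden and three options are wanted:
    McFadden::Initialize(3, new MyModel());
    McFadden::CreateSpaces();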

     NIID

     CreateSpaces

    static NIID :: CreateSpaces ( )
    Create spaces and set up quadrature for integration over the IID normal errors.

     ExogExpectedV

    NIID :: ExogExpectedV ( )
    Complete \(v(\alpha;\cdots,\eta,\theta)\) by integrating over IID normal additive value shocks.

     GQLevel

    static decl GQLevel [public]

     GQNODES

    static decl GQNODES [public]

     Initialize

    static NIID :: Initialize ( userState )
    Initialize a normal Gauss-Hermite integration over independent choice-specific errors.
    Parameters:
    userState a Bellman-derived object that represents one point θ in the user's endogenous state space Θ.

     MM

    static decl MM [public]

     SetIntegration

    static NIID :: SetIntegration ( GQLevel , AChol )
    Initialize a normal Gauss-Hermite integration over independent choice-specific errors.
    Parameters:
    GQLevel integer[default=7], depth of Gauss-Hermite integration
    AChol integer [default]: set all action standard deviations to \(1/\sqrt{2}\);
    or a CV-compatible A×1 vector of standard deviations of action-specific errors.
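
    An illustrative call for a model with three feasible actions, using deeper quadrature and unequal standard deviations (the values are arbitrary):
    NIID::SetIntegration(9, <0.5; 0.75; 1.0>);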

     UpdateChol

    static NIID :: UpdateChol ( )
    Update vector of standard deviations for normal components. Not called by User code directly

    AV(Chol) is called for each γ, so σ can include random effects.

     NnotIID

     BigSigma

    static decl BigSigma [public]
    Current variance matrix.

     CreateSpaces

    static NnotIID :: CreateSpaces ( )
    Create spaces and set up GHK integration over non-iid errors.

     ExogExpectedV

    NnotIID :: ExogExpectedV ( )
    Use GHK to integrate over correlated value shocks. Not called by User code directly

     ghk

    static decl ghk [public]
    array of GHK objects

     Initialize

    static NnotIID :: Initialize ( userState )
    Initialize GHK correlated normal solution.
    Parameters:
    userState a Bellman-derived object that represents one point θ in the user's endogenous state space Θ.

     R

    static decl R [public]
    replications for GHK

     SetIntegration

    static NnotIID :: SetIntegration ( R , iseed , AChol )
    Initialize the integration parameters.
    Parameters:
    R integer, number of replications [default=1]
    iseed integer, seed for random numbers [default=0]
    AChol CV compatible vector of lower triangle of Cholesky matrix for full Action vector [default equals lower triangle of I]
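
    An illustrative call for a model with two actions (so the lower triangle of the Cholesky matrix has three elements), using 100 replications and the default seed:
    NnotIID::SetIntegration(100, 0, <1.0; 0.5; 1.0>);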

     UpdateChol

    static NnotIID :: UpdateChol ( )
    Update the Cholesky matrix for the correlated value shocks. This routine is added to the preUpdate Hook so it is called after parameters may have changed. Not called by User code directly

     Normal

     AChol

    static decl AChol [public]
    User-supplied Choleski.

     ActVal

    Compute \(v(\alpha;\theta)\) for all values of \(\epsilon\) and \(\eta\). Not called by User code directly

     Chol

    static decl Chol [public]
    Current Choleski matrix for shocks (over all feasible actions)

     CreateSpaces

    static Normal :: CreateSpaces ( )
    Calls the Bellman version and initialize Chol.

     Initialize

    static Normal :: Initialize ( userState )
    Initialize the normal-smoothed model.
    Parameters:
    userState a Bellman-derived object that represents one point θ in the user's endogenous state space Θ.

     OneDimensionalChoice

     AutoAuxiliaryValues

    virtual OneDimensionalChoice :: AutoAuxiliaryValues ( )

     Continuous

    virtual OneDimensionalChoice :: Continuous ( )
    The default indicator whether a continuous choice is made at \(\theta\). The user's model can replace this to return FALSE if ordinary discrete choice (or no choice) occurs at the state.

    The user-supplied replacement is called for each \(\theta\) during ReservationValue solving. That means whether a choice is made at \(\theta\) can depend on fixed values.

    The answer is stored in solvez.

    Returns:
    TRUE

     CreateSpaces

    static OneDimensionalChoice :: CreateSpaces ( Method , smparam )
    Create spaces and check that α has only one element.
    Parameters:
    Method SmoothingMethods, default = NoSmoothing
    smparam the smoothing parameter (e.g. ρ or σ)
    Default value is 1.0.

     d

    static decl d [public]
    single action variable.

     EUstar

    static decl EUstar [public]
    scratch space for E[U] in z* intervals.

     EUtility


     Getz

    OneDimensionalChoice :: Getz ( )
    Returns z* at the current state \(\theta\).

     HasChoice

    OneDimensionalChoice :: HasChoice ( )
    Dynamically update whether reservation values should be computed at this state. This allows the option to depend on fixed effects.

     Initialize

    static OneDimensionalChoice :: Initialize ( userState , d )
    Create a one dimensional choice model.
    Parameters:
    userState a Bellman-derived object that represents one point θ in the user's endogenous state space Θ.
    d ActionVariable not already added to the model
    integer, number of options (action variable created) [default = 2]

     pstar

    static decl pstar [public]
    space for current Prob(z) in z* intervals.

     Setz

    virtual OneDimensionalChoice :: Setz ( z )
    Sets z* to z. Not called by User code directly This is called when solving for z*.
    Parameters:
    z.

     Smooth

    virtual OneDimensionalChoice :: Smooth ( )
    Smoothing in 1d models. Not called by User code directly

     solvez

    decl solvez [public]
    TRUE: solve for z* at this state. Otherwise, ordinary discrete choice.

     SysSolve


     Utility

    virtual OneDimensionalChoice :: Utility ( )
    Default 1-d utility, returns 0.
    See also:
    OneDimensionalChoice::EUstar

     Uz


     zstar

    decl zstar [public]
    N::Aind-1 x 1 of reservation value vectors.

     OneStateModel

     Initialize

    static OneStateModel :: Initialize ( UorB , Method , ... )
    Short-cut for a model with a single state so the user need not (but can) create a Bellman-derived class.
    Parameters:
    UorB either a Bellman-derived object or a static function that simply returns utility.
    Method ex-post choice probability SmoothingMethods [default=NoSmoothing]
    ActionVariables to add to the model

    1. Calls Initialize() with either UorB or new OneStateModel().
    2. Sets the clock to StaticProgram.
    3. Sends the optional arguments to Actions().
    4. Calls CreateSpaces().

    By calling both Initialize() and CreateSpaces() this makes it impossible to add any state variables to the model.
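
    A usage sketch, assuming MyU is a static function that returns the utility vector and the single action takes on three values; the ActionVariable constructor arguments shown (label, number of values) are also an assumption:
    OneStateModel::Initialize(MyU, NoSmoothing, new ActionVariable("d",3));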

     U

    static decl U [public]
    contains \(U()\) sent by the user's code.

     Utility

    virtual OneStateModel :: Utility ( )
    Built-in Utility that calls user-supplied function.

     Roy

     CreateSpaces

    static Roy :: CreateSpaces ( )
    Call NnotIID.

     d

    static decl d [public]
    The sector-decision variable.

     Initialize

    static Roy :: Initialize ( NorVLabels , P_or_UserState )
    Initialize a Roy model: static, one-dimensional choice with correlated normal error.
    Parameters:
    NorVLabels integer [default=2], number of options/sectors
    array of Labels
    P_or_UserState 0 [default] initialize sector prices to 0
    CV() compatible vector of prices
    The first 2 options will use the built-in Utility, which simply returns CV(Prices).
    a Roy-derived object (allowing for custom Utility)
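
    An illustrative call that sets up three sectors with fixed prices (the labels and price values are made up):
    Roy::Initialize({"home","blue","white"}, <0.0; 1.2; 1.5>);
    Roy::CreateSpaces();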

     Prices

    static decl Prices [public]
    Vector-valued prices of sectors.

     Utility

    virtual Roy :: Utility ( )
    Built-in utility for Roy models.
    Returns:
    OnlyFeasible(CV(Prices))

     Rust

     CreateSpaces

    static Rust :: CreateSpaces ( )
    Currently this just calls the ExtremeValue version, no special code.

     d

    static decl d [public]
    The binary decision variable.

     Initialize

    static Rust :: Initialize ( userState )
    Initialize a Rust model (Ergodic, binary choice, extreme value additive error with ρ=1.0).
    Parameters:
    userState a Bellman-derived object that represents one point θ in the user's endogenous state space Θ.

    Comments:
    The action variable is created by this function and stored in d. The value of the smoothing parameter ρ can be changed.
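
    A usage sketch, assuming MyModel is derived from Rust and supplies its own Utility():
    Rust::Initialize(new MyModel());
    // add endogenous state variables here with EndogenousStates(...)
    Rust::CreateSpaces();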