
 Variables.ox

Describes how to represent actions, states, and other discrete quantities in your DP model.


    Contents
  1. Overview
  2. Action Variables: the choices your agent makes at each state
  3. State Variables: values that define the state of the problem; how to use them and how to create new ones.
  4. The Clock: a special state variable for keeping time in DDP
  5. Time Invariants: solving different DP problems based on values that differ across agents but not during the agent's problem; these are analogous to random and fixed effects in panel data models.
  6. Auxiliary Values: tracking functions of actions and state variables in data sets; these play an important role in estimation by matching data that depends on the current outcome without expanding the space.
  7. Conditional Continuous Choices: optimizing at points in the state space; this is in progress
  8. Volume: (getting output specific to a variable)

  1. Overview
  2. A variable in a DP model is an object derived from Quantity. It can be one of several types:

    The ActionVariable class represents a choice in the model (such as whether to work or not) and is an element of the action vector \(\alpha\).

    The StateVariable class is designed to be a member of one of the discrete state vectors (\(\epsilon,\eta,\theta,\gamma\)). There are many derived types of state variables explained below.

    The AuxiliaryValue class represents values that depend on actions and states but are not required to solve the model. They are members of the auxiliary vector \(\chi\).

    As explained elsewhere, the user creates a class for their DP model. The class should declare a static data member for each variable of these types. New objects of the appropriate class are created and then assigned to the variables while the model is being built: after calling Initialize() and before calling CreateSpaces().

    Creating an object for a variable does not add the variable to the model. It must be added to the model after being created by calling the corresponding function during the build phase:

    Send action variables to Actions() to add them to \(\alpha\)
    Send state variables to EndogenousStates() to add them to \(\theta\) (the non-specialized state vector).
    Send IID state variables to either SemiExogenousStates() or ExogenousStates() to add them to \(\eta\) and \(\epsilon\), respectively.
    Send fixed and random effect variables to GroupVariables() to add them to \(\gamma\) (they will be assigned to the appropriate sub vector \(\gamma_f\) or \(\gamma_r\).)
    AuxiliaryOutcomes() adds auxiliary values to \(\chi.\)

  3. Action Variables
    1. Basics
    2. The action vector \(\alpha\) is the vector of discrete actions chosen conditional on the state of the DP. A generic element of \(\alpha\) is denoted a. An action variable takes on \(a.N\) different values. The action vector includes \(\alpha.N\) variables, denoted \(a_0\) through \(a_{\alpha.N-1}\).

      Action variables are represented as an ActionVariable and are added to MyModel and to \(\alpha\) using Actions(). Calls to Actions() and other model-building routines are part of the MyCode that the user writes, unless the class they are using already defines the action variables.

      In the basic definition of a DDP model, \(A\) is the set of possible actions. In DDP the set of actions is built up by adding action variables to \(\alpha\). So \(A\) emerges from the properties of the variables added to \(\alpha\).

      The Possible Actions Matrix \(A\) is the matrix of all possible action vectors. It is the Cartesian product of the actions added to \(\alpha.\) The word possible is used instead of feasible, because this notion of \(A\) is constructed mechanically from properties of the actions. It has nothing to do with the interpretation of the actions. Feasible actions are defined below.

      An action refers to a particular value of the action vector \(\alpha\), which in turn is a row of \(A\). An action variable refers to a column of \(A\), but note that values of an action variable are repeated within the column. DDP constructs \(A\) when MyCode calls CreateSpaces().

      Alternative Notation

      Discrete choice is described in two ways other than the one above used in DDP. For example, suppose there are two action variables and 6 total action vectors: $$\eqalign{ &\alpha = (a_0,a_1)\cr &a_0.N = 3\cr &a_1.N = 2\cr &\alpha.D =6\cr}$$ Some papers would treat this as a single action with six values. And/or papers may define a vector of 6 indicator variables to denote which choice was made: \(d_i = I\{a=i\}\).

      The three different approaches to coding discrete choices are illustrated in this table:

         DDP Approach       | Single Action |    Indicator Vectors
         a0         a1      |       a       |    d0  d1  d2  d3  d4  d5
      ----------------------+---------------+----------------------------
         0          0       |       0       |    1   0   0   0   0   0
         1          0       |       1       |    0   1   0   0   0   0
         2          0       |       2       |    0   0   1   0   0   0
         0          1       |       3       |    0   0   0   1   0   0
         1          1       |       4       |    0   0   0   0   1   0
         2          1       |       5       |    0   0   0   0   0   1
      

      One reason to use an action vector like \(\alpha\) is that each variable can be interpreted as a dimension of choice. There is no obvious interpretation of a=3 in the table, and the interpretation would change with the dimensions of the action variables. The single-action approach would force MyModel to decode a into an action vector in order to make the code look like the mathematical model.

      Another reason to use the DDP approach: an action vector makes it natural to impose feasibility conditions on the choice set, as discussed below.

      Indicator vectors make it possible to write any utility as a sum: \(U = \sum_k d_k u_k\). But as with the single action approach the interpretation is not obvious and \(U()\) will in no way resemble usual mathematical notation for an objective.
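The mapping between the three codings in the table can be sketched in a few lines. This is a Python illustration of the bookkeeping only (niqlow itself is written in Ox); all names here are invented for the example.

```python
# Two action variables with a0.N = 3 and a1.N = 2, so alpha.D = 6.
N = [3, 2]

# DDP approach: rows of the possible actions matrix A.
# a0 cycles fastest, reproducing the table's row order.
A = [(a0, a1) for a1 in range(N[1]) for a0 in range(N[0])]

# Single-action coding: the row index of A.
single = {alpha: i for i, alpha in enumerate(A)}

# Indicator-vector coding: d_i = I{a = i}.
def indicators(alpha):
    d = [0] * len(A)
    d[single[alpha]] = 1
    return d

print(A)                    # [(0,0),(1,0),(2,0),(0,1),(1,1),(2,1)]
print(single[(2, 1)])       # 5
print(indicators((0, 1)))   # [0, 0, 0, 1, 0, 0]
```

The three representations carry the same information; only the DDP coding keeps each dimension of choice visible.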

    3. Actual, Current and Empirical Values of an Action Variable
    4. Usually an action variable can be an object of the base ActionVariable class. The user rarely needs to define a derived class.

      Current Value
      DDP requires MyModel to handle/process the full matrix of feasible actions in utility and transitions. For this reason, MyModel will rarely if ever access the current value of an action, a.v. The vector of values of an action variable \(a\) is returned by CV(a).

      Actual Value

      Optionally, the user can create a class derived from ActionVariable and supply an Update() method with it. This allows the user to associate a meaningful value with each of the numbers 0 to N‾, stored as a.actual. DDP updates an action each time the problem is re-solved, so actual values can depend on Parameter values. AV(a) will return the actual values of a that correspond to each feasible action vector \(\alpha\). The default virtual Update() simply sets actual to the vector < 0 : N-1 >.

      Example of Custom Update()

      Unless you want to provide actual values for an action you do not need to create a derived class. Let a denote a discrete choice of hours to work each week. The maximum number of hours, H, depends on a parameter which can differ across individuals but is constant across the state space.

      Actual Hours action variable
      struct Hours : ActionVariable {
              decl H;
              Hours(const N,const H);
              Update();
              }
      Hours::Hours(const N,const H) {
              this.H = H;
              ActionVariable("h",N);
              }
      Hours::Update() {
              actual = AV(H)* (vals / N);
              }
      ⋮
      MyModel::MaxH() { return CV(hasch) ? 30 : 50; }
      ⋮
      MyModel::Initialize() {
         ⋮
         GroupVariables(hasch = new FixedEffect("",2));
         Actions(h = new Hours(5,MyModel::MaxH));
         ⋮
         }
      MyModel::Utility() {
              return CV(wage)*CV(h);
              }
      
      The code segment sets the maximum hours to 30 if the person has a (young) child and to 50 otherwise. It then lets each type choose among 5 different levels of hours between 0 and their maximum. Utility is total earnings.
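The arithmetic inside Hours::Update() can be checked with a small sketch (Python rather than Ox; `update` here is a stand-in for the method, not library code). Note one subtlety of actual = H·vals/N: the top level is H(N−1)/N rather than H itself; dividing by N−1 instead would make the grid reach the maximum exactly.

```python
# vals is the vector <0:N-1>; actual = H * vals / denom maps it to hours.
N = 5

def update(H, denom):
    return [H * v / denom for v in range(N)]

print(update(50, N))      # [0.0, 10.0, 20.0, 30.0, 40.0]  top level is 40, not 50
print(update(30, N))      # [0.0, 6.0, 12.0, 18.0, 24.0]
# Dividing by N-1 instead spans 0 to H exactly:
print(update(50, N - 1))  # [0.0, 12.5, 25.0, 37.5, 50.0]
```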

    5. Possible and Feasible Actions
    6. \(A\) is defined as the set of possible actions, built up by adding action variables to \(\alpha\). The matrix of all possible actions is $$ A \equiv \times_{j=0,\dots, \alpha.N-1 } \{ 0, 1, \dots, a_j.N-1 \}$$ An action vector \(\alpha\) is a row of the matrix A. An action variable is a column of \(A\); its position is the property a.pos.

      Feasibility is a property MyModel imposes on \(\alpha\). The model may say that some logically possible actions cannot be chosen because they do not make sense given the interpretation of \(\alpha\). Or some actions may be ruled infeasible for convenience, to avoid calculations that are relatively unimportant. Feasibility can be written as a property of the state or, more conventionally, as a function of the state. Using both versions, the feasible action set at \(\theta\) is the matrix property: $$\forall \theta \in \Theta,\qquad A(\theta) \equiv \theta.A \subseteq A.$$

      FeasibleActions(): by default all possible actions are feasible unless MyModel says otherwise: \(A(\theta) \equiv A\) for all endogenous states. This default is produced by a built-in virtual method, Bellman::FeasibleActions(), meaning that MyModel can replace it with its own version. This is explained in Bellman.

      The A List

      CreateSpaces constructs the possible actions matrix A, stored as the first element of a list: AA[0]. It then calls FeasibleActions(A) for each reachable state \(\theta\). Every time FeasibleActions() returns a different value, a new matrix is added to the A list. The index into the list is then stored as Aind. That is, there is not a different matrix at each \(\theta\); instead there is a list of distinct feasible matrices, and \(A(\theta)\) is really an index into that list.

      The user's (optional) feasible action method should determine feasibility using the action variables stored in the derived DP model. So if a contains an ActionVariable, its values are available as CV(a). In short, \(\theta\).A ↔ A[Aind]. If MyModel does not specify feasible actions, then Aind = \(\theta\).j = 0 for all states.

      The A list contains the actual values, not the current values. If the user provides an Update() for a derived action-variable class, then A[Aind] is the matrix of actual values. Since the default is that actual values equal current values (a.actual[a.v] = a.v), A[Aind] can always be used. The uncoded list of matrices is available as Matrix.
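The bookkeeping behind the A list can be sketched as follows. This is a Python illustration under assumptions, not niqlow code: `feasible` stands in for the user's FeasibleActions(), and the names are invented. The point is that feasibility patterns are stored once and each state keeps only an index (Aind), rather than a full matrix at every \(\theta\).

```python
A0 = [(0, 0), (1, 0), (0, 1), (1, 1)]   # possible actions matrix, AA[0]

def build_A_list(states, feasible):
    # Seed the list with the all-feasible pattern (AA[0]).
    A_list, Aind = [tuple([True] * len(A0))], {}
    for theta in states:
        pattern = tuple(feasible(theta, alpha) for alpha in A0)
        if pattern not in A_list:        # new pattern: store it once
            A_list.append(pattern)
        Aind[theta] = A_list.index(pattern)
    return A_list, Aind

# Example rule: action (1,1) is infeasible only when theta == 0.
A_list, Aind = build_A_list([0, 1, 2],
                            lambda th, a: not (th == 0 and a == (1, 1)))
print(len(A_list))   # 2 distinct feasibility patterns for 3 states
print(Aind)          # {0: 1, 1: 0, 2: 0}
```

Three states share two stored patterns; each state carries only an integer index.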


  4. State Variables
    1. Content of the StateVariable Section
    2. Overview
    3. State Variable Transitions
    4. Augmented and Specialized State Variables
    5. Creating a new or derived Autonomous State Variable
    6. Example of a new state variable
    7. Further Considerations
    8. State Blocks
    9. How to Create a State Block
    10. Accessing State Block Values
    11. The Overall Transition

    1. Overview
    2. A state variable s is a Discrete quantity. It is defined by at least the 3 required elements:
      L
      a short tag for the variable (can be blank)
      N
      The number of values it takes, which are 0 … N‾
      Transit()
      The transition for the variable, \(P_s(s^\prime \mid \alpha,\eta,\theta)\).
      The transition can depend on other state variables and actions. These dependencies are created by sending the other variable objects when creating the state variable.
      Many predefined state variable classes are available that correspond to ones in the literature, so a user may not need to define a new one.
      Special or customized state variables can be created using ways to augment another state variable. For example, a state variable can be "frozen" or "reset" under certain conditions by augmenting it and without recoding the transition.

      State variables are classified by how their transitions relate to actions and the current value and transitions of other state variables. Either they are
      Autonomous
      which means their transition to new states does not have to be coordinated with transitions of other states;
      The transition of autonomous variables can depend on the current values of other states.
      or they are
      Coevolving
      in which case they must be a member of a StateBlock.
      Within a state block the innovations of member state variables can be correlated in an arbitrary way.

      Most aspects of DP models can be handled by autonomous variables because the transition can depend on current values of other variables. Only certain kinds of correlations between two or more variables require coevolving variables in a block.

      The model places discrete state variables/blocks into one of four vectors.
      State Variable Vectors

      Vector                       Generic  Form of Ps       Role in Pq of other q∈θ, q≠s   Computing implication
                                   element
      ---------------------------- -------- ---------------- ------------------------------ ---------------------------------
      Exogenous (ε)                e        Ps(s')           cannot enter Pq: q is          Compute & store Pε once on each
        IID, exogenous to                                    independent of ε, given α      solution.
        everything
      Semi-Exogenous (η)           h        Ps(s')           can enter the transitions of   Compute & store Pη once on each
        IID, exogenous to                                    other variables: η (and θ)     solution.
        everything                                           can affect q
      Endogenous (θ)               q        Ps(s'|α,ε,η,θ)   can enter the transitions of   Compute and store Pθ at each θ,
        transition can depend on                             other variables                as a matrix in α and η.
        current states and actions
      RandomEffect (γr)            r        g' = g           can enter, but see             Store P*(α|…γr).
        value is fixed for this                              SetUpdateTime; by default r
        problem                                              can only enter U()
      FixedEffect (γf)             f        g' = g           can enter Pq, but see          Reuse space for each fixed
        value is fixed for this                              SetUpdateTime                  effect.
        problem

      Example
      Add a Semi-Exogenous IID jump process and an Endogenous tracker of its previous value to a DDP model:
      class MyModel : DPparent {
          ⋮
          static decl e, q;
          ⋮
          static Initialize();
          }
      ⋮
      MyModel::Initialize() {
          DPparent::Initialize(new MyModel());
          ⋮
          SemiExogenousStates(e = new SimpleJump("iid",5));
          EndogenousStates(q = new LaggedState("lag",e));
          ⋮
          CreateSpaces();
          }
      

    3. State Variable Transitions
    4. The overall state transition is computed as follows
      $$P\left(\epsilon^{\,\prime},\eta^{\,\prime},\theta^{\,\prime}\ |\ \alpha,\epsilon,\eta,\theta\right) = P_\epsilon(\epsilon^{\,\prime}) P_\eta(\eta^{\,\prime}) P_\theta\left(\theta^{\,\prime}\ |\ \alpha,\eta,\theta\right).$$ That is, the joint probability distribution of all state variables next period is the product of the probabilities of the three state vectors. So the vectors evolve independently conditional on the current state. Further, the IID nature of \(\epsilon\) and \(\eta\) mean that their probability is constant across current states and actions. In turn, $$\eqalign{ P_\epsilon(\epsilon^{\,\prime}) &= \prod_{k=0\dots \epsilon.N-1} P_{e_k}\left(e^{\,\prime}_k\right)\cr P_\eta(\eta^{\,\prime}) &= \prod_{k=0\dots \eta.N-1} P_{h_k}\left(h^{\,\prime}_k\right)\cr P_\theta\left(\theta^{\,\prime}\ |\ \alpha,\eta,\theta\right) &= \prod_{k=0\dots \theta.N-1} P_{q_k}\left(q^{\,\prime}_k\ |\ \alpha,\eta,\theta\right).\cr}$$

      It is important to note that \(e_k\), \(h_k\) and \(q_k\) each represent either an autonomous state variable or a whole state block, which is responsible for returning the joint transition of its coevolving members.
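The product form of the joint transition can be illustrated with a small sketch (Python rather than Ox; the two marginal distributions are made up for the example). Each autonomous variable contributes a marginal, and the joint next-period distribution is their product.

```python
from itertools import product

P_e0 = [0.2, 0.8]        # P(e0' = z) for z = 0,1 (made-up marginal)
P_e1 = [0.5, 0.3, 0.2]   # P(e1' = z) for z = 0,1,2 (made-up marginal)

def joint(marginals):
    # P(eps') = prod_k P_{e_k}(e_k'): enumerate all value combinations.
    out = {}
    for combo in product(*[range(len(m)) for m in marginals]):
        p = 1.0
        for k, z in enumerate(combo):
            p *= marginals[k][z]
        out[combo] = p
    return out

P = joint([P_e0, P_e1])
print(round(P[(1, 2)], 3))        # 0.8 * 0.2 = 0.16
print(round(sum(P.values()), 6))  # 1.0: the product is a proper distribution
```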

      Autonomous
      Let the value of a state variable s realized next period be denoted s'. The probability that \(s'\) takes on the value z next period can be written generally as $$P(s' = z | \alpha,\epsilon,\eta,\theta) = p_s(z,\alpha,\epsilon,\eta,\theta).$$

      Exogenous
      State Variables classified as (fully) Exogenous are denoted generically as e and are elements of the vector \(\epsilon\).

      Endogenous
      State Variables classified as Endogenous are denoted generically as q and are elements of the vector \(\theta\).

      A single variable that is both Exogenous and Autonomous would satisfy the usual definition of a simple IID random variable.
      In that case $$P(e' = z | \alpha,\epsilon,\eta,\theta) = p_e(z).$$ That is, the transition probabilities are exogenous to the current actions, state values and their transitions.

      Coevolving
      A State Variable is Coevolving if it is not autonomous, meaning its transition is not conditionally independent of the transitions of all other variables. Variables whose transitions depend on each other must all be members of a StateBlock.

      Endogenous and Autonomous
      An endogenous and autonomous state has transition probabilities that can depend on the current state and action, but they cannot be correlated with the transition of any other state variable: $$P(q^{\,\prime} = z \mid \alpha,\epsilon,\eta,\theta) = P_q(z,\alpha,\epsilon,\eta,\theta).$$ That is, the transition can depend on current actions and states, but conditional on them it is independent of the transitions of the other state variables. Note that an endogenous state variable's transition can depend on exogenous states in \(\epsilon\) and \(\eta\).

      An autonomous StateVariable s has marginal transition probabilities that enter the overall transition probability as multiplicative factors, conditional on current actions, states and parameters. In contrast, Coevolving state variables are part of a StateBlock: their transitions are jointly distributed with one or more other Coevolving states.

      Predefined State Variables

      DDP comes with the predefined state variables listed above that can represent many processes in estimated models. But the set is by no means exhaustive, and a serious user of DDP will inevitably need to include slight variations or wholly new kinds of variables. Although defining a new kind of state variable is not trivial, it can take less time and be less prone to error than coding the model from scratch and then modifying all the code to add or modify variables.

    5. Augmented and Special State Variables
    6. Some derived classes of state variables play special roles in DDP. Two major ones are TimeVariables which serve as the DDP clock and TimeInvariants which track variables that are fixed during a DDP and stored in the \(\gamma\) vector. DDP will re-solve the model for each value of \(\gamma\). Your model may include state variables that are already defined in DDP but require some changes to accommodate the environment. The next section discusses how to create a new derived class for your state variable, but it may be possible to avoid that using the Augmented state variable feature.

      Example: Suppose your model has a state variable q which is of the RandomUpDown() class. That is, each period q may go up or down by one unit or remain unchanged, depending on choices and other state variables. However, when another state variable h goes from 0 to 1 you want the value of q to become constant and not subject to the RandomUpDown transition. That is, you want to Freeze the current value of q when h is 1.

      Example
      Augment a base state variable with the Freeze feature. In this case \(q\) increases by 1 with probability 0.5 each period until the agent chooses \(a=1\). Then \(q\) freezes at its current value from the next period on.
      class MyModel : DPparent {
          ⋮
          static decl a, h, q;
          …
          static Initialize();
          }
      ⋮
      MyModel::Initialize() {
          DPparent::Initialize(new MyModel());
          ⋮
          Actions(a = new ActionVariable("a",2));
          EndogenousStates(
              h = new PermanentChoice("h",a),
              q = new Freeze(new RandomUpDown("q",10,<0.5;0.5>),h)
              );
          ⋮
          CreateSpaces();
          }
          
      Notice that a new state variable is passed to the Freeze() constructor. In essence this underlying random variable is hidden from the model, which only sees the variables passed to EndogenousStates. What the model sees is the augmented variable q which acts like a RandomUpDown variable unless h=1. As always, the other state variables or actions that q depends on have to be passed to its creator (or constructor).

      In fact, Freeze is a special case of a more general type of Augmented state variable, namely a ValueTriggered() state variable.

    7. Creating a new or derived Autonomous State Variable
    8. Below, an example of how to define a new state variable is provided. It first shows a very basic version of the code, which gets the job done but might be limiting in some ways; a second, more robust version follows. Several advanced elements of the Ox programming language determine how and why you create a state variable this way. These features may be confusing to someone just starting to program in Ox, especially with no prior experience with object-oriented programming in an interpreted and dynamically-typed language.

      Classes, Structs and constructor routines
      Derived classes and Virtual methods
      Assigning objects creates pointers but copying scalars and matrices creates duplicates. This means that a state variable stored as an object can be copied to several places, each referring to the same thing.

      Step-by-step Instructions to Create a New State Variable

      1. Duplicate the State Variable Template File
      2. Pick a name for your new kind of StateVariable.
        It cannot have spaces or symbols except _. These instructions will refer to «VarName» for whatever name you chose.

        Copy niqlow/templates/DynamicPrograms/StateVariable.ox to a file called «VarName».ox.
        Contents of Source: niqlow/templates/DynamicPrograms/StateVariable.ox
            
        

        Decide how you want to include your code.
        Ox has two compiler directives: #include and #import. See the Ox documentation for more details. If you use #import "«VarName»" then you must create two separate files, «VarName».h and «VarName».ox. If you use #include "«VarName».ox" then the two parts of the code appear in one file.

      3. Choose a base class to derive your variable from.
      4. If your variable is closely related to one of the pre-defined variables listed above then you might be able to choose that as your base class. This may (greatly) reduce the new code you have to write. If you start from scratch, then choose either Random or NonRandom as the base class. These two categories have no effect on computation (DDP treats random and nonrandom state variables the same way), but they allow DDP to summarize models that include your state variable more accurately.

      5. Declare needed elements (members and methods).
      6. Your new class (or struct) has to know everything necessary to compute the transition of the state variable at any state.

        One approach would be to access elements of the DP model itself. This would limit the reliability of your code, as it would rely on certain variables being defined and named the same way as you require. Instead, the preferred way to access necessary information is to store it in the members of your class and if necessary use methods to process that information.

        Any state variable requires two pieces of information when it is created: a label L and the number of different values it takes on, N. The constructor for state variables is defined as StateVariable::StateVariable(const L, const N) to ensure these values are set.

        There is one required method and there are two optional methods for each new state variable.

        The required method is the constructor for the state variable. The optional methods, Transit() and Update(), compute the transition and update the actual discrete values associated with the state variable. The word optional is somewhat inaccurate, because every state variable must have those functions. The issue is whether your state variable needs its own versions of them or whether it can inherit the versions from a parent class.

      7. Code the constructor function
      8. How will your code for the state variable store the information it needs? The answer: the user passes the location or value of the information as arguments when the state variable is created using the new operator.
        In Ox a class does not have to have a constructor, but the parent constructors of a derived class are not invoked automatically. In DDP your state variable has to call the constructor of the base class (or StateVariable() if it is not derived from another class below Random and NonRandom). So this means you must declare and define a constructor for your state variable which is called by new, if only to ensure that the parent constructor is called.

        You decide which information the user needs to supply for the variable, make those pieces arguments to the constructor, and then store them in members. The example below will make the jargon in the last sentence clearer than trying to explain it further at this point.

        Is a destructor method needed? The new operator calls the constructor of a class; the delete operator calls the destructor. Again, in Ox neither needs to exist, but in DDP a state variable needs a constructor. Typically it will not need a destructor, and here is why: state variables are presumed to exist the whole time the program is running. DDP does provide a method to clean up a model that is no longer needed, to save memory and to reinitialize for a new DP problem. DDP will delete each state variable, but it cannot remove dynamically allocated variables defined within classes. So if your state variable issues new commands that are not paired with a delete in your code, then yes, you should write a destructor that deletes those objects. However, the chance that not doing so creates a significant memory leak is small given the anticipated use of DDP.
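The constructor pattern described above can be sketched in Python (not Ox; the classes and names here are invented for illustration). The key point is that the state variable stores a reference to the objects it depends on, so its transition can read their current values later. This also illustrates the earlier remark that assigning objects creates pointers, not copies.

```python
class Var:
    """A stand-in for any quantity with a current value."""
    def __init__(self, label, N):
        self.L, self.N, self.v = label, N, 0   # v = current value

class Tracker:
    """A stand-in state variable that depends on another variable."""
    def __init__(self, label, N, other):
        self.L, self.N = label, N
        self.other = other          # a reference to the object, not a copy

    def transit(self):
        # Reads other's value at call time, not at construction time.
        return [self.other.v], [[1.0]]

x = Var("x", 4)
y = Tracker("y", 4, x)   # y was built when x.v == 0 ...
x.v = 2                  # ... but it sees x's *current* value:
print(y.transit())       # ([2], [[1.0]])
```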

      9. Code the Transit() method
      10. Besides a constructor, your state variable must also come with a transition.

        If it is derived from another state variable and your version follows the same transition, then you do not need to write a new one. That is because Transit() is, in most cases, declared as a virtual method. So if your state variable has no transition of its own, the one from the parent class will be called in its place. This is also why it must be called Transit(): it replaces another version of Transit() that would otherwise be called. Transit() will often need the matrix of current feasible actions at the current state, which is available in Alpha::C and is set every time the point in the state space is changed or initialized. Each row of Alpha::C is an action vector \(\alpha\); each column is an action variable \(a\). Transit() must report back the transition of an instance of the state variable from the current point in the state space.

        Example: Possible Argument Passed to Transit()
        The model has two binary choice variables, \(i\) and \(j\). Then the argument passed to Transit() might look like this:
          Alpha::C
          i    j
          0    0
          1    0
          0    1
          1    1
        

        Your code has to return two items, sent back in an Ox array.

        The first is a row vector of feasible values: the values this state could take on next period. For convenience, call this vector F. For example, suppose one value, say 3, is feasible only if a certain action is taken. Then 3 must be in F, even though 3 is infeasible when another action is taken. The elements of F do not need to be sorted. The other output is a matrix of transition probabilities, P. The rows of the matrix must match the rows of Alpha::C; the columns must match the columns of the row vector F. The ij element of P is the probability that your state variable takes on the jth value in F given that the agent chooses the ith action in the feasible set.

        The array returned as the value of Transit() might be sent using code that looks like this: return {F,P};, presuming you have declared local variables F and P and filled them with the correct information in the correct format. Notice that the only explicit information sent to Transit() is the feasible action matrix. Yet it must send back the transition at a specific state in the state space. How does your code know what state is the current state? Further, how does your code know the current value of parameters which might affect the probabilities? The answer is that the value, or (if the quantity changes, like the current state) the location of the value, must be stored in a member of the state variable. The only way to get this information there is through the constructor function discussed above.
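The {F,P} contract can be sketched as follows. This is Python rather than Ox, and the transition rule, function name, and AlphaC stand-in for Alpha::C are all invented for the illustration.

```python
AlphaC = [(0, 0), (1, 0), (0, 1), (1, 1)]   # stands in for Alpha::C

def transit_stay_or_advance(current, N):
    # Hypothetical rule: choosing 1 on the first action variable advances
    # the state by one (capped at N-1); otherwise the state stays put.
    F = sorted({current, min(current + 1, N - 1)})   # feasible next values
    P = []
    for alpha in AlphaC:             # one row of P per row of AlphaC
        nxt = min(current + 1, N - 1) if alpha[0] == 1 else current
        P.append([1.0 if z == nxt else 0.0 for z in F])
    return F, P                      # the analogue of return {F,P};

F, P = transit_stay_or_advance(2, 5)
print(F)   # [2, 3]
print(P)   # rows line up with AlphaC, columns with F; each row sums to 1
```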

        The example in the next section illustrates this.

      11. Debug your code and make sure it does what you want it to do.

    9. Example: Previous Occupation State Variable
    10. To make the example concrete, suppose you have decided to set «VarName» to MyStateVar. This name will be used wherever the name of the new derived class belongs. In math notation we will refer to the new state variable as y. But in code y might not be descriptive enough. Instead, you might add y to the DP model with code that looks like this:

      decl mystate;  // in the class definition of the model (MyModel)
      ⋮
      mystate = new MyStateVar("y",4,…);
      EndogenousStates(mystate);
      
      The … is not literal. It is there because in this example the constructor MyStateVar() will require other arguments. The state variable is stored in an Ox variable called mystate, but we will refer to it as y, which is the label given to it.

      What is y supposed to be?

      It records the previous value of another state x, but only if a binary choice variable i was 1 last period. For example, if x is an occupation code and the choice variable i indicates a choice to work, then y equals the occupation the person worked at last period if they did work. Otherwise y takes on the value 0. Simply put, the value of y next period is y' = ix. First, write the y process as the feasible state next period:
      y' = ix
      
      Now, as a transition probability:
        Prob(y' = z) = 1 if z = ix
                      0 if z ≠ ix.
      

      Because the transition probabilities for y are 0 and 1, MyStateVar should be classified as a NonRandom state variable. This is a special case of an autonomous process because the probabilities do not depend on the next values of the other states, such as x'. If they did, the user would have to create a StateBlock, which handles the transitions of multiple Coevolving variables. (DDP has no way of knowing for sure that the probabilities are always 0 and 1, at all states and for all possible parameter values; you manually categorize the variable as nonrandom because you know the 0 and 1 are hardcoded into its DNA.) Because the transition depends on the action chosen, y is nonrandom but it is not deterministic. A deterministic state variable would be something like a seasonal cycle.

      Once the program is running some of the members of mystate might have these values:
          mystate  {
              .L = "y"
              .N = 4
              .v = 0
              .vals = <0,1,2,3>
              .pos = 1
              ⋮
              }
      
      That is, y takes on 4 different values (0,1,2,3). At the current state it happens to have value v=0. It also happens to be the second state variable in the state vector (pos=1).

      Next, suppose «VarName» has been added to a finite horizon model with three state variables x, y and z.

      At some point in the process suppose the current value of the discrete state vector is:
          State Vector
         x    y   z   t
         2    0   3   8
      
      The variable t is the age variable in the finite horizon. Now we can represent the transition of y' at this particular state as follows:
          Alpha::C          Prob(y')   trim 0 cols →     Prob(y')
           i    j     y'=  0   1   2   3             y'=0   2
          -----------------------------------------------------
          0    0          1   0   0   0                1   0
          1    0          0   0   1   0                0   1
          0    1          1   0   0   0                1   0
          1    1          0   0   1   0                0   1
      
      y takes on four different values, but given that x=2, only two of those values are feasible next period, 0 or 2. The overall transition routine in DDP is smart enough to drop the columns of all zeros, but some computation can be reduced by trimming those columns inside Transit(). Thus, we can focus on the trimmed 2-column representation of the transition probability.

      Transit() returns the vector y' and the matrix Prob(y').

      It does this by returning an array of two elements. See the Ox doc entry for return. Note that the first column of Prob(y') is simply the value 1-i, and the second column is simply i.
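The column-trimming step can be sketched as follows (a Python illustration of the idea, not DDP's internal code); it reproduces the 2-column table above from the full 4-column one:

```python
def trim_zero_columns(values, probs):
    """Drop candidate next-period values whose probability is zero in
    every feasible-action row.

    values: list of candidate next-period values (one per column)
    probs:  one row per feasible action, one column per candidate value
    """
    keep = [j for j in range(len(values)) if any(row[j] > 0 for row in probs)]
    return ([values[j] for j in keep],
            [[row[j] for j in keep] for row in probs])

# The 4-column table from the text, evaluated at the state with x=2:
values = [0, 1, 2, 3]
probs = [[1, 0, 0, 0],   # i=0, j=0
         [0, 0, 1, 0],   # i=1, j=0
         [1, 0, 0, 0],   # i=0, j=1
         [0, 0, 1, 0]]   # i=1, j=1
print(trim_zero_columns(values, probs))
# ([0, 2], [[1, 0], [0, 1], [1, 0], [0, 1]])
```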

      Use the constructor to support Transit.
      How will Transit() know that x=2 currently? And how will it know that i is in the first column of Alpha::C not the second? This is where the object-oriented approach to building a dynamic programming model comes in handy compared to using vectors of values to represent the state vector. Transit() will know about x and i because it will be constructed to know them as the following code illustrates:
      Version 1. Source: niqlow/templates/DynamicPrograms/MyStateVar1.ox
      
      

      The constructor for MyStateVar requires the user to send the instance of some state variable class that corresponds to x.
      It is stored in a member occup so that Transit() points to the right variable. Because an argument and a member are both called occup, we refer to the member with that name using the this. operator. The constructor also asks for an instance of an action variable which corresponds to i, stored in work. Once the arguments are stored in members they are available for use by any method of the class, including Transit().

      State variables in a discrete DP must have a pre-specified number of values, and so it is with mystate.

      How many values can it take on? The answer is: as many as the variable occup can take on. This property is always stored in the N member, so the constructor can read that value from occup: MyStateVar inherits the number of different occupations directly from the argument occup. The constructor then passes that value through to the base constructor, as any class derived from StateVariable must. Since NonRandom is really a container class it does not have its own constructor, so StateVariable() is called directly.

      So if mystate->Transit() is called it will know about x and i, but how will it know about the current value they have?

      DDP stores the current value of each state added to the model in its v member, which was shown above. Thus the current value of x equals occup.v. In the code for Transit the expression 0~occup.v generates, for the state above, the row vector <0 2>. These are the only feasible values next period, the possible values of y'. So the current value of x is not accessed from a vector of numbers, which is how DP code written in FORTRAN might do it.
      Instead it is accessed from a public member of a variable stored within the MyStateVar class. It does not matter to MyStateVar whether occup has the label x. It does not matter that occup happens to be first in the state vector. It does matter, of course, that occup is indeed a state variable, and it has to know this before anything is done in the model.

      The second expression in the return statement is simply Ox code to copy in 1-i and i. However, these are column vectors and they get their values from Alpha::C.

      Which column of Alpha::C will i be in?

      The answer is: whichever column the work choice variable is stored in. Again, the constructor requires the argument work, which gets copied to the member with the same name. And DDP records the position of the action within the action matrix in pos. So whatever column it is, any instance of MyStateVar will know which one it is.

      How is Transit() used?

      Knowing the feasible values and their transition probabilities for a single state variable is not sufficient to solve a DP problem. The values of this state variable combine with the feasible values of all other state variables to determine the possible states next period given the current state. If the value function were stored as a multidimensional array (or multi-dimensional matrix, which is not possible in Ox) then this state's possible values would simply be inserted into the right index for V, as in V[x][y][z][a]. However, in Ox this would be inefficient, and without knowing ahead of time how many states will be added to the model it is not possible to write the code. (There is no way in Ox or C to write something like V[x]...[a] to allow a dynamic number of dimensions.)

      DDP is designed to avoid this problem by storing V as a one-dimensional vector regardless of the number of state variables. The state is stored as a vector (internally), and the size of a vector can be determined dynamically. Associated with the state vector is a vector of offsets or indices: the dot product of the state vector and the offsets gives the index of the state in the V vector. Each state variable has an offset which depends on the other state variables and the number of values they take on.

      DDP handles the offset. The Transit() function only has to return the list of feasible values and their probabilities. Actually, DDP keeps track of several offset vectors to index different aspects of the overall solution algorithm.
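The offset scheme is the standard trick for flattening a multidimensional index into a scalar one. A minimal Python sketch (the sizes of z and t below are made up for illustration; DDP's internal ordering and offset bookkeeping may differ):

```python
def make_offsets(sizes):
    """Right-to-left cumulative products: offsets[k] is the product of the
    sizes to the right of position k, so a row-major flat index is
    sum(state[k] * offsets[k])."""
    offsets = [1] * len(sizes)
    for k in range(len(sizes) - 2, -1, -1):
        offsets[k] = offsets[k + 1] * sizes[k + 1]
    return offsets

def flat_index(state, offsets):
    """Dot product of the state vector with the offset vector."""
    return sum(s * o for s, o in zip(state, offsets))

# Hypothetical sizes: x and y take 4 values (as in the text); assume z takes
# 5 and t takes 9 so the example state <2,0,3,8> is feasible.
sizes = [4, 4, 5, 9]
offsets = make_offsets(sizes)             # [180, 45, 9, 1]
print(flat_index([2, 0, 3, 8], offsets))  # 2*180 + 0*45 + 3*9 + 8 = 395
```

Adding a state variable just lengthens the two vectors; no code needs a hard-wired number of dimensions.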

    11. Further Considerations
      1. Ensure DDP keeps the actual values updated.
      2. When you write the code for your state variable you force the user (possibly yourself) to provide information required to code the transition for y. But you are relying on DDP to process the state variables in the model, for example by looping through all the feasible current values of x and y. Further, DDP must know that i is one of the actions and include a column for it in the feasible matrix. This is done by using methods specific to the DP class for adding elements of the model. Namely, the EndogenousStates and Actions methods. So we can now complete the code that would add a variable in the model that corresponds to y and which can reliably follow the transition y' = ix.

        Code Segment showing use of MyStateVar.
            static decl i, x, mystate;
            x = new StateVariable("x",4);
            i = new ActionVariable("i",2);
            mystate = new MyStateVar("y",x,i);
            Actions(i);
            EndogenousStates(x,mystate);
        

      3. Make the code robust
      4. To catch errors it is helpful to check the arguments sent to the constructor.

        Although most mistakes in passing arguments would generate errors once the code starts running, the error may not occur until much later than the call to MyStateVar(). Worse, since Ox is dynamically typed, and since it initializes static variables to 0, it is possible for incorrect information sent to the constructor to mimic valid information.

        Even when using the variable for yourself it is useful to check the inputs.

        If others will build on your code then checking arguments for them helps them develop code quickly. The key is that the arguments must be of the right type. The Ox function isclass() is very useful here, because it checks that an argument is derived from the correct base class (or the correct intermediate class).
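For illustration, here is the same fail-fast idea in Python, with isinstance playing the role of Ox's isclass(); the classes are bare stand-ins for this sketch, not DDP's:

```python
class StateVariable:   # stand-in for the Ox base class
    pass

class ActionVariable:  # stand-in for the Ox base class
    pass

class MyStateVar(StateVariable):
    def __init__(self, label, occup, work):
        # Fail at construction time rather than much later during solution:
        if not isinstance(occup, StateVariable):
            raise TypeError("occup must be derived from StateVariable")
        if not isinstance(work, ActionVariable):
            raise TypeError("work must be derived from ActionVariable")
        self.label, self.occup, self.work = label, occup, work

try:
    MyStateVar("y", 0, 0)   # 0 mimics Ox's default static initialization
except TypeError as e:
    print(e)                # occup must be derived from StateVariable
```

Without the check, the 0 arguments would silently masquerade as valid objects until some method tried to use them.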

        The transit code can be made a little better.

        In particular, under the interpretation given so far the x variable can take on the value 0. This might be code for not being associated with any occupation, which is fine. But suppose you want this variable to handle the case in which a person always has an occupation. Then y=0 is ambiguous: it could mean the person did not work last period, or that they worked in the occupation coded as 0. The problem is that state variables always take on the values 0 to N-1, at least when using the v member.

        You can resolve this by referring to the actual member of occup, not v.

        Then, if real occupations are not coded as 0 to N-1 but 1 to N (or any other set of values), the ambiguity in coding ix can be resolved. A value of 0 for this state variable indicates the person did not work last period and starts out with no occupation. If there is an occupation actually coded as 0, then this state variable will infer that not working does not reset occupation. That is, occupation is determined by something else (such as the kinds of jobs the person chooses to look for). Note that by default actual is the same as v, but if the state or action variable has its own Update() method it can set the actual codes associated with the internal values 0…N‾.
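The distinction between internal values and actual codes can be sketched as follows (a hypothetical Python stand-in for the v / actual / Update() members, not DDP's implementation):

```python
class Occupation:
    """Internal values are always 0..N-1; `actual` maps them to real codes."""
    def __init__(self, N):
        self.N = N
        self.v = 0                       # current internal value
        self.actual = list(range(N))     # default: actual is the same as v

    def update(self, codes):
        """An Update()-style hook attaching real-world codes (e.g. 1..N)."""
        self.actual = list(codes)

occ = Occupation(4)
occ.update([1, 2, 3, 4])  # occupations coded 1..4, so 0 can mean "not working"
occ.v = 2
print(occ.actual[occ.v])  # 3: the real occupation code, not the internal index
```

Code that refers to the actual member instead of v keeps the model's economic coding separate from the solver's internal 0..N-1 indexing.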
        Version 2. Source: niqlow/templates/DynamicPrograms/MyStateVar.ox
                
            

      5. Duplicates versus Pointers
      6. Ox does some very subtle things with memory. To understand it requires some understanding of the C programming language, including unions of structures. Quoting two parts (scope and classes) of the Ox documentation:
        Note that Ox assignment of arithmetic types and string type implies copying over the contents from the right-hand side to the left-hand side. Ox accesses an object through a reference to the object which is created using the new operator. An object is removed from memory using the delete operator (if there is no matching delete, the object will exist until the program terminates).

        These details are important as these lines of Ox code illustrate
        Source: niqlow/examples/classreference.ox
        
                
        Produces output Source: niqlow/examples/classreference.txt
                
                
        In the first three lines b=a and then changing b does not change the value of a. However, changing the member v does change the corresponding value of a. In the first three assignments, Ox clones the right hand side of the assignment. But when assigning a variable that is currently an instance of a class a clone is not made. Instead, a reference or pointer to the class is made. So accessing the member of the reference is equivalent to accessing the member of the assigned class.

        Returning to our example, this means that this.occup = occup; does not duplicate the state variable passed as an argument. It creates a reference to it.
        This is both powerful and a bit dangerous. It means that MyStateVar can mess up the x variable, which should be controlled by DDP. But it also means that as DDP manipulates x the other variable mystate is informed automatically, without DDP needing to know that y depends on the value of x.
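Python has the same copy-versus-reference behavior as Ox, so the distinction can be demonstrated directly:

```python
class Counter:
    """Minimal object with one data member, like the classes in the Ox example."""
    def __init__(self):
        self.v = 0

a = Counter()
b = a          # no clone: b is a reference to the same object as a
b.v = 5
print(a.v)     # 5 -- changing b.v changed a.v too

c = 3
d = c          # arithmetic types are copied on assignment
d = 7
print(c)       # 3 -- c is unaffected
```

This is exactly why this.occup = occup lets Transit() see every change DDP makes to the x variable without any extra synchronization.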

        Also note that the new operator allows separate instances of MyStateVar to be created, each passed different variables to track.
        Because occup and work are not declared as static members, each instance has its own space for these members, so different instances can hold references to different variables rather than pointers to the same space.

      7. Why matrices for actions but not states?
      8. It has been emphasized a few times that the user's code does not need to handle state variables in vectors. So why is it required to handle actions stored as a matrix? The issue is computational speed in an interpreted language like Ox or Matlab: nested loops induce a computational overhead that nested loops in compiled languages like C and FORTRAN do not.

        The innermost loop of nearly all discrete dynamic programming solution algorithms is a choice over finite options. Requiring the user-defined utility() and Transit() methods to handle the matrix of feasible actions means this inner loop can be handled with native Ox matrix routines. Avoiding a layer of nested loops can speed up code considerably in an interpreted language like Ox.

    12. State Blocks
      1. Accounting for Correlated Innovations
      2. If a state is not autonomous it is coevolving with at least one other state variable. State variables f and g are coevolving if for some \(f^\prime, g^\prime, \alpha \in \theta.A, \eta\) and \(\theta\), $$P( f^\prime, g^\prime | \alpha,\eta,\theta ) \neq P_f(f^\prime | \alpha,\eta,\theta) P_g(g^\prime | \alpha,\eta,\theta).$$ The term coevolving is used rather than correlated because the latter is ambiguous: some users may interpret correlated as an unconditional property of the variables. Typically | is not used in these notes, but here it emphasizes that f and g may be correlated if we do not observe or condition on current action and state values.

        Example of state block
        Height and weight of a child as they grow older within a model of the parent's actions.
        The child's age is a simple counter that is not only autonomous of all other states but is also deterministic. That is, its innovation (\(age^\prime - age = 1\)) depends on nothing else in the model.
        If the model tracks the child's weight (w) then it would obviously not be deterministic. Its transition may depend on age and actions such as meals cooked, activities paid for, etc. However, it might be reasonable to treat weight accumulation, \(w^\prime - w\), as distributed independently of other state variables' innovations.
        However, suppose the model includes not just weight but height (h) as well. It might be reasonable to treat h as evolving in some way separate from decisions or other factors other than age and current height (if growth of the children in the sample is not nutritionally constrained). But obviously weight gain is correlated with height gain.
        That is, MyModel might specify individual transitions of the form \(P_h(h^\prime|h,age)\, P_w(w^\prime|h^\prime-h,w,age)\). This formulation could be used to rule out, for example, a growth spurt (\(h^\prime > h\)) accompanied by significant weight loss (\(w^\prime < w\)). This requires a sequence in the realizations: h′ is realized first and then its value feeds into the transition probabilities for weight. In some ways this sequencing of the realizations makes the model simpler to specify.
        However, DDP has no mechanism to ensure this sequencing occurs properly for autonomous variables. That is, since h′ enters the transition of another variable it cannot be classified as autonomous even though its own innovation can be generated without reference to other innovations.
        To account for this correlation, the variables w and h must be specified as Coevolving and placed in a StateBlock which will determine their joint transition. The block might, for example, ensure that weight does not go down if height goes up. In effect, placing them in a state block specifies a transition of the form \(P_{w,h}\bigl(\,w^{\,\prime},h^{\,\prime}\,|\,h,w,age\,\bigr)\), which includes as a special case that h′-h affects the transition probabilities for weight.
        Alerted to the presence of a state block in the state vector, DDP processes the block's transition once to build up h′ and w′ simultaneously.

        Coevolving state variables must be placed in a StateBlock
        A state block is a type (derived class) of state variable, but it has additional data and methods to account for state variables that are not autonomous with respect to each other. State blocks are autonomous with respect to other variables and blocks. In the example, the state variables for weight and height would be placed in a state block, and the block would handle the correlation in the innovations. There is no method to allow nested blocks: only Coevolving states can be added to a block, and a StateBlock is not itself derived from the Coevolving class.

        Just like a single autonomous state variable, a state block can be an element of any of the state vectors, \(\epsilon\), \(\eta\), \(\theta\) and \(\gamma\). For example, serially uncorrelated but contemporaneously correlated wage offers would be handled as an exogenous state block. If accepted wages do not enter the state next period then the block could be fully exogenous. Otherwise it would be placed in the semi-exogenous class.

        Note that the Clock is a StateBlock, so its transitions cannot depend on states outside the block. The user can derive their own class of Clock that includes coevolving states. State variables added to the clock must be (derived from) TimeVariable, a 'container' class derived from Coevolving.

        The worst case
        The setup can handle a general transition. The worst case is that all innovations are correlated and the endogenous vector would consist of a single state block containing all the state variables. Thus a user of DDP can code a completely general multivariate process.

        This 'black box' state must be derived from Clock if it is to handle everything. The advantage of allowing for autonomous state variables, rather than making everything a state block, is the ability to define standard kinds of variables that can be mixed and matched in different models with little to no programming required.

      3. How to Create a State Block
      4. These instructions follow the ones provided for creating a StateVariable. They are simpler and more to the point, focussing on the distinct features of state blocks. Most of the additional considerations apply here as well but are not discussed again.

          Steps
        1. Duplicate the State Block Template File
        2. Pick a name for your new kind of StateBlock.
          It cannot have spaces or symbols except _. These instructions will refer to «BlockName» for whatever name you chose.

          Copy niqlow/templates/DynamicPrograms/StateBlock.ox to a file called «BlockName».ox.
          Contents of Source: niqlow/templates/DynamicPrograms/StateBlock.ox
          
          

          Decide how you want to include your code: see include or import.

        3. Choose a base class to derive your block from.
        4. There are fewer predefined blocks than autonomous variables. So it is likely that StateBlock will be the base for your class.

        5. Declare needed elements (members and methods).
        6. Besides other information needed to compute the transition (see the StateVariable instructions), the state variables to be tracked in the block should be declared as data.
          For example,
                  struct Vitals : StateBlock {
                      decl wght, hght;
                      Vitals();
                      Transit();
                      Update();
                    GrowthProb();
                      }
          

          As with StateVariable, there is one required method and two optional methods for each new state block.
          The required method is the constructor. The optional methods, Transit() and Update(), compute the transition and update the actual discrete values associated with the state variables.

          We have listed another method GrowthProb(), which will be presumed to return a vector of transition probabilities in a format described below.

        7. Code the constructor method
        8. In DDP your block has to call the constructor of the base class, define the component variables and add them to the block using AddToBlock().
          The state variables could exist outside the block but they should not be added to the DP model separately.
                  Vitals::Vitals() {
                      StateBlock("vitals");
                      wght = new Coevolving("w",10);
                      hght = new Coevolving("h",6);
                      AddToBlock(wght,hght);
                      }
          
          It would be very bad practice to hard code the constants 10 and 6 in the definition of the class.
          Those kinds of dimensions should be decided by the user who includes the block in their model. It is done in the example to abstract away from passing information to the constructor since this is the same as with a StateVariable.

        9. Code the Transit() method
        10. The major difference between a StateVariable and a StateBlock: a state variable's Transit() returns a single row vector of "prime" states, but a block must return a matrix of feasible "prime" states; each row is the state of the corresponding member of the block, in the order they were added to the block. The transition probabilities still pertain to each column of the matrix, and the rows of the transition probabilities correspond to the rows of the feasible actions \(\theta\).A.

          Transit() can access the feasible actions \(\alpha\) at the current state in Alpha::C.

          Your code has to return two items, sent back in an Ox array.
          The first is a matrix of feasible values, F: the values this block could take on next period. Each row corresponds to one of the variables in the block; each column is a different outcome. This is what allows for correlated transitions within a block. The other output is a matrix of transition probabilities, P. The rows of P must match the rows of Alpha::C; the columns must match the columns of F. The ij element of P is the probability that the state block takes on the vector of values in column j of F given the agent chooses the ith action in the feasible set Alpha::C.

          As a simple example:
          Suppose that height increases by one value or stays the same. Weight can go up or down or stay the same. However, weight cannot go down if height goes up (the correlated innovation). Further, assume that the transitions do not depend on actions. In this case, the probabilities are duplicated using reshape to get the right dimensions:
                  Vitals::Transit() {
                      decl F, P;
                      F =   wght.v-1 ~ wght.v ~ wght.v+1 ~ wght.v   ~ wght.v+1
                          | hght.v   ~ hght.v ~ hght.v   ~ hght.v+1 ~ hght.v+1;
                      P = GrowthProb();
                      return { F , reshape(P,Alpha::N,columns(F)) };
                      }
          
          So, if weight and height were currently at levels 2 and 3, respectively, then F would be:
                        1 2 3 2 3
                        3 3 3 4 4;
                      
          Notice that this transition matrix imposes the coevolving condition that weight cannot fall when height increases. There is no way to achieve this with two autonomous variables because their innovations are statistically independent.

          This code assumes that GrowthProb() will return a 1×5 vector of probabilities, corresponding to the five possible outcomes in F.

          The user would have to write that function and may have to pass parameters or other information to the constructor. We are skipping those issues because they are discussed in the StateVariable case. reshape() then simply duplicates the probability row for each feasible action, since the weight and height transitions do not depend on current actions.

          Also, note that the code above is too simple to work properly.

          It ignores the fact that there are minimum and maximum values of the discrete states: wght.v-1 or hght.v+1 will be incorrect when the current values are near the bounds, and this will eventually cause an error. The formula for F works fine as long as both variables are away from their boundaries, and it illustrates the key issue here of what a block transition looks like.
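One way to make the transit robust, sketched in Python (a hypothetical helper, not part of niqlow): clip candidate values to the grids and merge duplicate columns. (A complete fix would also sum the probabilities of merged columns; that bookkeeping is omitted here.)

```python
def vitals_transit(w, h, W, H):
    """Candidate next-period (weight, height) columns for the Vitals block,
    clipped to the grids 0..W-1 and 0..H-1, with duplicate columns merged."""
    candidates = [(w - 1, h), (w, h), (w + 1, h), (w, h + 1), (w + 1, h + 1)]
    clipped = [(min(max(wp, 0), W - 1), min(max(hp, 0), H - 1))
               for wp, hp in candidates]
    merged = []                    # keep the first occurrence of each column
    for col in clipped:
        if col not in merged:
            merged.append(col)
    return merged

print(vitals_transit(2, 3, 10, 6))  # interior point: all 5 columns survive
print(vitals_transit(0, 5, 10, 6))  # at the bounds: duplicates collapse
```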

          The user has the option to provide a Update() function for the block.

          This should update a matrix of actual values that correspond to the .v values of each variable. If there are M variables in the block, each taking on \(a_m.N\) values, then the actual matrix will be \(M \times \prod_m a_m.N\).

        11. Add the block to the DP model:
        12.         decl stats;
                  ⋮
          

          SemiExogenousStates(stats = new Vitals());

          In this case the vitals process does not depend on other state variables, but because it may influence the transitions of other state variables the block is added as semi-exogenous rather than fully exogenous.

      5. Accessing State Block Values
      6. Since a StateBlock is a type of StateVariable it has a .v data member. DDP ensures that it always contains the vector of values of the state variables, in the same order as they were sent to AddToBlock. And this means that CV() will return the vector of values for a block just as it returns .v for a scalar state. In addition, AV() will return the vector of actual values that correspond to the indices in .v. That is, it will pull out of the actual matrix the right value for each of the M variables at their current values.
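A hypothetical Python stand-in for this behavior (simplified: one actual list per member rather than DDP's single actual matrix; the names CV/AV mirror the functions described above):

```python
class Block:
    """Sketch of a block: .v is the vector of member values, and AV-style
    lookup maps each member's internal value to its actual code."""
    def __init__(self, members, actuals):
        self.members = members        # member names, for labeling only
        self.v = [0] * len(members)   # current internal values
        self.actuals = actuals        # one list of actual codes per member

    def CV(self):
        """Current internal values, in the order the members were added."""
        return list(self.v)

    def AV(self):
        """Actual codes corresponding to the current internal values."""
        return [self.actuals[m][val] for m, val in enumerate(self.v)]

vitals = Block(["w", "h"], [[50, 55, 60], [100, 110, 120]])
vitals.v = [2, 1]
print(vitals.CV())   # [2, 1]
print(vitals.AV())   # [60, 110]
```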

    13. The Transition for states and state vectors
    14. Terminology
      Recall that generic elements of the vectors are denoted with corresponding Roman letters (h2 is an element of \(\eta\)). DDP keeps track of the individual state variables inside a block as well as the block itself, so elements of a block can still be denoted generically. The difference is that the block handles the transition of all of its members. So in defining the transition of the state vectors, the generic elements are either autonomous state variables or a state block; from the point of view of MyModel each generic element is a separate state variable.

      The overall state transition is the product of the separate vector transitions:

      $$P(\zeta^{\,\prime},\epsilon^{\,\prime},\eta^{\,\prime},\theta^{\,\prime},\gamma^{\,\prime}\ |\ \alpha,\zeta,\epsilon,\eta,\theta,\gamma\ ) = f_\zeta(\zeta^{\,\prime}) \times P_\epsilon(\epsilon^{\,\prime}) \times P_\eta(\eta^{\,\prime}) \times P_\theta(\theta^{\,\prime}\ |\ \alpha,\eta,\theta) \times I\left\{\gamma^{\,\prime}=\gamma\right\}.$$

      The restricted natures of the different vectors are displayed by excluding them from other transitions. In particular, both \(\zeta\) and \(\epsilon\) are excluded from all other transitions. The continuity of the \(\zeta\)'s distribution is illustrated by using \(f_\zeta()\) instead of \(P()\) for its transition. The semi-exogenous nature of \(\eta\) is shown by its own IID transition and the fact that it is not excluded from the transition of \(\theta\): the current value of \(\eta\) can have a direct effect on the transition of endogenous states, but its own transition depends on nothing. Finally, on the other side of the fully endogenous states \(\theta\) is the transition of the grouping vector \(\gamma\). Since it is fixed during a given program its transition is an indicator for keeping the same value next period. The realized values of all the state variables do affect the transition of \(\theta^{\,\prime}\) but, except for \(\theta\) and \(\eta\), only indirectly through the agent's optimal choice of \(\alpha\) conditional on the full realized state. We could illustrate this above by writing \(\alpha\) as \(\alpha(\zeta,\epsilon,\eta,\theta,\gamma)\).

      In turn, each state vector's transition is the product of the individual elements (either block or autonomous):

      $$\eqalign{ P_\epsilon \left(\epsilon^{\,\prime}\right) &= \prod_{k=0,\dots,\epsilon.N^{-}} P_{e_k}(e_k^{\,\prime})\cr P_\eta \left(\eta^{\,\prime}\right) &= \prod_{k=0,\dots,\eta.N^{-}} P_{h_k} (h_k^{\,\prime})\cr P_\theta \left(\theta^{\,\prime}|\alpha,\eta,\theta\right) &= \prod_{k=0,\dots,\theta.N^{-}} P_{q_k}(q_k^{\,\prime}\,|\,\alpha,\eta,\theta).\cr}$$

      The group variables are constant within a single solution; across solutions they are distributed as the product of the individual marginal distributions:
      $$P_\gamma (\gamma) = \prod_{k=0,\dots,\gamma.N^{-}} P_{g_k}(g_k).$$
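The product form of the IID transitions can be checked numerically. A small Python sketch with two made-up exogenous variables, each with its own marginal transition row:

```python
from itertools import product

def joint_from_marginals(marginals):
    """Joint transition over a vector of independent elements: the probability
    of each combination of values is the product of the element probabilities."""
    values = list(product(*[range(len(m)) for m in marginals]))
    probs = []
    for combo in values:
        p = 1.0
        for m, val in zip(marginals, combo):
            p *= m[val]
        probs.append(p)
    return values, probs

# Two hypothetical binary exogenous variables with marginal transitions:
e0 = [0.5, 0.5]
e1 = [0.2, 0.8]
vals, probs = joint_from_marginals([e0, e1])
print(probs)   # [0.1, 0.4, 0.1, 0.4]
```

The joint probabilities sum to one, and any dependence between elements (as in a state block) would break this simple product structure.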


  5. The Clock: Time-Keeping StateBlock ∈ θ.
    1. Overview
    2. Time is a key concept in dynamic programming. At least the difference between now (today) and later (tomorrow) underlies Bellman's equation. In a stationary environment that is all that matters while solving the model. In a non-stationary world the clock takes on more values. The primary assumption is that time progresses: \(t^{\,\prime} \ge t\). This allows the value function to be solved backwards in time, saving storage and computation compared to treating the environment as stationary. (That is, with stationarity a fixed point over all states must be solved at once, but with non-stationarity only a subset of the state space needs to be considered at each stage as we work backwards.)

      So the two simple extremes are stationarity (today is followed by tomorrow, which is the same as today) and normal aging: t′ = t+1 until t=T‾. However, timing can be more complicated than those two cases. One example is an agent facing a finite horizon problem with the possibility of early death.

      The Clock Block is a single StateBlock that is always in the endogenous vector \(\theta\), and is always the rightmost element of it, in the sense that all tasks that span the endogenous state space loop over time in their outermost loop.

    3. Setting the Clock
    4. \(\theta\) always contains a single clock block derived from Clock.

      The simplest way to set the clock is to call SetClock().
      The first argument is either one of the ClockTypes tags for built-in clocks, or it is an object of a class derived from Clock.
      If a tag is sent, other arguments may be required by that clock type.

      The call to SetClock() must take place between the calls to Initialize() and CreateSpaces()
      Example
      Initialize(new MyModel());
      SetClock(InfiniteHorizon);
      CreateSpaces();
      
      If MyModel does not set the clock explicitly, then a stationary infinite horizon clock is set by CreateSpaces().

      All clock blocks have the same first two variables in the block
      The first co-evolving state variable in the clock is t, a state variable that is weakly monotonic:
      t′ ≥ t
          
      With anticipation (foresight), \(V(\theta)\) can and should be solved backwards in t whenever time matters in the model beyond just today and tomorrow in an infinite horizon.

      The second co-evolving variable in the clock block, t″, tracks feasible values of t next period during model solution.

      DDP uses t″ to avoid storing the full \(V(\theta)\) while iterating. The user typically does nothing with t″.

      For example:
      With a RandomMortality clock described below, the next time may be either t+1 or T-1 if death occurs.

      The value function for those two times must be available while computing the value at time t. However, no other time periods need to be stored, so coding the t process and the t″ process separately conserves memory in complex environments. Because it plays no direct role in the mathematics (as opposed to the computations), t″ is never listed as a member of \(\theta\), but it will be seen in output with the value 0. In more complex environments the clock may include other state variables whose values coevolve with t and t″.
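The memory-saving idea behind t″ can be sketched with a toy backward induction under random mortality (hypothetical Python; the names reward, mort and beta are made up for this sketch, not DDP's API):

```python
def solve_with_mortality(T, reward, mort, beta=0.95):
    """Backward induction that keeps only the V slices t'' can point to:
    the next period t+1 and the terminal period T-1 (death), never a full
    time-indexed value function."""
    V_terminal = reward(T - 1)          # V at T-1, kept throughout
    V_next = V_terminal                 # V at t+1, overwritten each step
    for t in range(T - 2, -1, -1):
        # expected continuation: survive to t+1, or jump to T-1 on death
        EV = (1 - mort) * V_next + mort * V_terminal
        V_next = reward(t) + beta * EV  # becomes the new "t+1" slice
    return V_next                       # V at t = 0

print(solve_with_mortality(T=3, reward=lambda t: 1.0, mort=0.1))
```

Only two value slices are alive at any moment, which is exactly the saving that tracking t″ separately from t delivers.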

    5. Current time and the decision horizon
    6. The clock block is available as counter, but usually MyModel does not need to refer to it directly.
      The current value of t is available to MyModel as I::t. That is,
      I::t ≡ counter.t.v
      
      Since t is in the index class I, MyModel can use the identifier t for its own purposes.
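      As a sketch of how the current clock value can be used (the coefficient b and action variable a here are hypothetical members of MyModel):

      MyModel::Utility() {
          // utility declines with the current clock value I::t
          return CV(a)*(1.0 - b*I::t);
          }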

      The decision horizon, counter.t.N, also denoted T, is the number of values that the time variable t takes on.
      The horizon of the model is
              T ≡ t.N
      
      Because it is crucial to the solution method, this is stored as the model property N::T.
      When T is finite,
              N::T  ≡  T  =  counter.t.N.
      When T = ∞,
              N::T  ≡  1.
      
      DDP distinguishes between a static program (finite horizon and T = N::T = 1) and a stationary environment (T = ∞ and N::T = 1) by checking the class of counter.

    7. Kinds of Clocks
    8. Also see ClockTypes

      InfiniteHorizon: t″ = t = 0 = T‾.
      In the infinite horizon case Bellman's equation must be iterated on from initial conditions until it converges. The algorithms know when today (t=0) is being accessed and when tomorrow (t′) is being accessed, so the code for MyModel only has to deal with today and the transitions of state variables.

      Ergodic
      The user can set the clock to be ergodic, which means that there are no absorbing or terminal states in the state space \(\Theta\). When the clock is Ergodic, DDP will compute the ergodic or stationary distribution across states, \(P_\infty(\theta)\). If the user's state transitions are not themselves stationary then this calculation may fail.

      SubPeriods
      In some models a sequence of decisions is made within a single period. The SubPeriods tag creates a Divided clock.

      What do subperiods mean?
      Usually each state variable transits in only one subperiod (which may differ across state variables). This can be handled very easily by sending a base state variable to the SubState() augmenting class along with the subperiod in which the variable transits; in all other subperiods the variable is frozen. Each action variable is usually only changeable in one subperiod, which is handled by providing a replacement for FeasibleActions().

      The discount factor \(\delta\) is set to 1.0 for s < S-1. That is, all payoffs within the subperiods of a single period occur simultaneously. The discount factor takes on its normal value for s = S-1 to capture the gap between major periods.

      Some implications
      Because transitions all depend on the value of the subperiod s, all state variables must be endogenous (added to \(\theta\)). This is checked in CreateSpaces(). The user should augment state variables using SubState().
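      A minimal sketch of a divided-clock build phase; the argument order of SetClock(SubPeriods,...) and of the SubState() wrapper are assumptions here, so consult the Divided and SubState documentation:

      Initialize(new MyModel());
      SetClock(SubPeriods,10,3);              // assumed: 10 major periods, 3 subperiods each
      // hypothetical base variable x transits only in subperiod 1; it is frozen otherwise
      EndogenousStates( new SubState(x,1) );
      CreateSpaces();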

      NormalAging: t′ = t+1, up to T‾; t″=0.
      With ordinary aging, Bellman's equation is solved backwards starting at t=T‾ down to 0. The auxiliary variable t″ is not needed to account for deviations from normal time, so it is simply always 0. A special case is a non-dynamic environment, StaticProgram, with T‾=0.

      DDP knows that an infinite horizon model is different from a static program: in the static case it does not iterate on V() until convergence. Since StaticProgram is a tag associated with the class StaticP, which is derived from the class Aging, DDP cannot confuse it with a Stationary environment.

      RandomMortality: the agent either ages normally or dies before the start of the next period
      Random mortality means that there are two possible values of (t′, t″) next period:
      \((t^{\,\prime},t^{\,\prime\prime}) = (T^{-},1)\) with prob. \(\pi(\alpha,\theta)\)
      \((t^{\,\prime},t^{\,\prime\prime}) = (t+1,0)\) with prob. \(1-\pi(\alpha,\theta)\)
      With premature mortality Bellman's equation is solved backwards but the final period is also tracked at each t as a potential state next period. The use of the auxiliary state variable t″ now becomes important computationally.

      While iterating, DDP does not store the value function for all t, only the final period and the next one. So when indexing these values it uses t″ rather than t′. It ensures that, as t is decremented, the just-solved values are placed where t″ = 0 will reach them. This means that t″=0 is typically associated with "ordinary" time evolution while other values are perturbations such as premature death of the agent. The mortality probability \(\pi()\) can be constant or depend on the current state and current actions.
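      A hedged sketch of setting this clock (the argument list after the tag is an assumption; MyModel::Mortality is a hypothetical static method returning π):

      Initialize(new MyModel());
      SetClock(RandomMortality,40,MyModel::Mortality);  // assumed: 40 periods, π() given by Mortality()
      CreateSpaces();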

      RandomAging: the agent spends a random amount of time in each age bracket

      UncertainLongevity:
      Many papers in the literature assume normal aging or random mortality with some long but finite maximum lifetime (say, age 100). Often the last part of the lifecycle is included, with little decision making, only to generate reasonable continuation values for earlier ages. For \(\delta\) not too close to 1 the cap on ages does not much affect choices earlier in life.

      Another, perhaps more elegant, approach is to treat the lifetime itself as uncertain. t=T‾ is still the case of death, which is still random and occurs with probability π() as above, and t=T‾ is a terminal state. But t=T‾-1 is now a stationary problem: once t=T‾-1, today and tomorrow are the same. DDP iterates on the value function at t=T‾-1 as if it were a (non-ergodic) stationary problem, continuing until convergence, and then proceeds backwards as with mortality. The advantage of this approach is that there is a single choice probability for this final phase (conditional on other state variables) rather than computing and storing slightly different choice probabilities as t approaches T‾.
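      A hedged sketch (the arguments after the tag are assumptions, following the RandomMortality pattern with a constant π):

      SetClock(UncertainLongevity,40,0.05);   // assumed: 40 periods, constant π = 0.05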

      The Longevity clock combines a special case of the more general RandomAging notion (which uses AgeBrackets for the clock) with random mortality. But since it is not a special case of either one, it is derived as a third class from NonStationary.

      SocialExperiment: Phased treatment and random assignment
      In this environment the agent believes they are in a stationary problem and acts accordingly. However, they are unexpectedly placed in a temporary experimental situation in which their utility and possibly state transitions have changed. They again act accordingly but they know that eventually they will return to the original environment, which acts as the terminal values for the experiment. There are three possible values of t″ during treatment.

      RegimeChange
      Like a SocialExperiment except the unexpected environment lasts forever.

    9. Interacting With Value Function Iteration
    10. Clocks have three virtual methods associated with them which are called by ValueIteration and related solution methods.

      Vupdate() makes sure that the scratch space for the value function is updated after each iteration of Bellman's equation. In stationary models (or stationary time periods within a non-stationary model) this method also computes the norm of the difference between the current value function and the previous one. This norm is checked by the solution method against the given tolerance for convergence.

      Synch() is called any time the value of the clock changes (that is, it is called in SyncStates()). The default method simply places the current value in t so that it is available to the user's code and other parts of DDP. Some kinds of clocks do more than this.

      setPstar() determines whether the next iteration should calculate choice probabilities. If only one iteration is required to compute the value function at this point in the clock (no fixed point problem), then the clock returns TRUE. This does not itself check for convergence, and other considerations may also set setPstar to TRUE.

      Typically the user does nothing with these methods unless they are creating their own solution method. If they create their own clock type they may have to provide replacement methods when the inherited ones are not correct.


  6. Time Invariants
    1. Time invariants index different DP models (elements of \(\gamma\)). Time invariants are variables that take on different values across agents but are fixed for a given agent.

    2. Overview
    3. The basic DP model concerns a single agent in a single environment. However, many applications involve related DP problems that differ in parameter values, which in turn alter the primitives \(U()\), \(P()\), \(\delta\). The state vector \(\gamma\) holds variables that are fixed for an agent but differ across agents.

      As with other state vectors, anything that can go in \(\gamma\) could be placed in \(\theta\). However, since these states do not vary over time it is inefficient to include invariant states in \(\Theta\).

      Instead, DDP reuses \(\Theta\) for each value of \(\gamma\) and stores only a minimal amount of information for previously solved models. Only state variables derived from the TimeInvariant class can be added to \(\gamma\). Invariants do not have a transition.

    4. Fixed and Random Effects
    5. DDP distinguishes between two kinds of invariants. Each is either a FixedEffect or a RandomEffect. This distinction plays the same role as in panel models. Fixed effects typically correspond to constant observed variables, such as gender. So if the model is to be solved separately for men and women, with different primitives, then a binary FixedEffect would be added to \(\gamma\).

      On the other hand, a RandomEffect is designed to account for unobserved variation in the underlying problem. An example would be a model that allows agents to have different (unobserved and permanent) skill levels. A single skill variable taking on discrete values could be added to \(\gamma\). In estimation DDP will sum over the distribution of skills.
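      A sketch of adding both kinds of invariants during the build phase (the labels and sizes here are illustrative):

      g = new FixedEffect("gender",2);    // solved separately for each observed group
      k = new RandomEffect("skill",3);    // 3 unobserved skill levels, summed over in estimation
      GroupVariables(g,k);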

      If a TimeInvariant class has a Transit() function defined, it is never called, because the DP problem presumes the value will never change. A fixed effect has no distribution, but a random effect does: Distribution().

      The distribution is used to integrate out the random effect after solving for behavior conditional on each value. The distribution can depend on the value of the fixed effects; for example, the distribution of skills can depend on gender. Distribution() is called once for each value of the fixed effects, and this updates pdf(), a vector of probabilities or weights placed on the values of the random effect.

      Correlated random effects are created by adding a RandomEffectBlock to \(\gamma\).

    6. The Group Space
    7. Each combination of random and fixed effects implies a value of \(\gamma\), and for each \(\gamma\) a Group node is created. The set of all group nodes is the group space, denoted \(\Gamma\).

      As with allowing state variables to be declared exogenous, moving time invariant variables from \(\theta\) to \(\gamma\) saves time and especially storage while solving the model. An invariant does not need to be tracked during iteration over \(\Theta\). So group variable Transit() methods are not called for each iteration over t and t″.

      Storage is re-used while solving for different values of \(\gamma\): DDP reuses the state space \(\Theta\) for each value of \(\gamma\). Choice probabilities \(P^\star(\alpha | \epsilon, \eta, \theta, \gamma )\) are stored separately for each value of \(\gamma\). \(EV(\theta)\) integrates out the random effects, conditional on the value of the fixed effects in \(\gamma\). Both utility and transitions can depend on the value of fixed effects; however, only utility can depend on random effects.

      Finite Mixture Heterogeneity
      FiveO includes options for finite-mixture objectives. This can be used seamlessly to estimate parameters of a DDP that vary across groups.

  7. Auxiliary Values
  8. An auxiliary value \(x\) is typically a function of the state and action vectors that would be observed in the data or is of interest in itself. It is based on a class derived from AuxiliaryValue and is added to the list \(\chi\).

    Elements of \(\chi\) are user-defined auxiliary variables, sometimes referred to as payoff-relevant variables in the DP literature. Auxiliary variables are functions of current states and actions (and not functions of past or future outcomes), so \(\chi\) adds no additional information to the full outcome. That is, \(\chi\) is redundant within Y*, whereas the other actions and states each contain information. Auxiliary variables are involved in partial observability of the realized DP.

    Auxiliary values are added to the outcome (appended to \(\chi\)) using AuxiliaryOutcomes(). The value of the variable is set in the method Realize(), which replaces the virtual Realize().

    Realize() sets the value of v given the current state and outcome. This value is then added to the realization. Auxiliary variables are never referenced nor realized by niqlow while solving the DP problem, because they play no role in the solution. They are realized only when simulating the solved model or when matching the model to external data. The user might realize values in the process of computing utility.

    The user may break Utility() up into functions that help determine it. These can be static methods of MyModel. Some of these functions may be observed in data and might enter econometric estimation. StaticAux is an auxiliary value class that can be used as a wrapper for such functions, as shown below.
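    For example, a hedged sketch in which a component of utility is also observed in data (Earn() and the state variable e are hypothetical):

    MyModel::Earn() { return exp( AV(e) ); }   // hypothetical earnings function used inside Utility()
        ⋮
    AuxiliaryOutcomes( new StaticAux("earnings",MyModel::Earn) );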

    Auxiliary values are also used to code indicators and interaction variables in data sets. For example, if \(a\) is an action to choose among 5 options the data may include four variables that are indicators for all but one of the options being observed, \(y_k = I\{a=k\}\). Instead of having to create a set of binary action variables to match this data the user can create a set of Indicators, each of which is an auxiliary outcome that will match whether \(a=k\) or not (as well as the probability of that occurring in a prediction data set). The Indicators() method can be used to generate all four indicators at once. Interactions and MultiInteractions between discrete actions and states can be created, as can interactions between an auxiliary variable and a discrete quantity.
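    A hedged sketch of matching indicator columns in data (exactly what Indicators() returns and how it is passed along is an assumption here):

    a = new ActionVariable("a",5);
    Actions(a);
    AuxiliaryOutcomes( Indicators(a) );   // auxiliary outcomes y_k = I{a=k} for a's values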

  9. Conditional Continuous Choices NEW and experimental
  10. A model may have a utility of the form: $$U(\alpha,\theta) = u(x^\star;\alpha,\theta)$$ where $$x^\star = \arg\max_{x} c(x;\alpha,\theta).$$ That is, the agent makes a continuous choice over \(x\) conditional on the state and the discrete action \(\alpha\). This conditional continuous choice (

    CondContChoice
    ) then determines the indirect utility of the discrete actions. One special case would be that \(u() = c(x^\star;\alpha,\theta)\): that is, the utility is simply the indirect utility from the continuous choice. Another possibility is that \(x^\star\) directly affects the transition, as in \(P(\theta^{\,\prime};x^\star,\alpha,\theta)\). Currently this is not supported, but the user may be able to use the features below to help implement it.

    A continuous choice is related to AuxiliaryValues, in the sense that you may match \(x^\star\) to observed data. If the continuous choice has a closed form then it can be represented as a static function and included in observations using the StaticAux wrapper. If the optimal choice requires sequential optimization then the CondContChoice objective can be used to carry this out efficiently. This allows a single objective to represent each conditional choice and minimizes the amount of additional storage required.

        struct Cost : CondContChoice {
            Cost(x);
            vfunc();
            }
        Cost::Cost(x) {
            CondContChoice("mC",x);
            }
        Cost::vfunc() {
            v = -exp( x );
            }
    

    struct MyModel : Bellman {
        static decl x, mCost;
        ⋮
        }
    ⋮
    MyModel::MyModel() {
        ⋮
        }
    ⋮
    x = new Positive("x",2.0);
    mCost = new CondContChoice("mC",x);
    mCost -> Algor( new BFGS(mCost) );

  11. Volume (getting output specific to a variable)
  12. All Quantity based objects have their own Volume member.
    By default the Volume is set to SILENT.

    If you set the volume to one of the other NoiseLevels then:
    Some internal routines will print out information specific to the variable. The output goes to the Variables log file.
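    For example, to log detailed output for a single state variable (LOUD is one of the NoiseLevels tags):

    exper = new ActionCounter("Yrs Experience",10,work);
    exper -> SetVolume(LOUD);
    EndogenousStates(exper);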

Author:
© 2011-2023 Christopher Ferrall

Documentation of Items Defined in Variables.ox  ⇧ 

 Global functions, enumerations

Functions
 StripZeros Remove all transitions for which the transition probability is EXACTLY 0.
Enumerations
 TransOutput .

 Absorbing : Random : Autonomous : StateVariable : Discrete : Quantity

A binary variable to code an absorbing state. The transition from 0 to 1 happens with probability fPi, and the transition from 1 to 1 happens with probability 1.
See also:
PermanentChoice
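Example:
A hedged sketch (the constructor's argument list is an assumption based on the fPi field):
  dead = new Absorbing("dead",0.1);   // moves from 0 to 1 with probability 0.1, then stays at 1
  EndogenousStates(dead);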
Public fields
 fPi const
 p
Public methods
 Absorbing Create a binary endogenous absorbing state.
 Transit
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Accumulator : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Add up the values of the target action or state up to a maximum.
Transition:
s′ = min( s + s.a, s.N-1 ).
P(s′=z | α,ε,θ,ψ) = I{z = min(s + s.a, s.N-1) }.
Public fields
 Target const Variable to track
Public methods
 Transit virtual
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 ActionAccumulator : Accumulator : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Add up the values of a target action up to a maximum.
Transition:
Example:
decl tothrs = new ActionAccumulator("TotHrs",work,10);  //track work up to 10 hours
EndogenousStates(tothrs);
Public fields
 x
 y
Public methods
 ActionAccumulator Create a variable that counts how many times an action has taken on certain values.
 Transit virtual .
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Accumulator:
Target
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 ActionCounter : Counter : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Track the number of periods in which specified value(s) of the target action variable have occurred.

Transition:

$$s' = s+I\{ CV(x) \in \tau\}I\{s \lt N-1\}.$$

Example:
decl exper = new ActionCounter("Yrs Experience",10,work); //track up to 10 years working
EndogenousStates(exper);
Public fields
 inc
Public methods
 ActionCounter Create a variable that counts how many times an action has taken on certain values.
Inherited methods from Counter:
Counter, IsReachable
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Counter:
Reset, Target, ToTrack
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 ActionCounterMaster : ActionCounter : Counter : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Public fields
 CVSList
Public methods
 ActionCounterMaster Create a ActionCounter that checks reachability of an array of state counters.
 IsReachable Trims the state space if the clock exhibits aging and there is a list of state counters for a Target, assuming all are initialized as 0.
Inherited methods from ActionCounter:
ActionCounter
Inherited methods from Counter:
Counter
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from ActionCounter:
inc
Inherited fields from Counter:
Reset, Target, ToTrack
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 ActionTracker : Tracker : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Indicates an action variable took on a value last period.
q' = I{a ∈ r}.
See also:
StateTracker
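Example:
A hedged sketch (argument order assumed: label, target action, value(s) to track):
  dlast = new ActionTracker("dlast",d,1);   // equals 1 if d=1 was chosen last period
  EndogenousStates(dlast);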
Public fields
 d
Public methods
 ActionTracker Create a binary variable that indicates if the previous value of action variable was in a set.
 Transit virtual .
Inherited methods from Tracker:
IsReachable, Tracker
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Tracker:
Target, ToTrack
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 ActionTriggered : Triggered : Augmented : StateVariable : Discrete : Quantity

A state variable that augments a base transition so that the value of an action variable triggers this state to transit to a value.
Let
q denote this state variable.
b be the base state variable to be augmented (not added to the model itself)
a be the action variable that is the trigger
t be the value of a that pulls the trigger.
r be the value to go to when triggered.

$$P(q^\prime =z ) = (1-I\{a\in t\}) P( b^\prime=z ) + I\{a \in t\}I\{z=r\}$$

Example:
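A hedged sketch (the constructor's argument order — base, action, trigger value, value to jump to — is an assumption; the names are hypothetical):
  // years of experience reset to 0 when the agent quits (q=1)
  ex = new ActionTriggered(new ActionCounter("ex",10,work),q,1,0);
  EndogenousStates(ex);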
Public methods
 ActionTriggered Create a new augmented state variable for which an action triggers the change.
 Transit virtual
Inherited methods from Triggered:
Triggered, Update
Inherited methods from Augmented:
Augmented, IsReachable, SetActual, Synch
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 ActionVariable : Discrete : Quantity

An element of the action vector \(\alpha\).
Example:
MyModel : DPparent {
    static decl a;
    ⋮
    }
⋮
MyModel::Initialize() {
    DPparent::Initialize(new MyModel());
    ⋮
    a = new ActionVariable("choice",2);
    Actions(a);
    ⋮
    CreateSpaces();
    }
Public fields
 vL vector of value labels.
Public methods
 ActionVariable Create a new action variable.
 myAV virtual Return A[][pos].
 myCV virtual Return C[][pos].
 myEV Return aC[][pos], current external value.
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Track, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 AgeBrackets : NonDeterministicAging : NonStationary : Clock : StateBlock : StateVariable : Discrete : Quantity

Aging within brackets.

Time increments randomly, with probability equal to inverse of bracket length.

If the bracket has size 1, then it is equivalent to Aging at that period. If the bracket is bigger than 1, the value function at that period is solved as a fixed point.

ClockType code to use this clock: RandomAging
Example:
Represent a lifecycle as a sequence of 10 periods that each last on average 5 years:
SetClock(RandomAging,constant(5,1,10));
which is equivalent to this:
SetClock(new AgeBrackets(constant(5,1,10)));
Have normal aging for the first 10 years, then 3 brackets of five years, then 2 of 10 years:
SetClock(RandomAging,constant(1,1,10)~constant(5,1,3)~constant(10,1,2));

Transition
Let B = the bracket size at period t. $$Prob( t^\prime ) = I\{t^\prime=t\}(1-1/B) + I\{t^\prime=t+1\}/B$$
See also:
SetClock, ClockTypes
Public fields
 Brackets const Vector of period lengths for each age
 TransMatrix Transition matrix based on Brackets
Public methods
 AgeBrackets Create an aging clock with brackets.
 Last virtual Return flag for very last period possible.
 setPstar virtual
Inherited methods from NonDeterministicAging:
sePstar, Vupdate
Inherited methods from Clock:
Clock, Synch
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 Aging : NonStationary : Clock : StateBlock : StateVariable : Discrete : Quantity

Timing in an ordinary finite horizon environment. Normal Finite Horizon Aging.

ClockType code to use this clock: NormalAging
Example:
Use SetClock, 10 time periods:
SetClock(NormalAging,10);
which is equivalent to this:
SetClock(new Aging(10));
See also:
SetClock, ClockTypes
Public methods
 Aging Normal Finite Horizon Aging.
 setPstar virtual
Inherited methods from NonStationary:
Transit, Vupdate
Inherited methods from Clock:
Clock, Last, Synch
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 Asset : Random : Autonomous : StateVariable : Discrete : Quantity

Discretized interest-bearing asset. The actual vector should either be set by the user after creation of the state variable (if the actual asset levels are fixed), or the user should define a derived class and provide an Update() method that will keep actual current based on dynamic changes in parameters.
Let A be the actual current value, actual[v].
A* = min(  max( rA + S , actual[0] ), actual[N-1] )
Define Ab ≤ A* ≤ At as the actual values that bracket A* (they are equal to each other if A* equals one of the discrete actual values). Define m = (Ab+At)/2. Let b and t be the indices of the bracketing values.
    Prob(a′ = b) = (A-Ab)/m
    Prob(a′ = t ) = 1 - Prob(a′=b).
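Example:
A hedged sketch (the constructor's argument order is an assumption; the actual levels are set directly after creation, as described above):
  A = new Asset("A",50,0.05);     // assumed: 50 grid points, interest rate r = 0.05
  A.actual = 10*A.vals';          // actual holdings 0, 10, ..., 490
  EndogenousStates(A);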
    
Public fields
 r const AV()-compatible object, interest rate on current holding.
Public methods
 Asset Create a new asset state variable.
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Augmented : StateVariable : Discrete : Quantity

State variables that augment another state variable (the base) or otherwise specialize it.

An Augmented state variable has a transition that is based on another state variable's transition.

For example, the Triggered state variables have a transition that is the same as that of the base variable (sent when defining the augmented state variable) unless a triggering condition is met. In that case the value next period is a special one that differs from what the base transition would produce.

The virtual IsReachable() for Augmented state variables returns the base variable's IsReachable() value.
Public methods
 Augmented Base creator for augmented state variables.
 IsReachable virtual Default Indicator for intrinsic reachable values of a state variable.
 SetActual virtual Normalize the actual values of the base variable and then copy them to these actuals.
 Synch Synchronize base state variable value with current value.
 Transit virtual Default Transit (transition for a state variable).
 Update virtual Default Augmented Update.

Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Autonomous : StateVariable : Discrete : Quantity

Container for single state variables with a statistically independent transition.

A state variable q is autonomous when:
Its conditional transition to the next value q' is independent of all other transitions.
That is, its transition \(P_q(q^{\,\prime}; \alpha,\theta,\epsilon)\) is statistically independent of all other transitions.
An autonomous q is not necessarily independent of another state variable s because both transitions can depend on the current state and actions.
So autonomous means independent conditional on all current state variables values and actions.
The counterpart to an autonomous state variable is a Coevolving one.
See also:
Coevolving
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, Transit, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 AuxiliaryValue : Quantity

Base Class for variables related to outcomes.

Auxiliary variables are typically functions of the state and action vectors that would be observed in the data.

For example, in a search model, suppose the worker's ability x and the match quality m are both unobserved, but the wage, w = xm, is observed. Then an auxiliary variable can be created for wage, added to the outcome and read in along with other data.
Public fields
 indata observed in external data .
Public methods
 AuxiliaryValue Create a new element of χ, the space of auxiliary outcomes.
 Likelihood virtual Default contribution to likelihood.
 Realize virtual Default realized auxiliary variable, sets v=1.0.
 Track virtual

Inherited methods from Quantity:
SetVolume
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 BinaryChoice : ActionVariable : Discrete : Quantity

Easy way to create a binary choice.
Example:
MyModel : DPparent {
    static decl a;
    ⋮
    }
…
MyModel::Initialize() {
    DPparent::Initialize(new MyModel());
    ⋮
    a = new BinaryChoice();
    Actions(a);
    ⋮
    CreateSpaces();
    }
Public methods
 BinaryChoice Create a binary action variable.
Inherited methods from ActionVariable:
ActionVariable, myAV, myCV, myEV
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Track, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from ActionVariable:
vL
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 BirthAndSex : Random : Autonomous : StateVariable : Discrete : Quantity

A 3-valued state variable that records the random binary outcome (such as a child's sex) of a binary choice (such as a birth decision).
Public fields
 b const Binary ActionVariable.
 ratios const 2x1 proportions.
Public methods
 BirthAndSex Create a 3-valued state variable that records a binary outcome of a binary choice.
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 ChoiceAtTbar : LaggedAction : Lagged : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Record the value of an action variable at a specified time.
Transition:
$$s' = \cases{ 0 & if t less than Tbar\cr Target.v & if t equals Tbar\cr s & if t greater than Tbar\cr}$$

IsReachable
Non-zero states are trimmed as unreachable for t≤ Tbar
Example:
  d = new BinaryChoice("d");
  d5 = new ChoiceAtTbar("d5",d,5);
  EndogenousStates(d5);
Track the first four choices of d:
  dvals = new array[4];
  decl i;
  for(i=0;i<4;++i) dvals[i] = new ChoiceAtTbar("d"+sprint(i),d,i);
  EndogenousStates(dvals);
See also:
LaggedState, KLaggedAction
Public fields
 Tbar const
Public methods
 ChoiceAtTbar Record the value of an action variable at a given time.
 IsReachable virtual Prune non-zero values before Tbar
Inherited methods from LaggedAction:
LaggedAction
Inherited methods from Lagged:
Lagged
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from LaggedAction:
acol, nxtv
Inherited fields from Lagged:
Order, Target
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Clock : StateBlock : StateVariable : Discrete : Quantity

Base class for timing in DP models.
Comments:
The user often does not need to create this variable directly. Instead, use the SetClock routine. Some methods do require the user to create a clock variable and send it to SetClock.
See also:
SetClock, ClockTypes
Public fields
 IsErgodic const Store Ergodic Distribution.
 normparam static Norm parameter for stationary convergence.
 t const t, current TimeVariable
 tprime const t', next period TimeVariable used during V iteration to track possible clock values next period.
 Volume static Volume for clocks (static so it can be set without accessing the internal clock variable).
Public methods
 Clock Base clock block.
 Last virtual Flag for last period.
 setPstar virtual
 Synch virtual
 Vupdate virtual The base calculation to be carried out after the value of all states at time t have been computed.
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock, Transit
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 Coevolving : StateVariable : Discrete : Quantity

A member of a state block: transition of this variable depends on transition of one or more other state variables.
Comments:
Coevolving variables do not have their own Transit() function. Instead they sit in a StateBlock that has a Transit().
See also:
StateBlock, Autonomous, AddToBlock
Public fields
 block StateBlock that I belong to
 bpos Index into block array/vector
Public methods
 Coevolving Create a new Coevolving random variable.
 myAV virtual Return actual[v].
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 CorrelatedEffect : RandomEffect : TimeInvariant : StateVariable : Discrete : Quantity

An element of a RandomEffectBlock.
Public fields
 block EffectBlock that I belong to
 bpos Index into block array/vector
Public methods
 CorrelatedEffect Create a new CorrelatedEffect.
Inherited methods from RandomEffect:
Distribution, RandomEffect
Inherited methods from TimeInvariant:
Update
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from RandomEffect:
fDist
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Counter : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Counts the number of periods in which given value(s) of a target action or state variable have occurred.
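For example, work experience that counts periods in which a binary work choice equals 1 might be set up as follows (the creator's argument order here is illustrative; check the Counter creator's documentation):

```ox
work  = new BinaryChoice("work");
// count periods in which work==1, with 10 values (0..9)
exper = new Counter("exper",10,work,<1>);
EndogenousStates(exper);
```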
Public fields
 Reset const AV()-compatible reset-to-0 flag
 Target const Variable to track
 ToTrack const Values to track
Public methods
 Counter Create a new counter variable.
 IsReachable virtual Trims the state space assuming counter is initialized to 0.
 Transit virtual
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Cycle : Deterministic : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Increments each period until reaching N-1, then returns to 0.
Transition
$$s' = I\{s\lt N-1\}(s+1)$$
Example:
decl qtr = new Cycle("Q",4);
EndogenousStates(qtr);
Public methods
 Cycle Create a deterministic cycle variable.
Inherited methods from Deterministic:
Deterministic
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Deterministic:
nextValues
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Deterministic : NonRandom : Autonomous : StateVariable : Discrete : Quantity

A state variable with a general non-random transition. The number of points the variable takes on equals the length of the second argument to the creator function.
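For instance, a three-valued variable that steps 0→1→2 and then stays at 2 can be created by passing the vector of next values (a sketch of the creator summarized below):

```ox
// nextValues: state 0 goes to 1, 1 goes to 2, 2 stays at 2
decl stage = new Deterministic("stage",<1;2;2>);
EndogenousStates(stage);
```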
Public fields
 nextValues matrix with transition.
Public methods
 Deterministic Create a state variable with an arbitrary deterministic transition.
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Divided : NonStationary : Clock : StateBlock : StateVariable : Discrete : Quantity

A period is divided into sub-periods.

Models with subperiods typically index a period t and within t a fixed and finite number of subperiods, s=0,…,SubT-1.

Notation
To avoid re-defining the usual concept of time, t continues to track the sequence of sub-periods as if they were different periods.
The concept of a subperiod is coded and available for use to the user as subt. The corresponding concept of a major period is coded as majt. If the clock is not divided into subperiods then both of these variables are identically 0 at all times.
In the simplest case of a divided clock, the user specifies the number of major periods, MajT, and the number of subperiods, SubT.
So in the base case the total number of different periods is T = MajT × SubT, and ordinary time simply advances from t=0 to t=T-1, like normal aging.
As plain t advances, subt cycles through 0,…,SubT-1.
Major time starts at majt=0 and stays there until subt returns to 0, at which point it increments to 1, and so on until it reaches MajT-1.
MajT = 2, SubT =3
             t
      0 1 2 3 4 5
     -------------
 majt 0 0 0 1 1 1
 subt 0 1 2 0 1 2

Options and Special Cases
MajT=0: If the user sets MajT=0 it indicates an infinite horizon problem (really MajT=∞). subt continues to cycle deterministically. The difference from the ordinary case is that normal time cycles as well: t = 0, 1, …, SubT-1, 0, 1, …, SubT-1, …. The problem is then ergodic and the value of states is the solution to a fixed point problem. Convergence only needs to be checked for subt=0: once that period has converged, the subsequent subperiods will also be at the fixed point after one last iteration.
MajT = 0, SubT =2
             t
      0 1 0 1 0 1 0 1 0 …
     ---------------------------
 majt 0 0 0 0 0 0 0 0 0 …
 subt 0 1 0 1 0 1 0 1 0 …

HasInit: A model may have an initial undivided period that allows for, say, permanent heterogeneity to be realized.
MajT = 2, SubT =3, HasInit = TRUE
             t
      0 1 2 3 4 5 6
     ---------------
 majt 0 1 1 1 2 2 2
 subt 0 0 1 2 0 1 2

MajT = 0, SubT =2, HasInit = TRUE
             t
      0 1 2 1 2 1 2 1 2 …
     ---------------------
 majt 0 1 1 1 1 1 1 1 1 …
 subt 0 0 1 0 1 0 1 0 1 …

HasFinal: A model may have a final undivided period to allow for terminal conditions such as bequest motives. This option cannot be combined with MajT=0.
MajT = 2, SubT =3, HasFinal = TRUE
             t
      0 1 2 3 4 5 6
     ---------------
 majt 0 0 0 1 1 1 2
 subt 0 1 2 0 1 2 0

Transitions
The standard time variable increments as under normal aging; when MajT=0 it cycles back to 0 after the last subperiod.

What do subperiods mean?
Usually each state variable transitions only during one subperiod (which can differ across variables). This allows the timing of information arrival to be staggered within major periods. It is handled easily by sending a base state variable to the SubState() augmenting class along with the subperiod in which it transits; in all other subperiods the variable is frozen.
Each action variable is usually only changeable in one subperiod. This is handled by providing a replacement for FeasibleActions().
The discount factor δ is set to 1.0 for subt < SubT-1. That is, all payoffs within a major period occur simultaneously.
The discount factor takes on its normal value for subt = SubT-1 to capture the gap between major periods.

Some implications
When state transitions depend on the value of the subperiod s the state variable must be endogenous (added to θ). So a state variable that would normally be exogenous has to be treated as endogenous unless it changes value every subperiod.
Example:
Create a stationary infinite-horizon model in which a household works and shops in the first subperiod and consumes in the second:
SetClock(SubPeriods,0,2);
Allow for an initial period before the stationary phase begins:
SetClock(SubPeriods,0,2,TRUE);
Create a finite horizon model in which a person makes binary decisions in 3 separate subperiods over a lifetime of 10 periods
SetClock(SubPeriods,10,3);
d = {new BinaryChoice("d0"),new BinaryChoice("d1"),new BinaryChoice("d2")};
Actions(d);
⋮
FeasibleActions(A) {
  decl i,v,notfeas;
  notfeas = zeros(rows(A),1);
  foreach(v in d[i]) notfeas .+= (I::subt.!=i)*A[][d[i].pos];   // d[i]=1 not feasible if subt ≠ i
  return 1-notfeas;
  }

See also:
ClockTypes, SubState
Public fields
 delts ones(SubT-1,1)|δ.
 HasFinal const Final subdivided period (TRUE/FALSE).
 HasInit const Initial subdivided period (TRUE/FALSE).
 MajT const Number of major periods, 0=Infinite horizon.
 SubT const Number of sub periods.
 Vnext0
Public methods
 Divided Create a clock divided into subperiods.
 Last virtual Not stationary and last period.
 setPstar virtual
 Synch virtual
 Transit virtual Transition for sub-divided time.
 Update virtual
 Vupdate virtual Check newly computed values; an alternative to computing norm of value function differences.
Inherited methods from Clock:
Clock
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 Duration : Counter : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Track number of consecutive periods an action or state variable has had the same value.

This variable requires a target s.X and the lagged value of the target, denoted s.Lx.

Transition:
$$s' = I\{ x = Lag(x)\}(s+ I\{s \lt N-1\}).$$
Example:
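A sketch of tracking consecutive periods of working (the Duration creator's argument order here is illustrative; check the creator's documentation):

```ox
work   = new BinaryChoice("work");
prevwk = new LaggedAction("prevwk",work);
// consecutive periods work has had the same value, up to 5
streak = new Duration("streak",work,prevwk,5);
EndogenousStates(prevwk,streak);
```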
Public fields
 add1
 Current const
 g
 isact const
 Lag const
 MaxOnce const
Public methods
 Duration Create a variable that counts the consecutive periods a state or action has had the same value.
Inherited methods from Counter:
Counter, IsReachable
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Counter:
Reset, Target, ToTrack
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Episode : StateBlock : StateVariable : Discrete : Quantity

Re-occurring episodes with endogenous onset and ending probabilities. The block has two coevolving states, k and t: the type of episode and its duration. Duration is not tracked for k=0. New episodes occur with probability Onset and end with probability End.
Comments:
Based on illness episode in Gilleskie (1998).
Public fields
 End const episode end probabilities
 Finite const all spells end at t=T-1.
 k const Coevolving, current type of episode
 Onset const episode onset probabilities
 probT Stores end probability at t=T-1 for non-Finite episodes
 t const Coevolving, duration of episode, NOT tracked for k.v=0
Public methods
 Episode K mutually exclusive episodes.
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Fixed : Random : Autonomous : StateVariable : Discrete : Quantity

A placeholder for variables to be added to the model later.

This class is used internally so that no subvector is empty. It takes on the value 0 each period. Since it takes on only one value it does not affect the size of state spaces, only the length of the state vector.

The user might want to add a Fixed variable to the model as a placeholder for a state variable that is not coded yet. For example, if in the complete model q is a complex new state variable that will require coding a new Transit() function, then you might start with q as a Fixed so that formulas involving it can be completed.
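A minimal sketch of that placeholder strategy:

```ox
// q is a placeholder; it always equals 0 until the real
// state variable and its Transit() are coded
q = new Fixed("q");
EndogenousStates(q);
```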
Public methods
 Fixed Create a constant entry in the state vector.

Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 FixedAsset : Asset : Random : Autonomous : StateVariable : Discrete : Quantity

Public fields
 delta const ActionVariable.
Public methods
 FixedAsset Create a new FIXED asset state variable.
Inherited methods from Asset:
Asset
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Asset:
r
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 FixedEffect : TimeInvariant : StateVariable : Discrete : Quantity

Base class for a state variable that is non-random and invariant for an individual DP problem.

Members of \(\gamma_f\) are derived from this class.

Solution methods loop over fixed effect values, re-using storage space.
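For example, a two-valued permanent characteristic might be added as a group variable during the build phase (a sketch; GroupVariables() is the routine for adding time invariants):

```ox
g = new FixedEffect("gender",2);
GroupVariables(g);   // the model is solved separately for each value of g
```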

See also:
FETask
Public methods
 FixedEffect A state variable that is non-random and invariant for an individual DP problem.
Inherited methods from TimeInvariant:
Update
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 FixedEffectBlock : StateBlock : StateVariable : Discrete : Quantity

A Block of FixedEffect group variables.
Public methods
 FixedEffectBlock Create a list of FixedEffect state variables.
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock, Transit
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Forget : ActionTriggered : Triggered : Augmented : StateVariable : Discrete : Quantity

When an action chosen now causes a permanent condition to occur next period, this state permanently becomes equal to its reset value. This class provides a replacement for IsReachable() that trims the state space Θ, because only cases of the target equal to tv and this variable equal to its rval are reachable.
Comments:
This state variable is designed to collapse the state space down when an event is triggered.
Public methods
 Forget An augmented state variable that 'forgets' its value when an action leads to a (semi-) permanent condition.
 IsReachable virtual Trims the state space of values that can't be reached because they are forgotten.
Inherited methods from ActionTriggered:
ActionTriggered
Inherited methods from Triggered:
Triggered, Update
Inherited methods from Augmented:
Augmented, SetActual, Synch
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 ForgetAtT : ValueTriggered : Triggered : Augmented : StateVariable : Discrete : Quantity

Forget values when t==T (effective next time period).
Comments:
This state variable is designed to collapse the state space down when an event is triggered.
Public methods
 ForgetAtT Create an augmented state variable that 'forgets' its value when at t==T* (effective T*+1).
 IsReachable Trim states because t > T* .
Inherited methods from ValueTriggered:
ValueTriggered
Inherited methods from Triggered:
Triggered, Update
Inherited methods from Augmented:
Augmented, SetActual, Synch
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Freeze : ValueTriggered : Triggered : Augmented : StateVariable : Discrete : Quantity

When the trigger value returns TRUE this state freezes at its current value.
Public methods
 Freeze Augment a state variable so it freezes at its current value as long as a trigger is TRUE.
 Transit virtual
Inherited methods from ValueTriggered:
ValueTriggered
Inherited methods from Triggered:
Triggered, Update
Inherited methods from Augmented:
Augmented, IsReachable, SetActual, Synch
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 IIDBinary : IIDJump : Markov : Random : Autonomous : StateVariable : Discrete : Quantity

A binary IID process. It accepts a scalar as the transition.

Transition:
$$\displaylines{ P(s'=1|s) = \pi\cr P(s'=0|s) = 1-\pi\cr } $$
Example:
A job offer arrives with an estimated probability, initialized to 0.6:
decl poff = new Probability("op",0.6);
decl hasoffer = new IIDBinary("off",poff);
Public methods
 IIDBinary Create binary state variable for which Prob(s=1) = Pi.
Inherited methods from IIDJump:
IIDJump
Inherited methods from Markov:
Markov
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Markov:
jprob, Pi
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 IIDJump : Markov : Random : Autonomous : StateVariable : Discrete : Quantity

A discrete IID process.

Transition:
$$P(s'=z|s) = \Pi_{z}$$
Example:
decl health = new IIDJump("h",<0.1;0.8;0.1>);
Comments:
Unlike a general Markov variable, an IIDJump is eligible to be an Exogenous variable, a member of \(\epsilon\) added to the model with ExogenousStates.
Public methods
 IIDJump Create a new IID Jump process.
Inherited methods from Markov:
Markov
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Markov:
jprob, Pi
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Indicator : AuxiliaryValue : Quantity

An auxiliary value that is an indicator for values of other actions or states.
Public fields
 iacted const
 iobj const
 myval const
 target const
 ttype const
Public methods
 Indicator Create an indicator (or indicator interacted with another outcome).
 Realize virtual Default realized auxiliary variable, sets v=1.0.
Inherited methods from AuxiliaryValue:
AuxiliaryValue, Likelihood, Track
Inherited methods from Quantity:
SetVolume
Inherited fields from AuxiliaryValue:
indata
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Jump : Markov : Random : Autonomous : StateVariable : Discrete : Quantity

A variable that jumps to a new value with probability π, otherwise stays the same.

Transition:
$$P(s'=z|s) = \pi / N + (1-\pi)I\{z=s\}.$$
Example:
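A sketch: a skill level that is redrawn with probability 0.1 each period and otherwise keeps its value (the creator's argument order here is illustrative):

```ox
decl pjump = new Probability("pi",0.1);
decl skill = new Jump("skill",5,pjump);
EndogenousStates(skill);
```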
See also:
SimpleJump, Offer
Public fields
 Pi const parameter for jump probability π
Public methods
 Jump Create a variable that jumps to a new value with probability π.
Inherited methods from Markov:
Markov
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Markov:
jprob
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 KeptZeta : Random : Autonomous : StateVariable : Discrete : Quantity

A discretized version of a continuous ζ.

A kept random discrete version of ζ enters the endogenous vector θ, depending on reservation values that determine whether it is kept.

Its value is constant as long as a state variable indicates it is being held.
Public fields
 held const
 keep const
Public methods
 CDF virtual The default cdf of a kept continuous variable: Φ(z*).
 DynamicTransit virtual Dynamic transition (block diagonal: Pold 0 / 0 Pold).
 InitDynamic virtual
 KeptZeta Create a discretized verison of the continuous ζ variable that enters the state when accepted (or kept).
 Quantile virtual The default quantile of a kept continuous variable: Φ⁻¹(u).

Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Lagged : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Base class for variables that take on previous values of other states or actions.

$$s^\prime = CV(Target).$$

See also:
LaggedState, LaggedAction
Public fields
 Order const Order of lag (for pruning).
 Target const Variable to track.
Public methods
 IsReachable virtual Presumes an initial value of 0 for prunable clocks.
 Lagged Takes on a previous value of another state or action.
 Transit virtual
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 LaggedAction : Lagged : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Take on the current value of an action variable.

Transition:
$$s^\prime = CV(Target).$$

IsReachable
The default initial value is 0, so for finite horizon clocks, t=0,q>0 is marked unreachable. Set Prune=FALSE to not prune these unreachable states automatically.
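A typical use, recording last period's work choice so that current utility can depend on it:

```ox
work     = new BinaryChoice("work");
prevwork = new LaggedAction("prevwork",work);
EndogenousStates(prevwork);
// in Utility(), CV(prevwork) is last period's value of work
```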
See also:
LaggedState, KLaggedAction
Public fields
 acol
 nxtv
Public methods
 LaggedAction Create a variable that tracks the previous value of an action variable.
Inherited methods from Lagged:
IsReachable, Lagged
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Lagged:
Order, Target
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 LaggedState : Lagged : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Take on the current value of another state variable.

Note the Target variable must be in \(\eta\) or \(\theta\). It cannot be added to \(\epsilon\).

Transition:
$$s' = CV(Target).$$
See also:
LaggedAction, KLaggedState
Public methods
 LaggedState Create a variable that tracks the previous value of another state variable.
Inherited methods from Lagged:
IsReachable, Lagged
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Lagged:
Order, Target
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 LiquidAsset : Asset : Random : Autonomous : StateVariable : Discrete : Quantity

Public fields
 isact const
 NetSavings const AV()-compatible static function or ActionVariable.
Public methods
 LiquidAsset Create a new LIQUID asset state variable.
Inherited methods from Asset:
Asset
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Asset:
r
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 LogNormalOffer : Offer : Random : Autonomous : StateVariable : Discrete : Quantity

Public fields
 pars const
Public methods
 LogNormalOffer Create a discretized lognormal offer.
 Update virtual Updates the actual values.
Inherited methods from Offer:
Offer
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Offer:
Accept, accept, offprob, Pi
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Longevity : Mortality : NonDeterministicAging : NonStationary : Clock : StateBlock : StateVariable : Discrete : Quantity

Random mortality and uncertain lifetime.

Like Mortality but at t = T*-1 ≡ T-2 a stationary environment occurs with constant probability of mortality.

ClockType code to use this clock: UncertainLongevity
Example:
Model a lifetime of 5 ordinary periods with a constant probability of dying after each period. The 6th period (t=5) is stationary and transitions to the last period with probability 1/10 each period.
SetClock(UncertainLongevity,7,0.1);
which is equivalent to:
SetClock(new Longevity(7,0.1));
Over 9 periods, allow the mortality rate to increase linearly in age up to a stationary mortality rate of 50% in the final stage of life:
MyModel::Mprob() { return 0.5*I::t/(N::T-2) ; }
⋮
SetClock(UncertainLongevity,9,MyModel::Mprob);

Transition
Let T* = T-1
Prob(t' = T*) = π(t)
Prob(t' = t+1) = 1 - π(t) if t < T*-1
Prob(t' = t) = 1-π(t) if t = T*-1.
T* is the last period with probability 1.
Public fields
 twilight const
Public methods
 Last virtual With Longevity the last period (death) is definitively the last.
 Longevity Random death and uncertain maximum lifetime.
 setPstar virtual
 Vupdate virtual Mixed value updating.
Inherited methods from Mortality:
Mortality
Inherited methods from NonDeterministicAging:
sePstar
Inherited methods from Clock:
Clock, Synch
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Mortality:
DeathV, MortProb, mp, Tstar
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 Markov : Random : Autonomous : StateVariable : Discrete : Quantity

A Markov process.

Definition: A Markov state variable q is an autonomous random variable whose transition depends (at most) on its own current value. That is, its transition does not depend on the values of other state variables or on the current action vector.
Because q is discrete and the transition to q' can depend only on q, the transition is a square matrix.
Note that the elements of the transition matrix do not have to be constant values. They can be parameters or functions of parameters that are re-evaluated each time the problem is solved.
The FiveO function TransitionMatrix will create and return an array of simplexes which can be sent as the transition matrix in the general Markov case.

Transition:

The transition matrix must be square. The number of values the variable takes on is determined by the dimension of the matrix.

If it is a matrix, then the rows are next-period states and the columns are current states. $$P(s'=z|s) = \Pi_{zs}$$ Each column is therefore a distribution over next-period values. To be handled properly the state variable must be placed in the endogenous state vector \(\theta\).

Example:
A 3x3 transition matrix.
  decl tmat =< 0.90 ~ 0.09 ~ 0.05;
               0.02 ~ 0.90 ~ 0.30;
               0.08 ~ 0.01 ~ 0.65>;
  x = new Markov("x",tmat);
  EndogenousStates(x);

See TransitionMatrix
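As a sketch of the parameterized case, the transition can be built from estimable simplex parameters rather than constants, so the matrix is re-evaluated each time the model is solved. The exact TransitionMatrix signature shown here is an assumption; see its own documentation:

```ox
// Hypothetical sketch: a 3x3 Markov transition built from simplex parameters.
decl tmat, x;
tmat = TransitionMatrix("P",3);   // assumed: returns an array of 3 simplexes
x = new Markov("x",tmat);
EndogenousStates(x);
```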

See also:
Offer
Public fields
 jprob
 Pi const
Public methods
 Markov Create a discrete Markov process.
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Mortality : NonDeterministicAging : NonStationary : Clock : StateBlock : StateVariable : Discrete : Quantity

Deterministic aging with random early death.

This clock has the agent age each period but allows for a state-dependent probability that they die before the start of the next period.

Death is a transition to the maximum value of the counter, T-1. This allows for endogenous bequest motives which are computed before iterating backwards on other ages.

The probability of death should be a CV-compatible quantity. This means that the mortality risk can be state dependent.

Example:
Model a lifetime of 7 ordinary periods with a constant probability of 1/10 of dying after each period (except the 7th period, which is the last period with certainty).
SetClock(RandomMortality,7,0.1);
which is equivalent to:
SetClock(new Mortality(7,0.1));
Over 9 periods, allow the mortality rate to be an increasing function of age and health status. Health is a binary state variable in the model, where 1 is good health and 0 is poor health; poor health acts like aging 3 periods:
MyModel::Mprob() {
    decl v = exp(I::t + 3*(1-CV(health)));
    return  v/(1+v);
    }
⋮
SetClock(RandomMortality,9,MyModel::Mprob);

Transition
Let T* = T-1. $$Prob(t^\prime = z) = I\{z=t+1\}(1 - \pi(t)) + I\{z = T^*\}\pi(t).$$ T* is the last period with probability 1.
Public fields
 DeathV EV at t=T-1, value of death
 MortProb const CV-compatible mortality probability
 mp
 Tstar const
Public methods
 Mortality Set clock to be deterministic aging with early random death.
 setPstar virtual
 Vupdate virtual Mixed value updating.
Inherited methods from NonDeterministicAging:
sePstar
Inherited methods from Clock:
Clock, Last, Synch
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 MultiIndicator : Indicator : AuxiliaryValue : Quantity

Create
Public methods
 MultiIndicator
 Realize Default realized auxiliary variable, sets v=1.0.
Inherited methods from Indicator:
Indicator
Inherited methods from AuxiliaryValue:
AuxiliaryValue, Likelihood, Track
Inherited methods from Quantity:
SetVolume
Inherited fields from Indicator:
iacted, iobj, myval, target, ttype
Inherited fields from AuxiliaryValue:
indata
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 MVIID : StateBlock : StateVariable : Discrete : Quantity

Container for a block of discrete multivariate IID variables (can be contemporaneously correlated).
Public fields
 M const Number of points per variable
 MtoN const total number of points in the block.
Public methods
 MVIID Create a multivariate normal state block.
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 MVNormal : MVIID : StateBlock : StateVariable : Discrete : Quantity

A discrete multivariate normal IID block of contemporaneously correlated variables.

x ∼ N( μ, CC' ).
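A minimal creation sketch (the argument order here is an assumption, not the documented signature): a bivariate normal block on a 5-point grid per dimension.

```ox
decl mu, vC, e;
mu = <0.0; 0.0>;                   // vector of means
vC = <1.0; 0.5; 0.8>;              // vech() of the lower-triangular Choleski C
e  = new MVNormal("e",2,5,mu,vC);  // assumed order: label, M, N, mu, CholLT
ExogenousStates(e);                // placement call is an assumption
```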
Public fields
 CholLT const AV()-compatible vector-valued object which returns vech() for lower triangle of C, the Choleski decomposition of the variance matrix Ω, Ω = CC'.
 mu const vector of means μ
 zvals const matrix of Z vals for simulated actual values
Public methods
 MVNormal Create a block for a multivariate normal distribution (IID over time).
 Update virtual Updates the grid of Actual values.
Inherited methods from MVIID:
MVIID
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from MVIID:
M, MtoN
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 MVNvectorized : Nvariable : SimpleJump : IIDJump : Markov : Random : Autonomous : StateVariable : Discrete : Quantity

A single IID jump variable which will return a simulated vector of correlated normal variates.

How does this compare to MVNormal?
Both are used to represent a normal vector of length \(M\): \(x \sim N(\mu,\Omega)\)
The latter creates a block of \(M\) correlated state variables. Each one takes on \(N\) values, so a total of \(N^M\) points are created. For more than 2 or 3 dimensions this either makes the state space very large or the approximation very coarse.

This state variable instead uses a given number \(N\) of pseudo-random draws. It is based on a fixed draw of \(N\times M\) IID standard normal values which are transformed by the mean and Cholesky matrix to produce \(N\times M\) correlated normals.

This variable needs to be used with care: CV() returns a single value, the index between 0 and N-1, while AV() returns the \(1\times M\) vector that corresponds to the index.
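To illustrate the caution above (the creation arguments shown are assumptions), CV() indexes the draw while AV() recovers the vector:

```ox
decl z;
z = new MVNvectorized("z",50,mu,vC);  // assumed: label, N draws, mu, CholLT
// Inside utility:
//   CV(z) is a scalar index in 0 ... N-1
//   AV(z) is the corresponding 1 x M vector of correlated normals
```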
Public fields
 Ndim
 vecactual
 zvals
Public methods
 MVNvectorized Create a single IID variable which vectorizes a multivariate normal using pseudo-random draws.
 myAV
 Update

Inherited methods from Nvariable:
Nvariable
Inherited methods from SimpleJump:
SimpleJump, Transit
Inherited methods from IIDJump:
IIDJump
Inherited methods from Markov:
Markov
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Nvariable:
pars
Inherited fields from Markov:
jprob, Pi
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Noisy : AuxiliaryValue : Quantity

Add normal noise to an AV()-compatible target, either linear or log-linear.
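A hedged creation sketch (the argument order is an assumption): measure a model wage with additive normal noise.

```ox
decl wobs;
wobs = new Noisy(wage,0.5,TRUE);   // assumed: target, sigma, Linear flag
AuxiliaryOutcomes(wobs);           // tracking call is an assumption
```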
Public fields
 eps implied normal value.
 Linear const noise is additive (not multiplicative).
 sigma const standard deviation of the normal noise.
 truevalue const Object that is measured with noise.
Public methods
 Likelihood likelihood contribution of a noisy auxiliary value.
 Noisy Create an auxiliary value that adds normally-distributed noise to the actual value.
 Realize Default realized auxiliary variable, sets v=1.0.
Inherited methods from AuxiliaryValue:
AuxiliaryValue, Track
Inherited methods from Quantity:
SetVolume
Inherited fields from AuxiliaryValue:
indata
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 NonDeterministicAging : NonStationary : Clock : StateBlock : StateVariable : Discrete : Quantity

Container for non-stationary and non-deterministic aging clocks.
See also:
SetClock
Public methods
 sePstar virtual
 Vupdate virtual Mixed value updating.
Inherited methods from NonStationary:
Transit
Inherited methods from Clock:
Clock, Last, setPstar, Synch
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 NonRandom : Autonomous : StateVariable : Discrete : Quantity

A container class for state variables with a non-random transition.

DDP makes no distinction between random and non-random state variables except to organize different kinds of transitions into a taxonomy.

A non-random state variable has a transition which is 0 for all possible states except one, for which it equals 1. Its state next period is determined by the current state and action.

Example:
q counts how many times another state variable s has equalled 1 in the past. Since the current value of s is known, the value of q next period is (conditionally) deterministic. As a transition function: $$q' = q + I\{s=1\}.$$ As a transition probability: $$P(q'; q,s) = I\{ q' = q + I\{s=1\} \}.$$
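The counter q could be implemented as a derived class. This is only a sketch: the base-constructor call and the Transit() return convention shown (reachable next values paired with a row of probabilities) are assumptions, not the documented interface.

```ox
// Hypothetical counter of how often state variable s has equalled 1.
struct Counter1 : NonRandom {
    const decl s;                  // the tracked state variable (assumption)
    Counter1(L,N,mys);
    virtual Transit();
    }
Counter1::Counter1(L,N,mys) { StateVariable(L,N); s = mys; }
Counter1::Transit() {
    decl qp = min(v + (CV(s)==1), N-1);  // q' = q + I{s=1}, capped at N-1
    return { matrix(qp), <1.0> };        // one reachable value, probability 1
    }
```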
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, Transit, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 NonStationary : Clock : StateBlock : StateVariable : Discrete : Quantity

A container for non-stationary clocks.
Comments:
Some clocks classified as nonstationary may in fact be stationary for some parameter values.
See also:
SetClock, ClockTypes
Public methods
 Transit virtual The baseline transition for non-stationary models (normal aging).
 Vupdate virtual Check newly computed values; an alternative to computing norm of value function differences.
Inherited methods from Clock:
Clock, Last, setPstar, Synch
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 NormalComponent : Coevolving : StateVariable : Discrete : Quantity

Component of a multi-dimensional normal StateBlock.
Public methods
 NormalComponent A single element of a MVNormal block.
Inherited methods from Coevolving:
Coevolving, myAV
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Coevolving:
block, bpos
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 NormalRandomEffect : RandomEffect : TimeInvariant : StateVariable : Discrete : Quantity

A permanent discretized N(0,σ²) random effect.
See also:
DiscreteNormal
Public fields
 pars const
Public methods
 Distribution Update pdf, the distribution over the random effect.
 NormalRandomEffect Create a permanent discretized normal random effect.
Inherited methods from RandomEffect:
RandomEffect
Inherited methods from TimeInvariant:
Update
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from RandomEffect:
fDist
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Nvariable : SimpleJump : IIDJump : Markov : Random : Autonomous : StateVariable : Discrete : Quantity

A jump variable whose actual values are quantiles of the normal distribution, \(N(\mu,\sigma)\).
Public fields
 pars
Public methods
 Nvariable Create a normal N(mu,sigma) discretized jump variable.
 Update virtual
Inherited methods from SimpleJump:
SimpleJump, Transit
Inherited methods from IIDJump:
IIDJump
Inherited methods from Markov:
Markov
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Markov:
jprob, Pi
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Offer : Random : Autonomous : StateVariable : Discrete : Quantity

 A Basic Offer Variable. Acceptance of an offer is the action variable passed as Accept. If accepted, the state next period is the current offer. Otherwise, an offer arrives with probability Pi.v, taking the values 1 ... N-1 with equal probability; a value of 0 indicates no offer (probability 1-Pi.v). Acceptance is a value of 1 in the AcceptAction.pos column of FeasAct.
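A hedged creation sketch (the argument order is an assumption): a 10-valued offer accepted through a binary action.

```ox
decl acc, w;
acc = new ActionVariable("acc",2);
Actions(acc);
w = new Offer("w",10,0.6,acc);    // assumed: label, N, Pi, Accept
EndogenousStates(w);
```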
Public fields
 Accept const ActionVariable that indicates acceptance
 accept
 offprob
 Pi const π Double, Parameter or static function, offer probability
Public methods
 Offer Create an offer state variable.
 Update virtual
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 OfferWithLayoff : StateBlock : StateVariable : Discrete : Quantity

 A combination of an Offer state variable and a status variable, (w,m).

If unemployed, an offer is generated with probability φ. The agent can accept the offer and become employed, or reject it and continue searching next period. If employed, the agent can quit, be laid off with probability λ, or keep the job.

If the agent quit or was laid off last period, the offer stays the same for one period but transition to employment is impossible. This allows the previous wage to be captured and used in a block derived from this one.

φ and λ can depend on (w,m) and on α.

(w,m)
m ∈ Status
If m=Unemp:
   w′ =
   	   w with prob. a
       0 with prob. (1-a)(1-φ) or 1 ... N-1 with prob. (1-a)φ
   m′ = aEmp + (1-a)Unemp;
If m=Emp,
   w′ = w
   m′ =
        (1-a)Quit
		aLaidOff with prob. λ
		aEmp with prob. 1-λ
If m=Quit or LaidOff,
	m′ = Unemp
	w′ same as m=Unemp and a=0
See also:
Offer
Public fields
 accept const Action to accept offer
 Lambda const Probability of reverting to 0
 offer const Variable containing offer index
 Pi const Job Offer Probability
 status const searching, just laid off, currently holding
Public methods
 Employed
 OfferWithLayoff An offer with layoff (match dissolution) risk.
 Searching
Enumerations
 Status Status values.
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 PermanentChoice : LaggedAction : Lagged : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Record if an action has ever been non-zero.

Transition:
$$s' = \cases{ 0 & if s=0 and CV(Target)=0\cr CV(Target) & if s=0 and CV(Target)\gt 0\cr s & if s\gt 0\cr}$$
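A minimal creation sketch (the constructor arguments are assumptions): record a one-time choice such as retiring.

```ox
decl ret, evret;
ret = new ActionVariable("ret",2);
Actions(ret);
evret = new PermanentChoice("evret",ret);  // assumed: label, Target action
EndogenousStates(evret);
```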
Public methods
 PermanentChoice Create a variable that tracks a one-time permanent choice.
Inherited methods from LaggedAction:
LaggedAction
Inherited methods from Lagged:
IsReachable, Lagged
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from LaggedAction:
acol, nxtv
Inherited fields from Lagged:
Order, Target
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 PermanentCondition : StateTracker : Tracker : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Indicator for a condition (ever) occurring in the past.
Public methods
 PermanentCondition Create a variable that tracks that some Target condition has occurred in the past.
Inherited methods from StateTracker:
StateTracker
Inherited methods from Tracker:
IsReachable, Tracker
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Tracker:
Target, ToTrack
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 PhasedTreatment : NonDeterministicAging : NonStationary : Clock : StateBlock : StateVariable : Discrete : Quantity

A sequence of treatment phases with fixed maximum durations.
Public fields
 final const indicates time=Rmax in phase
 ftime const vector of times in phases
 LastPhase const
 MaxF const The last treatment phase (0 if no treatment)
 phase const vector of phases
 R0 const Indices into the vector of the state variable that are the first periods of each phase
 Rmaxes const Vector of positive integers that are the length of each of the phases.
Public methods
 PhasedTreatment A sequence of finite-termed phases of treatment.
 Transit virtual The default transition for treatment.
 Vupdate virtual Mixed value updating.
Inherited methods from NonDeterministicAging:
sePstar
Inherited methods from Clock:
Clock, Last, setPstar, Synch
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 Random : Autonomous : StateVariable : Discrete : Quantity

State variables with a non-deterministic transition.

DDP makes no distinction between random and non-random state variables except to organize different kinds of transitions into a taxonomy.

A random state variable has a transition which is not always 0/1.

Its value next period is determined stochastically by the current state and action.

Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, Transit, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 RandomEffect : TimeInvariant : StateVariable : Discrete : Quantity

Base class for a quantity that is invariant for an individual DP problem but treated as random across problems.

Elements of \(\gamma_r\) are derived from this class. A random effect plays a role similar to its counterpart in an econometric model. Solution methods loop over random effect values and account for their distribution when computing predictions, simulations and econometric objectives.

Example:
N equally likely values:
RandomEffect("g",N);
A binary variable with constant and unequal weights:
RandomEffect("g",2,<0.8;0.2>);
A two-point random effect for which the distribution is a function of the value of a fixed effect (level of education) and a parameter vector β:
decl beta;
⋮
hd() {
    decl v = exp((1~AV(educ))*CV(beta)), s = sumc(v);
    return v/s;
    }
⋮
GroupVariables(
   h = RandomEffect("h",2,hd),
   educ = FixedEffect("ed",4)
   );
A random effect with a distribution that equals a Simplex parameter block:
enum{Npts = 5};
hd = new Simplex("h",Npts);
RandomEffect("h",Npts,hd);

fDist is stored in a non-constant member, so it can be changed after CreateSpaces() has been called.

Most Flexible Option
The user can define a derived class and supply a replacement to the virtual Distribution().
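A sketch of that option; everything here beyond the inherited names pdf and N is an assumption:

```ox
// Hypothetical derived random effect with a custom distribution.
struct MyRE : RandomEffect {
    MyRE(L,N);
    virtual Distribution();
    }
MyRE::MyRE(L,N) { RandomEffect(L,N); }
MyRE::Distribution() {
    pdf = constant(1/N,N,1);  // e.g. uniform weights; any valid pdf over vals
    }
```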
See also:
RETask, FixedEffect, RandomEffectBlock
Public fields
 fDist holds the object that computes and returns the distribution across values.
Public methods
 Distribution virtual Update pdf, the distribution over the random effect.
 RandomEffect Create a state variable that is random but invariant for an individual DP problem.
Inherited methods from TimeInvariant:
Update
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 RandomEffectBlock : StateBlock : StateVariable : Discrete : Quantity

A Block of CorrelatedEffect group variables.
Public methods
 Distribution virtual
 RandomEffectBlock Create a list of CorrelatedEffect state variables.
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock, Transit
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 RandomSpell : Random : Autonomous : StateVariable : Discrete : Quantity

A binary random state variable that goes from 1 to 0 with an AV()-compatible probability and goes from 0 to 1 based on the value of an action or a CV()-compatible object.
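A hedged creation sketch (the argument order is an assumption): a spell started by an action and ended with probability 0.2 each period.

```ox
decl start, sp;
start = new ActionVariable("start",2);
Actions(start);
sp = new RandomSpell("spell",0.2,start);  // assumed: label, end prob., start
EndogenousStates(sp);
```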
Public methods
 IsReachable
 RandomSpell A binary state that starts and stops according to a random process.
 Transit
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 RandomTrigger : Triggered : Augmented : StateVariable : Discrete : Quantity

A state variable that augments a base transition so that with some probability it takes on a special value next period.

The difference from ValueTriggered is that this transition is random as of the current state, whereas with ValueTriggered the transition is deterministic.

Transition. Let
q denote this state variable.
b be the base state variable to be augmented
τ be the actual value of the trigger probability.
r be the value to go to when triggered.
Prob( q′= z | q,b ) =  τI{z=r} + (1-τ)Prob(b′=z)
Example:
A person is employed at the start of this period if they chose to work last period and were not randomly laid off between periods, with a dynamically determined layoff probability.
a = ActionVariable("work",2);
lprob = new Probability("layoff prob",0.1);
m = RandomTrigger( new LaggedAction("emp",a),lprob,0);
Public methods
 RandomTrigger Augment a base transition so that a special value occurs with some probability.
 Transit virtual
Inherited methods from Triggered:
Triggered, Update
Inherited methods from Augmented:
Augmented, IsReachable, SetActual, Synch
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 RandomUpDown : Random : Autonomous : StateVariable : Discrete : Quantity

A state variable that can stay the same, increase by 1 or decrease by 1.
Transition:

$$\eqalign{ P(s^\prime = s-1) &= I\{s\gt0\}\Pi_0\cr P(s^\prime = s) &= \Pi_1 + I\{s=0\}\Pi_0 + I\{s=N-1\} \Pi_2\cr P(s^\prime = s+1) &= I\{s\lt N-1\}\Pi_2\cr}$$

\(\Pi\) can be a vector, a Simplex parameter block or a static function that returns a 3×1 vector.

Example:
In Wolpin (1984), the stock of children (M), fertility choice (i) and neonatal infant mortality are modeled as:
Fertility : FiniteHorizon	{
	static decl q, i, M, x;
	static Mortality();
	}
Fertility::Initialize()	 {
	q = new Coefficients("q",);
	Actions(i = new Action("i",2));
	EndogenousState( M = new RandomUpDown("M",20,Fertility::Mortality) );
	}	
Fertility::Mortality()	{
	decl p = probn(x*q);  // x has to be updated using current age and other x values.
	decl ivals = CV(i);
	return 0 ~ (1-p*ivals) ~ p*ivals;
	}
Public fields
 fp
 fPi const
Public methods
 IsReachable virtual Prune assuming initial value = 0.
 RandomUpDown Create a state variable that increments or decrements with (possibly) state-dependent probabilities.
Enumerations
 Anonymous enum 1
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 RealizedUtility : AuxiliaryValue : Quantity

Built-in variable that records realized utility. \(U(\alpha,\epsilon,\eta,\theta,\gamma)\)
Public methods
 Realize virtual Default realized auxiliary variable, sets v=1.0.
 RealizedUtility Realized utility, U().
Inherited methods from AuxiliaryValue:
AuxiliaryValue, Likelihood, Track
Inherited methods from Quantity:
SetVolume
Inherited fields from AuxiliaryValue:
indata
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Regimes : NonStationary : Clock : StateBlock : StateVariable : Discrete : Quantity

Public methods
 Regimes
Inherited methods from NonStationary:
Transit, Vupdate
Inherited methods from Clock:
Clock, Last, setPstar, Synch
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 Regressors : FixedEffectBlock : StateBlock : StateVariable : Discrete : Quantity

A block of fixed effects that can be created like a vector of demographic variables.
Public fields
 ObservedX
Public methods
 InObservedX Returns TRUE if any row of ObservedX equals the current actual value of the block.
 Regressors Create a vector of fixed effects (FixedEffectBlock).
Inherited methods from FixedEffectBlock:
FixedEffectBlock
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock, Transit
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Renewal : Random : Autonomous : StateVariable : Discrete : Quantity

Increments randomly based on \(\Pi\), and returns to 0 when reset=1; reset is the reset action. \(\Pi\) is the vector of non-zero increment probabilities.
Transition:
If \(s\lt N-1-K\) then
$$Prob(s^\prime = (1-A)s+k ) = \Pi_k.$$
Otherwise,
$$Prob(s^\prime = N-1 ) = \sum_{k=N-s-1}^K \Pi_k.$$
Example:
Rust (1987) is set up as
Zurcher : Ergodic	{
	static decl q, i, x;
    ⋮
	}
Zurcher::Initialize()	{
    ⋮
	q = new Simplex("q",3);
	AddVariable(i = new Action("i",2));
	EndogenousState( x = new Renewal("x",90,i,q) );
    ⋮
	}
Public fields
 P const length of Pi
 Pi const block or vector of increment probabilities
 reset const state or action that resets the process to 0
Public methods
 Renewal Create a Rust renewal process state variable.
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Reset : ActionTriggered : Triggered : Augmented : StateVariable : Discrete : Quantity

When the trigger equals 1 the variable resets to 0; otherwise it follows the base transition.

A special case of ActionTriggered in which the trigger value is 1 and the special reset value is 0.

If the trigger takes on more than two values, only the value 1 acts as the trigger.
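Example:
A hedged sketch (the constructor is assumed to take the base variable and the triggering ActionVariable, in that order):

```ox
decl work  = new ActionVariable("work",2);
decl quit  = new ActionVariable("quit",2);
// Experience accumulates with work but resets to 0 when quit==1.
decl exper = new Reset( new ActionCounter("exper",10,work), quit );
EndogenousStates(exper);
```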
Public methods
 Reset Augment a base transition to reset to 0 if the trigger is 1.

Inherited methods from ActionTriggered:
ActionTriggered, Transit
Inherited methods from Triggered:
Triggered, Update
Inherited methods from Augmented:
Augmented, IsReachable, SetActual, Synch
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 RetainMatch : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Store a new offer or retain its current value.
Transition:
s' = .
Example:
Public fields
 acc const
 hold
 i
 keep const
 matchvalue const
 repl
 replace const
Public methods
 RetainMatch Accept or reject an offer then retain.
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Rouwenhorst : Tauchen : Markov : Random : Autonomous : StateVariable : Discrete : Quantity

Rouwenhorst approximation to a discretized correlated normal process.
See also:
Tauchen
Public fields
 cp
 Grid
 M
 n
 Nrt const
 p
 pp
Public methods
 Rouwenhorst Rouwenhorst discretization of a correlated normal process.
 Update virtual Dynamic updating of the transition probabilities.
Inherited methods from Tauchen:
Tauchen
Inherited methods from Markov:
Markov
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Tauchen:
gaps, pars, pts, rnge
Inherited fields from Markov:
jprob, Pi
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 SimpleJump : IIDJump : Markov : Random : Autonomous : StateVariable : Discrete : Quantity

Equally likely values each period (IID).

Transition:
$$P(s' = z) = {1\over N} $$
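Example:
A minimal sketch, assuming the constructor takes a label and the number of values:

```ox
// An IID shock taking on values 0,...,4, each with probability 1/5.
decl eps = new SimpleJump("eps",5);
ExogenousStates(eps);
```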
Public methods
 SimpleJump Create an equally likely discrete Exogenous variable.
 Transit Transition.
Inherited methods from IIDJump:
IIDJump
Inherited methods from Markov:
Markov
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Markov:
jprob, Pi
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 StateAccumulator : Accumulator : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Add up the values of the target state.
Transition:
$$s' = \min(s + CV(x),\ N-1).$$
$$P(s'=z\mid\alpha,\epsilon,\eta,\theta) = I\{z = \min(s + CV(x),\,N-1)\}.$$
Example:
decl totoff = new StateAccumulator("Total Offers",noffers,10);  //track total offers received up to 10
EndogenousStates(totoff);
Public methods
 StateAccumulator Create a variable that tracks the cumulative value of another state.
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Accumulator:
Target
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 StateAtTbar : LaggedState : Lagged : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Record the value of a state variable q at a specified time, starting the following period.
Transition:
$$s' = \cases{ 0 & if $t \lt$ Tbar\cr Target.v & if $t =$ Tbar\cr s & if $t \gt$ Tbar\cr}$$

IsReachable
Non-zero states are trimmed as unreachable for t≤ Tbar
See also:
LaggedState, ChoiceAtTbar
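Example:
A hedged sketch (the argument order Label, Target, Tbar is an assumption; health is a state variable assumed to be in the model already):

```ox
// Record the value of health at t=5; from t=6 onward h5 holds that value.
decl h5 = new StateAtTbar("h5", health, 5);
EndogenousStates(h5);
```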
Public fields
 Tbar const value of I::t to record at.
Public methods
 IsReachable virtual Prune non-zero values before Tbar
 StateAtTbar Record the value of a state variable at a given time (starting at the next period).
Inherited methods from LaggedState:
LaggedState
Inherited methods from Lagged:
Lagged
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Lagged:
Order, Target
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 StateBlock : StateVariable : Discrete : Quantity

A Block of Coevolving state variables.
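Example:
A hedged sketch of building a block from Coevolving members (the Coevolving constructor arguments shown here are assumptions):

```ox
decl weather = new StateBlock("weather");
// rain and temp evolve jointly inside the block.
weather -> AddToBlock( new Coevolving("rain",2), new Coevolving("temp",3) );
EndogenousStates(weather);
```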
Public fields
 Actual matrix of all actual values.
 Allv matrix of all current values
 rnge vector <0:N-1>, used in AV().
Public methods
 AddToBlock Add state variable(s) to a block.
 myAV virtual Sets and returns the vector of actual values of the block as a row vector.
 StateBlock Create a list of Coevolving state variables.
 Transit virtual Default Transit (transition for a state variable).
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 StateCounter : Counter : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Counts the periods in which value(s) of the target state x have occurred.

\(\tau\) (ToTrack) is the set of values of x to count.

Transition:

$$s' = s+I\{ CV(x) \in \tau\}I\{s \lt N-1\}.$$

Example:
decl wks  = new StateCounter("wksunemp",10,lagwork,<0>); //count periods (up to 10) in which state lagwork equals 0
EndogenousStates(wks);
Public methods
 StateCounter Create a variable that counts how many times another state has taken on certain values.
Inherited methods from Counter:
Counter, IsReachable
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Counter:
Reset, Target, ToTrack
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 StateCounterMaster : StateCounter : Counter : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Responsible for tracking reachability in a list of state counters for a target.
Public fields
 CVSList
Public methods
 IsReachable Trims the state space if the clock exhibits aging and there is a list of state counters for a Target, assuming all are initialized as 0.
 StateCounterMaster Create a StateCounter that checks reachability of an array of state counters.
Inherited methods from StateCounter:
StateCounter
Inherited methods from Counter:
Counter
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Counter:
Reset, Target, ToTrack
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 StateTracker : Tracker : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Indicates another state variable took on a value last period. Let t denote the state variable being tracked. Let r denote the value or vector of values to track.
q' = I{t ∈ r}.
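Example:
A hedged sketch, assuming the constructor mirrors ActionTracker (Label, Target, ToTrack) and that health is a state variable already in the model:

```ox
// wasill = 1 if health equaled 0 last period.
decl wasill = new StateTracker("wasill", health, <0>);
EndogenousStates(wasill);
```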
See also:
ActionTracker
Public methods
 StateTracker Create a binary variable that indicates if the previous value of state variable was in a set.
Inherited methods from Tracker:
IsReachable, Tracker
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Tracker:
Target, ToTrack
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 StateVariable : Discrete : Quantity

Base Class for elements of state vectors, \(\epsilon\), \(\eta\), \(\theta\), \(\gamma\) .
See also:
ExogenousStates, EndogenousStates, SemiExogenousStates, GroupVariables, StateBlock
Public fields
 Prune const Trim unreachable states if finite horizon clock is detected.
 TermValues A vector of values that end decision making. Equal to <> if the state is not terminating.
Public methods
 Check virtual Check dimensions of actual.
 IsBlock static Check if variable is a block of state variables.
 IsBlockMember static Check if variable is a valid member of a block.
 IsReachable virtual Default Indicator for intrinsic reachable values of a state variable.
 MakeTerminal Designate one or more values of the state variable as terminal.
 myAV virtual Return actual[v].
 StateVariable The base state variable creator.
 Track Track the state variable in predictions.
 Transit virtual Default Transit (transition for a state variable).
 UnChanged virtual Returns the transition for a state variable that is unchanged next period.
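Example:
A hedged sketch of marking a terminal value with MakeTerminal (the particular state variable used here is illustrative):

```ox
decl status = new SimpleJump("status",3);
// Decision making ends once status reaches its top value, 2.
status -> MakeTerminal(2);
EndogenousStates(status);
```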
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 StaticAux : AuxiliaryValue : Quantity

A wrapper around a function that returns an auxiliary value.
Public fields
 target const static function.
Public methods
 Realize virtual Default realized auxiliary value, sets v=1.0.
 StaticAux Create a wrapper for a static function (an AV()-compatible object).
Inherited methods from AuxiliaryValue:
AuxiliaryValue, Likelihood, Track
Inherited methods from Quantity:
SetVolume
Inherited fields from AuxiliaryValue:
indata
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 StaticP : Aging : NonStationary : Clock : StateBlock : StateVariable : Discrete : Quantity

A static one-shot program (T=1).

ClockType code to use this clock: StaticProgram
Example:
Use SetClock:
SetClock(StaticProgram);
which is equivalent to this:
SetClock(new StaticP());
which is also equivalent to this:
SetClock(NormalAging,1);
See also:
SetClock, ClockTypes
Public methods
 StaticP A static problem: Aging and T=1.
Inherited methods from Aging:
Aging, setPstar
Inherited methods from NonStationary:
Transit, Vupdate
Inherited methods from Clock:
Clock, Last, Synch
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 Stationary : Clock : StateBlock : StateVariable : Discrete : Quantity

Timing in a stationary environment.

ClockTypes tags to send to SetClock for Stationary:
InfiniteHorizon, Ergodic
Example:
Using Tags:
SetClock(InfiniteHorizon);

SetClock(Ergodic);

Sent directly:
SetClock(new Stationary(FALSE));

SetClock(new Stationary(TRUE));

See also:
SetClock, ClockTypes
Public methods
 Last virtual No period is the last period in a stationary environment.
 Stationary A stationary clock block.
Inherited methods from Clock:
Clock, setPstar, Synch, Vupdate
Inherited methods from StateBlock:
AddToBlock, myAV, StateBlock
Inherited methods from StateVariable:
IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Clock:
IsErgodic, normparam, t, tprime, Volume
Inherited fields from StateBlock:
Actual, Allv, rnge
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v

 SubEffect : FixedEffect : TimeInvariant : StateVariable : Discrete : Quantity

An element of the FixedEffectBlock.
Public fields
 block FixedEffectBlock this effect belongs to
 bpos Index into block array/vector
Public methods
 SubEffect Create a new SubEffect.
Inherited methods from FixedEffect:
FixedEffect
Inherited methods from TimeInvariant:
Update
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 SubState : ValueTriggered : Triggered : Augmented : StateVariable : Discrete : Quantity

Let a state variable transit only in one sub-period of a Divided clock.
Public methods
 IsReachable virtual Use majt for time period when checking reachability of the base state variable.
 SubState Make a state variable only transit in one subperiod of a Divided clock.
Inherited methods from ValueTriggered:
ValueTriggered
Inherited methods from Triggered:
Triggered, Update
Inherited methods from Augmented:
Augmented, SetActual, Synch
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Tauchen : Markov : Random : Autonomous : StateVariable : Discrete : Quantity

A one-dimensional correlated discretized normal process using Tauchen's approach.
See also:
MVNormal
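Example:
A hedged sketch; the parameter list (grid size, support width M, then mean, standard deviation, and correlation of the normal process) is an assumption, not the documented signature:

```ox
// 5-point Tauchen grid for a correlated (AR(1)) normal process.
decl z = new Tauchen("z", 5, 2.0, 0.0, 1.0, 0.9);
EndogenousStates(z);
```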
Public fields
 cp
 gaps const
 Grid
 M const support.
 pars const
 pts
 rnge
Public methods
 Tauchen Tauchen discretization of a correlated normal process.
Inherited methods from Markov:
Markov
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Markov:
jprob, Pi
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 TauchenRandomEffect : NormalRandomEffect : RandomEffect : TimeInvariant : StateVariable : Discrete : Quantity

Use Tauchen's method to discretize a normal variable for a permanent random effect.
Public fields
 M const
 pars const
Public methods
 Distribution virtual Update pdf, the distribution over the random effect.
 TauchenRandomEffect Create a permanent discretized normal random effect.
Inherited methods from NormalRandomEffect:
NormalRandomEffect
Inherited methods from RandomEffect:
RandomEffect
Inherited methods from TimeInvariant:
Update
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from RandomEffect:
fDist
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 TimeInvariant : StateVariable : Discrete : Quantity

Base container for an element of the group vector \(\gamma\).
Public methods
 Update Do Nothing and prohibit derived Updates.
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 TimeVariable : Coevolving : StateVariable : Discrete : Quantity

Base class for members of a Clock block.
Public methods
 TimeVariable .
Inherited methods from Coevolving:
Coevolving, myAV
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from Coevolving:
block, bpos
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Tracker : NonRandom : Autonomous : StateVariable : Discrete : Quantity

Indicates a state or action took on particular values last period.
See also:
StateTracker, ActionTracker
Public fields
 Target const Variable to track
 ToTrack const Values to track
Public methods
 IsReachable Default initial value for Tracker variables is 0.
 Tracker Indicator that a state or action took on particular value(s) last period.
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, Transit, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual, Update
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Triggered : Augmented : StateVariable : Discrete : Quantity

Container for augmented state variables in which a value or an action triggers a transition not present in the base state.
Public methods
 Triggered The base for triggered augmentations.
 Update virtual Default Augmented Update.
Inherited methods from Augmented:
Augmented, IsReachable, SetActual, Synch, Transit
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 UnFreeze : Triggered : Augmented : StateVariable : Discrete : Quantity

Unfreeze a frozen variable when a condition is met.
Public methods
 InitDist virtual
 Transit virtual
 UnFreeze
Inherited methods from Triggered:
Triggered, Update
Inherited methods from Augmented:
Augmented, IsReachable, SetActual, Synch
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 ValueTriggered : Triggered : Augmented : StateVariable : Discrete : Quantity

A state variable that augments a base transition so that the value of an AV()-compatible object triggers this state to transit to a special value.
Transition. Let
q denote this state variable.
b be the base state variable to be augmented (not added to the model itself)
s be the trigger object
t be the value of s that pulls the trigger.
r be the value to go to when triggered.
    q′ = I{s≠t} b′ + I{s==t} r
    
Example:
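A hedged sketch (the argument order is assumed to mirror ActionTriggered: base, trigger object, trigger value, reset value; elig is an AV()-compatible object assumed to be in the model):

```ox
// A benefit shock that is forced to 0 whenever elig equals 0.
decl ben = new ValueTriggered( new SimpleJump("ben",3), elig, 0, 0 );
ExogenousStates(ben);
```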
Public methods
 Transit virtual
 ValueTriggered Augment a base transition so the value of some other object trigger a special value.
Inherited methods from Triggered:
Triggered, Update
Inherited methods from Augmented:
Augmented, IsReachable, SetActual, Synch
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF
Inherited methods from Quantity:
SetVolume
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Xponential : SimpleJump : IIDJump : Markov : Random : Autonomous : StateVariable : Discrete : Quantity

An IID jump variable whose actual values are quantiles of the exponential distribution with decay rate \(\gamma\).
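Example:
A hedged sketch, assuming the constructor takes a label, the number of values, and the decay rate \(\gamma\):

```ox
// 8-point discretized exponential with decay rate 0.5.
decl dur = new Xponential("dur", 8, 0.5);
ExogenousStates(dur);
```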
Public fields
 gamma const
Public methods
 Update
 Xponential Create an exponentially distributed discretized jump variable.
Inherited methods from SimpleJump:
SimpleJump, Transit
Inherited methods from IIDJump:
IIDJump
Inherited methods from Markov:
Markov
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Markov:
jprob, Pi
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Zvariable : Nvariable : SimpleJump : IIDJump : Markov : Random : Autonomous : StateVariable : Discrete : Quantity

A jump variable whose actual values are quantiles of the standard normal distribution.
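Example:
A minimal sketch, assuming the constructor takes a label and the number of points:

```ox
// Actual values are 7 quantiles of the N(0,1) distribution.
decl z = new Zvariable("z", 7);
ExogenousStates(z);
```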
Public methods
 Zvariable Create a standard normal N(0,1) discretized jump variable.
Inherited methods from Nvariable:
Nvariable, Update
Inherited methods from SimpleJump:
SimpleJump, Transit
Inherited methods from IIDJump:
IIDJump
Inherited methods from Markov:
Markov
Inherited methods from StateVariable:
Check, IsBlock, IsBlockMember, IsReachable, MakeTerminal, myAV, StateVariable, Track, UnChanged
Inherited methods from Discrete:
Discrete, GetN, PDF, SetActual
Inherited methods from Quantity:
SetVolume
Inherited fields from Nvariable:
pars
Inherited fields from Markov:
jprob, Pi
Inherited fields from StateVariable:
Prune, TermValues
Inherited fields from Discrete:
actual, N, pdf, subv, vals
Inherited fields from Quantity:
L, logf, pos, track, v, Volume

 Global

Enumerations
TransOutput elements of the array returned by Transit: Qind, Qprob, TransOutput

 StripZeros

StripZeros ( trans )
Remove all transitions for which the transition probability is EXACTLY 0.

 Absorbing

 Absorbing

Absorbing :: Absorbing ( L , fPi )
Create a binary endogenous absorbing state.
Parameters:
L label
fPi AV()-compatible object that returns either:
a probability p of transiting to state 1
a vector equal in length to C.
[default = 0.5]: absorbing state happens with probability 0.5.
Comments:
fPi is only called if the current value is 0.
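Example:
A sketch using the documented signature Absorbing(L, fPi):

```ox
// Binary absorbing state: absorption occurs with probability 0.1 each period.
decl dead = new Absorbing("dead", 0.1);
EndogenousStates(dead);
```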

 fPi

const decl fPi [public]

 p

decl p [public]

 Transit

Absorbing :: Transit ( )

 Accumulator

 Target

const decl Target [public]
Variable to track

 Transit

 ActionAccumulator

 ActionAccumulator

ActionAccumulator :: ActionAccumulator ( L , N , Action )
Create a variable that counts how many times an action has taken on certain values.
Parameters:
L label
N integer, maximum number of times to count
Action ActionVariable to accumulate values for.

 Transit

virtual ActionAccumulator :: Transit ( )
.

 x

decl x [public]

 y

decl y [public]

 ActionCounter

 ActionCounter

ActionCounter :: ActionCounter ( L , N , Act , ToTrack , Reset , Prune )
Create a variable that counts how many times an action has taken on certain values.
Parameters:
L label
N integer, maximum number of times to count
Act ActionVariable to track.
ToTrack vector, values of Act to count, default=<1>
DoAll: all non-zero values
Reset AV()-compatible binary value that resets the count if TRUE (default=0).
Prune TRUE (default), prune unreachable states if finite horizon detected.
Example:
Track work==1 up to 10 years, no reset.
decl exper = new ActionCounter("Yrs Experience",10,work);
EndogenousStates(exper);

 inc

decl inc [public]

 ActionCounterMaster

 ActionCounterMaster

ActionCounterMaster :: ActionCounterMaster ( L , N , Act , ToTrack )
Create an ActionCounter that checks reachability of an array of state counters. This state variable is created by DP::ActionValueCounters
Parameters:
L label
N integer, maximum number of times to count
Act ActionVariable to track.
ToTrack integer or vector, values of Act to count.

 CVSList

decl CVSList [public]

 IsReachable

ActionCounterMaster :: IsReachable ( )
Trims the state space if the clock exhibits aging and there is a list of state counters for a Target, assuming all are initialized as 0.

 ActionTracker

 ActionTracker

ActionTracker :: ActionTracker ( L , Target , ToTrack , Prune )
Create a binary variable that indicates if the previous value of action variable was in a set.
Parameters:
L Label
Target ActionVariable to track
ToTrack integer or vector of values to track
Prune Prune in finite horizon (starts at 0)
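Example:
A sketch using the documented signature (search is an ActionVariable assumed to be in the model):

```ox
// searched = 1 if search equaled 1 last period; prune in finite horizon.
decl searched = new ActionTracker("searched", search, <1>, TRUE);
EndogenousStates(searched);
```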

 d

decl d [public]

 Transit

virtual ActionTracker :: Transit ( )
.

 ActionTriggered

 ActionTriggered

ActionTriggered :: ActionTriggered ( b , t , tv , rval )
Create a new augmented state variable for which an action triggers the change.
Parameters:
b the base StateVariable whose transition is augmented.
t the ActionVariable that triggers a different transition
tv the integer (actual) or vector of integer values of t that trigger the transition [default=1].
rval the integer (current) value of this state variable when the trigger occurs [default=0].
If rval=-1, then this.N = b.N+1 and rval = b.N.

The transition for an action triggered state:
Prob( q' = v ) = I{t∈tv}I{v==rval} + (1-I{t∈tv})Prob(b′=v)
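Example:
A sketch using the documented signature ActionTriggered(b, t, tv, rval):

```ox
decl work = new ActionVariable("work",2);
decl quit = new ActionVariable("quit",2);
// Tenure follows the counter's transition unless quit==1, which sends it to 0.
decl ten  = new ActionTriggered( new ActionCounter("ten",10,work), quit, 1, 0 );
EndogenousStates(ten);
```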

 Transit

virtual ActionTriggered :: Transit ( )

 ActionVariable

 ActionVariable

ActionVariable :: ActionVariable ( L , NorVLabels )
Create a new action variable.
Parameters:
L string a label or name for the variable.
default = "a"
NorVLabels positive integer, number of values the variable can take on.
default N=1 is a constant, which can be included as a placeholder for extensions of a model.
OR
N-array of strings, holding labels for each choice (used in printing)
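Example:
A minimal sketch of both forms of the second argument; Actions() adds the variables to the model:

```ox
decl work = new ActionVariable("work", 2);               // values 0,1
decl move = new ActionVariable("move", {"stay","go"});   // labeled values
Actions(work, move);
```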

See also:
Actions, FeasibleActions

 myAV

virtual ActionVariable :: myAV ( )
Return A[][pos].
See also:
Alpha, AV, Update

 myCV

virtual ActionVariable :: myCV ( )
Return C[][pos].
See also:
Alpha, CV, Update

 myEV

ActionVariable :: myEV ( )
Return aC[][pos], current external value.
See also:
Alpha, CV, Update

 vL

decl vL [public]
vector of value labels.

 AgeBrackets

 AgeBrackets

AgeBrackets :: AgeBrackets ( Brackets )
Create an aging clock with brackets.
Parameters:
Brackets vector of period lengths

ClockType code to use this clock: RandomAging
Example:
Use SetClock with a vector of bracket lengths:
SetClock(RandomAging,<5,5,10>);
which is equivalent to this:
SetClock(new AgeBrackets(<5,5,10>));
See also:
SetClock, ClockTypes

 Brackets

const decl Brackets [public]
Vector of period lengths for each age

 Last

virtual AgeBrackets :: Last ( )
Return flag for very last period possible.

 setPstar

virtual AgeBrackets :: setPstar ( notsynched )

 TransMatrix

decl TransMatrix [public]
Transition matrix based on Brackets

 Aging

 Aging

Aging :: Aging ( T )
Normal Finite Horizon Aging.

ClockType code to use this clock: NormalAging
Example:
Use SetClock, 10 time periods:
SetClock(NormalAging,10);
which is equivalent to this:
SetClock(new Aging(10));
See also:
SetClock, ClockTypes

 setPstar

virtual Aging :: setPstar ( notsynched )

 Asset

 Asset

Asset :: Asset ( L , N , r )
Create a new asset state variable.
Parameters:
L label
N number of values
r AV()-compatible object, interest rate on current holding.
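Example:
A hedged sketch using the documented signature Asset(L, N, r):

```ox
// 25-point asset grid; r could also be a Parameter to be estimated.
decl A = new Asset("A", 25, 0.05);
EndogenousStates(A);
```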
See also:
Discretized

 r

const decl r [public]
AV()-compatible object, interest rate on current holding.

 Augmented

 Augmented

Augmented :: Augmented ( Lorb , N )
Base creator for augmented state variables.
Parameters:
Lorb either a StateVariable object, the base variable to augment,
or a string, the label for this variable. In this case the base is Fixed: b = new Fixed(Lorb).
N integer; if N > b.N, the number of points of the augmented variable; otherwise ignored and this.N = b.N.

 IsReachable

virtual Augmented :: IsReachable ( )
Default Indicator for intrinsic reachable values of a state variable.
Returns:
TRUE
Comments:
Most derived classes will not provide a replacement. Examples that do (or could) are Forget augmented state variables; ActionAccumulator and Duration in normal aging models.

 SetActual

virtual Augmented :: SetActual ( MaxV )
Normalize the actual values of the base variable and then copy them to these actuals.
Parameters:
MaxV positive real, default = 1.0 Sets the actual vector to 0,…, MaxV.
See also:
Update

 Synch

Augmented :: Synch ( )
Synchronize base state variable value with current value. Not called by user code.

 Transit

virtual Augmented :: Transit ( )
Default Transit (transition for a state variable).

This is a virtual method, so derived classes replace this default version with the one that goes along with the kind of state variable.

Returns:
a 2x1 array, {F,P} where
F is a 1×L row vector of feasible (non-zero probability) states next period.
P is either a Alpha::N × L or 1 × L matrix of action-specific transition probabilities, Ρ(q′;α,η,θ).
The built-in Transit() returns { <0>, CondProbOne }. That is, with probability one the value of the state variable next period is 0.

 Update

virtual Augmented :: Update ( )
Default Augmented Update. The Update() for the base type is called, and its actual vector is copied to the augmented actuals.

Not called by user code

 AuxiliaryValue

 AuxiliaryValue

AuxiliaryValue :: AuxiliaryValue ( L )
Create a new element of χ, the space of auxiliary outcomes.
Parameters:
L label

See also:
AuxiliaryOutcomes

 indata

decl indata [public]
Observed in external data.

 Likelihood

virtual AuxiliaryValue :: Likelihood ( y )
Default contribution to likelihood.
Parameters:
y, the current realized outcome, υ.
Returns:
1.0

 Realize

virtual AuxiliaryValue :: Realize ( y )
Default realized auxiliary variable; sets v=1.0.
Parameters:
y, the current realized outcome, υ.

 Track

 BinaryChoice

 BinaryChoice

BinaryChoice :: BinaryChoice ( L )
Create a binary action variable.

 BirthAndSex

 b

const decl b [public]
Binary ActionVariable.

 BirthAndSex

BirthAndSex :: BirthAndSex ( L , b , ratios )
Create a 3-valued state variable that records a binary outcome of a binary choice.
Parameters:
L label
b BinaryChoice action variable.
ratios AV()-compatible object that returns a 2-dimensional vector of probabilities (ratio[0]+ratio[1]=1)

Transition values:

  s'        Prob            Interpretation
  ------------------------------------------------------------------
  0        CV(b)==0         Choose not to have a child this period
  1        CV(b)*ratio[0]   Had child, it's a boy
  2        CV(b)*ratio[1]   Had child, it's a girl

 ratios

const decl ratios [public]
2x1 proportions.

 ChoiceAtTbar

 ChoiceAtTbar

ChoiceAtTbar :: ChoiceAtTbar ( L , Target , Tbar , Prune )
Record the value of an action variable at a given time.
Parameters:
L label
Target ActionVariable to record
Tbar clock time at which to record choice
Prune TRUE [default], prune unreachable states (non-zero values) before Tbar

$$s' = \cases{ 0 & if t < Tbar\cr CV(Target) & if t = Tbar\cr s & if t > Tbar\cr}$$

Example:
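A hedged sketch, assuming an ActionVariable named work already exists and the choice made at t=5 should be remembered:

```
decl w5 = new ChoiceAtTbar("w5",work,5);  // records CV(work) at t=5 (hypothetical names)
EndogenousStates(w5);
```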

 IsReachable

virtual ChoiceAtTbar :: IsReachable ( )
Prune non-zero values before Tbar

 Tbar

const decl Tbar [public]

 Clock

 Clock

Clock :: Clock ( Nt , Ntprime )
Base clock block.
Parameters:
Nt integer, the number of different values t takes on
Ntprime integer, the number of values t' takes on.
Comments:
Typically use a derived clock type or SetClock. If you create your own clock that is not derived from an existing type, your code must call this routine.

 IsErgodic

const decl IsErgodic [public]
Store Ergodic Distribution.

 Last

virtual Clock :: Last ( )
Flag for last period. This is the default method, used by simple NonStationary environments. The Ergodic clock replaces this one and other clocks may as well.

If TRUE then transitions are not computed.

Returns:
TRUE if current time is the last possible.

 normparam

static decl normparam [public]
Norm parameter for stationary convergence. Default=2, Euclidean. See Ox's norm(). When A is a vector: 0: infinity norm (maximum absolute value); 1: sum of absolute values; 2: square root of sum of squares; p: p-th root of sum of absolute elements raised to the power p; -1: minimum absolute value.

 setPstar

virtual Clock :: setPstar ( notsynched )

 Synch

virtual Clock :: Synch ( )

 t

const decl t [public]
t, current TimeVariable

 tprime

const decl tprime [public]
t', next period TimeVariable used during V iteration to track possible clock values next period.

 Volume

static decl Volume [public]
Volume for clocks (static so the internal variable need not be accessed).

 Vupdate

virtual Clock :: Vupdate ( )
The base calculation to be carried out after the value of all states at time t have been computed.
The clock Vupdate() is called by Update() at the end of one Bellman iteration.

The base version returns the norm() of the change in the value function for convergence check.
This is the stationary calculation. See Vupdate() for an alternative
Other specialized clocks copy part of the value functions into the vector of values for access in earlier time periods.
Returns:
||V(θ)-V'(θ′)||
See also:
Tune, normparam, vtoler

 Coevolving

 block

decl block [public]
StateBlock that I belong to

 bpos

decl bpos [public]
Index into block array/vector

 Coevolving

Coevolving :: Coevolving ( Lorb , N )
Create a new Coevolving random variable.
Parameters:
Lorb label or base StateVariable
N number of values it takes on.
See also:
StateBlock

 myAV

virtual Coevolving :: myAV ( )
Return actual[v]. If s contains a state variable then AV(s) calls s->myAV()
Returns:
actual[v].
See also:
CV, AV, Update

 CorrelatedEffect

 block

decl block [public]
EffectBlock that I belong to

 bpos

decl bpos [public]
Index into block array/vector

 CorrelatedEffect

CorrelatedEffect :: CorrelatedEffect ( L , N )
Create a new CorrelatedEffect.
Parameters:
L label
N number of values it takes on.
See also:
RandomEffectBlock

 Counter

 Counter

Counter :: Counter ( L , N , Target , ToTrack , Reset , Prune )
Create a new counter variable.
Parameters:
L label
N number of values
Target state or action to track
ToTrack integer or vector of values to count; DoAll tracks all non-zero values
Reset AV()-compatible reset-to-0 flag
Prune TRUE [default]: prune assuming the counter starts at 0.
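As a sketch, a counter for periods spent working (the names work and yrswork and the cap of 10 values are hypothetical):

```
decl yrswork = new Counter("yrswork",10,work,<1>);  // counts periods with work=1, up to 9
EndogenousStates(yrswork);
```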

 IsReachable

virtual Counter :: IsReachable ( )
Trims the state space assuming counter is initialized to 0.

 Reset

const decl Reset [public]
AV()-compatible reset-to-0 flag

 Target

const decl Target [public]
Variable to track

 ToTrack

const decl ToTrack [public]
Values to track

 Transit

 Cycle

 Cycle

Cycle :: Cycle ( L , N )
Create a deterministic cycle variable.
Parameters:
L label
N integer, number of periods in the cycle
Example:
decl qtr = new Cycle("Quarter",4);
EndogenousStates(qtr);

 Deterministic

 Deterministic

Deterministic :: Deterministic ( L , nextValues )
Create a state variable with an arbitrary deterministic transition.
Parameters:
L label
nextValues vector of next (current) values

Example:
    s = new Deterministic(L,<1;0>);
s will cycle between 0 and 1 every period

 nextValues

decl nextValues [public]
matrix with transition.

 Divided

 delts

decl delts [public]
zeros(SubT-1,0)|δ.

 Divided

Divided :: Divided ( MajT , SubT , HasInit , HasFinal )
Create a clock divided into subperiods.
Parameters:
MajT integer number of periods
0 infinite horizon
SubT positive integer, sub-periods
HasInit FALSE [default] t=0 is a sub-period (so the subperiod increments and the major period stays the same)
TRUE t=0 is not subdivided (so the subperiod stays 0 in the following period and the major period increments).
HasFinal FALSE [default] the final period is the final sub-period.
TRUE the final period is an undivided period (so its subperiod is 0).
This cannot be combined with an infinite horizon.

Example:
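A hedged sketch: a clock with 5 major periods, each split into 3 sub-periods (the values are hypothetical):

```
SetClock(new Divided(5,3));
```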


 HasFinal

const decl HasFinal [public]
Final subdivided period (TRUE/FALSE).

 HasInit

const decl HasInit [public]
Initial subdivided period (TRUE/FALSE).

 Last

virtual Divided :: Last ( )
Not stationary and last period.

 MajT

const decl MajT [public]
Number of major periods, 0=Infinite horizon.

 setPstar


 SubT

const decl SubT [public]
Number of sub periods.

 Synch

virtual Divided :: Synch ( )

 Transit

virtual Divided :: Transit ( )
Transition for sub-divided time.

 Update

virtual Divided :: Update ( )

 Vnext0

decl Vnext0 [public]

 Vupdate

virtual Divided :: Vupdate ( )
Check newly computed values; an alternative to computing norm of value function differences.
Returns:
+∞ if the newly computed state values are all well-defined
.NaN otherwise

 Duration

 add1

decl add1 [public]

 Current

const decl Current [public]

 Duration

Duration :: Duration ( L , Current , Lag , N , MaxOnce , ToTrack , Prune )
Create a variable that counts the consecutive periods a state or action has had the same value.
Parameters:
L label
Current ActionVariable or StateVariable that holds the current value
Lag StateVariable holding previous value
or vector of values to count runs for
N integer, maximum number of periods to count
MaxOnce FALSE [default] spells repeat; TRUE once max is reached it remains frozen.
Prune TRUE [default]: prune states if finite horizon detected.
Example:
streak = new Duration("Streak",Won,<1>,10); //track winning streaks up to 9 periods (9 means "9 or more");
Suppose Result equals 0 for loss, 1 for a tie and 2 for a win. Then the right-censored unbeaten streak is
noloss = new Duration("Unbeaten",Result,<1,2>,10); //track unbeaten streaks up to 9 periods (9 means "9 or more")
Choice = new ActionVariable("c",3);
prechoice = new LaggedAction("lagc",Choice);
contchoice = new Duration("Streak",Choice,prechoice,5); //track streaks of making same choice up to 5 periods

 g

decl g [public]

 isact

const decl isact [public]

 Lag

const decl Lag [public]

 MaxOnce

const decl MaxOnce [public]

 Episode

 End

const decl End [public]
episode end probabilities

 Episode

Episode :: Episode ( L , K , T , Onset , End , Finite )
K mutually exclusive episodes.
Parameters:
L string, label
K integer, number of different episodes. k=0 is a special no-episode state. Spells with k>0 end with a transition k'=0
T integer, maximum duration of episodes, or maximum duration to track if not Finite
Onset AV()-compatible K vector of onset probabilities (first element is k=0 spell continuation probability)
End AV()-compatible probability of current spell ending.
Finite TRUE, T is the actual limit of spells. At t=T-1 transition is k'=0, t'=0.
FALSE, T is limit of duration tracking. Spell ends with prob. calculated at first case of t=T-1.

 Finite

const decl Finite [public]
all spells end at t=T-1.

 k

const decl k [public]
Coevolving, current type of episode

 Onset

const decl Onset [public]
episode onset probabilities

 probT

decl probT [public]
Stores end probability at t=T-1 for non-Finite episodes

 t

const decl t [public]
Coevolving, duration of episode, NOT tracked for k.v=0

 Fixed

 Fixed

Fixed :: Fixed ( L )
Create a constant entry in the state vector.
Parameters:
L label

Example:
Create a fixed value that will later hold a health indicator (but for now is always 0):
h = new Fixed("health");
Comments:
The variable only takes on the value 0. This can be used to hold a place for a variable to be added later.

 FixedAsset

 delta

const decl delta [public]
ActionVariable.

 FixedAsset

FixedAsset :: FixedAsset ( L , N , r , delta )
Create a new FIXED asset state variable.
Parameters:
L label
N number of values
r AV()-compatible object, interest rate on current holding.
delta ActionVariable
See also:
Discretized

 FixedEffect

 FixedEffect

FixedEffect :: FixedEffect ( L , N )
A state variable that is non-random and invariant for an individual DP problem.
Example:
enum{male,female,Sex}
decl sex = new FixedEffect("sex",Sex);
See also:
GroupVariables

 FixedEffectBlock

 FixedEffectBlock

FixedEffectBlock :: FixedEffectBlock ( L )
Create a list of FixedEffect state variables.
Parameters:
L label for block

 Forget

 Forget

Forget :: Forget ( b , t , FCond , tv , rval )
An augmented state variable that 'forgets' its value when an action leads to a (semi-) permanent condition.
Parameters:
b base StateVariable to augment
t the ActionVariable that triggers a different transition
FCond (permanent) CV-compatible value that indicates condition already exists if TRUE
FALSE otherwise
tv the integer (actual) or vector of integer values that triggers the transition [default=1].
rval the integer (current) value of this state variable when the trigger occurs [default=0].
If rval=-1, then this.N = b.N+1 and the reset value is rval = b.N.

Comments:
IsReachable() prunes the state space accordingly

 IsReachable

virtual Forget :: IsReachable ( )
Trims the state space of values that can't be reached because they are forgotten.
Returns:
TRUE if value is not forgotten or equals the reset value
FALSE otherwise

 ForgetAtT

 ForgetAtT

ForgetAtT :: ForgetAtT ( b , T )
Create an augmented state variable that 'forgets' its value at t==T* (effectively at T*+1).
Parameters:
b base StateVariable to augment
T the time at which to forget (T*)
Comments:
IsReachable() prunes the state space accordingly

 IsReachable

ForgetAtT :: IsReachable ( )
Trim states because t > T* .

 Freeze

 Freeze

Freeze :: Freeze ( b , t )
Augment a state variable so it freezes at its current value as long as a trigger is TRUE.
Parameters:
b base StateVariable
t AV()-compatible object
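A minimal sketch, assuming wbase is an existing base state variable and emp is a binary quantity that equals 1 while employed (both hypothetical names):

```
decl w = new Freeze(wbase,emp);  // wbase stops evolving while CV(emp)==1
EndogenousStates(w);
```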

 Transit

virtual Freeze :: Transit ( )

 IIDBinary

 IIDBinary

IIDBinary :: IIDBinary ( L , Pi )
Create binary state variable for which Prob(s=1) = Pi.
Parameters:
L label
Pi probability of 1 [default = 0.5]
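A sketch of an IID binary shock (the label and probability are hypothetical):

```
decl rain = new IIDBinary("rain",0.3);  // Prob(rain=1)=0.3 each period
ExogenousStates(rain);                  // IID shocks belong in the exogenous vector
```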

 IIDJump

 IIDJump

IIDJump :: IIDJump ( L , Pi )
Create a new IID Jump process.
Parameters:
L label
Pi column vector or a Simplex parameter block or a static function
Comments:
If a function is sent then it has to be well-defined before spaces are created. It must return a column vector of the right length at this time, but the values are not used until transitions are created.
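A sketch with a fixed three-point distribution (the probabilities are hypothetical):

```
decl eps = new IIDJump("eps",<0.2;0.5;0.3>);  // three values drawn IID each period
ExogenousStates(eps);
```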

 Indicator

 iacted

const decl iacted [public]

 Indicator

Indicator :: Indicator ( target , myval , iobj , prefix )
Create an indicator (or indicator interacted with another outcome).
Parameters:
target State or Action variable to create indicator value for
myval value to indicate
iobj integer (no indicator) or an outcome to interact.
prefix string for column matching or integer (not in data)

 iobj

const decl iobj [public]

 myval

const decl myval [public]

 Realize

virtual Indicator :: Realize ( y )
Default realized auxiliary variable; sets v=1.0.
Parameters:
y, the current realized outcome, υ.

 target

const decl target [public]

 ttype

const decl ttype [public]

 Jump

 Jump

Jump :: Jump ( L , N , Pi )
Create a variable that jumps with probability
Parameters:
L label
N number of values
Pi AV()-compatible jump probability.

 Pi

const decl Pi [public]
parameter for jump probability π

 KeptZeta

 CDF

virtual KeptZeta :: CDF ( zstar )
The default cdf of a kept continuous variable: Φ(z*).
Parameters:
zstar, cut-off value
Returns:
F(zstar)

 DynamicTransit

virtual KeptZeta :: DynamicTransit ( z )
Transition matrix of the block-diagonal form ( Pold 0 ; 0 Pold ).

 held

const decl held [public]

 InitDynamic

virtual KeptZeta :: InitDynamic ( cth , VV )

 keep

const decl keep [public]

 KeptZeta

KeptZeta :: KeptZeta ( L , N , keep , held )
Create a discretized version of the continuous ζ variable that enters the state when accepted (or kept).
Parameters:
L label
N number of points the state variable will take on.
keep ActionVariable() that indicates ζ is kept. Next period this variable will contain a discrete approximation to it.
held

 Quantile

virtual KeptZeta :: Quantile ( u )
The default quantile of a kept continuous variable: Φ⁻¹(u).
Parameters:
u, vector of probabilities
Returns:
F⁻¹(u)

 Lagged

 IsReachable

virtual Lagged :: IsReachable ( )
Presumes an initial value of 0 for prunable clocks.
Returns:
TRUE if not pruning, or not a prunable clock, or t > Order, or value = 0

 Lagged

Lagged :: Lagged ( L , Target , Prune , Order )
Takes on a previous value of another state or action.
Parameters:
L label
Target Variable to lag
Prune TRUE [default], prune state space assuming initial value of 0
Order, order of the lag
[default=1] current value becomes next value tomorrow
Comments:
Users should not create a variable directly of this type. They should use the derived classes LaggedState and LaggedAction
See also:
KLaggedState, KLaggedAction

 Order

const decl Order [public]
Order of lag (for pruning).

 Target

const decl Target [public]
Variable to track.

 Transit

 LaggedAction

 acol

decl acol [public]

 LaggedAction

LaggedAction :: LaggedAction ( L , Target , Prune , Order )
Create a variable that tracks the previous value of action variable.
Parameters:
L label
Target ActionVariable to track.
Prune TRUE [default]: prune non-zero states at t=0 if finite horizon detected.
Order number of lags [default = 1]
Example:
wrked = new LaggedAction("Worked Last Year",work);
See also:
KLaggedAction

 nxtv

decl nxtv [public]

 LaggedState

 LaggedState

LaggedState :: LaggedState ( L , Target , Prune , Order )
Create a variable that tracks the previous value of another state variable.
Parameters:
L label
Target StateVariable to track.
Prune [optional default=TRUE] prune unreachable states automatically if finite horizon
Order [optional default=1] order of the lag (how many periods to prune)

Example:
prevoccup = new LaggedState("Prev",occup);
See also:
KLaggedState

 LiquidAsset

 isact

const decl isact [public]

 LiquidAsset

LiquidAsset :: LiquidAsset ( L , N , NetSavings )
Create a new LIQUID asset state variable.
Parameters:
L label
N number of values
NetSavings AV()-compatible static function of the form NetSavings(),
or an ActionVariable

Example. A model has a choice of consume or save with a budget constraint of the form: $$c+a \le Y+(1+r)A.$$ Here \(c\) is consumption, \(a\) are assets to carry forward to tomorrow, \(Y\) is non-asset income, \(r\) is the current interest rate and \(A\) are assets that were carried forward yesterday. Rewriting in terms of feasible actions: $$a_L \le a \le Y+(1+r)A.$$ Consumption is then total income minus \(a.\) Thus, if \(A\) and \(a\) are continuous the transition is simply: $$A^\prime = a.$$ However, if \(a\) and \(A\) are both discretized, then two things occur. First, the current values are simply indices (0,1,...), so the actual values need to be set using SetActual() or by writing Update() functions to replace the virtual versions if the actual values depend on parameters. Second, if the grid values for \(a\) and \(A\) are not the same (different coarseness between consumption and assets) then the transition may not be on the grid. In this case, let \(AV(a)\) equal \(AV(A_i)+p(AV(A_{i+1})-AV(A_i)),\) where \(p\) is the fraction of the gap between two successive values of \(A\). Then: $$A^\prime = \cases{ A_i & with prob. 1-p\cr A_{i+1} & with prob. p\cr}.$$

    a = new ActionVariable("a",5);
    A = new LiquidAsset("A",5,a);
    a->SetActual(<-10;0;10;100;1000>);
    A->SetActual(<-10;0;10;100;1000>);

FeasibleActions() { return AV(a) .<= AV(Y)+(1+r)*AV(A); }

Note that no action may be feasible if AV(Y) is too small.

See also:
Discretized

 NetSavings

const decl NetSavings [public]
AV()-compatible static function or ActionVariable.

 LogNormalOffer

 LogNormalOffer

LogNormalOffer :: LogNormalOffer ( L , N , Pi , Accept , pars )
Create a discretized lognormal offer.
Parameters:
L label
N number of distinct offers. 0 means no offer
Pi double, Parameter or static function, prob. of an offer
Accept ActionVariable, acceptance of the offer.
pars 2x1 vector or array of NormalParams

 pars

const decl pars [public]

 Update

virtual LogNormalOffer :: Update ( )
Updates the actual values. actual = exp{ μ | σΦ⁻¹(v/N)+μ }
Comments:
v takes on values 1,2,...,N-1.
See also:
AV

 Longevity

 Last

virtual Longevity :: Last ( )
With Longevity the last period (death) is definitively the last.

 Longevity

Longevity :: Longevity ( T , MortProb )
Random death and uncertain maximum lifetime.
Parameters:
T number of age phases. T-1 = death; T-2 = stationary last period.
MortProb AV() compatible probability of death
Comments:
EV at t=T-1 is computed as usual as a terminal state.
EV at t=T-2 is treated as an infinite horizon problem and iterated on.

 setPstar

virtual Longevity :: setPstar ( notsynched )

 twilight

const decl twilight [public]

 Vupdate

virtual Longevity :: Vupdate ( )
Mixed value updating.
setPstar determines behaviour
If TRUE then either t is deterministic or convergence was reached on the last iteration.
So copy current values for use by younger t and check for validity
If FALSE then check for convergence

 Markov

 jprob

decl jprob [public]

 Markov

Markov :: Markov ( L , Pi )
Create a discrete Markov process. The dimensions of the transition probabilities determines the number of values taken on.
Parameters:
L label
Pi N×1 array of AV(Alpha::C) compatible transition probabilities.
Example:
  decl tmat = < 0.90 ~ 0.09 ~ 0.05;
                0.02 ~ 0.80 ~ 0.30;
                0.08 ~ 0.11 ~ 0.65 >;
  x = new Markov("x",tmat);
see TransitionMatrix()

 Pi

const decl Pi [public]

 Mortality

 DeathV

decl DeathV [public]
EV at t=T-1, value of death

 Mortality

Mortality :: Mortality ( T , MortProb )
Set clock to be deterministic aging with early random death.
Parameters:
T length of horizon. T-1 = death;
MortProb AV()-compatible probability of early death
Comments:
EV at t=T-1 is stored and used for the value of an early death.

 MortProb

const decl MortProb [public]
CV-compatible mortality probability

 mp

decl mp [public]

 setPstar

virtual Mortality :: setPstar ( notsynched )

 Tstar

const decl Tstar [public]

 Vupdate

virtual Mortality :: Vupdate ( )
Mixed value updating.
setPstar determines behaviour
If TRUE then either t is deterministic or convergence was reached on the last iteration.
So copy current values for use by younger t and check for validity
If FALSE then check for convergence

 MultiIndicator

 MultiIndicator

MultiIndicator :: MultiIndicator ( targlist , nvec , iobj , prefix )

 Realize

MultiIndicator :: Realize ( y )
Default realized auxiliary variable; sets v=1.0.
Parameters:
y, the current realized outcome, υ.

 MVIID

 M

const decl M [public]
Number of points per variable

 MtoN

const decl MtoN [public]
total number points in the block.

 MVIID

MVIID :: MVIID ( L , N , M , base )
Create a multivariate normal state block.
Parameters:
L label
N dimensions
M number of points in each dimension
base

 MVNormal

 CholLT

const decl CholLT [public]
AV()-compatible vector-valued object which returns vech() for lower triangle of C, the Choleski decomposition of the variance matrix Ω, Ω = CC'.

 mu

const decl mu [public]
vector of means μ

 MVNormal

MVNormal :: MVNormal ( L , N , M , mu , CholLT )
Create a block for a multivariate normal distribution (IID over time).
Parameters:
L label for block
N integer, length of the vector (number of state variables in block)
M integer, number of values each variable takes on (so M^N is the total number of points added to the state space)
mu either a Nx1 constant vector or a Parameter Block containing means
CholLT either a N(N+1)/2 x 1 vector containing the lower triangle of the Choleski matrix or a parameter block for it

NOTE: the first actual value always equals the mean (used for the KW approximation).


 Update

virtual MVNormal :: Update ( curs , first )
Updates the grid of Actual values.
Comments:
Like all Update routines, this is called at UpdateTime.

 zvals

const decl zvals [public]
matrix of Z vals for simulated actual values

 MVNvectorized

 MVNvectorized

MVNvectorized :: MVNvectorized ( L , Ndraws , Ndim , pars , myseed )
Create a single IID variable which vectorizes a multivariate normal using pseudo-random draws.
Parameters:
L label
Ndraws number of draws of the vector.
Ndim width of the vector.
pars 2-array of NormalParams: mean and Cholesky decomposition (in the Nsigma spot)
myseed 0 [default] do not set the seed. Positive will set the seed for the draw of the IID Z matrix. See Ox's ranseed().

 myAV

MVNvectorized :: myAV ( )

 Ndim

decl Ndim [public]

 Update

MVNvectorized :: Update ( )

 vecactual

decl vecactual [public]

 zvals

decl zvals [public]

 Noisy

 eps

decl eps [public]
implied normal value.

 Likelihood

Noisy :: Likelihood ( y )
Likelihood contribution of a noisy auxiliary value.

 Linear

const decl Linear [public]
noise is additive (not multiplicative).

 Noisy

Noisy :: Noisy ( truevalue , sigma , Linear , Label )
Create an auxiliary value that adds normally-distributed noise to the actual value.
Parameters:
truevalue AV()-compatible object
sigma CV()-compatible standard deviation
Linear TRUE error is additive
FALSE error is multiplicative and log-normal
Label if truevalue does not have a L member this will be used

 Realize

Noisy :: Realize ( y )
Default realized auxiliary variable; sets v=1.0.
Parameters:
y, the current realized outcome, υ.

 sigma

const decl sigma [public]
standard deviation of the normal noise.

 truevalue

const decl truevalue [public]
Object that is measured with noise.

 NonDeterministicAging

 sePstar


 Vupdate

virtual NonDeterministicAging :: Vupdate ( )
Mixed value updating.
setPstar determines behaviour
If TRUE then either t is deterministic or convergence was reached on the last iteration.
So copy current values for use by younger t and check for validity
If FALSE then check for convergence

 NonStationary

 Transit

virtual NonStationary :: Transit ( )
The baseline transition for non-stationary models (normal aging).

 Vupdate

virtual NonStationary :: Vupdate ( )
Check newly computed values; an alternative to computing norm of value function differences.
Returns:
+∞ if the newly computed state values are all well-defined
.NaN otherwise

 NormalComponent

 NormalComponent

NormalComponent :: NormalComponent ( L , N )
A single element of a MVNormal block.
Parameters:
L
N

 NormalRandomEffect

 Distribution

NormalRandomEffect :: Distribution ( )
Update pdf, the distribution over the random effect. This is the built-in default method. It computes and stores the distribution over the random effect in pdf using fDist;

The user can supply a replacement in a derived class.


 NormalRandomEffect

NormalRandomEffect :: NormalRandomEffect ( L , N , pars )
Create a permanent discretized normal random effect.
Parameters:
L label
N number of points
pars AV()-compatible NormalParams vector/array containing mean and standard deviation

The probability of each discrete value is 1/N. The distribution is captured by setting the actual values at the quantiles of the distribution.


 pars

const decl pars [public]

 Nvariable

 Nvariable

Nvariable :: Nvariable ( L , Ndraws , pars )
Create a Normal N(mu,sigma) discretized jump variable.
Parameters:
L label
Ndraws number of points (N)
pars 2x1 vector or array of NormalParams

 pars

decl pars [public]

 Update

virtual Nvariable :: Update ( )

 Offer

 Accept

const decl Accept [public]
ActionVariable - indicates acceptance

 accept

decl accept [public]

 Offer

Offer :: Offer ( L , N , Pi , Accept )
Create an offer state variable.
Parameters:
L label
N integer, number of values (0 ... N-1)
Pi a AV(Pi,Feas)-compatible offer probability.
Accept ActionVariable that indicates the offer is accepted.
Comments:
The value 0 is treated as no offer.

 offprob

decl offprob [public]

 Pi

const decl Pi [public]
π Double, Parameter or static function, offer probability

 Update

 OfferWithLayoff

Enumerations
Status Status values. Unemp, Quit, LaidOff, Emp, Nstatus

 accept

const decl accept [public]
Action to accept offer

 Employed

OfferWithLayoff :: Employed ( )

 Lambda

const decl Lambda [public]
Probability of reverting to 0

 offer

const decl offer [public]
Variable containing offer index

 OfferWithLayoff

OfferWithLayoff :: OfferWithLayoff ( L , N , accept , Pi , Lambda )
An offer with layoff (match dissolution) risk.
Parameters:
L string, label
N integer, number of distinct offers. 0 is no offer
accept ActionVariable, indicator for accepting offer or not quitting if already employed
Pi CV compatible offer probability
Lambda CV compatible layoff probability

 Pi

const decl Pi [public]
Job Offer Probability

 Searching

OfferWithLayoff :: Searching ( )

 status

const decl status [public]
searching, just laid off, currently holding

 PermanentChoice

 PermanentChoice

PermanentChoice :: PermanentChoice ( L , Target , Prune )
Create a variable that tracks a one-time permanent choice.
Parameters:
L label
Target ActionVariable to track.

$$s' = \cases{ 0 & if CV(Target)=0\cr CV(Target) & if s=0 and CV(Target)>0\cr s & if s>0\cr}$$

Example:
Once a person retires they stay retired forever:
retired = new PermanentChoice("Ret",retire);

 PermanentCondition

 PermanentCondition

PermanentCondition :: PermanentCondition ( L , Target , ToTrack , Prune )
Create a variable that tracks whether some Target condition has occurred in the past. Once the current value of Target is in the ToTrack set, this variable is TRUE from then on.
Parameters:
L label
Target CV-compatible value to track.
ToTrack vector of values to track. Default=<1>.
Prune TRUE [default]: begins at 0 in finite horizon clocks.
Example:
evermarried = new PermanentCondition("em",married);  // hypothetical: married is a binary choice

 PhasedTreatment

 final

const decl final [public]
indicates time=Rmax in phase

 ftime

const decl ftime [public]
vector of times in phases

 LastPhase

const decl LastPhase [public]

 MaxF

const decl MaxF [public]
The last treatment phase (0 if no treatment)

 phase

const decl phase [public]
vector of phases

 PhasedTreatment

PhasedTreatment :: PhasedTreatment ( Rmaxes , IsErgodic )
A sequence of finite-termed phases of treatment.
Parameters:
Rmaxes vector of maximum times in each phase.
IsErgodic TRUE, store Ρ* and use it for sampling

 R0

const decl R0 [public]
The indices into the state vector that are the first period of each phase

 Rmaxes

const decl Rmaxes [public]
Vector of positive integers that are the length of each of the phases.

 Transit

virtual PhasedTreatment :: Transit ( )
The default transition for treatment. All phases are deterministic. No early transitions. The transition must be one of three values.

 Vupdate

virtual PhasedTreatment :: Vupdate ( )
Mixed value updating.
setPstar determines behaviour
If TRUE then either t is deterministic or convergence was reached on the last iteration.
So copy current values for use by younger t and check for validity
If FALSE then check for convergence

 RandomEffect

 Distribution

virtual RandomEffect :: Distribution ( )
Update pdf, the distribution over the random effect. This is the built-in default method. It computes and stores the distribution over the random effect in pdf using fDist;

The user can supply a replacement in a derived class.


 fDist

decl fDist [public]
holds the object that computes and returns the distribution across values.

 RandomEffect

RandomEffect :: RandomEffect ( L , N , fDist )
Create a state variable that is random but invariant for an individual DP problem.
Parameters:
L label
N number of points of support.
fDist integer [default], uniform distribution
otherwise, a AV()-compatible object that returns the N-vector of probabilities.

The default (and always initial) distribution is uniform:
RandomEffect("g",N);
means:
Prob(g=k) = 1/N, for k=0,...,N-1
Dynamic Densities
The third argument can be used to set the vector:
RandomEffect("g",2,<0.8;0.2>);
a static function:
decl X, beta;
⋮
hd() {
    decl v = exp(X*beta), s = sumc(v);
    return v/s;
    }
⋮
RandomEffect("h",rows(X),hd);
or a parameter block:
enum{Npts = 5};
hd = new Simplex("h",Npts);
RandomEffect("h",Npts,hd);

fDist is stored in a non-constant member, so it can be changed after CreateSpaces() has been called.

Most Flexible Option
The user can define a derived class and supply a replacement to the virtual Distribution().

See also:
GroupVariables

 RandomEffectBlock

 Distribution

virtual RandomEffectBlock :: Distribution ( )

 RandomEffectBlock

RandomEffectBlock :: RandomEffectBlock ( L )
Create a list of CorrelatedEffect state variables.
Parameters:
L label for block

 RandomSpell

 IsReachable

RandomSpell :: IsReachable ( )

 RandomSpell

RandomSpell :: RandomSpell ( L , ProbEnd , Start )
A binary state that starts and stops according to a random process.
Parameters:
L label
ProbEnd probability (AV()-compatible) of a transition from 0 to 1 [default=0.5]
Start 0/1 value (ActionVariable or CV-compatible) that moves from 0 to 1 [default=1]

Example:
injury = new RandomSpell("i",0.2,0.4);
ExogenousStates(injury);

An injury begins each period with probability 0.2. Once it starts, it ends each period with probability 0.4. The arguments can be functions that are called to compute probabilities that depend on states and actions.

Comments:
v=1 at t=0 in finite horizon models is pruned.
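The start/stop process described above is a two-state Markov chain. Here is a hypothetical Python sketch (function name and layout are mine, not niqlow's) of the transition matrix implied by the narrative example, where the spell begins with probability 0.2 and ends with probability 0.4:

```python
def spell_transition(prob_begin, prob_end):
    # Row s gives Prob(s' = j | s) for the binary spell state s.
    return [
        [1.0 - prob_begin, prob_begin],  # s=0: spell begins with prob_begin
        [prob_end, 1.0 - prob_end],      # s=1: spell ends with prob_end
    ]
```

Each row sums to one, and the spell persists from one period to the next with probability 1-prob_end.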

 Transit

RandomSpell :: Transit ( )

 RandomTrigger

 RandomTrigger

RandomTrigger :: RandomTrigger ( b , Tprob , rval )
Augment a base transition so that a special value occurs with some probability.
Parameters:
b base StateVariable
Tprob τ, probability of the special event
rval r, integer, value to jump to in the special event.
-1: a reset value is appended, so this.N = b.N+1 and rval = b.N.
Transition
$$Prob( q' = z | q,b ) =  I\{z=r\}\tau + (1-\tau)Prob(b'=z).$$
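The mixture above can be computed directly. A hypothetical Python sketch (not niqlow code) that combines a base transition vector with a jump to the special value r occurring with probability τ:

```python
def triggered_transition(base_probs, tau, r):
    # Prob(q' = z) = I{z=r}*tau + (1-tau)*Prob(b' = z)
    return [tau * (z == r) + (1.0 - tau) * p
            for z, p in enumerate(base_probs)]
```

Because the base probabilities sum to one, the mixed probabilities do as well.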

 Transit

virtual RandomTrigger :: Transit ( )

 RandomUpDown

Enumerations
Anonymous enum 1 down, hold, up, NRUP

 fp

decl fp [public]

 fPi

const decl fPi [public]

 IsReachable

virtual RandomUpDown :: IsReachable ( )
Prune assuming initial value = 0.

 RandomUpDown

RandomUpDown :: RandomUpDown ( L , N , fPi , Prune )
Create a state variable that increments or decrements with (possibly) state-dependent probabilities.
Parameters:
L label
N integer, number of values, N ≥ 3
fPi() an AV()-compatible object that returns a rows(A) x 3 matrix of probabilities (down, hold, up).
Prune TRUE [default], prune unreachable states assuming initial value of 0
FALSE do not prune
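One plausible reading of this transition is a bounded random walk in which probability mass piles up at the end points; the exact boundary handling is my assumption, not taken from the library. A hypothetical Python sketch of one row of the transition:

```python
def updown_row(s, N, pi):
    # pi = (p_down, p_hold, p_up); moves off the grid are clamped to the
    # nearest end point, so boundary mass accumulates there.
    pdown, phold, pup = pi
    row = [0.0] * N
    row[max(s - 1, 0)] += pdown
    row[s] += phold
    row[min(s + 1, N - 1)] += pup
    return row
```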

 RealizedUtility

 Realize

virtual RealizedUtility :: Realize ( y )
Default realized auxiliary variable; sets v=1.0.
Parameters:
y, the current realized outcome, υ.

 RealizedUtility

RealizedUtility :: RealizedUtility ( )
Realized utility, U().

 Regimes

 Regimes

 Regressors

 InObservedX

Regressors :: InObservedX ( )
Returns TRUE if any row of ObservedX equals the current actual value of the block.

 ObservedX

decl ObservedX [public]

 Regressors

Regressors :: Regressors ( L , vNorM , UseOnlyObserved )
Create a vector of fixed effects (FixedEffectBlock).
Parameters:
L string prefix
or array of strings, individual labels
vNorM vector of integer values, number of values each effect takes on.
OR, a matrix of actual values.
UseOnlyObserved [default=TRUE]: if a matrix is sent as the second argument, then TRUE means only the combinations (rows) of actual fixed effects appearing in the matrix will be created and solved. Otherwise, all possible combinations of the observed actual values will be solved.
Example:
Create 3 fixed effects that take on 2, 3 and 5 values, respectively:
 x = new Regressors("X",<2,3,5>); 
Give each effect a label:
 x = new Regressors({"Gender","Education","Occupation"},<2,3,5>); 

Here is an approach to combining parameters (estimated or chosen) with fixed effects. Suppose a parameter ψ in the DP model depends on a subset of X variables, Xp and coefficients β. That is,

ψ = exp{ Xp β } 
For example, ψ is a function of an intercept and gender, while other X variables that do not affect ψ are still in the model. The code segments below show how to give names to the columns of X, define the number of values each takes on, and then make their current values determine the value of ψ dynamically.
enum{intercept,gender,race,sibling,test,NX}
enum{Ni=1,Ng=2,Nr=2,Ns=2,Nt=3}
⋮
const decl psispec = intercept~gender;
⋮
beta = new Coefficients("B",columns(psispec));
GroupVariables(X = new Regressors("X",Ni~Ng~Nr~Ns~Nt));
X.Actual[][intercept] = 1;   // replace fixed 0 with fixed 1 for the intercept.
⋮
psi = [=]() { return exp(AV(X)[psispec]*CV(beta)); };
Note the last line uses the lambda function feature introduced in Ox 7, so psi() returns the dynamically determined value of ψ. The alternative is to define a static method with the same code.
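The ψ computation itself is just a column selection and an inner product. A hypothetical Python sketch (function name mine) of what the lambda computes:

```python
from math import exp

def psi_value(x_actual, beta, psispec):
    # psi = exp( Xp * beta ): select the columns listed in psispec from the
    # row of actual X values and take the inner product with beta.
    return exp(sum(x_actual[c] * b for c, b in zip(psispec, beta)))
```

With an intercept fixed at 1 in column 0, psi_value([1, 0, ...], beta, [0, 1]) returns exp(beta[0] + beta[1]*gender).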

 Renewal

 P

const decl P [public]
length of Pi

 Pi

const decl Pi [public]
block or vector of increment probabilities

 Renewal

Renewal :: Renewal ( L , N , reset , Pi )
Create a Rust renewal process state variable.
Parameters:
L string, state variable name
N integer, number of values
reset ActionVariable
Pi, vector or Simplex parameter block

 reset

const decl reset [public]
state or action that resets the process to 0
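A plausible sketch of the renewal transition in Python, assuming the upward jump is capped at the top value N-1 (the capping rule is my assumption, not taken from the library):

```python
def renewal_row(s, N, Pi, reset):
    # If the reset action is taken the process restarts from 0; otherwise
    # the state jumps up by k with probability Pi[k], capped at N-1.
    row = [0.0] * N
    base = 0 if reset else s
    for k, p in enumerate(Pi):
        row[min(base + k, N - 1)] += p
    return row
```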

 Reset

 Reset

Reset :: Reset ( b , t )
Augment a base transition to reset to 0 if the trigger is 1.
Parameters:
b the base StateVariable whose transition is augmented.
t the ActionVariable that triggers a different transition

The transition for an action triggered state:
$$Prob( q' = v ) = I\{t=1\}\times I\{v=0\} + (1-I\{t=1\})\times Prob(b'=v).$$
Note:
Reset(b,t) ≡ ActionTriggered(b,t,1,0)

Example:

 RetainMatch

 acc

const decl acc [public]

 hold

decl hold [public]

 i

decl i [public]

 keep

const decl keep [public]

 matchvalue

const decl matchvalue [public]

 repl

decl repl [public]

 replace

const decl replace [public]

 RetainMatch

RetainMatch :: RetainMatch ( matchvalue , acc , replace , keep )
Accept or reject an offer then retain.
Parameters:
matchvalue
acc, either ActionVariable or an array of ActionVariables
replace, integer or vector of values that replace current value with matchvalue
keep, integer or vector of values that retain current value
Comments:
other values of acc result in 0

 Rouwenhorst

 cp

decl cp [public]

 Grid

decl Grid [public]

 M

decl M [public]

 n

decl n [public]

 Nrt

const decl Nrt [public]

 p

decl p [public]

 pp

decl pp [public]

 Rouwenhorst

Rouwenhorst :: Rouwenhorst ( L , N , pars )
Rouwenhorst discretization of a correlated normal process.
Parameters:
L label
N Number of discrete points in the approximation
pars 3x1 vector or array of AV()-compatible parameters:
    0: mean (μ=0.0)
    1: st. dev. (σ=1.0)
    2: correlation (ρ=0.0)

Actual values will take on \(N\) equally spaced values in a dynamically determined range.

Note: If none of the parameters are objects then the actual values will be updated upon creation, making them available while creating spaces. Otherwise, Update() is not called on creation in case parameters will be read in later.

The update code is based on these implementations:

https://drive.google.com/file/d/0BzbiTUXVnwezQjBmU3p5NFVOYnM/view?resourcekey=0-IgQlYEXCqGMBYBGzd4Y4Zg
https://quanteconpy.readthedocs.io/en/latest/_modules/quantecon/markov/approximation.html#rouwenhorst
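For reference, the standard Rouwenhorst recursion can be sketched in a few lines of Python; this is the textbook construction of the transition matrix for a symmetric AR(1), not the library's code:

```python
def rouwenhorst_matrix(N, rho):
    # Symmetric AR(1): p = q = (1+rho)/2; build up recursively from N=2.
    p = (1.0 + rho) / 2.0
    P = [[p, 1.0 - p], [1.0 - p, p]]
    for n in range(3, N + 1):
        Pn = [[0.0] * n for _ in range(n)]
        for i in range(n - 1):
            for j in range(n - 1):
                x = P[i][j]
                Pn[i][j]         += p * x
                Pn[i][j + 1]     += (1.0 - p) * x
                Pn[i + 1][j]     += (1.0 - p) * x
                Pn[i + 1][j + 1] += p * x
        # interior rows receive mass twice; halve them so rows sum to one
        for i in range(1, n - 1):
            Pn[i] = [v / 2.0 for v in Pn[i]]
        P = Pn
    return P
```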

 Update

virtual Rouwenhorst :: Update ( )
Dynamic updating of the transition probabilities.

 SimpleJump

 SimpleJump

SimpleJump :: SimpleJump ( L , N )
Create an equally likely discrete Exogenous variable.
Parameters:
L string, label
N integer, number of values the variable takes on.
The transition is simply
Prob(q' = v)  =  1 / N, for v=0,…,N-1
Example:
TFPshock = new SimpleJump("z",20);
ExogenousStates(TFPshock);

Tomorrow TFPshock will take on values 0 to 19, each with probability 0.05.


 Transit

SimpleJump :: Transit ( )
Transition.

 StateAccumulator

 StateAccumulator

StateAccumulator :: StateAccumulator ( L , N , State )
Create a variable that tracks the cumulative value of another state.
Parameters:
L label
N integer, maximum value.
State StateVariable to accumulate values for.

 StateAtTbar

 IsReachable

virtual StateAtTbar :: IsReachable ( )
Prune non-zero values before Tbar

 StateAtTbar

StateAtTbar :: StateAtTbar ( L , Target , Tbar , Prune )
Record the value of a state variable at a given time (starting at the next period).
Parameters:
L label
Target StateVariable to record
Tbar clock time at which to record choice (starting at Tbar+1)
Prune TRUE [default], prune unreachable states (non-zero values at and before Tbar)

$$s' = \cases{ 0 & if t < Tbar\cr Target.v & if t = Tbar\cr s & if t > Tbar\cr}$$
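The piecewise transition can be sketched directly (hypothetical Python, names mine):

```python
def state_at_tbar_next(s, target_v, t, Tbar):
    # s' = 0 before Tbar; records Target at t == Tbar; frozen afterwards.
    if t < Tbar:
        return 0
    if t == Tbar:
        return target_v
    return s
```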

Example:

 Tbar

const decl Tbar [public]
value of I::t to record at.

 StateBlock

 Actual

decl Actual [public]
matrix of all actual values.

 AddToBlock

StateBlock :: AddToBlock ( ... )
Add state variable(s) to a block.
Parameters:
... list of Coevolving state variables to add to the block. The default Actual matrix is built up from the actual vectors of variables added to the block.

 Allv

decl Allv [public]
matrix of all current values

 myAV

virtual StateBlock :: myAV ( )
Sets and returns the vector of actual values of the block as a row vector.

 rnge

decl rnge [public]
vector <0:N-1>, used in AV().

 StateBlock

StateBlock :: StateBlock ( L , ... )
Create a list of Coevolving state variables.
Parameters:
L label for block
... list of Coevolving states to add to the block.

 Transit

virtual StateBlock :: Transit ( )
Default Transit (transition for a state variable).

This is a virtual method, so derived classes replace this default version with the one that goes along with the kind of state variable.

Returns:
a 2x1 array, {F,P} where
F is a 1×L row vector of feasible (non-zero probability) states next period.
P is either a Alpha::N × L or 1 × L matrix of action-specific transition probabilities, Ρ(q′;α,η,θ).
The built-in transit returns { <0> , CondProbOne }. That is, with probability one the value of the state variable next period is 0.

 StateCounter

 StateCounter

StateCounter :: StateCounter ( L , N , State , ToTrack , Reset , Prune )
Create a variable that counts how many times another state has taken on certain values.
Parameters:
L label
N integer, maximum number of times to count
State StateVariable or AV()-compatible object to track.
ToTrack integer or vector, values of State to count.
[default = <1>]
DoAll: track all non-zero values
Reset AV()-compatible object that resets the count if TRUE.
[Default=0] (no reset)
Prune TRUE [default]: prune states if finite horizon detected.
Example:
Count the total number of non-zero offers received so far, up to 5
noffers = new StateCounter("Total Offers",5,offer,DoAll);
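The counting rule can be sketched as follows (hypothetical Python; the cap-at-N-1 and reset conventions follow the parameter descriptions above):

```python
def counter_next(count, state_v, to_track, N, reset=False):
    # Increment the count (up to N-1) whenever the tracked state takes a
    # value in to_track; a TRUE reset sends the count back to 0.
    if reset:
        return 0
    return min(count + (state_v in to_track), N - 1)
```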
    

 StateCounterMaster

 CVSList

decl CVSList [public]

 IsReachable

StateCounterMaster :: IsReachable ( )
Trims the state space if the clock exhibits aging and there is a list of state counters for a Target, assuming all are initialized at 0.

 StateCounterMaster

StateCounterMaster :: StateCounterMaster ( L , N , State , ToTrack )
Create a StateCounter that checks reachability of an array of state counters. This state variable is created by DP::StateValueCounters
Parameters:
L label
N integer, maximum number of times to count
State StateVariable to track.
ToTrack integer or vector, values of State to count.

 StateTracker

 StateTracker

StateTracker :: StateTracker ( L , Target , ToTrack , Prune )
Create a binary variable that indicates whether the previous value of a state variable was in a set.
Parameters:
Target StateVariable to track actual values of.
ToTrack vector of (actual) values to track [default=<1>]
DoAll: track all non-zero values
Prune, prune non-zero states at t==0 in finite horizon.

 StateVariable

 Check

virtual StateVariable :: Check ( )
Check dimensions of actual.

Called by Transitions() after Update() has been called. If the user's update function returns the wrong dimension, this catches it.

actual should be N x 1, not 1 x N.


 IsBlock

static StateVariable :: IsBlock ( sv )
Check if variable is a block of state variables.
Parameters:
sv StateVariable
Returns:
TRUE if sv is StateBlock or RandomEffectBlock or FixedEffectBlock
FALSE otherwise

 IsBlockMember

static StateVariable :: IsBlockMember ( sv )
Check if variable is a valid member of a block.
Parameters:
sv StateVariable
Returns:
TRUE if sv is Coevolving or CorrelatedEffect or SubEffect
FALSE otherwise

 IsReachable

virtual StateVariable :: IsReachable ( )
Default Indicator for intrinsic reachable values of a state variable.
Returns:
TRUE
Comments:
Most derived classes will not provide a replacement. Examples that do (or could) are Forget augmented state variables; ActionAccumulator and Duration in normal aging models.

 MakeTerminal

StateVariable :: MakeTerminal ( TermValues )
Designate one or more values of the state variable as terminal.
Parameters:
TermValues integer
vector in the range 0...N-1

A terminal value of a state variable is one at which decision making ends.
For example, in the simple search model presented in GetStarted, once a price has been accepted the process is finished.

A point \(\theta\) is terminal if any of the state variables in it are at one of their terminal values.

Utility() must return a single terminating value of the process at a terminal value.

For example, if there is a bequest motive then the Utility of being newly deceased should return the bequest value of the state.

Example:
  s = new StateVariable("s",5);
  s->MakeTerminal(<3;4>);
  v = new StateVariable("v",2);
  v->MakeTerminal(1);

Now any state for which CV(s)=3, CV(s)=4, or CV(v)=1 will be marked as terminal. The classification of terminal values occurs during CreateSpaces() and is stored at each object of MyModel by setting Bellman::Type >= TERMINAL.

Comments:
The feasible action set for terminal states is automatically set to the first row of the global A matrix.
See also:
StateTypes, Type

 myAV

virtual StateVariable :: myAV ( )
Return actual[v]. If s contains a state variable then AV(s) calls s->myAV()
Returns:
actual[v].
See also:
CV, AV, Update

 Prune

const decl Prune [public]
Trim unreachable states if finite horizon clock is detected. Default is FALSE.
See also:
IsReachable

 StateVariable

StateVariable :: StateVariable ( L , N )
The base state variable creator.
Parameters:
N positive integer the number of values the variable takes on.
N=1 is a constant, which can be included as a placeholder for extensions of a model.
L string a label or name for the variable.

Comments:
The default transition is s′ = 0, so it is unlikely MyModel would ever include a variable of the base class.

 TermValues

decl TermValues [public]
A vector of values that end decision making. Equal to < > if the state is not terminating.

 Track

StateVariable :: Track ( LorC )
Track the state variable in predictions.
Parameters:
LorC either a label or a column number to match this variable to external data.

 Transit

virtual StateVariable :: Transit ( )
Default Transit (transition for a state variable).

This is a virtual method, so derived classes replace this default version with the one that goes along with the kind of state variable.

Returns:
a 2x1 array, {F,P} where
F is a 1×L row vector of feasible (non-zero probability) states next period.
P is either a Alpha::N × L or 1 × L matrix of action-specific transition probabilities, Ρ(q′;α,η,θ).
The built-in transit returns { <0> , CondProbOne }. That is, with probability one the value of the state variable next period is 0.

 UnChanged

virtual StateVariable :: UnChanged ( )
Returns the transition for a state variable that is unchanged next period. This is used inside Transit functions to make the code easier to read.
Returns:
{ v , 1}

 StaticAux

 Realize

virtual StaticAux :: Realize ( y )
Default realized auxiliary variable; sets v=1.0.
Parameters:
y, the current realized outcome, υ.

 StaticAux

StaticAux :: StaticAux ( L , target )
Create a wrapper for a static function or other AV()-compatible object.
Parameters:
L label string
integer: get label from
target static function of the form target()

 target

const decl target [public]
static function.

 StaticP

 StaticP

StaticP :: StaticP ( )
A static problem: Aging and T=1.

ClockType code to use this clock: StaticProgram
Example:
Use SetClock:
SetClock(StaticProgram);
which is equivalent to this:
SetClock(new StaticP());
which is also equivalent to this:
SetClock(NormalAging,1);
See also:
SetClock, ClockTypes

 Stationary

 Last

virtual Stationary :: Last ( )
No period is the last period in a stationary environment.
Returns:
FALSE

 Stationary

Stationary :: Stationary ( IsErgodic )
A stationary clock block.
Parameters:
IsErgodic TRUE, store Ρ*

 SubEffect

 block

decl block [public]
EffectBlock that I belong to

 bpos

decl bpos [public]
Index into block array/vector

 SubEffect

SubEffect :: SubEffect ( L , N )
Create a new SubEffect.
Parameters:
L label
N number of values it takes on.
See also:
FixedEffectBlock

 SubState

 IsReachable

virtual SubState :: IsReachable ( )
Use majt for time period when checking reachability of the base state variable.

 SubState

SubState :: SubState ( b , mys )
Make a state variable transit only in one subperiod of a Divided clock.
Parameters:
b base StateVariable
mys integer, the subperiod in which this variable is not frozen and goes through a regular transition.

No check is made whether the subperiod sent is valid (between 0 and SubT-1).

See also:
Divided, ClockTypes

 Tauchen

 cp

decl cp [public]

 gaps

const decl gaps [public]

 Grid

decl Grid [public]

 M

const decl M [public]
support.

 pars

const decl pars [public]
See also:
NormalParams

 pts

decl pts [public]

 rnge

decl rnge [public]

 Tauchen

Tauchen :: Tauchen ( L , N , M , pars )
Tauchen discretization of a correlated normal process.
Parameters:
L label
N Number of discrete points in the approximation
M AV()-compatible max discrete value
pars 3x1 vector or array of AV()-compatible parameters:
    0: mean (μ=0.0)
    1: st. dev. (σ=1.0)
    2: correlation (ρ=0.0)

Actual values will take on \(N\) equally spaced values in the range $$ \mu \pm M\sigma/\sqrt{1-\rho^2}.$$ The transition probabilities depend on the current value à la Tauchen.

Note: If none of the parameters are objects then the actual values will be updated upon creation, making them available while creating spaces. Otherwise, Update() is not called on creation in case parameters will be read in later.
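For reference, the standard Tauchen construction can be sketched in Python; this is the textbook method (grid over the stationary range, transition probabilities from normal CDF differences), not the library's code:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def tauchen_matrix(N, M, rho, sigma=1.0):
    # N equally spaced points on [-M*sd, +M*sd], sd = stationary st. dev.
    sd = sigma / sqrt(1.0 - rho * rho)
    grid = [-M * sd + i * (2.0 * M * sd) / (N - 1) for i in range(N)]
    step = grid[1] - grid[0]
    P = []
    for y in grid:
        row = []
        for j, z in enumerate(grid):
            # Probability the innovation lands the process in z's interval.
            lo = -float("inf") if j == 0 else (z - rho * y - step / 2.0) / sigma
            hi = float("inf") if j == N - 1 else (z - rho * y + step / 2.0) / sigma
            row.append(norm_cdf(hi) - norm_cdf(lo))
        P.append(row)
    return P
```

The end-point intervals are open-ended, so each row sums to one by construction.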

 TauchenRandomEffect

 Distribution

virtual TauchenRandomEffect :: Distribution ( )
Update pdf, the distribution over the random effect. This is the built-in default method: it computes the distribution over the random effect and stores it in pdf using fDist.

The user can supply a replacement in a derived class.


 M

const decl M [public]

 pars

const decl pars [public]

 TauchenRandomEffect

TauchenRandomEffect :: TauchenRandomEffect ( L , N , M , pars )
Create a permanent discretized normal random effect.
Parameters:
L label
N number of points
M number of standard deviations at which to set the largest value
pars 2x1 vector or array of AV()-compatible Normal distribution parameters (see NormalParams); actual values are equally spaced between -Mσ and Mσ.

The probabilities are not uniform. They are constant and depend only on N and M.

 TimeInvariant

 Update

TimeInvariant :: Update ( )
Do nothing and prohibit derived Updates; actual should be set in Distribution.

 TimeVariable

 TimeVariable

TimeVariable :: TimeVariable ( L , N )
.

 Tracker

 IsReachable

Tracker :: IsReachable ( )
Default initial value for Tracker variables is 0.

 Target

const decl Target [public]
Variable to track

 ToTrack

const decl ToTrack [public]
Values to track

 Tracker

Tracker :: Tracker ( L , Target , ToTrack , Prune )
Indicator that a state or action took on particular value(s) last period.

 Triggered

 Triggered

Triggered :: Triggered ( b , t , tv , rval )
The base for triggered augmentations.
Parameters:
b the base StateVariable whose transition is augmented.
t the trigger for a different transition (action, state, or AV()-compatible object)
tv the integer (actual) value of t that triggers the transition [default=1]
or vector of integer values that trigger
rval the integer (current) value of this state variable when the trigger occurs [default=0]
-1: a reset value is appended, so this.N = b.N+1 and rval = b.N.

 Update

virtual Triggered :: Update ( )
Default Augmented Update. The Update() for the base type is called, and its actual vector is copied to the augmented actuals.

Not called by user code

 UnFreeze

 InitDist


 Transit


 UnFreeze

 ValueTriggered

 Transit

virtual ValueTriggered :: Transit ( )

 ValueTriggered

ValueTriggered :: ValueTriggered ( b , t , tv , rval )
Augment a base transition so that the value of some other object triggers a special value.
Parameters:
b the base StateVariable whose transition is augmented (the base should not be added to the model separately).
t the AV()-compatible value which triggers the change. Usually this would be another state variable that is present in the model.
tv the integer (actual) value of t, or vector of such values, that triggers the transition [default=1].
rval the integer (current) value of this state variable when the trigger occurs [default=0]
-1: a reset value is appended, so this.N = b.N+1 and rval = b.N.

 Xponential

 gamma

const decl gamma [public]

 Update

Xponential :: Update ( )

 Xponential

Xponential :: Xponential ( L , Ndraws , gamma )
Create an exponentially distributed discretized jump variable.
Parameters:
L label
Ndraws number of points
gamma decay rate [default=1.0]

 Zvariable

 Zvariable

Zvariable :: Zvariable ( L , Ndraws )
Create a standard normal N(0,1) discretized jump variable.