
HARK's Introduction

Heterogeneous Agents Resources and toolKit (HARK)

HARK is a toolkit for the structural modeling of economic choices of optimizing and non-optimizing heterogeneous agents. For more information on using HARK, see the Econ-ARK Website.

The Econ-ARK project uses an open governance model and is fiscally sponsored by NumFOCUS. Consider making a tax-deductible donation to help the project pay for developer time, professional services, travel, workshops, and a variety of other needs.


This project is bound by a Code of Conduct.

Questions/Comments/Help

We have a Gitter Econ-ARK community.


Install

Install from Anaconda Cloud by running:

conda install -c conda-forge econ-ark

Install from PyPI by running:

pip install econ-ark

Usage

We start with almost the simplest possible consumption model: a consumer with CRRA utility

$$u(c) = \frac{c^{1-\rho}}{1-\rho}$$

has perfect foresight about everything except the (stochastic) date of death.

The agent's problem can be written in Bellman form as:

$$v_t(m_t) = \max_{c_t} \; u(c_t) + \mathsf{DiscFac} \cdot \mathsf{LivPrb} \cdot \mathsf{PermGroFac}^{1-\rho} \, v_{t+1}(m_{t+1})$$

$$\text{s.t.} \quad a_t = m_t - c_t, \qquad m_{t+1} = \frac{\mathsf{Rfree}}{\mathsf{PermGroFac}} \, a_t + 1,$$

where all variables are normalized by the level of permanent income.

To model the above problem, start by importing the PerfForesightConsumerType model from the appropriate HARK module, then create an agent instance using the appropriate parameters:

import HARK

from HARK.ConsumptionSaving.ConsIndShockModel import PerfForesightConsumerType

PF_params = {
    "CRRA": 2.5,  # Relative risk aversion
    "DiscFac": 0.96,  # Discount factor
    "Rfree": 1.03,  # Risk free interest factor
    "LivPrb": [0.98],  # Survival probability
    "PermGroFac": [1.01],  # Income growth factor
    "T_cycle": 1,
    "cycles": 0,
    "AgentCount": 10000,
}

# Create an instance of a Perfect Foresight agent with the above parameters
PFexample = PerfForesightConsumerType(**PF_params)

Once the model is created, ask the agent to solve the problem with .solve():

# Tell the agent to solve the problem
PFexample.solve()

Solving the problem populates the agent's .solution list attribute with solutions to each period of the problem. In the case of an infinite horizon model, there is just one element in the list, at index zero.

You can retrieve the solution's consumption function from the .cFunc attribute:

# Retrieve the consumption function of the solution
PFexample.solution[0].cFunc

Or you can retrieve the solved value for human wealth normalized by permanent income from the solution's .hNrm attribute:

# Retrieve the solved value for human wealth normalized by permanent income
PFexample.solution[0].hNrm

For a detailed explanation of the above example please see the demo notebook A Gentle Introduction to HARK.

For more examples please visit the econ-ark/DemARK repository.

Citation

If using Econ-ARK in your work or research, please cite our Digital Object Identifier or copy the BibTeX below.

DOI

[1] Carroll, Christopher D., Palmer, Nathan, White, Matthew N., Kazil, Jacqueline, Low, David C., & Kaufman, Alexander. (2017, October 3). econ-ark/HARK

BibTeX

@InProceedings{carroll_et_al-proc-scipy-2018,
  author    = { {C}hristopher {D}. {C}arroll and {A}lexander {M}. {K}aufman and {J}acqueline {L}. {K}azil and {N}athan {M}. {P}almer and {M}atthew {N}. {W}hite },
  title     = { {T}he {E}con-{A}{R}{K} and {H}{A}{R}{K}: {O}pen {S}ource {T}ools for {C}omputational {E}conomics },
  booktitle = { {P}roceedings of the 17th {P}ython in {S}cience {C}onference },
  pages     = { 25 - 30 },
  year      = { 2018 },
  editor    = { {F}atih {A}kici and {D}avid {L}ippa and {D}illon {N}iederhut and {M} {P}acer },
  doi       = { 10.25080/Majora-4af1f417-004 }
}

For more on acknowledging Econ-ARK visit our website.

Support

Looking for help? Please open a GitHub issue or reach out to the TSC.

Release Types

  • Current: Under active development. Code for the Current release is in the branch for its major version number (for example, v0.10.x).
  • Development: Under active development. Code for the Development release is in the development branch.

Current releases follow Semantic Versioning. For more information please see the Release documentation.

Documentation

Documentation for the latest release is at docs.econ-ark.org.

Introduction

For Students: A Gentle Introduction to HARK

Most of what economists have done so far with 'big data' has been like what Kepler did with astronomical data: organizing it, and finding patterns, regularities, and interconnections.

An alternative approach called 'structural modeling' aims to do, for economic data, what Newton did for astronomical data: Provide a deep and rigorous mathematical (or computational) framework that distills the essence of the underlying behavior that produces the 'big data.'

The notebook A Gentle Introduction to HARK details how you can easily utilize our toolkit for structural modeling. Starting with a simple Perfect Foresight Model, we solve an agent's problem, then experiment with adding income shocks and changing constructed attributes.

For Economists: Structural Modeling with HARK

Dissatisfaction with the ability of Representative Agent models to answer important questions raised by the Great Recession has led to a strong movement in the macroeconomics literature toward 'Heterogeneous Agent' models, in which microeconomic agents (consumers; firms) solve a structural problem calibrated to match microeconomic data; aggregate outcomes are derived by explicitly simulating the equilibrium behavior of populations solving such models.

The same kinds of modeling techniques are also gaining popularity among microeconomists, in areas ranging from labor economics to industrial organization. In both macroeconomics and structural micro, the chief barrier to the wide adoption of these techniques has been that programming a structural model has, until now, required a great deal of specialized knowledge and custom software development.

HARK provides a robust, well-designed, open-source toolkit for building such models much more efficiently than has been possible in the past.

Our DCEGM Upper Envelope notebook illustrates using HARK to replicate the Iskhakov, Jørgensen, Rust, and Schjerning paper for solving the discrete-continuous retirement saving problem.

The notebook Making Structural Estimates From Empirical Results is another demonstration of using HARK to conduct a quick structural estimation based on Table 9 of MPC Heterogeneity and Household Balance Sheets by Fagereng, Holm, and Natvik.

For Computational Economics Developers

HARK provides a modular and extensible open-source toolkit for solving heterogeneous-agent partial- and general-equilibrium structural models. The code for such models has traditionally been handcrafted, idiosyncratic, poorly documented, and not always generously shared by leading researchers with outsiders; as a result, it can take years for a new researcher to become proficient. By building an integrated system from the bottom up using object-oriented programming techniques and other tools, we aim to provide a platform that will become a focal point for people using such models.

HARK is written in Python, making significant use of libraries such as numpy and scipy, which offer a wide array of mathematical and statistical functions and tools. Our modules are generally categorized into Tools (mathematical functions and techniques), Models (particular economic models and solvers), and Applications (use of tools to simulate an economic phenomenon).

For more information on how you can create your own Models, or use Tools and Models to create Applications, please see the HARK Manual.

Contributing to HARK

We want contributing to Econ-ARK to be fun, enjoyable, and educational for anyone and everyone.

Contributions go far beyond pull requests and commits. Although we love giving you the opportunity to put your stamp on HARK, we are also thrilled to receive a variety of other contributions including:

  • Documentation updates, enhancements, designs, or bugfixes
  • Spelling or grammar fixes
  • README.md corrections or redesigns
  • Adding unit or functional tests
  • Triaging GitHub issues -- e.g. pointing out the relevant files, checking for reproducibility
  • Searching for #econ-ark on Twitter and helping someone else who needs help
  • Answering questions tagged econ-ark on Stack Overflow
  • Teaching others how to contribute to HARK
  • Blogging, speaking about, or creating tutorials about HARK
  • Helping others on our mailing list

If you are worried or don’t know how to start, you can always reach out to the TSC or simply submit an issue and a member can help give you guidance!

After your first contribution please let us know and we will add you to the Contributors list below!

To install for development see the Quickstart Guide.

For more information on contributing to HARK please see the contributing guide. This is the guide that collaborators follow in maintaining the Econ-ARK project.

Disclaimer

This is a beta version of HARK. The code has not been as extensively tested as it should be. We hope it is useful, but there are absolutely no guarantees (expressed or implied) that it works or will do what you want. Use at your own risk. And please let us know if you find bugs by posting an issue on the GitHub page!

HARK's People

Contributors

aa-turner, akaufman10, alanlujan91, amonninger, brainwane, compumetrika, dclow, dedwar65, drdrij, edmundcrawley, ericholscher, iworld1991, jackshiqili, janrosa1, jasonaowen, jdice, llorracc, llorracc-git, mnwhite, mriduls, mv77, pkofod, renovate[bot], rkcah, sbenthall, shaunagm, sidd3888, stephenschroeder, timmunday, wdu9


HARK's Issues

Folder structure

We need to figure out a better folder structure for the files.

I recommend something like having the files that actually have HARK in their name in the main folder. ConsumptionSavingsModel.py can be in a folder called "ConsumptionSavingsModel", the DAP model can be in a folder called "DAPModel", etc.

Interpolation overhaul: 1D transformations

This is fairly closely linked with the other interpolation overhaul issue.

In many economic applications, it is "safer" and/or more accurate to interpolate g(f(x)) rather than f(x) itself, and "un-transform" the output as g_inv(f_interpolation(x)). E.g. a value function v(x) is extremely curved, but u_inv(v(x)) is much more linear. As is, HARK has the user manage this manually, with a lambda function applied to the interpolation of the transformed function.

Interpolation should be overhauled to support transformed interpolations "automatically". The user must specify whether and which transformation should be applied, but the rest should be automated. This change should allow derivative methods to return correct output, and interact with multiple output interpolations.
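
A minimal sketch of the idea, assuming a scipy-backed linear interpolator and using the CRRA pseudo-inverse as the transformation (all class and variable names here are hypothetical):

import numpy as np
from scipy.interpolate import interp1d

class TransformedInterp:
    # Interpolate g(f(x)) on the grid and un-transform on evaluation.
    def __init__(self, x, fx, g, g_inv):
        self.g_inv = g_inv
        self.interp = interp1d(x, g(fx), fill_value="extrapolate")

    def __call__(self, x):
        return self.g_inv(self.interp(x))

# Example: a CRRA value function is extremely curved, but its pseudo-inverse
# u_inv(v(x)) is nearly linear, so we interpolate that instead.
rho = 2.5
u = lambda c: c ** (1.0 - rho) / (1.0 - rho)
u_inv = lambda vals: ((1.0 - rho) * vals) ** (1.0 / (1.0 - rho))

grid = np.linspace(0.1, 10.0, 20)
vFunc = TransformedInterp(grid, u(grid), g=u_inv, g_inv=u)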

Operation Namingway

Per many discussions, we need a review of the name of (most) every variable, method, function, class, and module in HARK. Primary goals:

  1. Standardize name formatting for variables, functions, classes.

  2. Conventions for what words mean: calc, find, make, etc.

  3. Tie common variable names to possible "LARK" LaTeX definitions.

Small: order of inputs

Order of common inputs/arguments to similar functions needs unifying in some cases. The approxDISTRIB functions (in HARKutilities) have N as their first input, but (most of?) the drawDISTRIB functions (in HARKsimulation) have N as one of the later inputs. This should be changed for style, but doing so will break a lot of stuff (easy fixes, as it's just reordering inputs).

We should add a routine to calculate the average value of any simulated object

In DemosForFed, a method is defined for calculating the average value of consumption in the simulated economy. There should be a general-purpose method that takes as its argument the object whose mean needs to be calculated and returns that mean, so that in the future it will not be necessary to define specific methods for, say, wealth, income, and so on.
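
A minimal sketch of such a general-purpose method (the function and attribute names below are illustrative, not HARK's):

import numpy as np

def calc_avg(agent, var_name):
    # Return the cross-sectional mean of any simulated object by name,
    # so no per-variable method (consumption, wealth, income, ...) is needed.
    return np.mean(getattr(agent, var_name))

# e.g. calc_avg(economy, "cNrmNow") or calc_avg(economy, "aNrmNow")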

vFunc does not respect borrowing constraint

Every solver in ConsumptionSavingModel.py incorrectly constructs the value function in the presence of an artificial borrowing constraint (i.e. when constraint is not None). The value function is reported as if the constraint did not exist, and thus is too high anywhere that the constraint binds; this will propagate to other areas during backward iteration.

New functionality: kinky R

Need consumption-saving solver that can handle R_borrow \neq R_save. This is easy, already implemented in DCL's pre-HARK DAP model.

Small: NullFunc

We currently have NullFunc as a function that returns an array of nans (for use as a placeholder/default function), but this should be expanded slightly. NullFunc should be a class whose constructor takes no inputs; it has convergence_criteria=[], so that any instance of NullFunc has a distance of 0 from another instance. Anywhere else in the code where we refer to NullFunc should be replaced with NullFunc(). The new class' call method should be rewritten more robustly; as is, it fails when the input is a scalar.
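
A sketch of the proposed class, following the description above (the distance logic is written out as an explicit method here just for illustration):

import numpy as np

class NullFunc:
    # Placeholder/default function object; constructor takes no inputs.
    convergence_criteria = []  # empty, so any two instances are distance 0 apart

    def __call__(self, *args):
        # Return nan(s) shaped like the first input; handles scalars, which
        # the current function version does not.
        if len(args) == 0:
            return np.nan
        out = np.full(np.shape(args[0]), np.nan)
        return out.item() if out.ndim == 0 else out

    def distance(self, other):
        return 0.0 if isinstance(other, NullFunc) else 1.0

Everywhere the code currently refers to NullFunc would then use NullFunc() instead.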

A few weird variable names in TractableBufferStock

The variable names in TractableBufferStock.py have been cleaned up as of 5/11/2016, but there are a few that still need fixing. All of these seem to be temporary algebraic conveniences, and I don't know how to describe them in words off the top of my head:

  • natural
  • Beth
  • Pi
  • zeta

New functionality: persistent shocks

The current consumption-saving models assume that shocks are fully permanent or fully transitory. Must add a solver that allows for persistent (but not permanent) shocks, e.g. an AR(1) model of income. Requires tracking income as a second state variable, would likely use LinearInterpOnInterp1D.

Extrapolation in infinite horizon

In any ConsumptionSavingModel variant, the extrapolated cFunc (etc.) diverges from the true solution in infinite horizon problems (cycles=0). The convergence criterion depends only on cFunc, which satisfactorily converges many cycles before gothic_h_t approaches its long-run value (off by 5% or more in many cases). The limiting perfect foresight consumption function is thus too low in level; it is very slightly too steep as well, as kappa_min has not converged either.

Potential solutions:

  1. Add gothic_h and kappa_min to convergence criteria. This is seriously suboptimal, as the problem would take many more cycles to solve, but the only change in successive solutions would be to gothic_h and kappa_min, as cFunc has already converged.

  2. Put in the correct values for gothic_h and kappa_min after convergence; this can be done in postSolve(). The correct values are calculated in the method calcBoundingValues() (the kappa_max code is wrong for the Markov version).

consumptionSavingSolverMarkov: m_underbar issues

The Markov solver can return incorrect results when the Markov matrix has zeros in it. As written, it (partially) assumes that the same m_underbar applies for each future state, and adjusts a_grid to get a based on that uniform m_underbar. However, this is not necessarily the case if some future states are "inaccessible" from some present states.

This issue requires a bit of thought. A beginning test case might be a normal problem with the possibility of jumping to a no-shocks environment for exactly N periods. There would be N+1 discrete states, with N of them transitioning to exactly one other state with prob=1, and the final one (normal state) transitioning either to itself (high prob) or the first no-shocks state.

Small: DiscFac should not be time-varying by default

One of the weirder things in ConsumptionSavingModel is that the discount factor is a time-varying parameter, which is not a standard feature of consumption-saving models. We have it this way because the first application was the Cagetti-style estimation in SolvingMicroDSOPs, but this is a counterintuitive default.

ConsumptionSavingModel (etc.) should be adjusted so that DiscFac is in time_inv (with some default in the parameters file). The estimation AgentType in SolvingMicroDSOPs.py should move DiscFac from time_inv to time_vary and then load in Params.DiscFac_timevary.

ConsumptionSavingModel will not load from outside sources

Here are two places where it fails.

  File "/Users/ganong/repo/HARK/cstwMPC/cstwMPC.py", line 19, in <module>
    import ConsumptionSavingModel as Model

ImportError: No module named ConsumptionSavingModel
  File "/Users/ganong/repo/HARK/TractableBufferStock/TBSexamples.py", line 13, in <module>
    from ConsumptionSavingModel import ConsumerType, solveConsumptionSavingMarkov

ImportError: No module named ConsumptionSavingModel

However, when I run ConsumptionSavingModel.py directly, it runs with no trouble at all.

Interpolation overhaul: multiple outputs in same structure

This is pretty closely linked with the other "interpolation overhaul" issue.

As is, a HARKinterpolator has a codomain of R, so that an instance must be created for each function output (e.g. a consumption function, a value function, a medical spending function, etc). These instances share an interpolation node structure, but this structure must be searched independently for each instance (e.g. a lookup for c(x_0), a lookup for v(x_0), and a lookup for m(x_0)). This is somewhat costly in the current 1D applications, and will be horribly inefficient for ND methods (which are the models most likely to want multiple values evaluated at the same points).

The interpolation framework needs a complete overhaul to allow a call to HARKinterpolator to return multiple outputs (e.g. getting c(x_0), v(x_0), and m(x_0) with a single lookup or search). By default, all outputs of an interpolator are returned, but the user can request less.

Open Q: format for specifying which outputs are returned? User must keep track of which output is in which "layer" (e.g. c is 0, v is 1, etc), or apply text labels ('c', 'v', etc).
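
A rough sketch of the single-lookup idea, using text labels for output selection (one of the two formats floated above); everything here is hypothetical:

import numpy as np

class MultiOutputInterp:
    # Several outputs defined on one shared grid; one interval search
    # per query point, reused for every requested output.
    def __init__(self, x, **outputs):
        self.x = np.asarray(x)
        self.outputs = {name: np.asarray(y) for name, y in outputs.items()}

    def __call__(self, x0, which=None):
        i = np.clip(np.searchsorted(self.x, x0) - 1, 0, self.x.size - 2)
        alpha = (x0 - self.x[i]) / (self.x[i + 1] - self.x[i])
        names = which if which is not None else list(self.outputs)
        return {n: (1 - alpha) * self.outputs[n][i] + alpha * self.outputs[n][i + 1]
                for n in names}

grid = np.linspace(0.1, 10.0, 100)
f = MultiOutputInterp(grid, c=np.sqrt(grid), v=np.log(grid))
f(2.5)                # all outputs: {'c': ..., 'v': ...}
f(2.5, which=['c'])   # just consumption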

Small: KinkedR simulator changes

The simulation method ConsumerType.simOnePrd() doesn't respect Rboro and Rsave in the KinkedR model. A similar issue is present in ConsPrefShockModel.

Small: alternate drawDiscrete

The drawDiscrete function in HARKsimulation should be extended with a boolean input to choose between two modes:

  1. Current implementation, where draws from a discrete distribution are independent of each other. Thus an input of p = [1/3,1/3,1/3] and x = [1,2,3] and N = 10000 might return an array of all 1's (with very small probability).

  2. Alternate version that forces the simulated draws to match the discrete distribution as closely as possible, given the finite number of draws. A version of this is in ConsumerType.makeIncShkHist(), so not much new coding is required.

I think (1) should be the default, as that's how the other drawX functions work.
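
A sketch of the two modes (exact_match is a made-up name for the proposed boolean):

import numpy as np

def draw_discrete(N, p, x, exact_match=False, seed=0):
    rng = np.random.default_rng(seed)
    p, x = np.asarray(p), np.asarray(x)
    if not exact_match:
        # Mode 1 (default): fully independent draws, like the other drawX functions.
        return rng.choice(x, size=N, p=p)
    # Mode 2: hit the target frequencies as closely as a finite N allows.
    counts = np.floor(p * N).astype(int)
    counts[: N - counts.sum()] += 1  # spread the rounding remainder
    draws = np.repeat(x, counts)
    rng.shuffle(draws)
    return draws

# With p = [1/3, 1/3, 1/3], x = [1, 2, 3], N = 10000: mode 2 can never
# return all 1's, while mode 1 can (with very small probability).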

New model: bequest motives

Straightforward extension of ConsIndShock that implements a bequest motive. We should probably start with the standard "warm glow" utility bonus at death:

warm_glow = param_1*(assets_at_death + param_0)**(1.-CRRA+param_2)/(1.-CRRA+param_2)

Note that when param_2 is not zero, we would need to calculate the bounding MPCs differently, as the relevant CRRA for MPCmin would be CRRA-param_2 but still just CRRA for MPCmax (bequests should be a luxury good).

It might be best to do the two-parameter version first and get that working. Adding and correctly handling param_2 would also be a motivation to overhaul how bounding MPCs are calculated. I think a solution object should carry around the limiting slopes of the "pseudo-inverse value" function as well as the relevant CRRA coefficient for each end. This is relevant for bequests and for the medical shocks model, as well as for handling interactions between different models or allowing CRRA to change between periods.

README: Needs updating

This is basically a constant state of affairs as changes get made. This issue is a reminder to adjust the README before a public-ish release.

Curvilinear2DInterp lacks derivatives

Curvilinear2DInterp will throw an implementation error if its derivativeX() or derivativeY() methods are called; _derX() and _derY() must be written. I have the math for this in a notepad in my desk drawer.

Steady-state market resources calculator

CDC wants the consumption-saving solver to be able to calculate steady-state market resources m*, and to do this on every iteration (so that it can be a convergence criterion). This requires making a (linear) delta-m = 0 function and finding the intersection between that and the cFunc.

I expect this will make the solver significantly slower, and it should be commented out by default, or have some way to turn it off. It's only useful in infinite horizon models, but there's currently no way to tell a one-period problem solver that it's part of an infinite horizon problem; it explicitly has a foxhole mentality that ignores broader context.
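
A sketch of the intersection step, assuming the delta-m = 0 locus has already been constructed as a linear function of m (the coefficients in the commented example are made up; the real ones come from Rfree, PermGroFac, and the income process):

from scipy.optimize import brentq

def find_steady_state_m(cFunc, c_delta_m_zero, m_lo, m_hi):
    # m* is where the consumption function crosses the delta-m = 0 line;
    # the bracket [m_lo, m_hi] must straddle the crossing for brentq to work.
    return brentq(lambda m: cFunc(m) - c_delta_m_zero(m), m_lo, m_hi)

# mSS = find_steady_state_m(solution.cFunc, lambda m: 0.9 + 0.03 * m, 0.1, 50.0)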

Small: rename convergence_criteria

This is a clunky name and should be refined. I think the name should refer to distance rather than convergence. I'm open to suggestions.

Possible bug in ConsAggShockModel.py

Line 542 calculates the KtoY ratio as

KtoYnow = np.mean(aAll*pAll)

and then the KtoL ratio as

KtoLnow = self.convertKtoY(KtoYnow)

This KtoL ratio is then used to calculate interest rates and wages, and it is an input into the consumption function.

But in order to be used like this it needs to be normalized by the mean level of permanent income.

I think a simple fix is to change line 542 to:

KtoYnow = np.mean(aAll*pAll)/np.mean(pAll)

This works for my purposes, but when I make the change it seems to break convergence in solving the aggregate market in cstwMPC.py.

I will try to figure out why cstwMPC.py stops working, but in the meantime, if anyone agrees or disagrees with the above, let me know.

Descriptions at top of files

Every .py file should have comments at the top of the code describing what the code does. HARKcore.py does not have this. Other files do, although they should probably be more detailed.

BufferStockTheory folder

The BufferStockTheory commit had far too many (2000+) files, many of which were unnecessary. I reverted the commit.

We need to decide which parts of the folder are actually necessary for HARK, and push the commit again. The one file that was fixed in BufferStockTheory is still in there, to avoid deleting the fix. But that file also needs to be dealt with.

New functionality: marginal utility shocks

Implement a new consumption-saving solver for a model with a stochastic shifter on marginal utility of consumption. This is already a feature of DCL's pre-HARK DAP model.

Small: plotFuncs

The functions plotFunc and plotFuncs in HARKutilities should be unified into a single function. It's a bit silly to have one function for plotting a single function and another function for plotting several functions. plotFunc should be able to take a list or a single function as a first argument. A similar change should be made for plotFuncDer.
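
A sketch of the unified version (the signature is modeled loosely on the existing functions; treat it as illustrative):

import numpy as np
import matplotlib.pyplot as plt

def plot_funcs(functions, bottom, top, N=1000):
    # Accept a single function or a list of them as the first argument.
    if not isinstance(functions, (list, tuple)):
        functions = [functions]
    x = np.linspace(bottom, top, N)
    for f in functions:
        plt.plot(x, f(x))
    plt.show()

plot_funcs(np.sqrt, 0.0, 5.0)              # one function
plot_funcs([np.sqrt, np.log1p], 0.0, 5.0)  # several functions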

Perfect foresight model with CRRA = 1

Inspired by the demo file, I am writing demos for each model in ConsIndShockModel. It is fun to play with parameters, and it helps me understand the dynamics of consumption-saving models. In fact, I talked to other graduate students and realized only a few had experience with the Python language. David's example is superb, and I think more demos would be instructive. Of course, I noticed David left a homework assignment in the demo. For some people, I guess it is quite difficult. Learning by doing would be ideal, but more guidelines are also useful. I just want to share the struggles I faced earlier (even now :) ).

Anyway, I checked some variant versions of perfect foresight case.

from HARK.ConsumptionSaving.ConsIndShockModel import PerfForesightConsumerType
Example = PerfForesightConsumerType()
Example.CRRA = 1.0
Example.solve()

It throws an error. This is because of line 410 in ConsIndShockModel.py:

MPCnvrs = self.MPC**(-self.CRRA/(1.0-self.CRRA))

With CRRA = 1.0, the exponent's denominator 1.0 - self.CRRA is zero. I guess we need to fix the defValueFuncs function to special-case log utility. Am I right?

p.s. CRRA=1.0 works for other types.

Small: approxUniform

approxUniform doesn't work like the other discrete approximators in HARKutilities, as it only returns values rather than probabilities. It's also not a natural way to specify a uniform distribution (most people would expect approxUniform(N, bot, top)) and should be rewritten. This is easy to fix, but then cstwMPC needs a small repair.
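
A sketch of the rewritten version with the expected signature, returning probabilities and values like the other approximators:

import numpy as np

def approx_uniform(N, bot, top):
    # Equiprobable N-point discretization of Uniform(bot, top):
    # each point is the midpoint of one of N equal-width intervals.
    pmf = np.ones(N) / N
    values = bot + (np.arange(N) + 0.5) * (top - bot) / N
    return pmf, values

pmf, values = approx_uniform(5, 0.9, 1.1)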

Small: calcBoundingValues() fixes

The calcBoundingValues() method of ConsumerType is in a state of disrepair. It should be able to correctly compute MPCmin, MPCmax, and hNrm for any (one period cycle) infinite horizon model. Later we can expand it to handle multi-period cycles in infinite horizon problems.
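
For the one-period-cycle infinite-horizon case, the standard buffer-stock-theory formulas are a starting point. The sketch below uses them, but the exact conventions (where LivPrb enters, whether hNrm counts current-period income, what plays the role of UnempPrb) should be checked against the model before relying on it:

def calc_bounding_values(CRRA, DiscFac, Rfree, LivPrb, PermGroFac, UnempPrb=0.0):
    # Textbook formulas; assumes PermGroFac < Rfree so that normalized
    # human wealth is finite. With UnempPrb = 0, MPCmax is 1.
    APF = (Rfree * DiscFac * LivPrb) ** (1.0 / CRRA)  # absolute patience factor
    MPCmin = 1.0 - APF / Rfree
    MPCmax = 1.0 - (UnempPrb ** (1.0 / CRRA)) * APF / Rfree
    hNrm = (PermGroFac / Rfree) / (1.0 - PermGroFac / Rfree)
    return MPCmin, MPCmax, hNrm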

Comments and code testing

This is more of an ongoing issue than something that can ever truly be fixed, but the code needs more extensive comments. It also needs unit tests, etc.

Generalize extrapolation in HARK1DInterp

The two 1D interpolation methods in HARK (cubic spline interpolation and linear spline interpolation) behave completely differently w.r.t. extrapolation, and this should be rectified.

Cubic1DInterpDecay (which needs to be renamed after all this) lets the user specify a linear function that the true function approaches as x-->infinity; if no function is specified, linear extrapolation is used. This functionality should be replicated in LinearInterp. As is, the perfect foresight solution of ConsumptionSavingModel does not bound the numeric solution when cubic_splines = False. LinearInterp is just an extension of a scipy.interpolate class right now, so it will need to be rewritten to allow for decay extrapolation.

We also should consider whether these 1D interpolation methods should include extrapolation options below the lowest gridpoint. The models we're working with right now have a meaningful lower bound, but others might not.
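
A sketch of what decay extrapolation could look like for the linear case; the unit decay rate and the clamping below the grid are simplifying assumptions, not the proposal itself:

import numpy as np
from scipy.interpolate import interp1d

class LinearInterpDecay:
    # Linear spline inside the grid; above the top gridpoint, decay
    # exponentially toward a limiting linear function b0 + b1*x.
    def __init__(self, x, y, limit_intercept, limit_slope):
        self.x_bot, self.x_top = x[0], x[-1]
        self.g = lambda z: limit_intercept + limit_slope * z
        self.interp = interp1d(x, y)
        self.gap = self.g(self.x_top) - y[-1]  # gap to the limit at the top gridpoint

    def __call__(self, z):
        z = np.atleast_1d(np.asarray(z, dtype=float))
        inside = self.interp(np.clip(z, self.x_bot, self.x_top))  # clamps below the grid
        above = self.g(z) - self.gap * np.exp(-(z - self.x_top))  # decay rate fixed at 1
        return np.where(z > self.x_top, above, inside)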

Small: move Markov solver out of ConsumptionSavingModel.py

I think the Markov solver is sufficiently different from the basic idiosyncratic shocks model that it should be moved out of the main model file and into its own module. This would allow us to take out the clunky "if markov" statements from the simulator (etc), and to put MarkovExamples into a main model file (along with the Markov example at the bottom of ConsumptionSavingModel). Would need to make a new subclass of ConsumerType with new simulator; also need to revise updateSolutionTerminal() for the new type.

__name__ == '__main__'

If at all possible, every .py file should have:

if __name__ == '__main__':
    (example code showing what the file does)

Ideally this would both show off the capabilities of the code and demonstrate how to use it. None of the .py files seem to have this so far.

Small: convergence_criteria for ValueFunc (etc) classes

The ValueFunc, MargValueFunc, etc classes should have a convergence_criteria attribute, so that (say) 'vFunc' can be added as a convergence criterion in the consumption-saving model. This might be all that's necessary to do the 1D transformations item on the Issues page (and add missing derivative methods, maybe).

New model: Unemployment search

Create module for models of consumption-saving-search. Consumption-saving model with persistent unemployment; agent must exert costly effort to increase probability of finding job / receiving offer.

Lots of variations on this to do. Can pick one or two easy ones to start to show models with multiple controls in HARK (or rather: multiple controls in some states).

Taxonomy of consumption-saving models

Break ConsumptionSavingModel into sub-modules:

  • Fully permanent, fully transitory shocks
  • Persistent shocks
  • Tractable buffer stock (subsume)
  • Models with Markov-style discrete states
  • "Baby" version geared toward early grad students
  • Etc

Each file should pertain to a fairly well defined type of consumption-saving problems. Continuing to add solvers and functionality for more variations will lead to massive file bloat and be unappealing to new users.

Move CSTWmarket into ConsAggShocksModel

As is, cstwMPC defines the relevant Market subclass for doing consumption-saving problems with aggregate shocks (to find a dynamic general equilibrium). This class should be moved to ConsAggShocksModel (and renamed). Further, the relevant simulation (etc) methods should be moved to a ConsumerType extension in this module as well. cstwMPC.py should import these classes, and should only contain functions that could only plausibly be used for cstwMPC.

A necessary subtask of this is to rework the slope_prev and intercept_prev functionality, as storing these in a different module/namespace is clunky and weird. The dynamics calculator should be a class method, not a function; this will allow us to store the previous parameters inside the class.

Small: ConsAggShockModel needs example/demo

As is, the only implementation of ConsAggShockModel is in cstwMPC in a general equilibrium framework. We should add a main section at the bottom of the model file with a demonstration/example (not in gen eq). Load in parameters from the parameters file, add a few more things, solve the agent and then display consumption function at each level of k in the grid.

New model: endogenous medical expenses

Create module for models of endogenous medical spending. Can be as simple as second consumption good with stochastic marginal utility. Fairly straightforward to do once stochastic marginal utility is in consumption-saving model.

Input checking for MarkovConsumerType

MarkovConsumerType could use more checking of its inputs. It can give very weird error messages if it receives inputs that are just slightly different than what they should be.

For example, I wasted a lot of time with an infinite horizon version before I figured out that PermGroFac and LivPrb should be lists (of length 1, whose only element is a np.array), but Rfree should just be a numpy array. Also, those arrays should be of shape (N,), where N is the number of Markov states. Using an array of shape (N,1) also produced a very obscure error message.

I imagine putting in checks at the beginning to verify that inputs are the right types and shapes could save a lot of people some of the hassle I went through.
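
A sketch of the kind of up-front validation described here (the helper name is made up):

import numpy as np

def check_markov_inputs(agent, N):
    # N is the number of Markov states. Fail early, with readable messages,
    # instead of dying deep inside the solver.
    for name in ("PermGroFac", "LivPrb"):
        val = getattr(agent, name)
        assert isinstance(val, list), f"{name} must be a list (one entry per period)"
        for arr in val:
            assert isinstance(arr, np.ndarray) and arr.shape == (N,), (
                f"each element of {name} must be a np.array of shape ({N},)")
    assert isinstance(agent.Rfree, np.ndarray) and agent.Rfree.shape == (N,), (
        f"Rfree must be a np.array of shape ({N},), not ({N},1)")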

Readme.txt examples out of date

From the readme:

6) Test one of HARK's sample modules with one of the following commands:

"run ConsumerExamples"
"run SolvingMicroDSOPs"
"run cstwMPC"
"run TBSexamples"

The examples in instruction 6 of readme.txt are out-of-date.
(1) It seems that ConsumerExamples has become ConsumptionSavingModel.
(2) All the examples are now in subdirectories

(this is my first issue post and it is incredibly pedantic. I will post more substantive stuff as I find it, but I figured that you'd rather know about bugs, particularly those that affect people just starting out.)

Small: separate SolvingMicroDSOPs into new folder

SolvingMicroDSOPs.py (and related files) should be split off into their own folder, in the style of cstwMPC. The ConsumptionSaving folder is sort of cluttered with files that aren't strictly about consumption-saving models, but rather a particular application of one of these models.

Even more confusing, SetupConsumerParameters.py contains both parameters necessary to specify a ConsumerType and a bunch of parameters for the SolvingMicroDSOPs estimation. These should be split into two separate files (that live in separate directories), so that newcomers open up the consumption-saving parameters file and see only parameters needed to solve those models.

Small: approxNormal

Somehow HARKutilities has no function to construct an approximation to a normal distribution. There are several known methods for doing this, and we really ought to have at least one in HARK.
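
One standard method is Gauss-Hermite quadrature; a minimal sketch:

import numpy as np

def approx_normal(N, mu=0.0, sigma=1.0):
    # N-point discrete approximation to Normal(mu, sigma^2) via Gauss-Hermite
    # quadrature: change of variables x = mu + sqrt(2)*sigma*node, and
    # normalize the weights (which sum to sqrt(pi)) into probabilities.
    nodes, weights = np.polynomial.hermite.hermgauss(N)
    values = mu + np.sqrt(2.0) * sigma * nodes
    pmf = weights / np.sqrt(np.pi)
    return pmf, values

pmf, values = approx_normal(7, mu=1.0, sigma=0.1)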

Markov solver in DCL-Dev-1

Already found one bug in DCL-dev-1... the Markov solver uses linear interpolation even when it is supposed to use cubic. I'm working on this now.

Euler error calculator

ConsumptionSavingModel (and related models?) should have an Euler error calculator function to find the relative difference between consumption and optimal consumption. The beginnings of this are sort of in a ConsumerType method, but I don't think it does anything at this time.

This has been on the back burner for a long time now, and we should get around to it soon.
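
A sketch for the perfect-foresight case, where no expectation over shocks is needed; it assumes the usual normalization in which next-period resources are m' = (Rfree/PermGroFac)*a + 1:

def euler_error(cFunc_t, cFunc_tp1, m_grid, Rfree, DiscFac, LivPrb, PermGroFac, CRRA):
    # Relative gap between actual consumption and the level implied by the
    # Euler equation u'(c_t) = DiscFac*LivPrb*Rfree*u'(PermGroFac*c_{t+1}),
    # with u'(c) = c**(-CRRA).
    c_t = cFunc_t(m_grid)
    a = m_grid - c_t
    m_tp1 = (Rfree / PermGroFac) * a + 1.0
    c_tp1 = cFunc_tp1(m_tp1)
    c_implied = PermGroFac * c_tp1 * (DiscFac * LivPrb * Rfree) ** (-1.0 / CRRA)
    return c_implied / c_t - 1.0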

error in executing the first line of Demo.py

Hi,
I am a newcomer here. Forgive me if my question sounds stupid.
I ran into a problem when running the first line of Demo.py (under the folder "ConsumptionSaving"):

from ConsIndShockModel import IndShockConsumerType

All that I did was just to enter that folder and execute this first line. The error message is:

File "C:\Users...\Documents\Python\HARK\ConsumptionSaving\ConsIndShockModel.py", line 2242
*raise Exception, "grid_type not recognized in init." + *
^
SyntaxError: invalid syntax

I use Anaconda. Any insight? Thanks.
