
skillmodels's Introduction

Hi there

I am Janoś, an economist with a passion for coding, open-source software, numerical optimization and AI.

My biggest project is estimagic, a package for nonlinear optimization. I started it in 2019 and through the help of amazing contributors it has grown into something larger and better than I ever imagined.

If you find estimagic useful, consider supporting it with a ⭐!

Star History Chart

skillmodels's People

Contributors

effiehan, hmgaudecker, janosg, lbaji, lindamaok899, mpetrosian, roecla, tobiasraabe, tostenzel


skillmodels's Issues

Prepare for next releases of Numba

Version 0.0.60 gives a NumbaDeprecationWarning for the translog function because compilation falls back to object mode (= no speed improvement). Numba will disable this fallback in a future release.

Solution: Either make the function compliant with the subset of Python that Numba supports in nopython mode, or remove the decorator.

NB: Most of the output below looks repetitive, but I did not check carefully whether some parts differ, so I am dumping the whole thing...

/home/wherever/miniconda/envs/myenv/lib/python3.6/site-packages/skillmodels/model_functions/transition_functions.py:116: NumbaWarning: 
Compilation is falling back to object mode WITH looplifting enabled because Function "translog" failed type inference due to: Invalid use of Function(<built-in function getitem>) with argument(s) of type(s): (tuple(int64 x 5), slice<a:b>)
 * parameterized
In definition 0:
    All templates rejected with literals.
In definition 1:
    All templates rejected without literals.
In definition 2:
    All templates rejected with literals.
In definition 3:
    All templates rejected without literals.
In definition 4:
    All templates rejected with literals.
In definition 5:
    All templates rejected without literals.
In definition 6:
    All templates rejected with literals.
In definition 7:
    All templates rejected without literals.
In definition 8:
    All templates rejected with literals.
In definition 9:
    All templates rejected without literals.
In definition 10:
    All templates rejected with literals.
In definition 11:
    All templates rejected without literals.
This error is usually caused by passing an argument of a type that is unsupported by the named function.
[1] During: typing of intrinsic-call at /home/wherever/miniconda/envs/myenv/lib/python3.6/site-packages/skillmodels/model_functions/transition_functions.py (136)

File "../../../../../miniconda/envs/myenv/lib/python3.6/site-packages/skillmodels/model_functions/transition_functions.py", line 136:
def translog(sigma_points, coeffs, included_positions):
    <source elided>
            # add the interaction terms
            for pos2 in included_positions[p:]:
            ^

  @jit
(The warning block then repeats with minor variations: once WITHOUT looplifting for the same `tuple(int64 x 5)` slice, and twice more for a `tuple(int64 x 3)` slice, each time followed by the same NumbaDeprecationWarning about the deprecated object-mode fallback.)
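The pattern the type inference rejects is the tuple slice `included_positions[p:]`. A minimal sketch of the nopython-safe alternative, using an illustrative function (not the actual translog implementation) and a plain-Python fallback in case Numba is not installed:

```python
import numpy as np

try:
    from numba import njit
except ImportError:  # run as plain Python if Numba is unavailable
    def njit(func):
        return func

@njit
def pairwise_products(positions):
    # `positions` must be a NumPy integer array: slicing an array compiles
    # in nopython mode, whereas slicing a tuple or list does not.
    total = 0.0
    for p in range(len(positions)):
        for pos2 in positions[p:]:  # the "interaction terms" pattern
            total += positions[p] * pos2
    return total
```

Called as `pairwise_products(np.array([0, 2, 3], dtype=np.int64))`, this compiles without the object-mode fallback.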

Value error in data_processor.py after deleting rows

Issue

Due to a pandas issue, one can run into a ValueError in
skillmodels.pre_processing.data_processor y_data().
We started with a dataframe that included data from all 13 available HRS waves. It has a multi-index (id, individual_period), where individual_period = 0 means it is the first time an individual is observed. We then dropped all observations from hrs_wave 13. As a consequence, some ids with individual_period = 12 no longer occur.

Creating an instance of SkillModel then results in a ValueError in y_data():
--> 138 y_data[counter : counter + len(measurements), :] = df.to_numpy().T

ValueError: could not broadcast input array from shape (x,y) into shape (x,z)

Origin

pre_process_data() transforms an unbalanced dataset into a balanced panel. It uses

all_ids, all_periods = list(df.index.levels[0]), list(df.index.levels[1])
nobs = len(all_ids)
nperiods = len(all_periods)

to determine the shape of the balanced panel. Due to a pandas issue, index.levels still reports values that have been deleted. See: https://github.com/pandas-dev/pandas/issues/2770
As a result, the shape of the balanced panel is too large.

The dimension of y_data is determined by
dims = (self.nupdates, self.nobs)
y_data = np.zeros(dims)

It seems that nobs comes from
self.nobs = int(len(self.data) / self.nperiods) (line 60 in model_spec_processor.py)
where nperiods is the correct number of periods from the model specification file. The expected number of individuals is therefore too large.

Possible solution

  1. After deleting rows, resetting and setting the index circumvents this problem. Here:
    df = df.reset_index().set_index(["id", "individual_period"]).sort_index()
  2. Using df.index.get_level_values("id").unique() instead of len(list(dataset.index.levels[0])) shows the correct value.
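A minimal reproduction of the stale levels and of both fixes (column and index names follow the example above; `remove_unused_levels` requires pandas >= 0.20):

```python
import pandas as pd

idx = pd.MultiIndex.from_product(
    [[1, 2, 3], [0, 1]], names=["id", "individual_period"]
)
df = pd.DataFrame({"x": range(6)}, index=idx)
df = df.drop(3, level="id")  # delete all rows of id 3

# index.levels still reports the deleted id ...
assert 3 in df.index.levels[0]
# ... while get_level_values reflects the actual data:
assert 3 not in df.index.get_level_values("id").unique()

# Fix 1: rebuild the index from scratch.
fixed = df.reset_index().set_index(["id", "individual_period"]).sort_index()
assert 3 not in fixed.index.levels[0]

# Fix 2: drop the stale level values directly.
df.index = df.index.remove_unused_levels()
assert 3 not in df.index.levels[0]
```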

params is overwritten in statsmodels_result_to_string_series

Where: skillmodels/skillmodels/visualization/table_function.py in statsmodels_result_to_string_series

Problem: When the Results instance has no name, res_col is set to "params". As a result, the (numeric) params column is overwritten with strings. This breaks line 35, where phantom minus signs (-) are added for positive parameters, because strings cannot be compared to 0.

Solution: Use the original Series params to evaluate whether a parameter is larger or smaller than 0.

Code:

def statsmodels_result_to_string_series(res, decimals, report_se=True):
    # Fall back to a generic column name if the results object is unnamed.
    if hasattr(res, 'name'):
        res_col = res.name
    else:
        res_col = 'params'

    # Keep the original numeric Series around for the sign comparison below.
    params = res.params
    params.name = 'params'
    df = params.to_frame()
    df['p'] = res.pvalues
    df['stars'] = pd.cut(df['p'], bins=[-1, 0.01, 0.05, 0.1, 2],
                         labels=['***', '**', '*', ''])
    df[res_col] = params.round(decimals).astype(str)
    df[res_col].replace({'-0': '0', '-0.0': '0'}, inplace=True)
    df['phantom'] = r'\textcolor{white}{-}'
    # Compare against the numeric `params`, not the formatted string column.
    df[res_col] = df[res_col].where(params < 0, df['phantom'] + df[res_col])
    df[res_col] += df['stars'].astype(str)

    if report_se is True:
        se_col = ' (' + res.bse.round(decimals).astype(str) + ')'
        df[res_col] += se_col

    return df[res_col]

Document default values

The default values of some entries of the "general" section of the model dictionary are not documented. They are implemented in the __init__ of ModelSpecProcessor.

Small fixes

  • use **kwargs in the fit method to allow full configuration of maximize, even when maximize gains new arguments
  • allow setting the threshold of robust_cholesky in the model specs

Make data errors comprehensive

My apologies if this is already fixed in the current release; I am still on the one from late August. (Aside: it would be great to see tags in the repo and to be able to do skillmodels.__version__; then I could be more precise here, too :-))

Upon creation, SkillModel() checks for variation in the measurements per period, which is great. However, it would be even better if it reported all violations at once rather than one by one. With a large dataset, it is really slow to go through this repeatedly. I did a bit of manual work to speed it up, but it remains cumbersome.

In an ideal world, the error message would look like the params DataFrame, restricted to the subset of variable/period combinations that violate the constraints, i.e., off the top of my head:

factor      measurement    period    problem

cognition   abc            1         All values equal to 1.0
            cde            5         All values equal to 0.0
            xyz            2         All values missing 

non-cong    abc            1         All values equal to 1.0

This would allow one to go quickly through the model specification to eliminate all problems in one go.
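A sketch of how such a collected report could be built; the helper name and the layout of the measurement specification are made up for illustration, not the actual skillmodels internals:

```python
import pandas as pd

def collect_measurement_problems(data, measurements_by_factor):
    """Collect all invalid factor/measurement/period combinations.

    `measurements_by_factor` maps factor -> {period: [measurement columns]};
    this layout is an assumption, not the real skillmodels spec format.
    """
    problems = []
    for factor, by_period in measurements_by_factor.items():
        for period, meas_list in by_period.items():
            for meas in meas_list:
                series = data.loc[data["period"] == period, meas]
                if series.isna().all():
                    problems.append((factor, meas, period, "All values missing"))
                elif series.nunique(dropna=True) <= 1:
                    val = series.dropna().iloc[0]
                    problems.append(
                        (factor, meas, period, f"All values equal to {val}")
                    )
    # One row per violation, mirroring the table sketched above.
    return pd.DataFrame(
        problems, columns=["factor", "measurement", "period", "problem"]
    )
```

Raising a single error that prints this DataFrame would surface every violation in one pass over the data.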

Adjust skillmodels for pandas 0.24.x

In recent versions of pandas, it is no longer recommended to use .values to convert DataFrames to numpy arrays. Instead, .to_numpy() should be used (and in rare cases .array).

More information is in this blogpost and the documentation of .to_numpy().

Please go over all occurrences of .values in skillmodels and replace them with one of the recommended methods.
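The replacement is mechanical; a quick check that both paths agree (`.array` is the option for a single column, where it keeps the pandas-level container instead of converting to NumPy):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})

old = df.values        # discouraged since pandas 0.24
new = df.to_numpy()    # recommended replacement

assert isinstance(new, np.ndarray)
assert np.array_equal(old, new)

# For a single column, `.array` returns the underlying pandas array:
col = df["a"].array
```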

Use a linear predict when possible

We use an unscented Kalman predict step even if all transition equations are linear. However, a linear predict step would be much faster, and the predict step is the bottleneck in most models.

The linear predict step is already implemented and tested. The only thing left is to integrate it into the likelihood function. Among other things, this will require the following steps:

  • write a function that determines if the linear predict can be used
  • construct the transition matrix from the parameters
  • construct other input arrays
  • select the correct predict function
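The first and last steps could look like the following sketch; the set of linear transition names and the calling convention are assumptions, not skillmodels' actual internals:

```python
# Transition functions that admit the fast linear predict step
# (illustrative set; the real list depends on skillmodels' functions).
LINEAR_FUNCTIONS = {"linear", "constant"}

def can_use_linear_predict(transition_names):
    """Return True if every transition equation is linear."""
    return all(name in LINEAR_FUNCTIONS for name in transition_names)

def select_predict_function(transition_names, linear_predict, unscented_predict):
    """Pick the predict implementation once, before the likelihood loop."""
    if can_use_linear_predict(transition_names):
        return linear_predict
    return unscented_predict
```

Doing the selection once per model, outside the likelihood loop, keeps the per-evaluation cost unchanged.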

Out-of-the box output formatting

Since my coauthors just complained that I sent them output tables with 1000+ rows, this occurred to me. Of course I have not checked whether such functionality already exists. In the end it should be pretty trivial, but it might save users a lot of boilerplate code that does little but reindexing/merging...

What comes immediately to mind...

Tables with standard errors:

  • Measurements: Separate by factor, sorted by time period and measurement by default. Columns would be constants, loadings, and standard deviations.
  • Initial factors: Means in the first column, sd/correlations in further columns (if using mixtures, we probably also need a more disaggregated version of this)
  • The rest of the params / category values: probably one table each?
  • ...

Some graphs:

  • Transition equations at mean factors / a few quantiles
  • ...

Aside:

  • Would it make sense to split delta up into meas_constant (symmetric to meas_sd) and whatever might be relevant for the anchoring (which I do not have)? It seems to be the last group whose name you can only possibly know if you have a sense of the notation.
  • And did we talk somewhere else about shock_variance -> shock_sd?

Suggestions for renamings of parameters

@janosg: Please disapprove where you see fit...

  • nemf -> n_mixture_components. Without resorting to the manual, it is impossible to find out what this is (note that all the other things at https://skillmodels.readthedocs.io/en/latest/names_and_concepts.html#variables-related-to-dimensions do not have to be specified by the user).
  • kappa -> sigma_points_scale. Not described anywhere in the docs, and the sigma_points module API docs do not show up, so it is impossible to infer what is meant without looking up the CHS paper.
  • probanch_function -> prob_anchoring_function. A bit more verbosity helps readability a lot.

Sort-of related: save_intermediate_optimization_results and save_path seem more useful as optional parameters to the fit() method than as model-level constants. The same holds for start_params, start_values_per_quantity, maxiter, and maxfun.

Make "included_positions" a numpy array for "sigma_points"

Recent Numba versions throw a warning:

python3.7/site-packages/numba/ir_utils.py:1959: NumbaPendingDeprecationWarning: 
Encountered the use of a type that is scheduled for deprecation: type 'reflected list' found for argument 'included_positions' of function 'translog'.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-reflection-for-list-and-set-types

File "lib/python3.7/site-packages/skillmodels/model_functions/transition_functions.py", line 141:
@jit
def translog(sigma_points, coeffs, included_positions):

More context: http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-reflection-for-list-and-set-types

Note: I have not looked at the code, and I suppose the list is not changed inside the function; if it is, feel free to change the title.
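Assuming the list is indeed never mutated, the fix would be a one-time conversion before the jitted call (sketch with made-up values):

```python
import numpy as np

# Convert once, outside the jitted function, so Numba sees a typed NumPy
# array instead of a deprecated reflected list:
included_positions = [0, 2, 3]  # as currently passed
included_positions = np.asarray(included_positions, dtype=np.int64)
```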

Make attributes immutable where possible

Current Situation

When I wrote skillmodels, I did not see the benefit of immutability. Therefore, many attributes of SkillModel, which are generated in ModelSpecProcessor, are lists and dictionaries, even though we never change them later.

Desired Situation

Replace lists by tuples, unless they are used in numba functions that, in the most recent numba version, need them to be numpy arrays (see #33). Replace dictionaries by namedtuples.

Implementation

Do this as early as possible, i.e. in ModelSpecProcessor.

Anecdote

When converting skillmodels to use estimagic I lost one day on debugging. The problem was that I had accidentally appended something to a list of measurements. So this is an easy but important change!
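A minimal sketch of the change (field names invented): with a namedtuple, the accidental mutation from the anecdote raises immediately instead of silently corrupting the model.

```python
from collections import namedtuple

# Instead of a mutable dict of processed specs ...
specs_dict = {"measurements": ["m1", "m2"], "periods": [0, 1, 2]}

# ... build an immutable namedtuple with tuples inside:
ModelSpecs = namedtuple("ModelSpecs", ["measurements", "periods"])
specs = ModelSpecs(
    measurements=tuple(specs_dict["measurements"]),
    periods=tuple(specs_dict["periods"]),
)

# Rebinding an attribute now fails loudly:
try:
    specs.measurements = ("m1",)
except AttributeError:
    pass  # namedtuple fields cannot be reassigned
```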

Turn off Pandas PerformanceWarnings

Output from fit() fills the first few dozen lines with

interactiveshell.py:3326: PerformanceWarning: indexing past lexsort depth may impact performance.

Of course one may silence it from user code, but it might confuse new users; it is not an actual problem, and it will never change. So it should be skillmodels' responsibility.
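One way to do this inside skillmodels (e.g. at the top of fit()) without touching the user's global warning filters is a scoped filter; a sketch, where `run_fit` stands in for the actual estimation code:

```python
import warnings
import pandas as pd

def fit_quietly(run_fit):
    # Suppress only pandas PerformanceWarnings, and only for the duration
    # of the fit; the user's warning configuration is restored afterwards.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", pd.errors.PerformanceWarning)
        return run_fit()
```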

Remove calls to matrix_multiply

When building the package with the latest version of numpy I got the following warning:

DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
from numpy.core.umath_tests import matrix_multiply

I think matrix_multiply is used in three places, sometimes for convenience, sometimes for speed. Is there a good alternative?
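np.matmul (the @ operator, NumPy >= 1.10) broadcasts over leading dimensions just like the internal gufunc did, so it should work as a drop-in replacement in the stacked-matrix cases:

```python
import numpy as np

a = np.random.rand(5, 3, 4)  # stack of 5 (3x4) matrices
b = np.random.rand(5, 4, 2)  # stack of 5 (4x2) matrices

c = a @ b                    # same result as matrix_multiply(a, b)
assert c.shape == (5, 3, 2)

# check against an explicit per-matrix loop
expected = np.stack([a[i].dot(b[i]) for i in range(5)])
assert np.allclose(c, expected)
```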

Miscellaneous enhancements

  • Make sure scientific notation works in yaml files with model specs
  • Check the parameter index (at least in the debug loglike)

Hit the 2000-column limit in SQLite - stop logging fixed parameters

In a kitchen-sink version of our serious model, we are trying to fit a model with 1.2k free parameters. An additional 1.4k parameters are fixed.

Maybe this is crazy, but it is a useful robustness check.

This currently fails with an OperationalError from SQLAlchemy because SQLite has a 2000-column limit.

Is it really necessary to log the fixed parameters, or is this more of a convenience?

[not sure whether this should go here or over to https://github.com/OpenSourceEconomics/estimagic]

Avoid mixing of strings and integers in index

e.g., I have a case where start_params_helpers() produces

category,period,name1,
x,0,0,

among other things. Note the integer 0 for name1. The problem is that if you save this as csv, change the start values, and read it back into pandas, read_csv will convert the integer into a string, and fit() will complain with:

ValueError: Index of start parameters has to be either self.params_index or the index of free parameters from start_params_helpers.

Not the worst thing in the world, but a bit annoying...
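Until the index types are harmonized inside skillmodels, a user-side workaround is to pin the dtype on the round trip; a sketch with the column names from the example:

```python
import io
import pandas as pd

# Force the name1 column to a single dtype (string) when reading back,
# so the mixed int/str level does not get silently coerced.
csv = io.StringIO("category,period,name1\nx,0,0\nx,0,abc\n")
df = pd.read_csv(csv, dtype={"name1": str})
df = df.set_index(["category", "period", "name1"])

assert df.index.get_level_values("name1").tolist() == ["0", "abc"]
```

The real fix is for skillmodels to avoid mixing strings and integers in the index in the first place.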

Tags and skillmodels.__version__

Goal

We want to be able to find the skillmodels version by typing skillmodels.__version__ and to check out the corresponding source code via a tag.

Proposed implementation

Use zest.releaser to do the tagging and the handling of the version number. I don't think it will completely replace our release.py, because I didn't see any conda integration. My preferred solution would be to still have one Python script like release.py with a few additional command-line arguments.

Since this will be relevant for respy, estimagic, gettsim and skillmodels, maybe someone else could look into it and then help me to implement it in skillmodels and estimagic.

@hmgaudecker, @tobiasraabe feel free to comment.

Better error handling when WA params are used as start params

The error message from W-A itself is more informative (too few normalizations; I still have to work my way into that), in case you want to put this on your to-do list as well. Should be no problem to catch that.

And now back to Obamacare :-)

Cheerio,
HM

Warnings from CHS:

/home/hmg/miniconda3/envs/health-cognition/lib/python3.7/site-packages/skillmodels-0.0.33-py3.7.egg/skillmodels/estimation/skill_model.py:769: UserWarning: In model baseline with dataset hrs it is not possible to use estimates from the wa estimator as start values for the chs estimator because of the following reasons:
/home/hmg/miniconda3/envs/health-cognition/lib/python3.7/site-packages/skillmodels-0.0.33-py3.7.egg/skillmodels/estimation/skill_model.py:868: UserWarning: Fitting model baseline with the wa estimator in order to get start values for the chs estimator failed. Instead naive start params will be used.
/home/hmg/miniconda3/envs/health-cognition/lib/python3.7/site-packages/skillmodels-0.0.33-py3.7.egg/skillmodels/estimation/likelihood_function.py:77: RuntimeWarning: overflow encountered in sqrt_linear_update
/home/hmg/miniconda3/envs/health-cognition/lib/python3.7/site-packages/skillmodels-0.0.33-py3.7.egg/skillmodels/estimation/likelihood_function.py:77: RuntimeWarning: invalid value encountered in sqrt_linear_update
/home/hmg/miniconda3/envs/health-cognition/lib/python3.7/site-packages/skillmodels-0.0.33-py3.7.egg/skillmodels/fast_routines/kalman_filters.py:290: RuntimeWarning: invalid value encountered in subtract
/home/hmg/miniconda3/envs/health-cognition/lib/python3.7/site-packages/skillmodels-0.0.33-py3.7.egg/skillmodels/estimation/likelihood_function.py:64: RuntimeWarning: invalid value encountered in less

Error from WA:

~/miniconda3/envs/health-cognition/lib/python3.7/site-packages/skillmodels-0.0.33-py3.7.egg/skillmodels/model_functions/transition_functions.py in model_coeffs_from_iv_coeffs_linear(iv_coeffs, loading_norminfo, intercept_norminfo, coeff_sum_value, trans_intercept_value)
189 has_trans_intercept=False, loading_norminfo=loading_norminfo,
190 intercept_norminfo=intercept_norminfo, coeff_sum_value=coeff_sum_value,
--> 191 trans_intercept_value=trans_intercept_value)
192
193

~/miniconda3/envs/health-cognition/lib/python3.7/site-packages/skillmodels-0.0.33-py3.7.egg/skillmodels/model_functions/transition_functions.py in general_model_coeffs_from_iv_coeffs(iv_coeffs, iv_intercept_position, has_trans_intercept, loading_norminfo, intercept_norminfo, coeff_sum_value, trans_intercept_value)
582 to_check = [coeff_sum_value, loading_norminfo]
583 assert None in to_check, ('Overidentified scale')
--> 584 assert to_check != [None, None], ('Underidentified scale')
585
586 to_check = [trans_intercept_value, intercept_norminfo]

AssertionError: Underidentified scale
