
neuronunit's Introduction

scidash

A collection of candidates, tests, and records for use with SciDash.

neuronunit's People

Contributors

chihweilhbird, croessert, cyrus-, justasb, kedoxey, mwatts15, nezanyat, rgerkin, russelljjarvis, sasaray


neuronunit's Issues

Getting data back off Travis.

Hi @rgerkin
I am trying to do a 5 parameter exhaustive search to use as ground truth against GA optimization results.

param = ['a','b','vr','vt','v_peak']. I guess each parameter has 10 points, meaning the search samples 10^5 points in parameter space. Even doing this in parallel on my local machine takes too long (obviously).

I am wondering if it's possible to do it on Travis in a way that makes the results accessible after the run. I have read on Stack Exchange that it's possible to push Travis data to cloud storage (by using wget to push from inside a Dockerfile), if you have a publicly writable cloud storage location.

Instead of pursuing the Travis path, I should probably be using the NSG, which does support saving results, and I should only search two parameters, since two is the limit of what is easily visualized. My only reservation about emailing NSG is handing over a nice Dockerfile to them.

Do you think I should clean up my code, merge it into scidash's russell_dev branch, and use the russell_dev branch to build on NSG? Or just the regular dev branch?

I will try to email NSG later today.

Add sanity check for NaN values to neuronunit test class

The exhaustive search exposes parameter values that drive the membrane potential towards +np.inf (actually NaN). I was thinking of creating a test subclass that just checks whether current injections lead to NaN values, and then returns something like test = tests.sufficientData('None') for possibly all of the subtests that involve current injection.

The alternative is to add extra exception handling to each subtest class, but I feel that this is less efficient. The sanity test is a type of test too, although it is preliminary and does not involve comparisons to empirical data.
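For illustration, a minimal sketch of what such a sanity check could look like (the class and method names here are hypothetical, not existing neuronunit API):

import numpy as np

class NaNSanityCheck:
    # Hypothetical pre-test: flag parameter sets whose simulated membrane
    # potential contains NaN or infinite values before the real tests run.
    def trace_is_sane(self, vm):
        # vm is assumed to be an array-like membrane potential trace
        vm = np.asarray(vm, dtype=float)
        return not (np.any(np.isnan(vm)) or np.any(np.isinf(vm)))

The current-injection subtests could then consult this flag and return an insufficient-data result instead of raising.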

New changes to backends: NEURON introduces a lot of problems.

In the file backends.py, the implementation of def set_run_params(self, **params) is too simplistic and does not work.

If I run the code block without modification I get the errors below, which indicate that the nested dictionaries are not being processed properly; i.e., //izhikevich2007Cell is not supposed to appear anywhere in that hoc variable assignment string:

Ignoring included LEMS file: Networks.xml
Ignoring included LEMS file: Simulation.xml
0.03
NEURON: syntax error
 near line 1
 m_RS_RS_pop[0].//izhikevich2007Cell={'a': 0.029999999999999999}

When I execute the following code within the set_run_params method:

def set_run_params(self, **params):
    self.params.update(params)
    for v in self.params.values():
        print(type(v))
        if type(v) is not bool and type(v) is not str:
            paramdict = list(v.values())[0]
I see that the type of v takes three different values, but it would be preferable if it were consistently a dictionary.

To paraphrase: the values returned by params.values() are not exclusively dictionaries; they now also include exactly one boolean and one string at the end, so the params dictionary would need cleaning before it could be used for assignments to hoc variables as intended in that code block.
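For illustration, a minimal sketch of the kind of cleaning described above (the function name is illustrative; params is assumed to be the dictionary held by set_run_params):

def clean_run_params(params):
    # Keep only the nested dictionaries, dropping stray booleans and strings,
    # so the remaining values can be turned into hoc variable assignments.
    return {k: v for k, v in params.items() if isinstance(v, dict)}

set_run_params could then iterate over clean_run_params(self.params) when building hoc assignment strings.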

There seems to be a lot going on here, and I will revert to an older version of this file for the time being.

Performance polling and timing of backend NEURON

Use Python's time module and/or other modules to compare the NEURON backend versus the jNeuroML backend, to check for speed-up.
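A minimal sketch of such a comparison using Python's time module (the run_neuron and run_jneuroml callables are placeholders for whatever simulation set-up is actually used on each backend):

import time

def time_backend(run_simulation):
    # Time a single call to a backend's simulation function.
    start = time.perf_counter()
    run_simulation()
    return time.perf_counter() - start

# Hypothetical usage: both callables would wrap the same current-injection
# protocol, one on the NEURON backend and one on the jNeuroML backend.
# print('NEURON: %.2f s, jNeuroML: %.2f s' %
#       (time_backend(run_neuron), time_backend(run_jneuroml)))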

Tweak GA parameters and plot error functions for the Pareto-front population.

neuronunit/__init__.py

The file neuronunit/__init__.py fails with the stack trace:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.5/site-packages/scoop-0.7.2.0-py3.5.egg/scoop/_control.py", line 150, in runFuture
    future.resultValue = future.callable(*future.args, **future.kargs)
  File "/opt/conda/lib/python3.5/runpy.py", line 254, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/opt/conda/lib/python3.5/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/opt/conda/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "nsga.py", line 649, in <module>
    vmpop, pop, stats, invalid_ind = main()
  File "nsga.py", line 533, in main
    fitnesses = list(toolbox.map(toolbox.evaluate, pop, iter_))
  File "/opt/conda/lib/python3.5/site-packages/scoop-0.7.2.0-py3.5.egg/scoop/futures.py", line 99, in _mapGenerator
    for future in _waitAll(*futures):
  File "/opt/conda/lib/python3.5/site-packages/scoop-0.7.2.0-py3.5.egg/scoop/futures.py", line 360, in _waitAll
    for f in _waitAny(future):
  File "/opt/conda/lib/python3.5/site-packages/scoop-0.7.2.0-py3.5.egg/scoop/futures.py", line 337, in _waitAny
    raise childFuture.exceptionValue
AttributeError: 'RatioScore' object has no attribute 'unpicklable'

I am trying to conditionally initialize score.picklable as a list, but it seems to fail regardless.

I am using:

if not hasattr(score, 'picklable'):
    score.picklable = []

which does not seem to work.
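Note that the traceback complains about a missing unpicklable attribute, while the guard above initializes picklable; a guard on the attribute named in the traceback might be what is needed. A minimal sketch, assuming score is the score object being bound:

# Hypothetical guard, matching the attribute name in the traceback:
if not hasattr(score, 'unpicklable'):
    score.unpicklable = []
score.unpicklable.append('plot_vm')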

Idea for even greater speed up on NSG

@rgerkin

When you think about it, all the tests in suite.judge should be run in parallel, since they are all a bit slow and often involve multiple simulations/current injections.

The general idea is

score = get_neab.suite.judge(model)#passing in model, changes model

Should be replaced with:

from itertools import repeat
from scoop import futures
import matplotlib.pyplot as plt

def test_to_model(local_test, model):
    score = local_test.judge(model)
    score.related_data['vm'].rescale('mV')
    plt.plot(model.results['t'], model.results['vm'])
    plt.clf()
    return model.results

local_tests = [t for t in get_neab.suite.tests]
judged = list(futures.map(test_to_model, local_tests, repeat(model)))

Whether or not this works probably depends on whether we have really succeeded in making the neuronunit test objects in tests/__init__.py picklable, but it's something you should look into if you have the time.

Scoop build broken and does not work.

The Docker build's installation of scoop does not pass its own tests.

It cannot distribute tasks in a genuinely parallel manner, and this seems to be a problem with the source code itself.

Alternative candidates are mpi4py and IPython parallel, as used in BluePyOpt.

I need to email the scoop mailing list to try to resolve these problems.

As-is, the backends file is probably still calling jNeuroML unnecessarily.

The goal is to eliminate unnecessary jNeuroML calls for all subsequent model instantiations.

Around line 252 of model/backends.py, the code should be updated to include the extra lines below.

        sort_file_path, _ = self.orig_lems_file_path.split("/LEMS_2007One")
        sort_file_path, _ = os.path.splitext(sort_file_path)
        architecture = platform.machine()
        NEURON_file_path = os.path.join(sort_file_path,architecture)

The code works otherwise, but it's probably slower.

inject_current interface needs to be standardized

The ReceivesCurrent capability defines an interface method inject_current, which expects the current parameter to be a neo.core.AnalogSignal.

class ReceivesCurrent(Capability):
    """Indicates that somatic current can be injected into the model."""

    def inject_current(self,current):
        """Injects somatic current into the model.  
        Parameters
        ----------
        current : neo.core.AnalogSignal
        This is a time series of the current to be injected.  
        """

        raise NotImplementedError()

However, all of the other classes and examples pass in dictionary objects instead, whose keys and values vary across models.

For example, in neuronunit/tests it passes in a dictionary:

{'amplitude':-10.0*pq.pA, 'delay':100*pq.ms, 'duration':500*pq.ms}

Furthermore, in neuronunit.neuroconstruct.capabilities, the 'ampl' key is used instead of 'amplitude':

cmd += 'err = j.sim.set_current_ampl(%f);' % injected_current['ampl']
cmd += 'channel.send(err);'

Then in the ReadMe and in Chapter 3, the dictionary values do not use units:

params={'injected_current':{'ampl':0.0053}}) # 0.0053 nA (5.3 pA) of injected 

and

params={'injected_current':{'ampl':0.006}}) # 0.0053 nanoamps of injected 

Since everything already uses dictionaries with similar forms, I propose that the inject_current method interface be changed to accept a dictionary of the following form and include units:

{'amplitude':-10.0*pA, 'delay':100*ms, 'duration':500*ms}

This will require modifying the following (a sketch of the revised interface follows this list):

  • inject_current method comments
  • neuronunit.neuroconstruct.capabilities
  • The ReadMe example
  • Chapter 3 example
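For illustration, a minimal sketch of what the revised interface could look like (kept abstract like the original; the quantities import is assumed, as elsewhere in neuronunit):

import quantities as pq
import sciunit

class ReceivesCurrent(sciunit.Capability):
    """Indicates that somatic current can be injected into the model."""

    def inject_current(self, current):
        """Inject somatic current into the model.

        Parameters
        ----------
        current : dict
            A dictionary with units, e.g.
            {'amplitude': -10.0*pq.pA, 'delay': 100*pq.ms, 'duration': 500*pq.ms}
        """
        raise NotImplementedError()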

Neuronunit Tests Dockerfile -> Showcase

@russelljjarvis
I would like to move/copy the contents of the scidash/neuronunit/tests/Dockerfile to scidash/docker-stacks/neuronunit-showcase. The idea is that neuronunit-showcase will be an environment that can run all of the richer features of neuronunit. Eventually these can be considered "core" features (after some more thorough testing).

Unit tests with arguments

@russelljjarvis
When I try to run your unit tests, I get errors like:

======================================================================
ERROR: test_build_single (unit_test.showcase_tests.OptimizationTestCase)
----------------------------------------------------------------------
TypeError: test_build_single() missing 1 required positional argument: 'rh_value'

======================================================================
ERROR: test_main (unit_test.showcase_tests.OptimizationTestCase)
----------------------------------------------------------------------
TypeError: test_main() missing 1 required positional argument: 'ind'

----------------------------------------------------------------------

Ignore for now that they are in a different location. The main concern is that they take arguments (like rh_value and ind) but I don't see how they are supposed to get these arguments passed to them. Was this just a placeholder for unit tests that you planned to finish later, or is there some way that these arguments were getting passed when you ran them with python -m unittest?
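For reference, unittest will not pass extra positional arguments to test methods; values like rh_value and ind would normally come from setUp (or class/module attributes) instead. A minimal sketch of that pattern, with illustrative placeholder values:

import unittest

class OptimizationTestCase(unittest.TestCase):

    def setUp(self):
        # Values the original tests expected as arguments are computed
        # or loaded here once per test instead.
        self.rh_value = 1.01            # placeholder rheobase-derived value
        self.ind = [0.02, 0.2, -65.0]   # placeholder individual / parameter vector

    def test_build_single(self):
        self.assertIsNotNone(self.rh_value)

    def test_main(self):
        self.assertEqual(len(self.ind), 3)

if __name__ == '__main__':
    unittest.main()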

Putting a Sanity Test/NaN test method inside the VmTest class in tests/__init__.py

If the NaN test method is defined inside VmTest, then it can be called in the final line of the code block displayed here (an excerpt from get_neab.py):

def update_amplitude(test, tests, score):
    rheobase = score.prediction['value']  # first find a value for rheobase
    # then proceed with optimizing the other parameters.
    for i in [4, 5, 6]:
        # Set current injection to just suprathreshold
        tests[i].params['injected_square_current']['amplitude'] = rheobase * 1.01
        tests[i].proceed = tests[i].sanity_check(rh_value=rheobase * 1.01)

The next task would be finding a way to reuse the simulation outputs that were used to establish that the model is sane, instead of recomputing them, in order to speed up optimization times.
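One possible shape for that reuse is a small cache keyed by the injection parameters, so a trace produced while establishing sanity can be handed back to the later tests instead of being recomputed. A minimal sketch under that assumption (the cache and key format are hypothetical, not existing neuronunit code):

# Hypothetical cache of membrane-potential traces, keyed by the injection
# parameters that produced them.
_vm_cache = {}

def cache_key(params):
    # Build a hashable key from an injected_square_current dict.
    return tuple(sorted((k, str(v)) for k, v in params.items()))

def get_or_run(params, run_simulation):
    # Return a cached trace for these parameters, or run and cache it.
    key = cache_key(params)
    if key not in _vm_cache:
        _vm_cache[key] = run_simulation(params)
    return _vm_cache[key]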

Changes to backends.py result in a cryptic error.

When I revert the lines below to their previous state, this error goes away.

            self.__class__.__bases__ = (self._backend.__class__,) + \
                                        self.__class__.__bases__

            # Add all of the backend's methods to the model instance

            #This line breaks the code.
            #self.__class__.__bases__ = tuple(set((self._backend.__class__,) + \
            #                            self.__class__.__bases__))

274dd19

<class 'neuronunit.models.reduced.ReducedModel'>
{'injected_square_current': {'amplitude': array(40.0) * pA, 'duration': array(1000.0) * ms, 'delay': array(100.0) * ms}}
{'//izhikevich2007Cell': {'a': '0.03', 'b': '-2e-08'}}
> /home/mnt/neuronunit/tests/exhaustive_search.py(190)f()
-> model.update_run_params(vm.attrs)
(Pdb) c
Traceback (most recent call last):
  File "exhaustive_search.py", line 308, in <module>
    rh_value=searcher(f,rh_param,mean_vm)
  File "exhaustive_search.py", line 220, in searcher
    rh_param[1]=list(futures.map(f,rh_param[1],repeat(vms)))
  File "exhaustive_search.py", line 190, in f
    model.update_run_params(vm.attrs)
  File "/home/mnt/neuronunit/tests/../../neuronunit/capabilities/__init__.py", line 106, in inject_square_current
    raise NotImplementedError()
NotImplementedError
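One possible explanation (an assumption, not verified): wrapping the bases in set() discards their order, and the order of __bases__ determines method resolution, so the abstract capability's inject_square_current can end up shadowing the backend's implementation and raise NotImplementedError. If the set() was only meant to avoid adding the same backend class twice, an order-preserving membership check might do the same job. A minimal sketch:

# Hypothetical order-preserving alternative to tuple(set(...)):
# only prepend the backend class if it is not already a base class.
if self._backend.__class__ not in self.__class__.__bases__:
    self.__class__.__bases__ = (self._backend.__class__,) + \
                               self.__class__.__bases__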

pickle.dump(s) of the neuronunit score type does not work; it writes empty files/strings

The command pprint(score.__dict__) yields:

{'_data': BlockManager
Items: Index([                  RheobaseTest,            InputResistanceTest,
                     TimeConstantTest,                CapacitanceTest,
                 RestingPotentialTest,     InjectedCurrentAPWidthTest,
       InjectedCurrentAPAmplitudeTest, InjectedCurrentAPThresholdTest],
      dtype='object')
Axis 1: Index([ vr-60.4809785644 a0.0265706689577 b-1.9477508505e-08 C9.20541786191e-05 c-55.0144677225 d0.186827068905 v0-65.8443590361 k0.000968076096567 vt-38.6668043391 vpeak28.2853005216], dtype='object')
ObjectBlock: slice(0, 8, 1), 8 x 1, dtype: object,
 '_item_cache': {},
 '_loc': <pandas.core.indexing._LocIndexer object at 0x7f02d92ff470>,
 'is_copy': None,
 'models': ( vr-60.4809785644 a0.0265706689577 b-1.9477508505e-08 C9.20541786191e-05 c-55.0144677225 d0.186827068905 v0-65.8443590361 k0.000968076096567 vt-38.6668043391 vpeak28.2853005216,),
 'tests': [RheobaseTest,
           InputResistanceTest,
           TimeConstantTest,
           CapacitanceTest,
           RestingPotentialTest,
           InjectedCurrentAPWidthTest,
           InjectedCurrentAPAmplitudeTest,
           InjectedCurrentAPThresholdTest]}
{'models': ( vr-60.4809785644 a0.0265706689577 b-1.9477508505e-08 C9.20541786191e-05 c-55.0144677225 d0.186827068905 v0-65.8443590361 k0.000968076096567 vt-38.6668043391 vpeak28.2853005216,), '_data': BlockManager
Items: Index([                  RheobaseTest,            InputResistanceTest,
                     TimeConstantTest,                CapacitanceTest,
                 RestingPotentialTest,     InjectedCurrentAPWidthTest,
       InjectedCurrentAPAmplitudeTest, InjectedCurrentAPThresholdTest],
      dtype='object')
Axis 1: Index([ vr-60.4809785644 a0.0265706689577 b-1.9477508505e-08 C9.20541786191e-05 c-55.0144677225 d0.186827068905 v0-65.8443590361 k0.000968076096567 vt-38.6668043391 vpeak28.2853005216], dtype='object')
ObjectBlock: slice(0, 8, 1), 8 x 1, dtype: object, 'tests': [RheobaseTest, InputResistanceTest, TimeConstantTest, CapacitanceTest, RestingPotentialTest, InjectedCurrentAPWidthTest, InjectedCurrentAPAmplitudeTest, InjectedCurrentAPThresholdTest], 'is_copy': None, '_item_cache': {}, '_loc': <pandas.core.indexing._LocIndexer object at 0x7f02d92ff470>}
Mechanisms already loaded from path: /home/mnt/neuronunit/tests/NeuroML2.  Aborting.

In the NEURON-specific instance of this problem:

pickle.dumps(score.related_data.values)

returns a long string, however:

(Pdb) pickle.dumps(score.related_data.keys())
*** TypeError: HocObject: Only Vector instance can be pickled
(Pdb) 

The result types are correct:

(Pdb) type(get_neab.suite.tests[0].last_model.results['vm'])
<class 'list'>
(Pdb) type(get_neab.suite.tests[0].last_model.results['t'])
<class 'list'>
(Pdb) 

Resetting the model's hoc object to None does not help.

(Pdb) get_neab.suite.tests[0].last_model.h=None

(Pdb) pickle.dumps(get_neab.suite.tests[0].last_model)
*** TypeError: HocObject: Only Vector instance can be pickled

Completely destroying the object does help:

(Pdb) dir(get_neab.suite.tests[0].last_model)
pickle.dumps(get_neab.suite.tests[0])
b'\x80\x03cneuronunit.tests\nRheobaseTest\nq\x00)\x81q\x01}q\x02(X\x15\x00
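A common workaround for this kind of error is to drop references to hoc objects from the pickled state and recreate them after unpickling. A minimal sketch, assuming the offending references live in the model's __dict__ (the attribute names here are illustrative):

class NEURONPicklingMixin:
    # Hypothetical mixin: exclude NEURON hoc objects from pickling.

    def __getstate__(self):
        state = self.__dict__.copy()
        # Drop live hoc handles; plain lists such as results['vm'] and
        # results['t'] pickle fine and are kept.
        for name in ('h', 'neuron'):
            state.pop(name, None)
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        # hoc handles must be re-created here if they are needed again.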

Create weighted, pooled summary observations

This will require:

  • a new subclass of neuroelectro.NeuroElectroData corresponding to these weighted summaries
  • a new neuroelectro_pooled_observation method for tests.VmTest

These should use the API to take each NeuroElectroDataMap and compute a weighted mean, with the weights proportional to the reciprocal of the squared standard error. Note that the reported SD for a given data map may in some cases actually be an SE (does the API tell you if this is the case?). So if it is an SD, then w = N/(SD^2), and if it is already an SE then just w = 1/SE^2.
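A minimal sketch of the weighting described above (the function and field names are illustrative; a real implementation would pull the mean, N, the reported spread, and an SD-vs-SE flag from the NeuroElectro API):

import numpy as np

def pooled_observation(means, spreads, ns, spread_is_sd):
    # means, spreads, ns: sequences of reported mean, SD-or-SE, and N per data map.
    # spread_is_sd: sequence of booleans, True where the spread is an SD.
    means = np.asarray(means, dtype=float)
    spreads = np.asarray(spreads, dtype=float)
    ns = np.asarray(ns, dtype=float)
    # Convert spreads to standard errors where necessary, then w = 1/SE^2
    # (equivalently w = N/SD^2 when the spread is an SD).
    se = np.where(spread_is_sd, spreads / np.sqrt(ns), spreads)
    w = 1.0 / se**2
    mean = np.sum(w * means) / np.sum(w)
    sem = np.sqrt(1.0 / np.sum(w))
    return {'mean': mean, 'sem': sem, 'n': int(np.sum(ns))}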

Off-by-one type error.

I was wondering if you could enable issues for your IzhikevichModel repository.

These are the 7 elements for nu_tests:

test_class_params = [(nu_tests.InputResistanceTest, None),
                     (nu_tests.TimeConstantTest, None),
                     (nu_tests.CapacitanceTest, None),
                     (nu_tests.RestingPotentialTest, None),
                     (nu_tests.InjectedCurrentAPWidthTest, None),
                     (nu_tests.InjectedCurrentAPAmplitudeTest, None),
                     (nu_tests.InjectedCurrentAPThresholdTest, None)]

So I think the nested clause should read for i in [5,6,7]:

def update_amplitude(test,tests,score):
    rheobase = score.prediction['value']
    for i in [5,6,7]:

Also, a different minor fix in the same file. Technically, using neuron as a variable name works out fine in the line below:

neuron = {'nlex_id': 'nifext_50'} # Layer V pyramidal cell

since the way Justas and I are using the neuron module is by making it a class attribute. However, I think this name clash has the potential to cause havoc in the future; maybe we could call it nlexid or neurolex? I think it only reappears one other time in that file, in:

observation = cls.neuroelectro_summary_observation(neuron)

Also, the AIBS.ipynb file may contain the same off-by-one type error.

AP width computation issue

Currently, the AP half-width is computed by taking a 10ms window around an AP and:

  1. Finding the maximum membrane v
  2. The minimum v
  3. Counting the samples where v is above mean(min,max)

Should #2 be using the v of the AP threshold? According to NE definition:

Calculated as the AP duration at the membrane voltage halfway between AP threshold and AP peak. Most commonly calculated using the first AP in a train at rheobase current.
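If that definition is followed literally, step 2 would use the AP threshold voltage rather than the window minimum. A minimal sketch of that variant (array-based; how the threshold is detected is left as an assumption):

import numpy as np

def ap_half_width(vm, dt, threshold):
    # vm: 1-D array of membrane potential samples in the spike window
    # dt: sampling interval; threshold: AP threshold voltage (assumed known)
    peak = np.max(vm)
    half = threshold + 0.5 * (peak - threshold)  # halfway between threshold and peak
    return np.count_nonzero(vm > half) * dt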

After merging from scidash, can't find the file that the code for score.unpicklable draws from

Hi @rgerkin
After merging from scidash, I can't find the file that the code for score.unpicklable draws from. Was it git added?

If you scroll down to the last lines of stdout in the corresponding scidash Travis build, you can see the same errors:

https://travis-ci.org/scidash/neuronunit/builds/216206049

jovyan@1ea5a442edd8:/home/mnt/neuronunit$ grep -r "unpicklable" *
Binary file tests/__pycache__/__init__.cpython-35.pyc matches
tests/__init__.py:        score.unpicklable.append('plot_vm')

AttributeError: 'RatioScore' object has no attribute 'unpicklable'

 File "analysis.py", line 108, in build_single
    score = get_neab.suite.judge(model)#passing in model, changes model
  File "/home/jovyan/work/scidash/sciunit/sciunit/__init__.py", line 430, in judge
    deep_error=deep_error)
  File "/home/jovyan/work/scidash/sciunit/sciunit/__init__.py", line 302, in judge
    raise score.score # An exception.  
  File "/home/jovyan/work/scidash/sciunit/sciunit/__init__.py", line 296, in judge
    score = self._judge(model, skip_incapable=skip_incapable)
  File "/home/jovyan/work/scidash/sciunit/sciunit/__init__.py", line 256, in _judge
    score = self._bind_score(score,model,observation,prediction)
  File "/home/jovyan/work/scidash/sciunit/sciunit/__init__.py", line 226, in _bind_score
    score = self.bind_score(score,model,observation,prediction)
  File "/home/mnt/neuronunit/tests/../../neuronunit/tests/__init__.py", line 98, in bind_score
    score.unpicklable.append('plot_vm')
AttributeError: 'RatioScore' object has no attribute 'unpicklable'

Documenting problem: slower performance for optimization code in Docker on OSX/Mac versus Ubuntu

CPU utilization looks like only 4 CPUs are in use, and I suspect the Docker container is starting with less than the full amount of available RAM too. My suspicions were correct: hardly any memory is used by default in the Docker invocation. Docker runs much faster when provided with all the available memory and CPUs.

Solved: from the Docker whale icon, select Preferences, click Advanced, and then max out memory and CPUs with the scroll bars.

Speedup for NEURON simulations

We want to speed up the simulations. In neuronunit/models/__init__.py we have the LEMSModel class. I have added a new backends.py file which will contain different simulation backends to be used with LEMSModel, i.e. the LEMSModel will always still be instantiated using a path to a LEMS file, but some of the operations on it can then be done either on the file, or on some representation of the model in a program like NEURON, depending on the backend.

In backends.py you will see the NEURONBackend, which is currently empty but which can be filled in by analogy with what the jNEUROMLBackend is currently doing. Note that for jNEUROMLBackend, I am just calling the same code that was already there before, because that is the use case where we actually want to create new LEMS files with each change of parameters. I expect that the NEURONBackend will instead change parameters by making calls to pyneuron. Additional class attributes will need to be added, such as attributes pointing to the NEURON model in memory.
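For illustration, a minimal sketch of the structure being described (the class and method names here are illustrative, not the final backends.py API):

class Backend:
    # Base class: controls how a LEMSModel's parameters are set and how
    # simulations are run.
    def init_backend(self, lems_file_path):
        self.lems_file_path = lems_file_path

    def set_run_params(self, **params):
        raise NotImplementedError()

class jNeuroMLBackend(Backend):
    def set_run_params(self, **params):
        # Rewrite the LEMS file, as the existing code already does.
        pass

class NEURONBackend(Backend):
    def set_run_params(self, **params):
        # Change parameters on the in-memory NEURON model via its Python
        # interface instead of writing new LEMS files.
        pass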

Problems replicating AIBS.ipynb on the Docker image: aibs.py

The file aibs.py (https://github.com/scidash/neuronunit/blob/dev/neuronunit/aibs.py#L20) contains the following lines:

cmd = ct.get_cell(dataset_id) # Cell metadata
sweep_num = None
if kind == 'rheobase':
   sweep_id = cmd['ephys_features'][0]['rheobase_sweep_id']

However, the Allen Brain SDK has changed, and the lines should now read:

cmd = ct.get_ephys_features(dataset_id)
sweep_num = None
if kind == 'rheobase':
   sweep_ids=cmd['rheobase_sweep_id']

Notice also that in the above I used sweep_ids = cmd['rheobase_sweep_id'], not sweep_id = cmd['rheobase_sweep_id'][0].

If I use sweep_id = cmd['rheobase_sweep_id'][0], then no sp['id'] == i matches in the subsequent code. So I had to create a more complete workaround, which is located at:
https://github.com/russelljjarvis/sciunitopt/blob/master/AIBS.py#L72-L101

There is also a problem with:

for sp in experiment_params:
for i in sweep_ids:

replicating AIBS.ipynb: key errors in /neuronunit/models/reduced.py

Hi @rgerkin,

I converted the ipynb file to a regular Python file, AIBS.py, and executed it. The error occurs when I try to invoke sciunit's judge: https://github.com/russelljjarvis/sciunitopt/blob/master/AIBS.py#L146

 File "AIBS.py", line 180, in <module>
    suite.judge(model)

As you can see, AIBS.py fails at suite.judge(model). At first I thought the problem was in LEMS_2007One.xml, because after running AIBS.py, model.results.keys() returns dict_keys(['t', 'RS_pop[0]/v']), and the output column RS_pop[0]/v is specified in the LEMS_2007One.xml file.

However, I have now come around to thinking that the problem is in /opt/conda/lib/python3.5/site-packages/neuronunit/models/reduced.py, line 28, as the error output below suggests.

 File "AIBS.py", line 180, in <module>
    suite.judge(model)
  File "/opt/conda/lib/python3.5/site-packages/sciunit/__init__.py", line 430, in judge
    deep_error=deep_error)
  File "/opt/conda/lib/python3.5/site-packages/sciunit/__init__.py", line 302, in judge
    raise score.score # An exception.  
  File "/opt/conda/lib/python3.5/site-packages/sciunit/__init__.py", line 296, in judge
    score = self._judge(model, skip_incapable=skip_incapable)
  File "/opt/conda/lib/python3.5/site-packages/sciunit/__init__.py", line 239, in _judge
    prediction = self.generate_prediction(model)
  File "/opt/conda/lib/python3.5/site-packages/neuronunit/tests/__init__.py", line 468, in generate_prediction
    lookup = self.threshold_FI(model, units)
  File "/opt/conda/lib/python3.5/site-packages/neuronunit/tests/__init__.py", line 507, in threshold_FI
    f(0.0*units)
  File "/opt/conda/lib/python3.5/site-packages/neuronunit/tests/__init__.py", line 500, in f
    n_spikes = model.get_spike_count()
  File "/opt/conda/lib/python3.5/site-packages/neuronunit/capabilities/__init__.py", line 56, in get_spike_count
    spike_train = self.get_spike_train()
  File "/opt/conda/lib/python3.5/site-packages/neuronunit/models/reduced.py", line 40, in get_spike_train
    vm = self.get_membrane_potential(rerun=rerun, **run_params)
  File "/opt/conda/lib/python3.5/site-packages/neuronunit/models/reduced.py", line 28, in get_membrane_potential
    v = np.array(self.results['v'])
KeyError: 'v'

A temporary fix for /opt/conda/lib/python3.5/site-packages/neuronunit/models/reduced.py#L28 which seems to work is:

for rkey in self.results.keys():
    if 'v' in rkey:
        v = np.array(self.results[rkey])

Standardize indentation across all .py files

Python is a white-space sensitive language and mixing tabs with spaces for indentation results in syntax errors. We should agree on a convention for indentation, and use it throughout.

I propose that we use "4 spaces per indentation level" for all Python files, i.e. 1 TAB = 4 SPACES.

Using spaces will result in unambiguous indentation, while tabs may be system dependent. The popular code editors allow setting space/tabs preference.

If this is agreeable, I can convert the existing indentation to spaces in all .py files in the repo. However, to minimize merging issues, this should be planned to be done at a point when everyone has checked in their changes.

Bug re-introduced with merge in pull request.

This line:
https://github.com/scidash/neuronunit/blob/master/neuronunit/tests/__init__.py#L180
Notice that in my version below, y is cast to quantities pq.ms, as well as x-offset. It should read:

popt, pcov = curve_fit(func, x-offset*pq.ms, y*pq.ms, [0.001,2,y.min()]) # Estimate starting values for better convergence

I corrected this a while ago, but I merged the error back in by accident after auto-merging the pull request.

Move to neo 0.5

I was taking a look at the ProducesActionPotentials capability, which requires the model to provide a neo.core.AnalogSignalArray. The latest version of neo (0.5) no longer has AnalogSignalArray and simply uses AnalogSignal instead. The capability might therefore need to be updated accordingly.
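For reference, constructing the replacement object in neo 0.5 looks like this (a minimal example with illustrative values):

import quantities as pq
from neo.core import AnalogSignal

# A 1 kHz-sampled membrane potential trace; neo 0.5 uses AnalogSignal where
# older code expected AnalogSignalArray.
vm = AnalogSignal([-65.0, -64.9, -64.8] * pq.mV, sampling_rate=1.0 * pq.kHz)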

Docker build NU replication of AIBS.ipynb: scipy/optimize/minpack is given parameters it can't optimize.

In an attempted Docker-build replication of the AIBS notebook, scipy/optimize/minpack is given parameters it can't optimize.

See the error messages below:

check_error = suite.judge(model)
  File "/opt/conda/lib/python3.5/site-packages/sciunit/__init__.py", line 430, in judge
    deep_error=deep_error)
  File "/opt/conda/lib/python3.5/site-packages/sciunit/__init__.py", line 302, in judge
    raise score.score # An exception.
  File "/opt/conda/lib/python3.5/site-packages/sciunit/__init__.py", line 296, in judge
    score = self._judge(model, skip_incapable=skip_incapable)
  File "/opt/conda/lib/python3.5/site-packages/sciunit/__init__.py", line 239, in _judge
    prediction = self.generate_prediction(model)
  File "/opt/conda/lib/python3.5/site-packages/neuronunit/tests/__init__.py", line 215, in generate_prediction
    tau = self.__class__.get_tau(vm, i)
  File "/opt/conda/lib/python3.5/site-packages/neuronunit/tests/__init__.py", line 163, in get_tau
    coefs = cls.exponential_fit(region, i['delay'])
  File "/opt/conda/lib/python3.5/site-packages/neuronunit/tests/__init__.py", line 175, in exponential_fit
    popt, pcov = curve_fit(func, x-offset*pq.s, y, [0.001,2,y.min()]) # Estimate starting values for better convergence
  File "/opt/conda/lib/python3.5/site-packages/scipy/optimize/minpack.py", line 680, in curve_fit
    raise RuntimeError("Optimal parameters not found: " + errmsg)
RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 800.
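When curve_fit hits maxfev, the usual remedies are a better initial guess p0 and/or a larger maxfev. A minimal, self-contained example of both (the exponential here merely stands in for the fit in exponential_fit; it is not the actual neuronunit code):

import numpy as np
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a * np.exp(-b * x) + c

x = np.linspace(0, 1, 200)
y = func(x, 0.01, 2.0, -65.0) + np.random.normal(0, 1e-4, x.size)

# A rough initial guess and a larger evaluation budget make convergence
# much more likely than with the defaults.
popt, pcov = curve_fit(func, x, y, p0=[0.001, 2, y.min()], maxfev=5000)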
