neuralensemble / elephant
Elephant is the Electrophysiology Analysis Toolkit
Home Page: http://www.python-elephant.org
License: BSD 3-Clause "New" or "Revised" License
The 5th CI environment (ubuntu) is failing by timing out. This behavior is seen in the latest PR integration, but also in PRs such as #110.
The build errors with a timeout:
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#Build-times-out-because-no-output-was-received
The build has been terminated
In the newest implementation 6.28 (2017.03.24) of the fim.so library found at
http://www.borgelt.net/pyfim.html
which is used by the SPADE module, giving a list of identical transactions as input returns an empty list. This could be quick-fixed in Elephant, but a better solution would be to fix it on the fim.so side.
The function to calculate the instantaneous rate profile should be able to deal with lists of spike trains (like the time_histogram() function), e.g., to calculate the smoothed PSTH.
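Since the exact API is still open, here is a minimal numpy sketch (hypothetical function name, not the elephant API) of what handling a list of spike trains could look like: pool all trains into one common histogram, then smooth it with a Gaussian kernel.

```python
import numpy as np

def smoothed_psth(spiketrains, t_start, t_stop, binsize, sigma):
    """Hypothetical sketch: pooled PSTH over a list of spike trains,
    smoothed with a Gaussian kernel (not the actual elephant API)."""
    edges = np.arange(t_start, t_stop + binsize, binsize)
    # pool spike counts over all trains into one histogram
    counts = sum(np.histogram(st, bins=edges)[0] for st in spiketrains)
    # average rate per train, in spikes per time unit
    rate = counts / (len(spiketrains) * binsize)
    # Gaussian smoothing kernel, truncated at +/- 3 sigma and normalized
    k_t = np.arange(-3 * sigma, 3 * sigma + binsize, binsize)
    kernel = np.exp(-k_t ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    return np.convolve(rate, kernel, mode="same")

trains = [[0.1, 0.5, 0.9], [0.12, 0.48, 0.88]]
psth = smoothed_psth(trains, t_start=0.0, t_stop=1.0, binsize=0.01, sigma=0.05)
```

The real implementation would of course work on neo.SpikeTrain objects with units, like time_histogram() does.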
Somehow, when running the tests of the ubuntu environment via Travis, the unittests get stuck and do not run to the end. Travis stops the build at some point due to no output. On my laptop I could verify that the CSD tests take a lot of time, but not more than 10 minutes. On my own branch I even put a travis_wait command around the tests, but that didn't help either.
I would appreciate any further information and help on this issue.
@toddrjen raised the question in his comments to #2; I'm creating a new issue to make it more visible: should elephant target currently available Neo versions (0.3.x) or the next, API-breaking version (0.4.0)?
Personally, I share Todd's opinion and would go directly with 0.4 compatibility. elephant probably won't be ready for release for a while and would need to be changed as soon as Neo 0.4 comes along.
On the other hand, this means developing against a moving target, and that existing code using Neo won't be able to immediately profit from elephant without changes. But it could also help in the development and eventual adoption of the new Neo version.
Currently requirements are specified both directly in setup.py and in requirements.txt.
Would it be possible to parse requirements.txt with something like the following (this would require removing commented-out lines from requirements.txt)
with open('requirements.txt') as fp:
    install_requires = fp.read()
setup(install_requires=install_requires, ...)
rather than have two different locations for the same information?
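A small variant could also tolerate comments and blank lines in requirements.txt instead of requiring their removal. A self-contained sketch (hypothetical helper name):

```python
def parse_requirements(text):
    """Return requirement specifiers, skipping blank lines and # comments."""
    reqs = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            reqs.append(line)
    return reqs

print(parse_requirements("numpy>=1.6.2\n# optional\n\nquantities>=0.10.1\n"))
# ['numpy>=1.6.2', 'quantities>=0.10.1']
```

setup.py could then call this on the contents of requirements.txt and pass the result to install_requires.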
The computation of the optimized kernel width returns an error for spike trains with few and very close spikes. Example to reproduce the error:
import neo
import quantities as pq
import elephant.statistics as stat
st = neo.SpikeTrain([2.12, 2.13, 2.15] * pq.s, t_stop=10 * pq.s)
rate = stat.instantaneous_rate(st, 1 * pq.ms)
pandas uses vbench to run benchmarks on each new version of the code. This way all possible "performance regressions" are detected early. Something similar could be useful for elephant.
More information:
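In the meantime, a poor man's version of the idea can be sketched with the standard library's timeit (a hypothetical helper, not vbench itself): record a baseline timing per function and compare timings of new code against it.

```python
import timeit

def benchmark(stmt, setup="pass", repeat=5, number=100):
    """Best-of-N wall-clock time for `stmt`, as vbench-style tools record it."""
    return min(timeit.repeat(stmt, setup=setup, repeat=repeat, number=number))

# Compare the current timing against a stored baseline, allowing some headroom
# (the 50% threshold here is an arbitrary example value):
baseline = benchmark("sorted(range(1000))")
current = benchmark("sorted(range(1000))")
regression = current > baseline * 1.5
```

A real setup would persist the baselines per commit and run this in CI, which is essentially what vbench (and its successor asv) automates.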
I'm trying to plot cross-correlation histograms of the attached spike data using the following elephant code:
import numpy as np
from quantities import s, ms
from neo import SpikeTrain
from elephant.conversion import BinnedSpikeTrain
from elephant.spike_train_correlation import corrcoef
spikes = np.loadtxt("6E.txt", skiprows=1, delimiter=",",
                    dtype={"names": ("time", "id"), "formats": (float, int)})
neuron_indices = np.random.choice(14395, 200, replace=False)
spike_times = [SpikeTrain(spikes["time"][spikes["id"] == n] * ms, t_stop=10 * s)
               for n in neuron_indices]
binned_spike_times = BinnedSpikeTrain(spike_times, binsize=2.0 * ms)
correlation = corrcoef(binned_spike_times)
The corrcoef function emits a warning at the line C[i, j] = C[j, i] = enumerator / denominator, and I'm getting NaNs in the resulting correlation matrix. It is quite possible the problem is with my data, but other analyses look totally fine (raster plotting, CV ISI, rate histograms, etc.), and this warning is not helpful in determining the problem! I am using Elephant 0.5.0 from pip.
6E.txt
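For what it's worth, NaNs of this kind typically come from binned trains with zero variance (e.g. a sampled neuron that never spikes in the analyzed window), which makes the denominator zero. A plain numpy illustration of the effect (not the elephant code itself):

```python
import numpy as np

binned = np.array([[0, 1, 0, 2, 1],
                   [0, 0, 0, 0, 0]])  # second "neuron" never spikes
# the zero-variance row produces a 0/0 division inside corrcoef
with np.errstate(invalid="ignore", divide="ignore"):
    C = np.corrcoef(binned)
print(np.isnan(C[0, 1]))  # True
```

So the warning could be made more helpful by naming the offending (silent or constant) spike trains.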
I would like to discuss whether we should provide a new environment in the install script to test MPI- and/or C-based code. This issue is connected to PR #110, since SPADE uses a pre-compiled C file (a .so, which also has to be downloaded) and mpi4py.
My suggestion would be to add a new matrix environment in install.sh and in the corresponding .yaml, where we have a minimal setup using conda and mpi4py; additionally we download the C file.
Any further suggestions are welcome.
When trying to access the bin_edges property of a BinnedSpikeTrain object, the following error occurs:
Traceback (most recent call last):
File "test_elephant.py", line 14, in <module>
edges = binned.bin_edges
File "/home/bartosz/.pyenv/versions/anaconda3-2.3.0/lib/python3.4/site-packages/elephant-0.3.0-py3.4.egg/elephant/conversion.py", line 585, in bin_edges
self.num_bins + 1, endpoint=True),
File "/home/bartosz/.pyenv/versions/anaconda3-2.3.0/lib/python3.4/site-packages/numpy/core/function_base.py", line 125, in linspace
return y.astype(dtype, copy=False)
TypeError: astype() got an unexpected keyword argument 'copy'
The error can be reproduced with the following script:
from elephant.conversion import BinnedSpikeTrain
import neo
from quantities import ms
import numpy as np
spiketimes = np.arange(10) * 10
tmax = spiketimes.max()
spiketrain = neo.SpikeTrain(spiketimes * ms, t_stop=tmax * ms)
binned = BinnedSpikeTrain(spiketrain, t_start=0 * ms, binsize=50 * ms)
edges = binned.bin_edges
It appears in Python 3.4.4, with the master branches of Elephant and neo, quantities 0.11.1 and numpy 1.11.0 (I think it does not occur with earlier versions of numpy).
The problem is with the following code in conversion.py:L584
return pq.Quantity(np.linspace(self.t_start, self.t_stop,
self.num_bins + 1, endpoint=True),
units=self.binsize.units)
It seems that numpy.linspace returns a Quantity object when t_start and t_stop are also Quantity objects. Contrary to what the docstring says, the Quantity.astype method does not take a copy parameter (numpy.ndarray.astype does). The workaround is to use t_start and t_stop without their units:
np.linspace(self.t_start.magnitude, self.t_stop.magnitude,
self.num_bins + 1, endpoint=True)
This bug was first spotted by @medelero
@apdavison Is there a function for this yet? I didn't see it in a brief search. If not let me know if I should just contribute the old AnalogSignal.threshold_detection code from NeuroTools, and if so to what module (maybe a new module for AnalogSignal manipulations, which could subsume sta.py since it currently only has one function).
Update documentation on building releases for the Elephant package, e.g., with respect to using twine instead of sdist upload.
There are many Python packages dealing with time series analysis, which provide functions that may be of use to neuroscientists.
For an individual user, finding and identifying the functions of interest among multiple packages will be difficult. Furthermore, some (most) of these functions will not handle Neo objects or even units.
For functions which need to be adapted to handle Neo objects/units, it seems obvious that Elephant should provide wrappers. The question is whether we should also provide wrappers /aliases for functions that do not need to be adapted. An example of the latter is scipy.stats.variation()
, for which I added an alias cv()
(coefficient of variation), as I think the latter name will be much more obvious to neuroscientists.
The arguments for providing aliases are:
(1) providing all the functions of interest to neuroscientists in a single namespace;
(2) giving function names that are more familiar to neuroscientists.
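For context, such an alias is essentially one line. A self-contained numpy sketch of the coefficient of variation (scipy.stats.variation computes std/mean the same way, with ddof=0 by default):

```python
import numpy as np

def cv(a):
    """Coefficient of variation: std / mean (what scipy.stats.variation computes)."""
    a = np.asarray(a, dtype=float)
    return a.std() / a.mean()

isis = np.array([10.0, 10.0, 10.0])
print(cv(isis))  # 0.0 for a perfectly regular spike train
```

So the cost of providing the alias is tiny, and the gain is that a neuroscientist searching for "cv" finds it immediately.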
If there are no spikes (no threshold crossings), threshold_detection returns a SpikeTrain that has undefined length, rather than length zero.
See here
I can fix this easily, but first I'm wondering if there is an upstream fix in Neo to disallow this kind of SpikeTrain (i.e., there appear to be two ways to generate a SpikeTrain with zero spikes, and only one of them seems legitimate; should the other one raise an error or be redirected to the legitimate one)?
Hi,
In /spike_train_correlation.py, in the functions '_cch_speed' and '_cch_memory', I tried a bin width of 0.4 and a window size of 40. The input validation raises the error "ValueError: The window has to be a multiple of the binsize" despite this obviously being true. In my case
win[0].rescale(binsize.units).magnitude % binsize.magnitude = 0.399999999998
so this is obviously a numerical issue with a simple tolerance solution.
Best,
Tristan
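A sketch of such a tolerance-based check that could replace the exact modulo comparison (plain floats here; the real code compares rescaled quantities):

```python
import numpy as np

binsize = 0.4
window = 40.0
remainder = window % binsize
# The naive check `remainder == 0` fails: due to floating-point error the
# remainder comes out close to binsize (0.3999...) instead of exactly 0.
print(remainder)
# Tolerance-based check: the remainder must be close to 0 or close to binsize.
is_multiple = bool(np.isclose(remainder, 0.0) or np.isclose(remainder, binsize))
print(is_multiple)  # True
```

Equivalently, one could round window / binsize to the nearest integer and check that the product is close to window.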
The hyperlink in "Stochastic spike train generation" in [http://elephant.readthedocs.org/en/latest/reference/spike_triggered_average.html] links to the wrong page.
Very, very minor issue (it is only a warning, and I have checked on the web; it seems not serious and related to numpy... but maybe you want to check it, in order to have a "clean" output). I have followed the tutorial step-by-step (http://elephant.readthedocs.io/en/latest/tutorial.html).
[About this: excellent tutorial! Very clear and well done. A suggestion: you could add a reference, for instance P. Dayan and L.F. Abbott, "Theoretical Neuroscience" (2001, Par. 1.4).]
My NumPy is numpy==1.11.2
Details in the attached file.
Thank you
VisibleDeprecationWarning.txt
P.S. The warning appears only the first time you use "homogeneous_poisson_process" in the python session. Subsequent calls in the same session produce no warning messages.
Contrary to the definition in the docs, the alpha kernel does not take non-negative values only after t = 0 in elephant 0.4.1.
To demonstrate:
import neo
import elephant as ele
import scipy as sp
import matplotlib.pyplot as plt
import quantities as pq
st = neo.SpikeTrain([1] * pq.s, t_start=0 * pq.s, t_stop=2 * pq.s)
kernel = ele.kernels.AlphaKernel(200 * pq.ms)
fs = 0.1 * pq.ms
asig = ele.statistics.instantaneous_rate(st, t_start=st.t_start, t_stop=st.t_stop,
                                         sampling_period=fs, kernel=kernel)
plt.plot(asig.times.rescale(pq.s), asig)
[plt.axvline(t) for t in st.times]
Edit: the peak is almost at the median (not exactly; probably the kernel never decays back to zero). I looked at the AlphaKernel class in kernels.py and it looks fine. Is there some median centering of the kernel going on somewhere else that might need to be disabled?
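For reference, the textbook alpha function is causal, i.e. identically zero before t = 0. A self-contained numpy sketch of that definition (my own helper, not the elephant AlphaKernel):

```python
import numpy as np

def alpha_kernel(t, tau):
    """Textbook alpha function: (t / tau**2) * exp(-t / tau) for t >= 0, zero before."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t >= 0
    out[pos] = (t[pos] / tau ** 2) * np.exp(-t[pos] / tau)
    return out

t = np.linspace(-0.5, 1.5, 2001)
k = alpha_kernel(t, tau=0.2)
print(bool(np.all(k[t < 0] == 0)))  # True: no mass before t = 0
```

Any rate contribution appearing before a spike therefore points to a shift (e.g. median centering) being applied somewhere downstream of the kernel itself.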
from elephant.spike_train_generation import homogeneous_poisson_process
import quantities as pq
print(homogeneous_poisson_process(0.00001*pq.Hz, t_start=3*pq.s, t_stop=2*pq.s))
>>> [] s
If t_start is greater than t_stop, no error is thrown. This happens when the rate is a small value. I think an error should be thrown in that situation.
The problem may be related to neo.SpikeTrain and occurs if there is an empty list, e.g.,
import neo
import quantities as pq
print(neo.SpikeTrain([] * pq.s, t_start=5 * pq.s, t_stop=4 * pq.s))
>>> [] s
In test_spike_train_generation.py, in order to test the spike_extraction function, some reference artificial data (generated with make_spike_extraction_test_data.py and stored in a .npz file as a neo block) are loaded using the PyNNIO class, which no longer exists. This makes the test fail.
At the moment neo is installed directly from the GitHub master branch. It should be installed via pip to have a stable working solution.
Right now the problem is that the pip neo 0.6.1 version is not compatible with the latest numpy version which we have in our scripts.
I guess it is better to wait for the upcoming 0.7.0 release of neo.
For some of the entries in the Travis matrix, the versions of numpy and scipy that end up being installed are not the same as the versions requested by the environment variables in .travis.yml, e.g.:
DISTRIB="ubuntu" NUMPY_VERSION="1.6.2" PANDAS_VERSION="0.16.0": what ends up being installed is numpy 1.10.4, scipy 0.17.0. The later version of numpy is pulled in by pandas, scipy by Elephant itself.
DISTRIB="conda_min" NUMPY_VERSION="1.6.2" SCIPY_VERSION="0.11.0": we get numpy 1.10.2, scipy 0.16.1, which are pulled in by installing the mkl package.
DISTRIB="conda" NUMPY_VERSION="1.9.0" SCIPY_VERSION="0.16.0" PANDAS_VERSION="0.16.0": we get numpy 1.10.2, scipy 0.16.1, again pulled in by mkl.
Hello Neuronal Ensemble,
I've been working on a function that takes an AnalogSignal as input, finds spikes by threshold and returns a SpikeTrain object of the spike peaks. I also want to extract the waveforms of each spike in an interval around the peak but I am confused about the usage of SpikeTrain.waveforms.
By the neo doc: waveforms: (quantity array 3D (spike, channel_index, time)) The waveforms of each spike.
Therefore my .waveforms attribute has the shape (n, 1, i), where n is the number of spikes, 1 is just 1 because I have only 1 AnalogSignal and i is the extraction interval passed to the function. Is that the intended use of .waveforms?
Thanks,
Daniel
Should we integrate pep8 speaks into Elephant? The tool checks for pep8 compatibility for ongoing pull requests.
https://pep8speaks.com/
https://github.com/OrkoHunter/pep8speaks
When doing the PR in #32 I noticed that the nosetests performed by Travis CI fail with the following error:
AssertionError: attr is not equal [names]: FrozenList([None]) != FrozenList([u'0'])
I checked and saw that this issue is related to the test_pandas_bridge.py unittest. It seems that the pandas version specified in the install script has problems with the newest numpy library, or vice versa.
However, after trying locally with all the newest versions, the tests still fail.
I assume the unittest for the pandas bridge has to be adapted to the newest versions.
Currently the functions lv() and cv2() in statistics raise an error if the input spike train has fewer than three spikes. It might be preferable to replace this with a warning message and return NaN or None instead (suggested in #108). For example, this avoids breaking for-loops over lists of spike trains.
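A sketch of the suggested behavior (hypothetical wrapper name; the CV2 formula below is the standard definition, mean of 2|I(n+1) - I(n)| / (I(n+1) + I(n)) over consecutive ISIs):

```python
import warnings
import numpy as np

def cv2_safe(isis):
    """Warn and return NaN instead of raising when there are too few
    intervals (fewer than 2 ISIs, i.e. fewer than 3 spikes)."""
    isis = np.asarray(isis, dtype=float)
    if isis.size < 2:
        warnings.warn("spike train has fewer than three spikes; returning NaN")
        return np.nan
    return np.mean(2.0 * np.abs(np.diff(isis)) / (isis[:-1] + isis[1:]))

print(cv2_safe([10.0]))  # nan (after a warning) instead of an exception
```

With this pattern a loop over many spike trains keeps running, and the NaNs can be filtered out afterwards with np.isnan.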
The lines
import quantities as pq
from quantities import s
are missing from the example in the SFC function.
The spike_train_correlation.cch() function raises an error even when the window parameter is a multiple of the binsize. Example:
import elephant.spike_train_correlation as corr
import neo
import elephant.conversion as conv
import quantities as pq
sts1 = neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=4 * pq.s)
sts2 = neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=4 * pq.s)
window = [-0.1, 0.1] * pq.s
binsize = 0.01 * pq.s
cch = corr.cch(conv.BinnedSpikeTrain(sts1, binsize),
               conv.BinnedSpikeTrain(sts2, binsize),
               window=window)
This is due to the rounding error of the % operator (e.g., 0.1 % 0.01 != 0 in floating point). The check for this condition should be changed.
Travis is discontinuing support for Ubuntu precise at the end of April 2018. Our Travis script explicitly has dist: precise in the header. If not specified, the Ubuntu build in the Travis build matrix produced errors before. So we should investigate this issue further and change our Travis script accordingly.
For more information see: https://blog.travis-ci.com/2017-08-31-trusty-as-default-status
Given this spike train slice:
>>> spike_slice
<SpikeTrain(array([ 503.8 , 520.05 , 536.325, 552.625, 568.925, 585.225, 601.575, 617.95 , 634.325, 650.725, 667.15 , 683.575, 700.05 , 716.525, 733.025, 749.55 , 766.075, 782.625, 799.175, 815.75 , 832.35 , 848.95 , 865.575, 882.225, 898.9 , 915.575, 932.275, 949. , 965.75 , 982.5 , 999.25 , 1016.025, 1032.825, 1049.65 , 1066.5 , 1083.35 , 1100.225, 1117.125, 1134.025, 1150.95 , 1167.9 , 1184.875, 1201.85 , 1218.85 , 1235.875, 1252.9 , 1269.95 , 1287.025, 1304.1 , 1321.175, 1338.275, 1355.4 , 1372.525, 1389.675, 1406.85 , 1424.05 , 1441.25 , 1458.5 , 1475.75 , 1493. ]) * ms, [500.0, 1500.0])>
when I run elephant.statistics.isi to get the inter-spike intervals,
isi(spike_slice)
I get
>>> isi(spike_slice)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~/miniconda2/lib/python2.7/site-packages/elephant/statistics.py", line 49, in isi
    intervals = np.diff(spiketrain, axis=axis)
  File "~/miniconda2/lib/python2.7/site-packages/numpy/lib/function_base.py", line 1578, in diff
    return a[slice1]-a[slice2]
  File "~/miniconda2/lib/python2.7/site-packages/neo/core/spiketrain.py", line 461, in __sub__
    _check_time_in_range(spikes - time, self.t_start, self.t_stop)
  File "~/miniconda2/lib/python2.7/site-packages/neo/core/spiketrain.py", line 66, in _check_time_in_range
    (value, t_start))
ValueError: The first spike ([ 16.25 16.275 16.3 16.3 16.3 16.35 16.375 16.375 16.4 16.425 16.425 16.475 16.475 16.5 16.525 16.525 16.55 16.55 16.575 16.6 16.6 16.625 16.65 16.675 16.675 16.7 16.725 16.75 16.75 16.75 16.775 16.8 16.825 16.85 16.85 16.875 16.9 16.9 16.925 16.95 16.975 16.975 17. 17.025 17.025 17.05 17.075 17.075 17.075 17.1 17.125 17.125 17.15 17.175 17.2 17.2 17.25 17.25 17.25 ] ms) is before t_start (500.0)
Why am I getting this error? Clearly,
>>> spike_slice[0]
array(503.79999999967873) * ms
does not start before t_start (500.0).
Does the isi function not work with a slice taken from a complete spike train?
Note:
isi(whole_spike_train)
np.diff(spike_slice)
0.4.1
0.5.1
I think this warrants its own issue:
This also leads me to another issue I have been thinking about: what do we do about the metadata of a neo object? When, for example, we get the average spike rate of a spike train, we end up with just a quantity. Is that what we want? Might it be a good idea to have some class that stores the output of these sorts of analyses along with the metadata of the original neo object? Or is that overkill?
The problem with this is doing it in a generic manner. You can't really use a SpikeTrain, since the resulting object may not meet the rules of a SpikeTrain. On the other hand, creating a generic "results" class would make it impossible to know what metadata you should expect from an object. And having a more specific SpikeTrainResults object would be difficult since it would need to be able to handle scalars, 1D arrays, and maybe even ND arrays depending on what analyses we allow. So it is a difficult problem, but I think having some way to keep the metadata bound to the results of some manipulation is important.
I think this verges into overkill territory :-) For most results (like the average rate of a spike train), the caller knows exactly from what object the result has been calculated. The caller also knows if and what metadata is needed, while our analysis function doesn't, so I would leave the responsibility upstream.
However, there might be analyses where this information is not available to the caller. For example, an analysis that takes a number of objects but only uses some of them based on their content. I don't know if we will have such functions - I would try to avoid it, but it might be necessary for some algorithms. In that case, I would return provenance information to the caller: report which objects have actually been used. By linking results to the actual objects used in their creation, all metadata is available and we do not need to create new result types, with all the complications that come with that.
The module list on the readthedocs page right now is in alphabetical order. A categorical view with the modules under specific topics would improve the readability.
The routine sskernel() works (implicitly! bad) only for the Gaussian kernel. Besides, when generating kernels it wrongly sets the kernel sigma (which for a Gaussian is ~bandwidth/5.5) to the kernel bandwidth, and therefore effectively generates kernels which are 5.4 times larger than they should be.
Hi all.
First of all, I want to thank you for your work in this library. It has become my principal tool to work with spike trains, trying to keep it all in python.
Anyway, I noticed this error. Apparently, the normalization was never actually incorporated in the function as described in the documentation. I checked the source code of the package I've downloaded, version 0.4.1; it's simply not the same as the one shown in the documentation.
I suppose I could just copy the source code from the docs and fix my copy, but I guessed you'd like to know.
Thanks again.
Running neo 0.6.1, quantities 0.12.2.
import numpy as np
import neo
import elephant
import quantities as pq
train = neo.SpikeTrain(times=np.array([1.001, 1.002, 1.005]) * pq.s,
                       t_start=1 * pq.s, t_stop=1.01 * pq.s)
bs = elephant.conversion.BinnedSpikeTrain(train,
                                          t_start=1 * pq.s, t_stop=1.01 * pq.s,
                                          binsize=1 * pq.ms)
print(bs.bin_edges)
print(bs.bin_centers)
Returns
[1. 1.001 1.002 1.003 1.004 1.005 1.006 1.007 1.008 1.009 1.01 ] ms
[1.5 1.501 1.502 1.503 1.504 1.505 1.506 1.507 1.508 1.509] ms
The correct units for bin_edges are seconds (instead of milliseconds), whereas the bin_centers seem pretty far off. This bug has to do with the case in which BinnedSpikeTrain receives mixed units, because converting manually
bs = elephant.conversion.BinnedSpikeTrain(train,
                                          t_start=1000 * pq.ms, t_stop=1010 * pq.ms,
                                          binsize=1 * pq.ms)
print(bs.bin_edges)
print(bs.bin_centers)
Gives the expected output
[1000. 1001. 1002. 1003. 1004. 1005. 1006. 1007. 1008. 1009. 1010.] ms
[1000.5 1001.5 1002.5 1003.5 1004.5 1005.5 1006.5 1007.5 1008.5 1009.5] ms
For some analysis functions, such as spike-triggered average or spike-triggered phase, it is desirable to limit the analysis to a small part of the input data. One option is to cut the input data beforehand, but another is to use something like the window parameter in the spike-triggered average. This is particularly useful if the analysis itself is conducted in a sliding window [-w, +w], such that a user who wants to perform an analysis on the interval [t1, t2] would actually need to cut the interval [t1-w, t2+w] (i.e., the user requires intrinsic knowledge of the function).
Therefore, a common way to handle such functions should be investigated (i.e., implementation of a window parameter).
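A toy sketch of the padding logic described above (hypothetical helper, plain floats instead of neo objects):

```python
import numpy as np

def cut_for_windowed_analysis(times, t1, t2, w):
    """Select events in [t1 - w, t2 + w] so that a sliding-window analysis
    with window [-w, +w] is valid on the whole requested interval [t1, t2]."""
    times = np.asarray(times)
    return times[(times >= t1 - w) & (times <= t2 + w)]

times = np.array([0.5, 1.0, 1.4, 2.0, 2.6, 3.0])
# requesting [1.5, 2.5] with w = 0.5 selects events in [1.0, 3.0]
print(cut_for_windowed_analysis(times, t1=1.5, t2=2.5, w=0.5))
```

If the analysis functions did this padding internally behind a common window parameter, the user would only ever specify [t1, t2].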
python setup.py nosetests
Or
python setup.py nosetests --with-coverage --cover-package=elephant --cover-erase
Or
python -m unittest discover
FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result.
Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_spike_train_correlation.py", line 468, in test_window
assert_array_equal(cch_win, cch_win_mem)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 865, in assert_array_equal
verbose=verbose, header='Arrays are not equal')
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 750, in assert_array_compare
hasval='+inf')
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 714, in func_assert_same_pos
x_id = func(x)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 749, in <lambda>
func=lambda xy: xy == +inf,
File "/usr/lib/python2.7/site-packages/neo/core/analogsignal.py", line 416, in __eq__
if (self.t_start != other.t_start or
AttributeError: 'float' object has no attribute 't_start'
Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_spike_train_generation.py", line 80, in setUp
with open(raw_data_file_loc, 'r') as f:
IOError: [Errno 2] No such file or directory: '/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/spike_extraction_test_data.txt'
Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_spike_train_generation.py", line 80, in setUp
with open(raw_data_file_loc, 'r') as f:
IOError: [Errno 2] No such file or directory: '/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/spike_extraction_test_data.txt'
Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_spike_train_generation.py", line 109, in setUp
with open(raw_data_file_loc, 'r') as f:
IOError: [Errno 2] No such file or directory: '/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/spike_extraction_test_data.txt'
Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_spike_train_generation.py", line 45, in test_threshold_detection
with open(raw_data_file_loc, 'r') as f:
IOError: [Errno 2] No such file or directory: '/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/spike_extraction_test_data.txt'
Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_sta.py", line 91, in test_only_one_spike
assert_array_equal(STA, cutout)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 865, in assert_array_equal
verbose=verbose, header='Arrays are not equal')
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 750, in assert_array_compare
hasval='+inf')
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 714, in func_assert_same_pos
x_id = func(x)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 749, in <lambda>
func=lambda xy: xy == +inf,
File "/usr/lib/python2.7/site-packages/neo/core/analogsignal.py", line 416, in __eq__
if (self.t_start != other.t_start or
AttributeError: 'float' object has no attribute 't_start'
Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_sta.py", line 64, in test_spike_triggered_average_with_n_spikes_on_constant_function
assert_array_almost_equal(STA, cutout, 12)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 973, in assert_array_almost_equal
precision=decimal)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 750, in assert_array_compare
hasval='+inf')
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 714, in func_assert_same_pos
x_id = func(x)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 749, in <lambda>
func=lambda xy: xy == +inf,
File "/usr/lib/python2.7/site-packages/neo/core/analogsignal.py", line 416, in __eq__
if (self.t_start != other.t_start or
AttributeError: 'float' object has no attribute 't_start'
Ran 439 tests in 338.927s
FAILED (SKIP=1, errors=7)
error: Bad exit status from /var/tmp/rpm-tmp.BVaNjX (%check)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.BVaNjX (%check)
Looking at the documentation on read the docs
I couldn't see a link to the csd modules.
and put on binstar
The import of sklearn breaks the readthedocs build for the asset.py page. Probably we should import it using something like
try:
    import sklearn
except ImportError:
    warnings.warn("sklearn not available")
to circumvent the problem.
Similar to the Neo project, Elephant should hold the version number in a VERSION file that is parsed in relevant places.
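A minimal sketch of the pattern, assuming a plain-text VERSION file at the package root (paths and helper name are illustrative):

```python
import os

def read_version(package_dir):
    """Read the version string from a VERSION file in the given directory."""
    with open(os.path.join(package_dir, "VERSION")) as f:
        return f.read().strip()

# setup.py and elephant/__init__.py could then share one source of truth, e.g.:
# __version__ = read_version(os.path.dirname(__file__))
```

This avoids the version number drifting between setup.py, the package, and the docs.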
The installation of elephant via pip does not include the spike_extraction_test_data.npz numpy file, which is needed when running test_spike_train_generation.py.
In setup.py we should specify this file to be included.
Add the optimized ASSET module to Elephant. Atm the code lies in a private repository (here).
TODOs (more to add):
- replace asset.py with the code in the module
- create an asset folder and put the necessary files in there; the module requires cython and other C-based files
- adapt setup.py to compile the C files and include the corresponding folder
The documentation building step on read the docs
is failing due to deprecated packages in numpydoc
and docutils
. The error is:
Could not import extension numpydoc (exception: No module named 'sphinx.util.compat')
This can probably be fixed by setting updated versions of the above-mentioned packages in the environment.yml in the doc folder.
The current isi
function has a units
argument. I am wondering if this is really appropriate.
I think modifying the units is probably outside the scope of this sort of function. If someone wants to add or change units, they can do that easily on their own with the returned value, or before providing a value. So perhaps this argument should be left off, and the units should just be the units of the input, if any.
However, that is merely my opinion. I am interested in hearing anyone else's thoughts.
The following example in the elephant documentation fails:
import elephant.conversion as conv
import neo as n
import quantities as pq
st = n.SpikeTrain([0.5, 0.7, 1.2, 3.1, 4.3, 5.5, 6.7] * pq.s, t_stop=10.0 * pq.s)
x = conv.BinnedSpikeTrain(st, num_bins=10, binsize=1 * pq.s, t_start=0 * pq.s)
print(x.spike_indices)
print(x.to_sparse_array().nonzero()[1])
Using latest development versions of python-neo (baf6562593e85a1c041408a57a2234e5febe652a) and elephant (8341460), I get:
Traceback (most recent call last):
  File "delme.py", line 5, in <module>
x = conv.BinnedSpikeTrain(st, num_bins=10, binsize=1 * pq.s, t_start=0 * pq.s)
File "/afsuser/tpfeil/elephant_inst/lib/python2.7/site-packages/elephant-0.2.0-py2.7.egg/elephant/conversion.py", line 424, in __init__
self._convert_to_binned(spiketrains)
File "/afsuser/tpfeil/elephant_inst/lib/python2.7/site-packages/elephant-0.2.0-py2.7.egg/elephant/conversion.py", line 794, in _convert_to_binned
f, c = np.unique(filled_tmp, return_counts=True)
TypeError: unique() got an unexpected keyword argument 'return_counts'
@rmeyes In the function instantaneous_rate in module statistics.py:
if acausal: do A; else: do B. But it seems A == B, so why branch on 'acausal' there?
Our minimum version requirements include numpy 1.5.0 and scipy 0.9.0. However, our Ubuntu tests only test the latest version of all packages, and the miniconda tests only support numpy >= 1.6.2 and scipy >= 0.11.0.
I am a bit uncomfortable listing dependencies that our tests cannot verify work properly. Installing either numpy or scipy (not to mention both) manually in Travis takes forever, so that is not a feasible approach.
Even if someone tests manually before official releases, a lot of small and subtle bugs sneak through that can be very hard to track down to a specific change.
So I am thinking it might be best to set our minimum dependency versions to the versions supported by Travis or greater. What does everyone else think?