
elephant's People

Contributors

ackurth, alexvanmeegen, alperyeg, apdavison, danielmk, dholstein, dizcza, emanuelelucrezia, espenhgn, essink, healther, juliasprenger, junji110, kleinjohann, kohlerca, mdenker, morales-gregorio, moritz-alexander-kern, ojoenlanuca, paulinadabrowska, pbouss, pietroquaglio, rgerkin, rgutzen, rjurkus, rproepp, stellalessandra, toddrjen, vahidrostami, zottelsheep


elephant's Issues

Continuous integration builds for Ubuntu environment failing

The fifth CI environment (ubuntu) is failing by timing out. This behavior is seen in the latest PR integration, but also in PRs such as #110.

The build errors with a timeout:

No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.

Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#Build-times-out-because-no-output-was-received

The build has been terminated

Unittests do not run through

When running the tests of the ubuntu environment via Travis, the unit tests get stuck and do not run to the end; at some point Travis aborts the build due to the missing output. On my laptop I found that the CSD tests take a lot of time, but not more than 10 minutes. On my own branch I even wrapped the tests in a travis_wait command, but that did not help either.
I would appreciate any further information and help on this issue.

Target Neo version

@toddrjen raised this question in his comments on #2; I'm creating a new issue to make it more visible: should elephant target the currently available Neo versions (0.3.x) or the next, API-breaking version (0.4.0)?

Personally, I share Todd's opinion and would go directly for 0.4 compatibility. elephant probably won't be ready for release for a while anyway, and would need to be changed as soon as Neo 0.4 comes along.

On the other hand, this means developing against a moving target, and existing code using Neo won't be able to profit from elephant immediately without changes. But it could also help in the development and eventual adoption of the new Neo version.

Deduplicate requirement information

Currently, requirements are specified both directly in setup.py and in requirements.txt.

Would it be possible to parse requirements.txt with something like the following (skipping blank and commented-out lines)

with open('requirements.txt') as fp:
    install_requires = [line.strip() for line in fp
                        if line.strip() and not line.startswith('#')]

setup(install_requires=install_requires, ...)

rather than maintaining two different locations for the same information?

Error in the function statistics.instantaneous_rate when kernel='auto'

The computation of the optimized kernel width raises an error for spike trains with few, very close spikes. Example to reproduce the error:

import neo
import quantities as pq
import elephant.statistics as stat

st = neo.SpikeTrain([2.12, 2.13, 2.15] * pq.s, t_stop=10 * pq.s)
# the kernel argument defaults to 'auto' here, which triggers the
# optimized kernel-width computation
rate = stat.instantaneous_rate(st, 1 * pq.ms)

C[i, j] = C[j, i] = enumerator / denominator in corrcoef

I'm trying to plot cross-correlation histograms of the attached spike data using the following elephant code:

import numpy as np
from quantities import s, ms
from neo import SpikeTrain
from elephant.conversion import BinnedSpikeTrain
from elephant.spike_train_correlation import corrcoef

spikes = np.loadtxt("6E.txt", skiprows=1, delimiter=",",
                    dtype={"names": ("time", "id"), "formats": (float, int)})

neuron_indices = np.random.choice(14395, 200, replace=False)
spike_times = [SpikeTrain(spikes["time"][spikes["id"] == n] * ms, t_stop=10 * s)
               for n in neuron_indices]

binned_spike_times = BinnedSpikeTrain(spike_times, binsize=2.0 * ms)
correlation = corrcoef(binned_spike_times)

The corrcoef function emits a warning pointing at the line C[i, j] = C[j, i] = enumerator / denominator in corrcoef, and I'm getting NaNs in the resulting correlation matrix. It is quite possible the problem is with my data, but other analyses look totally fine (raster plots, CV of the ISI, rate histograms, etc.), and this warning is not helpful in determining the problem! I am using Elephant 0.5.0 from pip.
6E.txt
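
A plausible cause (an assumption on my part): spike trains with no spikes in the window have zero variance, so the denominator of the correlation coefficient is zero and the corresponding entries become NaN. A quick check on the sampled neurons:

silent = [n for n, st in zip(neuron_indices, spike_times) if len(st) == 0]
print("neurons with no spikes:", silent)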

Discussion: new matrix environment in the setup script to test for MPI and/or C

I would like to discuss whether we should provide a new environment in the install script to test MPI- and/or C-based code. This issue is connected to PR #110, since SPADE uses a pre-compiled C file (a .so which also has to be downloaded) and mpi4py.

My suggestion would be to add a new matrix environment in install.sh and in the corresponding .yaml, with a minimal setup using conda and mpi4py; additionally, we download the C file.

Any further suggestions are welcome.

Error on accessing `bin_edges` of `BinnedSpikeTrain`

When trying to access the bin_edges property of a BinnedSpikeTrain object, the following error occurs:

Traceback (most recent call last):
  File "test_elephant.py", line 14, in <module>
    edges = binned.bin_edges
  File "/home/bartosz/.pyenv/versions/anaconda3-2.3.0/lib/python3.4/site-packages/elephant-0.3.0-py3.4.egg/elephant/conversion.py", line 585, in bin_edges
    self.num_bins + 1, endpoint=True),
  File "/home/bartosz/.pyenv/versions/anaconda3-2.3.0/lib/python3.4/site-packages/numpy/core/function_base.py", line 125, in linspace
    return y.astype(dtype, copy=False)
TypeError: astype() got an unexpected keyword argument 'copy'

The error can be reproduced with the following script:

from elephant.conversion import BinnedSpikeTrain
import neo
from quantities import ms
import numpy as np

spiketimes = np.arange(10) * 10
tmax = spiketimes.max()

spiketrain = neo.SpikeTrain(spiketimes * ms, t_stop=tmax * ms)
binned = BinnedSpikeTrain(spiketrain, t_start=0 * ms, binsize=50 * ms)
edges = binned.bin_edges

It appears in Python 3.4.4, with the master branches of Elephant and Neo, quantities 0.11.1, and numpy 1.11.0 (I think it does not occur with earlier versions of numpy).

The problem is with the following code in conversion.py, line 584:

        return pq.Quantity(np.linspace(self.t_start, self.t_stop,
                                       self.num_bins + 1, endpoint=True),
                           units=self.binsize.units)

It seems that numpy.linspace returns a Quantity object when t_start and t_stop are also Quantity objects. Contrary to what its docstring says, the Quantity.astype method does not take a copy parameter (numpy's astype does). The workaround is to use t_start and t_stop without their units:

np.linspace(self.t_start.magnitude, self.t_stop.magnitude,
            self.num_bins + 1, endpoint=True)
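
One caveat with this workaround (my own observation): the surrounding pq.Quantity call re-attaches binsize.units, so if t_start and binsize carry different units, the magnitudes should be rescaled to the binsize units first, along these lines:

np.linspace(self.t_start.rescale(self.binsize.units).magnitude,
            self.t_stop.rescale(self.binsize.units).magnitude,
            self.num_bins + 1, endpoint=True)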

This bug was first spotted by @medelero.

Extract SpikeTrain from AnalogSignal

@apdavison Is there a function for this yet? I didn't see one in a brief search. If not, let me know if I should just contribute the old AnalogSignal.threshold_detection code from NeuroTools, and if so, to which module (maybe a new module for AnalogSignal manipulations, which could subsume sta.py, since that currently only has one function).
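
For reference, a minimal sketch of what such a function could look like (hypothetical name and simplified logic; this is not the NeuroTools implementation):

import numpy as np
import neo
import quantities as pq

def threshold_detection_sketch(signal, threshold=0.0 * pq.mV):
    # boolean mask of samples above threshold (single-channel AnalogSignal assumed)
    above = np.asarray(signal).ravel() > threshold.rescale(signal.units).magnitude.item()
    # indices where the signal crosses the threshold from below
    crossings = np.where(np.diff(above.astype(int)) > 0)[0] + 1
    return neo.SpikeTrain(signal.times[crossings],
                          t_start=signal.t_start, t_stop=signal.t_stop)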

Aliases for commonly-used functions from other scientific Python packages

There are many Python packages dealing with time series analysis, which provide functions that may be of use to neuroscientists.

For an individual user, finding and identifying the functions of interest among multiple packages will be difficult. Furthermore, some (most) of these functions will not handle Neo objects or even units.

For functions which need to be adapted to handle Neo objects/units, it seems obvious that Elephant should provide wrappers. The question is whether we should also provide wrappers/aliases for functions that do not need to be adapted. An example of the latter is scipy.stats.variation(), for which I added an alias cv() (coefficient of variation), as I think the latter name will be much more obvious to neuroscientists (see the sketch after the list below).

The arguments for providing aliases are:

(1) providing all the functions of interest to neuroscientists in a single namespace;
(2) giving function names that are more familiar to neuroscientists.
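
As an illustration of how lightweight such an alias is, the cv() case amounts to a one-line assignment (a sketch; the actual definition in elephant.statistics may differ):

import scipy.stats

# coefficient of variation: a name familiar to neuroscientists
cv = scipy.stats.variation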

Fix threshold_detection to handle signals with no spikes

If there are no spikes (no threshold crossings), threshold_detection returns a SpikeTrain of undefined length, rather than of length zero.

See here

I can fix this easily, but first I'm wondering if there is an upstream fix in Neo to disallow this kind of SpikeTrain (i.e., there appear to be two ways to generate a SpikeTrain with zero spikes, and only one of them seems legitimate; should the other one raise an error or be redirected to the legitimate one)?

Invalid window size/binwidth check

Hei,

In spike_train_correlation.py, in the functions '_cch_speed' and '_cch_memory', I tried a bin width of 0.4 and a window size of 40. The input validation raises "ValueError: The window has to be a multiple of the binsize" despite that condition obviously being true. In my case,

win[0].rescale(binsize.units).magnitude % binsize.magnitude = 0.399999999998

so this is obviously a numerical issue, with a simple tolerance-based solution.
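
A tolerance-based check could look like the following sketch (units assumed to be ms; note that the float remainder comes out close to the binsize rather than close to zero, so both cases must be accepted):

import numpy as np
import quantities as pq

binsize = 0.4 * pq.ms
win = [-40.0, 40.0] * pq.ms

r = win[0].rescale(binsize.units).magnitude % binsize.magnitude
is_multiple = np.isclose(r, 0.0) or np.isclose(r, binsize.magnitude)
print(r, is_multiple)  # ~0.399999999998, True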

Best,
Tristan

Website hyperlink problem

The hyperlink in "Stochastic spike train generation" in [http://elephant.readthedocs.org/en/latest/reference/spike_triggered_average.html] links to the wrong page.

spike_train_generation - VisibleDeprecationWarning

A very, very minor issue (it is only a warning, and from what I found on the web it seems not serious and related to numpy, but maybe you want to check it in order to have a "clean" output). I have followed the tutorial step by step (http://elephant.readthedocs.io/en/latest/tutorial.html).
[About this: excellent tutorial! Very clear and well done. A suggestion: you could add a reference, for instance P. Dayan and L.F. Abbott, "Theoretical Neuroscience" (2001, Sec. 1.4).]
My NumPy is numpy==1.11.2
Details in the attached file.
Thank you
VisibleDeprecationWarning.txt

P.S. The warning appears only the first time you use "homogeneous_poisson_process" in the python session. Subsequent calls in the same session produce no warning messages.

alpha kernel is not starting at zero

Contrary to the definition in the docs, the alpha kernel in elephant 0.4.1 does not start at t = 0; it takes non-zero values earlier.

To demonstrate:

import neo
import elephant as ele
import scipy as sp
import matplotlib.pyplot as plt
import quantities as pq

st = neo.SpikeTrain([1] * pq.s, t_start=0 * pq.s, t_stop=2 * pq.s)
kernel = ele.kernels.AlphaKernel(200 * pq.ms)
fs = 0.1 * pq.ms
asig = ele.statistics.instantaneous_rate(st, t_start=st.t_start, t_stop=st.t_stop,
                                         sampling_period=fs, kernel=kernel)
plt.plot(asig.times.rescale(pq.s), asig)
[plt.axvline(t) for t in st.times]

produces the attached plot, alpha_kernel_offset, in which the estimated rate is offset relative to the spike time.

Edit: the offset is almost the median of the kernel (not exactly, probably because the kernel never decays back to zero). I looked at the AlphaKernel class in kernels.py and it looks fine; is there some median centering of the kernel going on somewhere else that might need to be disabled?
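
To check the kernel in isolation (a sketch, assuming kernel instances are callable on a time array, as the code in kernels.py suggests):

import numpy as np

t = np.linspace(-0.5, 0.5, 11) * pq.s
print(kernel(t))  # expected: zeros for t < 0 if the kernel really starts at zero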

t_start > t_stop in homogeneous poisson process

from elephant.spike_train_generation import homogeneous_poisson_process
import quantities as pq

print(homogeneous_poisson_process(0.00001 * pq.Hz, t_start=3 * pq.s, t_stop=2 * pq.s))
>>> [] s

If t_start is greater than t_stop, no error is thrown; this happens when the rate is very small. I think an error should be thrown in that situation.
The problem may be related to neo.SpikeTrain, which also accepts an empty spike list with t_start > t_stop, e.g.:

import neo
import quantities as pq

print(neo.SpikeTrain([] * pq.s, t_start=5 * pq.s, t_stop=4 * pq.s))
>>> [] s
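
A minimal guard at the top of homogeneous_poisson_process could look like this (a sketch):

if t_start >= t_stop:
    raise ValueError("t_start (%s) must be smaller than t_stop (%s)"
                     % (t_start, t_stop))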

test_spike_train_generation.py uses the PyNNIO class to load test data

In test_spike_train_generation.py, in order to test the spike_extraction function, reference artificial data (generated with make_spike_extraction_test_data.py and stored in a .npz file as a neo Block) are loaded using the PyNNIO class, which no longer exists. This makes the test fail.

Install neo from pip

At the moment, neo is installed directly from the GitHub master branch. It should be installed via pip to have a stable working solution.
Right now the problem is that the pip neo 0.6.1 release is not compatible with the latest numpy version, which we use in our scripts.
I guess it is better to wait for the upcoming 0.7.0 release of neo.

NumPy/SciPy versions installed in Travis are not always as requested

For some of the entries in the Travis matrix, the versions of numpy and scipy that end up being installed are not the same as the versions requested by the environment variables in .travis.yml

e.g.:

DISTRIB="ubuntu": NUMPY_VERSION="1.6.2" PANDAS_VERSION="0.16.0"

What ends up being installed is numpy 1.10.4 and scipy 0.17.0. The newer numpy is pulled in by pandas, and scipy by Elephant itself.

DISTRIB="conda_min" NUMPY_VERSION="1.6.2" SCIPY_VERSION="0.11.0"

We get numpy 1.10.2 and scipy 0.16.1, which are pulled in by installing the mkl package.

DISTRIB="conda" NUMPY_VERSION="1.9.0" SCIPY_VERSION="0.16.0" PANDAS_VERSION="0.16.0"

We again get numpy 1.10.2 and scipy 0.16.1, pulled in by mkl.

Implementing waveform extraction

Hello Neuronal Ensemble,

I've been working on a function that takes an AnalogSignal as input, finds spikes by thresholding, and returns a SpikeTrain object of the spike peaks. I also want to extract the waveform of each spike in an interval around its peak, but I am confused about the usage of SpikeTrain.waveforms.
According to the neo docs: waveforms: (quantity array 3D (spike, channel_index, time)) The waveforms of each spike.
Therefore my .waveforms attribute has shape (n, 1, i), where n is the number of spikes, the 1 is because I have only one AnalogSignal, and i is the number of samples in the extraction interval passed to the function. Is that the intended use of .waveforms?
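
For concreteness, the shape comes about like this (a sketch with hypothetical variables):

import numpy as np

# hypothetical inputs: values is a 1-D array of samples from the AnalogSignal,
# peak_indices are the detected spike peaks, and half is the number of samples
# to keep on each side of a peak
snippets = np.stack([values[p - half:p + half] for p in peak_indices])  # (n, i)
waveforms = snippets[:, np.newaxis, :]  # (n, 1, i): a single channel dimension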

Thanks,
Daniel

add MANIFEST.in file

According to @rgerkin (see #158), a MANIFEST.in file is necessary to properly include the requirements.txt and LICENSE files in the PyPI release.
Add inside:

include requirements.txt
include LICENSE

CI Tests fail

When doing the PR in #32, I noticed that the nosetests performed by Travis CI fail with the following error:

AssertionError: attr is not equal [names]: FrozenList([None]) != FrozenList([u'0'])

I checked and saw that this issue is related to the test_pandas_bridge.py unit test. It seems that the pandas version specified in the install script has problems with the newest numpy library, or vice versa.
However, after trying locally with all the newest versions, the tests still fail.
I assume the pandas-related unit test has to be adapted to the newest versions.

lv() and cv2() raising an error for too short spike trains

Currently, the functions lv() and cv2() in statistics raise an error if the input spike train has fewer than three spikes. It might be preferable to replace this with a warning message and to return NaN or None instead (suggested in #108). For example, this avoids breaking for-loops over lists of spike trains.
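
The suggested behavior would amount to something like the following at the top of lv() and cv2() (a sketch):

import warnings
import numpy as np

if len(intervals) < 2:  # i.e. fewer than three spikes
    warnings.warn("spike train too short for this measure; returning NaN")
    return np.nan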

Wrong rounding causes an error when the window parameter is set for the cch

The spike_train_correlation.cch() function raises an error even when the window parameter is a multiple of the binsize. Example:

import elephant.spike_train_correlation as corr
import neo
import elephant.conversion as conv
import quantities as pq
sts1 = neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=4 * pq.s)
sts2 = neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=4 * pq.s)
window = [-0.1, 0.1] * pq.s
binsize = 0.01 * pq.s
cch = corr.cch(conv.BinnedSpikeTrain(sts1, binsize),
               conv.BinnedSpikeTrain(sts2, binsize),
               window=window)

This is due to the floating-point rounding error of the % operator (e.g. 0.1 % 0.01 != 0). The check for this condition should be changed.

isi function ValueError

Given spikes
>>> spike_slice
<SpikeTrain(array([ 503.8 , 520.05 , 536.325, 552.625, 568.925, 585.225, 601.575, 617.95 , 634.325, 650.725, 667.15 , 683.575, 700.05 , 716.525, 733.025, 749.55 , 766.075, 782.625, 799.175, 815.75 , 832.35 , 848.95 , 865.575, 882.225, 898.9 , 915.575, 932.275, 949. , 965.75 , 982.5 , 999.25 , 1016.025, 1032.825, 1049.65 , 1066.5 , 1083.35 , 1100.225, 1117.125, 1134.025, 1150.95 , 1167.9 , 1184.875, 1201.85 , 1218.85 , 1235.875, 1252.9 , 1269.95 , 1287.025, 1304.1 , 1321.175, 1338.275, 1355.4 , 1372.525, 1389.675, 1406.85 , 1424.05 , 1441.25 , 1458.5 , 1475.75 , 1493. ]) * ms, [500.0, 1500.0])>
When I run elephant.statistics.isi to get the inter-spike intervals,
isi(spike_slice)
I get
>>> isi(spike_slice)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~/miniconda2/lib/python2.7/site-packages/elephant/statistics.py", line 49, in isi
    intervals = np.diff(spiketrain, axis=axis)
  File "~/miniconda2/lib/python2.7/site-packages/numpy/lib/function_base.py", line 1578, in diff
    return a[slice1]-a[slice2]
  File "~/miniconda2/lib/python2.7/site-packages/neo/core/spiketrain.py", line 461, in __sub__
    _check_time_in_range(spikes - time, self.t_start, self.t_stop)
  File "~/miniconda2/lib/python2.7/site-packages/neo/core/spiketrain.py", line 66, in _check_time_in_range
    (value, t_start))
ValueError: The first spike ([ 16.25 16.275 16.3 16.3 16.3 16.35 16.375 16.375 16.4 16.425 16.425 16.475 16.475 16.5 16.525 16.525 16.55 16.55 16.575 16.6 16.6 16.625 16.65 16.675 16.675 16.7 16.725 16.75 16.75 16.75 16.775 16.8 16.825 16.85 16.85 16.875 16.9 16.9 16.925 16.95 16.975 16.975 17. 17.025 17.025 17.05 17.075 17.075 17.075 17.1 17.125 17.125 17.15 17.175 17.2 17.2 17.25 17.25 17.25 ] ms) is before t_start (500.0)
Why am I getting this error?
Clearly,

>>> spike_slice[0]
array(503.79999999967873) * ms
does not start before t_start (500.0). Does the isi function not work with a slice taken from a complete spike train? (See the note after the list below.)
Note:

  • I don't get the error with isi(whole_spike_train)
  • I get the same error with np.diff(spike_slice)
  • Elephant 0.4.1
  • Neo 0.5.1
  • this appears to be linked with Neo issue #412
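
For what it is worth, the traceback hints at the mechanism: np.diff subtracts two slices of the SpikeTrain, neo's __sub__ range-checks the result, and the computed intervals (~16 ms) lie "before" t_start = 500 ms, hence the error. Until the Neo issue is resolved, a workaround is to diff the magnitudes instead (a sketch):

import numpy as np

intervals = np.diff(spike_slice.magnitude) * spike_slice.units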

Metadata for results

I think this warrants its own issue:

@toddrjen wrote in #11:

This also leads me to another issue I have been thinking about: what do we do about the metadata of a neo object? When, for example, we get the average spike rate of a spike train, we end up with just a quantity. Is that what we want? Might it be a good idea to have some class that stores the output of these sorts of analyses along with the metadata of the original neo object? Or is that overkill?

The problem with this is doing it in a generic manner. You can't really use a SpikeTrain, since the resulting object may not meet the rules of a SpikeTrain. On the other hand, creating a generic "results" class would make it impossible to know what metadata you should expect from an object. And having a more specific SpikeTrainResults object would be difficult since it would need to be able to handle scalars, 1D arrays, and maybe even ND arrays depending on what analyses we allow. So it is a difficult problem, but I think having some way to keep the metadata bound to the results of some manipulation is important.

I think this verges into overkill territory :-) For most results (like the average rate of a spike train), the caller knows exactly from which object the result was calculated. The caller also knows whether and which metadata is needed, while our analysis function doesn't, so I would leave the responsibility upstream.

However, there might be analyses where this information is not available to the caller, for example an analysis that takes a number of objects but only uses some of them based on their content. I don't know if we will have such functions - I would try to avoid it, but it might be necessary for some algorithms. In that case, I would return provenance information to the caller: report which objects have actually been used. By linking results to the actual objects used in their creation, all metadata is available, and we do not need to create new result types with all the complications that come with that.

sskernel() routine for optimal kernel bandwidth BUGGY

The routine sskernel() works only for the Gaussian kernel (implicitly, which is bad). Besides, when generating kernels it wrongly sets the kernel sigma (which for a Gaussian is ~bandwidth/5.5) to the kernel bandwidth, and therefore effectively generates kernels which are 5.4 times larger than they should be.

cch() got an unexpected argument 'cross_corr_coef'

Hi all.

First of all, I want to thank you for your work in this library. It has become my principal tool to work with spike trains, trying to keep it all in python.

Anyway, I noticed this error. Apparently, the normalization was never actually incorporated into the function as described in the documentation. I checked the source code of the package I downloaded, version 0.4.1; it is simply not the same as the one shown in the documentation.

I suppose I can just copy the source code from the docs and patch my copy, but I guessed you'd like to know.

Thanks again.

BinnedSpikeTrain returns incorrect bin edges and bin centers when arguments have different units

Running neo 0.6.1, quantities 0.12.2.

import numpy as np
import neo
import elephant
import quantities as pq

train = neo.SpikeTrain(times=np.array([1.001, 1.002, 1.005])*pq.s,
                       t_start=1*pq.s, t_stop=1.01*pq.s)

bs = elephant.conversion.BinnedSpikeTrain(train,
                                     t_start=1 * pq.s, t_stop=1.01 * pq.s,
                                     binsize=1 * pq.ms)

print(bs.bin_edges)
print(bs.bin_centers)

Returns

[1.    1.001 1.002 1.003 1.004 1.005 1.006 1.007 1.008 1.009 1.01 ] ms
[1.5   1.501 1.502 1.503 1.504 1.505 1.506 1.507 1.508 1.509] ms

The bin_edges values are correct in seconds but are labeled as milliseconds, whereas the bin_centers seem pretty off. This bug has to do with the case in which BinnedSpikeTrain receives arguments with different units, because converting manually

bs = elephant.conversion.BinnedSpikeTrain(train,
                                     t_start=1000 * pq.ms, t_stop=1010 * pq.ms,
                                     binsize=1 * pq.ms)

print(bs.bin_edges)
print(bs.bin_centers)

Gives the expected output

[1000. 1001. 1002. 1003. 1004. 1005. 1006. 1007. 1008. 1009. 1010.] ms
[1000.5 1001.5 1002.5 1003.5 1004.5 1005.5 1006.5 1007.5 1008.5 1009.5] ms

[Feature] Performing certain analysis functions in a window

For some analysis functions, such as the spike-triggered average or spike-triggered phase, it is desirable to limit the analysis to a small part of the input data. One option is to cut the input data beforehand; another is to use something like the window parameter of the spike-triggered average. This is particularly relevant if the analysis itself is conducted in a sliding window [-w, +w]: a user who wants to perform the analysis on the interval [t1, t2] would then need to cut the interval [t1-w, t2+w] (i.e., the user requires intrinsic knowledge of the function).

Therefore, a common way to handle such functions should be investigated (i.e., implementation of a window parameter).
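
To illustrate the current inconvenience (a sketch with hypothetical values; time_slice is neo's cutting method and asig some AnalogSignal):

import quantities as pq

w = 50 * pq.ms               # intrinsic half-width of the sliding window
t1, t2 = 1 * pq.s, 2 * pq.s  # interval the user actually cares about
# to obtain valid results on [t1, t2], the data must be cut with a margin of w
signal_cut = asig.time_slice(t1 - w, t2 + w)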

test failed (unittest / nosetests)

python setup.py nosetests
Or
python setup.py nosetests --with-coverage --cover-package=elephant --cover-erase
Or
python -m unittest discover

  • /usr/bin/python2 setup.py nosetests --with-coverage --cover-package=elephant --cover-erase
    running nosetests
    running egg_info
    writing requirements to elephant.egg-info/requires.txt
    writing elephant.egg-info/PKG-INFO
    writing top-level names to elephant.egg-info/top_level.txt
    writing dependency_links to elephant.egg-info/dependency_links.txt
    reading manifest file 'elephant.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    warning: no files found matching 'elephant/test/spike_extraction_test_data.npz'
    no previously-included directories found matching 'doc/build'
    writing manifest file 'elephant.egg-info/SOURCES.txt'
    /home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/asset.py:687: RuntimeWarning: invalid value encountered in divide
    AngCoeff = dY / dX
    ./home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/asset.py:687: RuntimeWarning: divide by zero encountered in divide
    AngCoeff = dY / dX
    ......................................................................./home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/cubic.py:220: RuntimeWarning: divide by zero encountered in double_scalars
    L * (L - 1) * (L - 2)))
    /home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/cubic.py:220: RuntimeWarning: divide by zero encountered in long_scalars
    L * (L - 1) * (L - 2)))
    ./home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/cubic.py:110: UserWarning: Test aborted, xihat= 11 > ximax= 10
    warnings.warn('Test aborted, xihat= %i > ximax= %i' % (xi, ximax))
    ...........................S.............................................................................../home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/pandas_bridge.py:63: FutureWarning: sortlevel is deprecated, use sort_index(level= ...)
    return obj.sortlevel(0, axis=axis, sort_remaining=True)
    .........................................../home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_pandas_bridge.py:2760: RuntimeWarning: invalid value encountered in less
    targ[targ < targ_start] = np.nan
    /home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_pandas_bridge.py:2761: RuntimeWarning: invalid value encountered in greater
    targ[targ > targ_stop] = np.nan
    ../home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_pandas_bridge.py:2706: RuntimeWarning: invalid value encountered in less
    targ[targ < targ_start] = np.nan
    ./home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_pandas_bridge.py:2734: RuntimeWarning: invalid value encountered in greater
    targ[targ > targ_stop] = np.nan
    ...../usr/lib64/python2.7/site-packages/scipy/fftpack/basic.py:160: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result.
    z[index] = x
    /usr/lib64/python2.7/site-packages/scipy/signal/signaltools.py:1593: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result.
    h = h[ind]
    ......../usr/lib64/python2.7/site-packages/scipy/signal/_arraytools.py:45: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result.
    b = a[a_slice]
    ....................../home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_signal_processing.py:126: RuntimeWarning: invalid value encountered in true_divide
    target = (signal.magnitude - m) / s
    /home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/signal_processing.py:128: RuntimeWarning: invalid value encountered in true_divide
    (sig.magnitude - m.magnitude) / s.magnitude,
    ./usr/lib/python2.7/site-packages/quantities/quantity.py:321: RuntimeWarning: divide by zero encountered in true_divide
    return np.true_divide(other, self)
    ..........................E.....EEEE../home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/spike_train_generation.py:335: RuntimeWarning: divide by zero encountered in true_divide
    mean_interval = 1 / rate.magnitude
    ......../home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/spike_train_generation.py:267: RuntimeWarning: invalid value encountered in sqrt
    number = np.ceil(n + 3 * np.sqrt(n))
    ............................/usr/lib64/python2.7/site-packages/scipy/signal/spectral.py:1382: RuntimeWarning: invalid value encountered in true_divide
    Cxy = np.abs(Pxy)**2 / Pxx / Pyy
    ............/usr/lib/python2.7/site-packages/neo/core/basesignal.py:119: RuntimeWarning: invalid value encountered in true_divide
    new_signal = f(other, *args)
    /usr/lib/python2.7/site-packages/neo/core/basesignal.py:119: RuntimeWarning: invalid value encountered in divide
    new_signal = f(other, *args)
    .E..E......................................................................................
    ======================================================================
    ERROR: Test if the window parameter is correctly interpreted.

Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_spike_train_correlation.py", line 468, in test_window
assert_array_equal(cch_win, cch_win_mem)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 865, in assert_array_equal
verbose=verbose, header='Arrays are not equal')
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 750, in assert_array_compare
hasval='+inf')
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 714, in func_assert_same_pos
x_id = func(x)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 749, in
func=lambda xy: xy == +inf,
File "/usr/lib/python2.7/site-packages/neo/core/analogsignal.py", line 416, in eq
if (self.t_start != other.t_start or
AttributeError: 'float' object has no attribute 't_start'

======================================================================
ERROR: test_peak_detection_threshold (elephant.test.test_spike_train_generation.AnalogSignalPeakDetectionTestCase)

Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_spike_train_generation.py", line 80, in setUp
with open(raw_data_file_loc, 'r') as f:
IOError: [Errno 2] No such file or directory: '/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/spike_extraction_test_data.txt'

======================================================================
ERROR: test_peak_detection_time_stamps (elephant.test.test_spike_train_generation.AnalogSignalPeakDetectionTestCase)

Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_spike_train_generation.py", line 80, in setUp
with open(raw_data_file_loc, 'r') as f:
IOError: [Errno 2] No such file or directory: '/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/spike_extraction_test_data.txt'

======================================================================
ERROR: test_spike_extraction_waveform (elephant.test.test_spike_train_generation.AnalogSignalSpikeExtractionTestCase)

Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_spike_train_generation.py", line 109, in setUp
with open(raw_data_file_loc, 'r') as f:
IOError: [Errno 2] No such file or directory: '/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/spike_extraction_test_data.txt'

======================================================================
ERROR: test_threshold_detection (elephant.test.test_spike_train_generation.AnalogSignalThresholdDetectionTestCase)

Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_spike_train_generation.py", line 45, in test_threshold_detection
with open(raw_data_file_loc, 'r') as f:
IOError: [Errno 2] No such file or directory: '/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/spike_extraction_test_data.txt'

======================================================================
ERROR: The output should be the same as the input

Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_sta.py", line 91, in test_only_one_spike
assert_array_equal(STA, cutout)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 865, in assert_array_equal
verbose=verbose, header='Arrays are not equal')
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 750, in assert_array_compare
hasval='+inf')
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 714, in func_assert_same_pos
x_id = func(x)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 749, in
func=lambda xy: xy == +inf,
File "/usr/lib/python2.7/site-packages/neo/core/analogsignal.py", line 416, in eq
if (self.t_start != other.t_start or
AttributeError: 'float' object has no attribute 't_start'

======================================================================
ERROR: Signal should average to the input

Traceback (most recent call last):
File "/home/lbazan/rpmbuild/BUILD/elephant-0.6.0/elephant/test/test_sta.py", line 64, in test_spike_triggered_average_with_n_spikes_on_constant_function
assert_array_almost_equal(STA, cutout, 12)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 973, in assert_array_almost_equal
precision=decimal)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 750, in assert_array_compare
hasval='+inf')
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 714, in func_assert_same_pos
x_id = func(x)
File "/usr/lib64/python2.7/site-packages/numpy/testing/_private/utils.py", line 749, in
func=lambda xy: xy == +inf,
File "/usr/lib/python2.7/site-packages/neo/core/analogsignal.py", line 416, in eq
if (self.t_start != other.t_start or
AttributeError: 'float' object has no attribute 't_start'

Name Stmts Miss Cover Missing

elephant/__init__.py 8 2 75% 31-32
elephant/asset.py 418 260 38% 67-68, 92, 118-126, 204-250, 275-282, 313-321, 352-361, 371-380, 493-500, 507-511, 515-536, 566-596, 622, 766, 860-889, 959-1076, 1119-1165, 1207-1258, 1319-1328, 1378-1410, 1462, 1466, 1525, 1559, 1634, 1678, 1685
elephant/cell_assembly_detection.py 372 82 78% 239, 255, 261, 288, 302-306, 320-322, 381, 405, 409-410, 418, 443-448, 454-455, 552-557, 673-682, 719-736, 747-764, 770-808, 981, 1003, 1095, 1098-1099
elephant/change_point_detection.py 165 23 86% 107, 110, 179-180, 183-184, 187-188, 223-224, 227-228, 299, 301, 303, 305, 308, 310, 312, 314, 318, 473-474
elephant/conversion.py 182 3 98% 147, 803-804
elephant/cubic.py 56 0 100%
elephant/current_source_density.py 160 6 96% 120-121, 130, 179-181, 198
elephant/current_source_density_src/KCSD.py 331 56 83% 46, 49, 52, 66-68, 206-209, 277-278, 329-330, 923-924, 926-935, 944-953, 986-992, 1018-1019, 1022-1059
elephant/current_source_density_src/__init__.py 2 0 100%
elephant/current_source_density_src/basis_functions.py 42 16 62% 48-50, 81-83, 98-99, 131-132, 147-149, 164-166
elephant/current_source_density_src/icsd.py 375 99 74% 94-100, 103-104, 108-109, 111-112, 114-115, 119-124, 139, 183-185, 197, 216, 283-286, 290-292, 300-303, 328, 392-395, 398-400, 408-411, 422, 446, 528-531, 534-536, 542-544, 569, 779-886
elephant/current_source_density_src/utility_functions.py 123 14 89% 30, 324-338
elephant/kernels.py 138 1 99% 171
elephant/neo_tools.py 49 0 100%
elephant/pandas_bridge.py 90 2 98% 98, 136
elephant/phase_analysis.py 48 0 100%
elephant/signal_processing.py 114 1 99% 333
elephant/spade.py 394 89 77% 71, 296, 305-306, 319-320, 350, 360, 367, 370, 483-492, 647, 758, 780, 785-787, 931-932, 988-989, 1039, 1041, 1049, 1052, 1059, 1142, 1144, 1152, 1168-1179, 1187, 1192, 1204, 1217, 1222, 1328, 1461-1462, 1465, 1470-1496, 1513-1514, 1516-1517, 1520-1521, 1527-1554, 1623
elephant/spade_src/__init__.py 1 0 100%
elephant/spade_src/fast_fca.py 749 575 23% 66-78, 82, 86, 96, 107-120, 124-137, 141-154, 158-172, 176, 197-198, 200-201, 225-229, 247, 316-352, 402-403, 412-439, 444-451, 456-458, 462-465, 468-471, 477-514, 520-541, 546-558, 563-575, 581-599, 605-628, 632-635, 638-661, 668-676, 687-698, 703-724, 728-755, 759-813, 823-858, 864-865, 868-879, 882-892, 905-922, 927-935, 941-965, 970-979, 984-1124
elephant/spectral.py 160 23 86% 107, 111, 119-122, 127-132, 136-139, 142, 144, 148-149, 153-161
elephant/spike_train_correlation.py 185 1 99% 539
elephant/spike_train_dissimilarity.py 126 2 98% 26-27
elephant/spike_train_generation.py 303 107 65% 58-114, 147-171, 200-253, 291, 471, 515-516, 520, 525, 530-532, 536, 688, 754-755, 763, 794-803, 912-918
elephant/spike_train_surrogates.py 70 2 97% 44-45
elephant/sta.py 92 5 95% 91, 253, 272-275
elephant/statistics.py 289 25 91% 48, 907, 916, 923, 1073, 1081, 1150-1160, 1168-1175, 1253, 1256, 1259
elephant/unitary_event_analysis.py 193 22 89% 86, 143, 200, 278, 312, 387, 471, 477, 557, 565, 599-600, 638, 655, 742, 758, 763, 778, 792-796

TOTAL 5235 1416 73%

Ran 439 tests in 338.927s

FAILED (SKIP=1, errors=7)
error: Bad exit status from /var/tmp/rpm-tmp.BVaNjX (%check)

RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.BVaNjX (%check)

ASSET conditional import and read the docs

The import of sklearn breaks the Read the Docs build for the asset.py page.

We should probably import it using something like

try:
    import sklearn
except ImportError:
    import warnings
    warnings.warn("sklearn not available")

to circumvent the problem.

Integration of optimized ASSET

Add the optimized ASSET module to Elephant. At the moment the code lives in a private repository (here).
TODOs (more to add):

  • Compare and merge asset.py with the code in the module
  • Create an asset folder and put the necessary files in there. The module requires Cython and other C-based files
  • Adapt setup.py to compile the C files and include the corresponding folder (see the sketch after this list)
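
A sketch of the setup.py adaptation (module and file names are hypothetical):

from setuptools import setup, Extension
from Cython.Build import cythonize

extensions = [
    Extension("elephant.asset_src.fast_asset",
              sources=["elephant/asset_src/fast_asset.pyx"]),
]

setup(
    name="elephant",
    ext_modules=cythonize(extensions),
)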

doc build is failing

The documentation build step on Read the Docs is failing due to deprecated packages in numpydoc and docutils. The error is:

Could not import extension numpydoc (exception: No module named 'sphinx.util.compat')

This can probably be fixed by setting updated versions of the above-mentioned packages in the environment.yml in the doc folder.

Handling of units

The current isi function has a units argument. I am wondering if this is really appropriate.

I think modifying the units is probably outside the scope of this sort of function. If someone wants to add or change units, they can do that easily on their own, either with the returned value or before providing the input. So perhaps this argument should be dropped, and the units should just be those of the input, if any.
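
For example (assuming isi returns a quantity array when given a SpikeTrain; spiketrain is a placeholder):

import quantities as pq
from elephant.statistics import isi

intervals = isi(spiketrain).rescale(pq.ms)  # the caller rescales the result as needed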

However, that is merely my opinion. I am interested in hearing anyone else's thoughts.

Data conversion - BinnedSpikeTrain - fails

The following example in the elephant documentation fails:

import elephant.conversion as conv
import neo as n
import quantities as pq
st = n.SpikeTrain([0.5, 0.7, 1.2, 3.1, 4.3, 5.5, 6.7] * pq.s, t_stop=10.0 * pq.s)
x = conv.BinnedSpikeTrain(st, num_bins=10, binsize=1 * pq.s, t_start=0 * pq.s)
print(x.spike_indices)
print(x.to_sparse_array().nonzero()[1])

Using latest development versions of python-neo (baf6562593e85a1c041408a57a2234e5febe652a) and elephant (8341460):

File "delme.py", line 5, in <module>
  x = conv.BinnedSpikeTrain(st, num_bins=10, binsize=1 * pq.s, t_start=0 * pq.s)
File "/afsuser/tpfeil/elephant_inst/lib/python2.7/site-packages/elephant-0.2.0-py2.7.egg/elephant/conversion.py", line 424, in __init__
  self._convert_to_binned(spiketrains)
File "/afsuser/tpfeil/elephant_inst/lib/python2.7/site-packages/elephant-0.2.0-py2.7.egg/elephant/conversion.py", line 794, in _convert_to_binned
  f, c = np.unique(filled_tmp, return_counts=True)
TypeError: unique() got an unexpected keyword argument 'return_counts'
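
A note on the likely cause: np.unique gained the return_counts keyword in numpy 1.9.0, so this error suggests that the installed numpy predates 1.9.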

Dependency versions

Our minimum version requirements include numpy 1.5.0 and scipy 0.9.0. However, our Ubuntu tests only test the latest version of all packages, and the miniconda tests only support numpy >= 1.6.2 and scipy >= 0.11.0.

I am a bit uncomfortable listing dependencies that our tests cannot verify work properly. Installing either numpy or scipy (not to mention both) manually in travis takes forever, so that is not a feasible approach.

Even if someone tests manually before official releases, a lot of small and subtle bugs sneak through that can be very hard to trace back to a specific change.

So I am thinking it might be best to set our minimum dependency versions to the versions supported by travis, or greater. What does everyone else think?
