
combinato's Introduction

Combinato Spike Sorting

Introduction

Combinato Spike Sorting is software for spike extraction, automatic spike sorting, manual improvement of sorting, artifact rejection, and visualization of continuous recordings and spikes. It offers a toolchain that transforms raw data into single- and multi-unit spike trains. The software is largely modular, and thus useful even if you are only interested in spike extraction or spike sorting.

Combinato Spike Sorting works very well with large raw data files (tested with 100-channel, 15-hour recordings, i.e. > 300 GB of raw data). Most parts make use of multiprocessing and scale well with tens of CPUs.

Combinato is a collection of a few command-line tools and two GUIs, written in Python and depending on a few standard modules. It is being developed mostly for Linux, but it works on Windows and OS X, too.

The documentation of Combinato is maintained as a Wiki.

Installing Combinato

Tutorial

Please walk through our instructive Tutorial.

More Information

Citing Combinato

When using Combinato in your work, please cite this paper:

Johannes Niediek, Jan Boström, Christian E. Elger, Florian Mormann. "Reliable Analysis of Single-Unit Recordings from the Human Brain under Noisy Conditions: Tracking Neurons over Hours". PLOS ONE 11 (12): e0166598. 2016. doi:10.1371/journal.pone.0166598.

Contact

Please feel free to use the GitHub infrastructure for questions, bug reports, feature requests, etc.

Johannes Niediek, 2016-2023, [email protected].

combinato's People

Contributors

adriangutierrezg, chris3110, ferchaure, jniediek, peternsteinmetz, s-mackay


combinato's Issues

Using Combinato manager convenience class has a string error (suggested fix)

Hi Johannes,

While using the most recent version of Combinato, I ran into an issue with the combinato.manager.Combinato(SortingManagerGrouped) class, specifically with the manager.get_group_joined() function. When looking up the cluster/spike sign, the current implementation doesn't account for the sign being stored as bytes rather than str.

I'm not sure how much the whole of manager_cat.py relies on the class initialized variables, so I fixed my clone with a line-by-line edit on the main function I use for reading out sorted Combinato data, below:

(starts at Line 304)

def get_group_joined(self, gid, times=True, spikes=True, artifacts=True):
    """
    get one group, all clusters joined
    """
    ret = dict()

    gtype = self.get_group_type(gid)

    if (artifacts is False) and (gtype == TYPE_ART):
        return ret

    idx = self.sorting.get_cluster_index_joined(gid)
    n_clusters = len(self.sorting.get_cluster_ids_by_gid(gid))
    # shorten it
    sel = (idx >= self.start_idx) & (idx <= self.stop_idx)

    if not sel.any():
        return ret

    idx = idx[sel] - self.start_idx

    ### fix edit
    # the sign is stored as bytes, so decode before using it as a key
    decoded_sign = self.sign.decode('utf-8')
    ### fix end
    shape = self.times[decoded_sign].shape[0]
    if idx[-1] >= shape:
        idx = idx[idx < shape]
        print('Shortened index!')

    ret['type'] = gtype
    ret['n_clusters'] = n_clusters
    if times:
        ret['times'] = self.times[decoded_sign][idx]
    if spikes:
        ret['spikes'] = self.spikes[decoded_sign][idx]

    return ret

Results of manual clustering not saving?

After manually sorting the results from Combinato (all negative spikes), I have looked at the data in sort_neg_simple/sort_cat.h5. While the timestamp on the file would indicate that it updates after I save my results in the GUI, it doesn't seem like the actual results of my manual clustering are being saved: all of the results match those of the initial automatic clustering.

Amplitude of waveforms

Hi @jniediek,

I am using Combinato with Blackrock data (.ns6 files). I extract each channel and save it into a mat file with amplitude in uV (so each "spike" is usually ~ 100 uV).

However, the waveforms plotted in the gui and that are output in the sort_cat.h5 file are scaled up by some magnitude, ending up as roughly ~ 3500-8000 mystery units. Compare this to the simulation_5.mat file that is downloadable in the tutorial, in which the raw trace is of magnitude < 10 and the waveforms end up ~100-200 uV.

Is the data being scaled up by roughly 80x at some point in the extraction process? I have looked around but have not been able to find this. This does not seem to be a problem with the Neuralynx data.

density plot too blurred if amplitude is small

If the mean amplitude of a cluster is small, e.g. 15 µV, the linspace for the density plot currently uses 2*max_of_means as the number of points. This would lead to 30 points for the example. It is better to implement it in a way that, for example, at least 150 points are used (as before the change that fit everything to max_of_means).
this means:

bins_density = np.linspace(-2 * max_of_means,
                           2 * max_of_means,
                           max(150, 2 * max_of_means))

instead of:

bins_density = np.linspace(-2 * max_of_means,
                           2 * max_of_means,
                           2 * max_of_means)
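The proposal can be checked with a quick sketch (density_bins is a stand-in function for illustration, not the actual implementation):

```python
import numpy as np

def density_bins(max_of_means, min_points=150):
    # Use at least `min_points` bins so that small-amplitude clusters
    # (e.g. a 15 µV mean) no longer yield a blurred density plot.
    n_points = max(min_points, int(2 * max_of_means))
    return np.linspace(-2 * max_of_means, 2 * max_of_means, n_points)

print(len(density_bins(15)))    # 150 instead of 30
print(len(density_bins(200)))   # 400, unchanged for large amplitudes
```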

ubuntu installation issue:

following output is shown when running "python tools/test_installation.py"
[[
Unable to load Combinato: No module named combinato.default_options
Check your PYTHONPATH and make sure to copy
combinato/default_options.py to combinato/options.py
Re-run after fixing problems.
Found display
Found 'montage', plotting continuous data possible.
]]

Css-gui: Not seeing sorted data

There are no errors output to the terminal when I extract data from either a Neuralynx or Matlab file, nor when I run the simple clustering command. I have gotten it to work with the tutorial's Matlab simulation file and with the example ncs file. However, when I run css-gui on my data, I do not see any sorted data files under File --> Open.

When I look at the file timestamps, I see that Combinato created several folders after running the clustering, but the data.h5 file doesn't update after the initial data extraction. I've made sure that I am running css-gui in the correct directory. When I run css-plot-rawsignal and specify the specific h5 data file, it says that plotting took 0 seconds, but with the shorter example ncs file in the tutorial, it says plotting took 3 seconds. Also, when I run the clustering command, it is creating figures with the sorted spikes in the overview folder.

I'm using python2.7. Any help would be greatly appreciated.
combinato.docx

css-gui: Function of "Artifact" button in the gui

(screenshot: the Artifact button in the GUI)

Hey @jniediek ,

I was wondering if the Artifact button (pictured above) could have the same outcome as pressing "A" on the keyboard. When you press A, you send a cluster to the Artifact grouping. However, clicking the Artifact button pictured above does not seem to send a cluster to the Artifact grouping.

This would be most helpful when you have a channel that's trash and you want to classify everything as artifact; going through everything by pressing A on the keyboard can take a while.

Stalling during css-cluster

Hi!

I am using your combinato package to cluster some stereo-EEG data and have been running into a peculiar problem which I thought you may be able to shed some light on.

I am using the attached code to automate the extraction/clustering procedure (using .mat files I created from Blackrock micro EEG recording output files). It seems “css-extract” and “css-mask-artifacts” work fine; however, when the code enters “css-cluster”, it seems to stall at the clustering data step. I’ve run it overnight on my local machine and it still stalls on that step. However, if I process manually using “css-simple-clustering”, it seems to work okay. Would you be able to provide any guidance on why this might be the case? I’ve attached a screenshot of my terminal window as well.

Any help you can provide would be greatly appreciated!

`#!/bin/bash

declare -a arr=("/Users/Pranish/Desktop/Subj_File_Test/UT084_ex" "/Users/Pranish/Desktop/Subj_File_Test/UT086_ex" "/Users/Pranish/Desktop/Subj_File_Test/UT105_ex")

# run everything in a dated output directory
outdir=/Users/Pranish/Desktop/Subj_File_Test/Combinato_Clustering/$(date '+%d-%b-%Y')
mkdir "$outdir"
cd "$outdir"
touch do_manual_neg.txt

for file in "${arr[@]}"; do
    cd "$file"
    for chan in NS6_00[1-8].mat; do
        css-extract --matfile "${chan%%.*}.mat"
        css-mask-artifacts
        h5_file="data_${chan%%.*}.h5"
        path="${chan%%.*}/data_${chan%%.*}.h5"
        cd "${chan%%.*}"
        css-prepare-sorting --neg --datafile "$h5_file"
        css-cluster --jobs sort_neg_pra.txt
        css-combine --jobs sort_neg_pra.txt
        cd "$outdir"
        # note: separator added between directory and relative path
        echo "$file/$path" >> do_manual_neg.txt
        cd "$file"
    done
done`

(screenshot of terminal attached)

css-gui error in density plot (Windows platform)

Running css-gui on Windows leads to erroneous waveform density plots in the GUI (cf picture attached).

Platform: Windows
Shell environment: Git Bash
Python version: 3.7.6
Combinato code: current


Error : Duplicate jobs requested

Hi,

I've been using Combinato successfully for some time. I've started getting this error recently, and I'm not sure why. I have tried this with a fresh install and environment, and I keep getting the same error now, even on data I had successfully clustered in the past. It's a hard error message to parse, as this is all MATLAB data and thus shouldn't really utilize the job pipeline.

Opened data_NSX001_CSS.h5
Job ('data_NSX001_CSS.h5', 'neg', 'sort_neg_sal_0000000_0000237') requested 8 times
Traceback (most recent call last):
  File "/home1/salman.qasim/Salman_Python_Scripts/combinato-master/css-cluster", line 5, in <module>
    argument_parser()
  File "/home1/salman.qasim/Salman_Python_Scripts/combinato-master/combinato/cluster/cluster.py", line 305, in argument_parser
    test_joblist(joblist)
  File "/home1/salman.qasim/Salman_Python_Scripts/combinato-master/combinato/cluster/cluster.py", line 264, in test_joblist
    raise ValueError('Duplicate jobs requested')
ValueError: Duplicate jobs requested

Job ('data_NSX001_CSS.h5', 'neg', 'sort_neg_sal_0000000_0000237') requested 8 times
Traceback (most recent call last):
  File "/home1/salman.qasim/Salman_Python_Scripts/combinato-master/css-combine", line 5, in <module>
    parse_args()
  File "/home1/salman.qasim/Salman_Python_Scripts/combinato-master/combinato/cluster/concatenate.py", line 386, in parse_args
    test_joblist(jobs)
  File "/home1/salman.qasim/Salman_Python_Scripts/combinato-master/combinato/cluster/cluster.py", line 264, in test_joblist
    raise ValueError('Duplicate jobs requested')
ValueError: Duplicate jobs requested

Not allowing to merge clusters - artifacts

Hi @jniediek ,

This might be a trivial question, but in css-gui, when I try to merge a cluster into the "artifacts" group, I get "Not merging Artifacts and [cluster #]" for every instance. Is there anything I could do to fix this?

Would you know why that is the case?

Thanks,

Pranish

refactor: only one command with options --extract, --cluster, --plot etc

Use only one command, combinato, with options such as --cluster, --plot-extracted, --combine, etc.
This would clean up the directory and make handling of options easier for the user.
It could also incorporate a logger that writes out the options used.
It would also allow adding an option --all-work that just runs the whole processing chain.

but it's too much work now
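A single entry point along these lines could be sketched with argparse sub-commands (the sub-command and option set below is illustrative, not the actual interface):

```python
import argparse

def build_parser():
    # Hypothetical unified CLI; sub-command names mirror the existing
    # css-* tools but are not part of the current code base.
    parser = argparse.ArgumentParser(prog='combinato')
    sub = parser.add_subparsers(dest='command', required=True)
    sub.add_parser('extract').add_argument('--matfile')
    sub.add_parser('cluster').add_argument('--jobs')
    sub.add_parser('plot')
    return parser

args = build_parser().parse_args(['cluster', '--jobs', 'sort_neg.txt'])
print(args.command, args.jobs)   # cluster sort_neg.txt
```

Each sub-command would then dispatch to the corresponding existing module, which is also where a shared logger could record the options used.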

css-extract --matfile issue

Hello,

I have been trying to use the css-extract command on a mat file (i.e. with the option --matfile), which was produced from a Blackrock NS6 file (1 mat file per recorded channel). Although the command has worked in the past in our hands, it recently did not work for one of my mat files: the program stays stuck after reading the file (cf. screenshot below).

(screenshot attached)

I have noticed that the command does work on the file if I reduce its size, for instance by removing half of the data from the recording. I am thus assuming this issue might be due to my original file being too large (~250 MB). Does that sound right? Have people encountered this issue before?

Thank you!

Lou

refactor: better management of options.py

The following should happen with options.py

  • options should be grouped into several variables by application, i.e. extract, cluster, GUI, etc.
  • all options that are still set in individual Python files should become options in options.py
  • it should be possible to use different option files and to choose such a file via a command-line option (thanks @chris3110 for the idea)
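A sketch of how grouped options and a selectable options module could look (all names and values here are illustrative assumptions, not Combinato's actual options):

```python
import importlib
from types import SimpleNamespace

# Hypothetical grouping: one namespace per application. The option
# names and values are made up for illustration.
extract = SimpleNamespace(threshold_factor=5.0, upsampling_factor=3)
cluster = SimpleNamespace(min_spikes_per_cluster=15)

def load_options(module_name='options'):
    # Select an options module by name, e.g. taken from an --options
    # flag; any importable module on the path can serve as the file.
    return importlib.import_module(module_name)

opts = load_options('math')   # stand-in for a real options module
print(opts.pi > 3)            # True
```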

Using Combinato with hdf5 files **EDIT** actually an issue with size

Hey,

I have recently started splitting our BlackRock data into hdf5 files. I tried to extract spikes from this data, and I keep stalling after the first job, on any file I try. The process does not proceed past the point below:

css-extract --files 20170224-145604-001_chan107.hdf5 --h5
5
Shortening maxima list from 44 to 44
2
Shortening maxima list from 30 to 30
3
Shortening maxima list from 82 to 82
Job name: 20170224-145604-001_chan107 pending jobs: dict_keys([0]) jnow: 0
Initialized 20170224-145604-001_chan107/data_20170224-145604-001_chan107.h5
saving 20170224-145604-001_chan107, count 0
4
Shortening maxima list from 65 to 65
Work exited
Job name: 20170224-145604-001_chan107 pending jobs: dict_keys([1]) jnow: 1
saving 20170224-145604-001_chan107, count 1

If I interrupt, it seems to be stalled here:

Process Process-6:
Process Process-4:
Traceback (most recent call last):
Process Process-5:
  File "/home1/salman.qasim/combinato/css-extract", line 5, in <module>
Process Process-2:
    main()
  File "/home1/salman.qasim/combinato/combinato/extract/extract.py", line 99, in main
    mp_extract(jobs, nWorkers)
  File "/home1/salman.qasim/combinato/combinato/extract/mp_extract.py", line 175, in mp_extract
    p.join()
  File "/home1/salman.qasim/miniconda3/envs/spikeinterface/lib/python3.8/multiprocessing/process.py", line 149, in join
    res = self._popen.wait(timeout)
  File "/home1/salman.qasim/miniconda3/envs/spikeinterface/lib/python3.8/multiprocessing/popen_fork.py", line 47, in wait
    return self.poll(os.WNOHANG if timeout == 0.0 else 0)
  File "/home1/salman.qasim/miniconda3/envs/spikeinterface/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt

windows support: multiprocessing convention differs

On Windows, css-extract complains about not finding the css-extract 'module'. This needs to be fixed, either by forcing single-processing on Windows or by tracking down the exact reason for the problem.
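For background: Windows has no fork(), so multiprocessing spawns fresh interpreters that re-import the main module; a script without a .py extension cannot be re-imported, which matches the reported error. The usual import-safe pattern looks like this sketch (extract_channel is a placeholder, not a Combinato function):

```python
import multiprocessing as mp

def extract_channel(name):
    # stand-in for the per-channel extraction work
    return 'done: ' + name

if __name__ == '__main__':
    # On Windows, child processes re-import the main module, so the
    # entry point must be guarded like this (and the module must be
    # importable, i.e. end in .py).
    mp.freeze_support()
    with mp.Pool(2) as pool:
        results = pool.map(extract_channel, ['chan1', 'chan2'])
    print(results)
```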

Error when running on personal data

Hi,

I have successfully installed combinato on my Laptop and have completed the first part of the tutorial using the data provided, but when I run css-extract --matfile myfile.mat it gives the following error:

(base) C:\Users\Name\Desktop\combinato>python css-extract --matfile data_matlab_corrected.mat
Reading from matfile data_matlab_corrected.mat
Using default sampling rate (24.0 kHz)
Process Process-1:
Traceback (most recent call last):
  File "D:\anaconda3\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "D:\anaconda3\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Name\Desktop\combinato\combinato\extract\mp_extract.py", line 130, in read
    data = read_matfile(fname)
  File "C:\Users\Name\Desktop\combinato\combinato\extract\tools.py", line 31, in read_matfile
    fdata = data['data'].ravel()
KeyError: 'data'

I even compared the format of my data with the sample data provided for the tutorial: it is of double type and the data is [channel x time].

Any advice on how to move forward?
Thank you.
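Judging from the traceback, read_matfile accesses a variable named data in the dictionary returned by scipy.io.loadmat, so the .mat file must contain a variable literally named data (in MATLAB, store the array in a variable called data before saving). A minimal sketch of the failure and the fix (the channel name NS6_001 is only illustrative):

```python
import numpy as np

# The dicts below mimic what scipy.io.loadmat returns.
loaded = {'NS6_001': np.zeros((1, 240000))}   # wrong variable name
try:
    loaded['data'].ravel()
except KeyError as err:
    print('KeyError:', err)                   # the reported error

loaded = {'data': np.zeros((1, 240000))}      # correct variable name
print(loaded['data'].ravel().shape)           # (240000,)
```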

combinato installation on Windows

Dear Johannes,

I’m trying to install combinato on Windows and have some problems.

I run the following code
python setup_options.py

and get the following results.
combinato\options.py exists, not doing anything.

Everything seems to go well at this point.

Then I run
python tools/test_installation.py

and get
Unable to load Combinato: No module named 'combinato'
Check your PYTHONPATH and make sure to copy
combinato/default_options.py to combinato/options.py
Re-run after fixing problems.
'montage' from ImageMagick not found: [WinError 2] The system cannot find the file specified
Plotting continuous data will not work

I added the combinato folder to my PYTHONPATH. I don’t understand what “to copy
combinato/default_options.py to combinato/options.py” means. Could you explain what this expression means?

Could you help me with my question?

Best wishes,
Luba

Digital filter critical frequency values

Hi Johannes,
I have a problem with my combinato installation on Windows.

When I try to run
'python css-extract --files A2.ncs' (A2 being the file name of my own file)
it gives me the following error message:

File "C:\Users\PauliY\Anaconda3\lib\site-packages\scipy\signal\filter_design.py", line 2366, in iirfilter
raise ValueError("Digital filter critical frequencies "
ValueError: Digital filter critical frequencies must be 0 < Wn < 1

I plotted the values for Wn: the first loop runs smoothly, but in the second loop Wn[1] is higher than 1 (1.5, to be exact).

Any help is appreciated, thank you.
Yves
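For context: scipy's iirfilter expects critical frequencies normalized by the Nyquist frequency, so Wn > 1 means a cutoff exceeds half the file's sampling rate. A quick sketch of the arithmetic (the 3 kHz cutoff and the sampling rates are illustrative assumptions, chosen to reproduce the reported Wn of 1.5):

```python
def normalized_cutoff(cutoff_hz, sampling_rate_hz):
    # scipy.signal.iirfilter expects 0 < Wn < 1, where Wn is the
    # cutoff divided by the Nyquist frequency (half the sampling rate).
    return cutoff_hz / (sampling_rate_hz / 2.0)

print(normalized_cutoff(3000, 32000))   # 0.1875: valid
print(normalized_cutoff(3000, 4000))    # 1.5: rejected by scipy
```

So a Wn of 1.5 suggests the sampling rate read from (or assumed for) the .ncs file is much lower than the filter settings expect, which is worth checking first.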

Programmatic way to set parameters

Hi Johannes,

Is there a way to edit all the parameters programmatically?
I read that I can make my own local_options.py, but I saw other important parameters in combinato/combinato/extract/extract_spikes.py. Is it possible to have them in local_options.py?

If you think that is ok, I can code a pull request where extract_spikes.py tries to load its parameters from local_options.py if they are there.

And maybe the same for combinato/combinato/basics/filters.py, which has a few more parameters.

Implementation of analysis sessions

I thought that, in order to eliminate the dependence of the folder structure on the file names, it could be useful to implement analysis sessions with dedicated folders. Files to be analysed could be added directly from the overview-gui with an open-file dialog, and then all the relevant functions (overview plots, spike extraction, etc.) could be called by the gui itself as soon as a file is needed but not found. This would also solve issue #7. I'll try to lay the foundations for that while I finish the upgrade for mat-files.

Implement Combinato in SpikeInterface

Hi Johannes,

I wanted to ask whether you could implement Combinato in the SpikeInterface project (https://github.com/SpikeInterface), or give me a hand doing it.

Cool things about the idea:

  • It will be super easy to compare Combinato with other algorithms.
  • Combinato will be able to work with all the file formats supported by SpikeInterface.

I know that Combinato is implemented in Python 2, but even Matlab sorters (like Wave_clus) are implemented in SpikeInterface, therefore I think we can overcome that limitation.

The implementation has two parts:

  • Given a SpikeInterface recording, run Combinato. This part is just deciding which is the best file format for Combinato and which CLI commands to call (I guess css-extract and then css-simple-clustering). [Yes, the CLI of Combinato helps a lot here.] I guess that all the parameters can be given just by saving a local_options.py in the same folder as the data. Examples of the code to do this for other sorters are here.

  • Load the results of Combinato. To load the results, SpikeInterface needs only the class and sample of each spike, just that. I don't remember how Combinato labels the discarded or unsorted spikes.
    Examples of the code to do this for other sorters are here.

Please let me know if you can help me with the implementation.

Cheers
Fernando

Single floating point precision corrupts timestamps

Dear wasserverein,

You mentioned that you save your data to Matlab files using single-precision floating-point numbers. Single precision only guarantees a correct representation of up to 6 significant digits without loss of precision (on average, 7.22 decimal digits can be represented without loss).
https://en.wikipedia.org/wiki/Single-precision_floating-point_format
https://en.wikipedia.org/wiki/IEEE_floating_point

Our spike timestamps, even if you save them in ms, are usually between 8 and 10 significant digits left of the decimal point. This means the data is not even precise to the correct ms. This rounding to the next representable number causes gaps in the autocorrelation.
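The loss is easy to demonstrate (the timestamp value below is arbitrary; any 9-digit millisecond timestamp behaves similarly):

```python
import numpy as np

timestamp_ms = 123456789                    # 9 significant digits
as_single = float(np.float32(timestamp_ms))  # 24-bit significand
as_double = float(np.float64(timestamp_ms))  # 53-bit significand

print(as_single)   # 123456792.0, rounded to the nearest multiple of 8 ms
print(as_double)   # 123456789.0, exact
```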

Thank you for your time.

Mantrid

Issue running 'python setup_options.py'

Hello, I am using a 16-inch MacBook Pro (2019) with a 2.6 GHz 6-core Intel Core i7 processor. When running the 'python setup_options.py' command in the Combinato shell, I obtain the following error:

[Errno 86] Bad CPU type in executable: '/Users/nicholas.tastet/Applications/anaconda3/lib/python3.8/site-packages/combinato/spc/cluster_maci.exe'

I have attempted to point to another executable to see if that was the issue, but that also resulted in a failure.

Does anyone know how I can edit the executable file in question on macOS so that I can resolve the issue?

Header-reading function ncs_info not compatible with Pegasus 2.1.1 header syntax (suggested fix)

Hi Johannes,

I realized that the current ncs_info() function for reading the header of NCS files (in nlxio) is not compatible with Pegasus 2.1.1's Opening/Closing date and time syntax. I think this is partially because it expects this information to be in a 7-item string list in this line:

elif len(field) == 7:

I guess this has to do with what Cheetah headers look like (since it works well when dealing with them), but NCS files generated with Pegasus 2.1.1 have header entries that look like this:

'-TimeCreated yyyy/mm/dd HH:MM:SS'
'-TimeClosed yyyy/mm/dd HH:MM:SS'

which results in a 3-item list. (I have currently solved the issue by pre-modifying the Pegasus file's header to look like a Cheetah file:)

'## Time Opened (m/d/y): m/dd/yyyy (h: m:s.ms) hh:MM:ss.f'
'## Time Closed (m/d/y): m/dd/yyyy (h: m:s.ms) hh:MM:ss.f'

Is this header information called again within combinato/important for clustering?

In any case, I could suggest a quick-and-dirty fix somewhat like this (starting from line 123 in nlxio):

    if len(field) == 2:
        try:
            field[1] = int(field[1])
        except ValueError:
            try:
                field[1] = float(field[1])
            except ValueError:
                pass
        d[field[0][1:]] = field[1]

    ### fix start
    # dealing with Pegasus Opened/Closed date & time strings
    elif len(field) == 3:
        if field[0] in ('-TimeCreated', '-TimeClosed'):
            pddate = datetime.strptime(field[1] + ' ' + field[2],
                                       '%Y/%m/%d %H:%M:%S')
            dt = datetime(pddate.year, pddate.month, pddate.day,
                          pddate.hour, pddate.minute, pddate.second)
            d[field[0][1:]] = dt
    ### fix end

    elif len(field) == 7:
        if field[0] == '##':
            if field[2] in ('Opened', 'Closed'):
                timeg = TIME_PATTERN.match(field[6]).groups()
                pdt = datetime.strptime(
                    field[4] + ' ' + timeg[0],
                    '%m/%d/%Y %H:%M:%S')
                dt = datetime(pdt.year,
                              pdt.month,
                              pdt.day,
                              pdt.hour,
                              pdt.minute,
                              pdt.second,
                              int(timeg[1])*1000)
            d[field[2].lower()] = dt

if 'AcqEntName' not in d:
    d[u'AcqEntName'] = 'channel' + str(d['ADChannel'])
return d

One could also homogenize the key names in the dictionary for consistency. I haven't tested this yet with my own clone, but I have done something similar in code I've written for reading both Cheetah and Pegasus header info.

Cheers!
Adrian.

numpy error

Hi, I'm getting:

"TypeError: The numpy boolean negative, the - operator, is not supported, use the ~ operator or the logical_not function instead."

I'm not sure why; maybe because I'm using the newest numpy. For now I saw this issue in:

  • the function template_match, inside combinato/cluster/dist.py
  • main() in combinato/cluster/create_groups.py
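For reference, the change newer numpy requires is mechanical; a minimal illustration:

```python
import numpy as np

mask = np.array([True, False, True])
print(~mask)                  # [False  True False], supported everywhere
print(np.logical_not(mask))   # same result
# -mask raises TypeError on modern numpy; use ~ or np.logical_not
```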

no plots in `overview` folder

Hello!

I am working through the Combinato Tutorial 1 but am noticing behaviour that deviates from what is documented. When I run css-plot-sorted --label sort_pos_simple, the intended behaviour is that an overview directory is created which has a single plot -- however, the overview directory which is created for me is empty.

Is this by any chance related to Issue #38, and would you be able to advise on how to fix it?

Running recent code on Windows

I ran Combinato's most recent code on Windows with Python version 3.7.6 and I get the following error message when using css-gui in Git Bash:

$ css-gui
/usr/bin/env: ‘python3’: No such file or directory

I believe this is because on Windows, Python is invoked as python, without the flexibility we have on a Mac, where calling python3 works the same as python. So the shebang line at the beginning of the css-gui script (and other scripts), #!/usr/bin/env python3, leads to the error. I have manually changed the shebang line to #!/usr/bin/env python and it seems to get rid of the error.

Is there a way to modify scripts with a shebang line that "adapts" depending on the platform detected, i.e. with an if/else statement?

Thanks!

Large files

I'm trying to use Combinato to analyze a long recording, a .mat file of ~3.5 GB. When I try to extract the spikes I get the error:
OverflowError: cannot serialize a bytes object larger than 4 GiB
I tried with Python 2 as well and got:
OverflowError: cannot serialize a string larger than 2 GiB

I must be doing something wrong. What do you recommend? I could cut the file into smaller parts, but it looks like the multiple-file support is just for .ncs files. Maybe I could sort the parts independently and then use css-combine, but I'm not sure about that.
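Until the serialization limit is addressed, one workaround is to cut the recording into parts before extraction. A minimal sketch (split_for_extraction and its default chunk size are assumptions for illustration, not part of Combinato):

```python
import numpy as np

def split_for_extraction(signal, max_samples=80_000_000):
    # Split a long recording into the minimum number of chunks such
    # that every chunk stays at or below `max_samples`; chunk lengths
    # must be remembered so spike times can be offset afterwards.
    n_parts = int(np.ceil(signal.size / max_samples))
    return np.array_split(signal, n_parts)

parts = split_for_extraction(np.zeros(10), max_samples=4)
print([p.size for p in parts])   # [4, 3, 3]
```

Each part would then be saved as its own file and extracted separately; spike times from later parts must be offset by the cumulative lengths of the preceding chunks before combining.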

cheers,

Issue with plotting

Hello Johannes,

I recently downloaded combinato from here and am running it on Ubuntu using VirtualBox. I recently corrected the issue brought up by @a-darcher. However, I am now having the following issue with plotting. The same error repeats itself multiple times, so I believe the same issue is affecting all of the plots. Can you advise on what I might do to resolve this error?

Traceback (most recent call last):
  File "/home/brad/combinato/combinato/guisort/sorter.py", line 632, in updateListView
    self.updateActiveTab()
  File "/home/brad/combinato/combinato/guisort/sorter.py", line 723, in updateActiveTab
    self.updateGroupInfo()
  File "/home/brad/combinato/combinato/guisort/sorter.py", line 750, in updateGroupInfo
    self.groupOverviewFigure.updateInfo(group)
  File "/home/brad/combinato/combinato/guisort/sort_widgets.py", line 246, in updateInfo
    self.overTimeAx.plot(x, y, 'b.',
  File "/home/brad/.local/lib/python3.8/site-packages/matplotlib/axes/_axes.py", line 1743, in plot
    lines = [*self._get_lines(*args, data=data, **kwargs)]
  File "/home/brad/.local/lib/python3.8/site-packages/matplotlib/axes/_base.py", line 273, in __call__
    yield from self._plot_args(this, kwargs)
  File "/home/brad/.local/lib/python3.8/site-packages/matplotlib/axes/_base.py", line 399, in _plot_args
    raise ValueError(f"x and y must have same first dimension, but "
ValueError: x and y must have same first dimension, but have shapes (2893,) and (1,)


Unable to load Combinato

Hi,
Please find below the error from my attempt to install Combinato. The same error message was discussed under the title "combinato installation on Windows #51", but I tried all the suggestions, including those by YvesPauli, and I still get the same error.

(base) PS C:\Users\SUKRU\Anaconda3\Lib\site-packages\combinato-master> python tools/test_installation.py
Unable to load Combinato: No module named 'combinato'
Check your PYTHONPATH and make sure to copy
combinato/default_options.py to combinato/options.py
Re-run after fixing problems.
'montage' from ImageMagick not found: [WinError 2] The system cannot find the file specified
Plotting continuous data will not work
(base) PS C:\Users\SUKRU\Anaconda3\Lib\site-packages\combinato-master>

My computer runs Windows 10 with a standard Anaconda install.

Sampling rate for old Ncs files

Hi,

I am experiencing a very peculiar issue. While trying to use Combinato to sort older .ncs data (~2005), my spike times are shifted forward with respect to the spike times detected by WaveClus. This was seen by comparing the smallest and biggest spiketimes between the Combinato and WaveClus results, with the largest Combinato spike times extending beyond the actual length of the recording.

They are not shifted forward by a consistent amount of time between recordings, but it is on the magnitude of ~1-2 minutes usually. This has not been an issue with more recent .Ncs files, circa 2014-2015.

python setup_options.py giving error

C:\Users\username\combinato>python setup_options.py
Could not execute SPC binary C:\Users\Umar Shahzad\combinato\spc\cluster_64.exe
[WinError 193] %1 is not a valid Win32 application

Kindly help me; I'm using Windows.

TypeError when loading clustering results into the GUI

I've been getting an error when trying to load into the GUI (after running css-gui) which launches okay.

The error was:
TypeError: decoding str is not supported
Being raised from here.

I've traced the issue to this commit

If I locally revert this commit, dropping the string typecasting, it then seems to work.

For me, at least, the line seems to fail as the temp variable is a string (example: 'neg'), which then can't be decoded, and so the TypeError is thrown. It seems it's expected that temp would be a bytes object, but this seems to not be guaranteed.

I don't know enough about the context here to know why temp ends up the type it does, but I think it might be useful to revert and/or tweak the above commit - one potentially easy fix is to type check temp before attempting to decode it.
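The type check suggested at the end could look like this minimal sketch (ensure_str is a hypothetical helper, not part of the code base):

```python
def ensure_str(value):
    # Decode only when the value is actually bytes; depending on the
    # h5py/HDF5 versions, stored strings may come back as bytes or str.
    if isinstance(value, bytes):
        return value.decode('utf-8')
    return value

print(ensure_str(b'neg'), ensure_str('neg'))   # neg neg
```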

Error with large data files

Hi,

I'm trying to sort a larger dataset than normal and am running into the following error:
(screenshot of the error attached)

It seems there's an issue with the multiprocessing.Process trying to work with such large data. I tried increasing the number of cores being used via nWorkers (dramatically) but this did not address the issue. Any advice?

For reference, I'm able to process data of length 80,000,000 samples ok, but 2x that fails.

Python/multiprocessing in windows

Hi! I was trying Combinato on Windows, and I got this error (the output of several processes is interleaved; deduplicated, it reads):

File "C:\ProgramData\Miniconda3\envs\combinato\lib\multiprocessing\forking.py", line 504, in prepare
    file, path_name, etc = imp.find_module(main_name, dirs)
ImportError: No module named css-extract

I found something similar here, changed the name of the file css-extract to css-extract.py, and it worked. Just to tell you, in case someone gets the same issue.

Readme.md: Missing information

  • Move datafile descriptions to a different file
  • Add description of from combinato import Combinato
  • Finish GUI description
  • Add description of various options in options.py
