cascade's People

Contributors

adrianroggenbach, bouromain, bryanlimy, dependabot[bot], hluetck, ptrrupprecht, vanwalleghem

cascade's Issues

Noise Level

Hello,

I was trying Cascade on dF/F traces extracted with CaImAn, and I noticed that the noise level of my data was
"Noise levels (mean, std; in standard units): 0.08, 0.04"
which was much lower than 1. Is this a problem?
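
For reference, my understanding is that the noise level is a robust estimate of frame-to-frame fluctuations, computed roughly along these lines (a sketch of my understanding only; the exact formula in cascade2p may differ):

import numpy as np

def noise_level(dff_trace, frame_rate):
    # robust frame-to-frame noise estimate in "standard units";
    # sketch only, not necessarily the exact CASCADE formula
    return np.median(np.abs(np.diff(dff_trace))) / np.sqrt(frame_rate)

If that is roughly right, the absolute scale of dF/F would matter: traces expressed as fractions (e.g. 0.05) rather than percent (e.g. 5) would give noise levels about 100x smaller.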

Thank you very much.

"Global_EXC_15Hz_smoothing50ms" model mislabeled?

Hi Peter,

I was looking at recent updates to this repo and noted that the Global_EXC_15Hz_smoothing50ms model was added recently. I was curious about this model (previously, I was using Global_EXC_15Hz_smoothing100ms) so I attempted to download and run the 50ms model.

When attempting to do so (cascade.predict(...)), I found the following text output:

The selected model was trained on 18 datasets, with 5 ensembles for each noise level, at a sampling rate of 3Hz, with a resampled ground truth that was smoothed with a Gaussian kernel of a standard deviation of 50 milliseconds. 
 

Loaded model was trained at frame rate 3 Hz
...

(and I ran into a Python error that I'm not sure is related).

Could it be that the Global_EXC_15Hz_smoothing50ms model was mislabeled, i.e. either it's actually a 3 Hz model, or it's in fact a 15 Hz model but with an incorrect descriptor?

-Tony

Add absolute path functionality

Add the possibility to use absolute paths for models and training datasets. This makes the package much more flexible, since users can use its functionality while keeping their own folder structure in the project.

I realized the need for this when implementing the algorithm in my own pipeline.
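
A sketch of the intended usage once implemented (hypothetical; the path below is a placeholder):

import os
import numpy as np
from cascade2p import cascade

dff_traces = np.random.randn(10, 1000) * 0.05  # placeholder dF/F traces

# instead of a model name resolved relative to the package folder,
# accept an absolute path to the model directory
model_path = os.path.abspath('/data/my_project/models/Global_EXC_15Hz_smoothing100ms')
spike_prob = cascade.predict(model_path, dff_traces)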

I will implement this probably next week.

predicted spikes are not saved with the .mat extension

Hey all,

nice tool, and really cool that it's on Colab. I'll put a comment later on bioRxiv with some tests I'm running, but I noticed a small bug where the predicted spikes don't get the .mat extension. It just needs to be changed to:

#@markdown By default saves as variable **`spike_rates`** both to a *.mat-file and a *.npy-file. You can comment out the file format that you do not need or leave it as it is.

import os
import numpy as np
import scipy.io as sio

folder = os.path.dirname(example_file)
file_name = 'predictions_' + os.path.splitext(os.path.basename(example_file))[0]
save_path = os.path.join(folder, file_name)

# save as MAT file (note the explicit '.mat' extension)
sio.savemat(save_path + '.mat', {'spike_rates': spike_rates})

# save as numpy file ('.npy' is appended automatically by np.save)
np.save(save_path, spike_rates)

Kind regards

Discrepancy between DS01-OGB1-m-V1 and SpikeFinder dataset #2

I have noticed a difference between the CASCADE dataset DS01-OGB1-m-V1 and the original dataset #2 from the SpikeFinder study.
The ground-truth spike times seem randomly offset with respect to the fluorescence traces. Would you be able to look into this?

Latest installation instructions (e.g. TensorFlow, CUDA versions)

What are the current recommended versions for a local installation under Windows?
Is there a performance advantage to using (installing with) GPU support, if one only uses pretrained models?

The local install documentation for Windows currently states the following (see below).

CPU-only:
conda create -n Cascade python=3.7 tensorflow==2.3 keras==2.3.1 h5py numpy scipy matplotlib seaborn ruamel.yaml spyder

With GPU:
conda create -n Cascade python=3.7 tensorflow-gpu==2.3.0 keras h5py numpy scipy matplotlib seaborn ruamel.yaml spyder

For GPU installs (e.g. for DeepLabCut), there are usually also CUDA/cuDNN version requirements. Is that true here as well? Any recommendations?

There are more recent tensorflow versions available (e.g. 2.10), which is part of why I'm asking whether these instructions are up-to-date. I am very new to python, conda, etc. and hope to get the install right from the start, as debugging version conflicts would be beyond my abilities ...
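
From TensorFlow's compatibility table, TF 2.3 pairs with CUDA 10.1 and cuDNN 7.6, so my guess (possibly wrong, and the conda tensorflow-gpu build may already pull these in as dependencies) would be something like:

conda create -n Cascade python=3.7 tensorflow-gpu==2.3.0 keras h5py numpy scipy matplotlib seaborn ruamel.yaml spyder
conda install -n Cascade cudatoolkit=10.1 cudnn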

Thank you!

With Tensorflow 2, training stops too early

There might be a problem with using Tensorflow 2. It does not affect application of the trained models to new data, but it might affect retraining new models from scratch. There is no error, just a less optimally trained model. The issue was seen in Tensorflow 2.3 under Linux.

Train additional 4.25 Hz model

Add a model as before (Global_EXC_4.25Hz_smoothing300ms) but with higher noise levels included (up to a noise level of roughly 20).

info_file_link URL broken

The method download_model in cascade.py has a hardcoded URL, which I think is supposed to download a file available_models.yaml. But the URL seems to be broken (for me; is it user error?). I get a 503 error from Python, and a "Sorry for the short delay" error message in a browser: https://drive.switch.ch/index.php/s/IJWlhwjDT3aw2ro/download

I ran into this problem while trying to re-follow the Colaboratory example steps after switching to a local installation.

PS: Thanks for this interesting tool!

Experimental settings of "Comparison with existing methods"?

Hi, I am new to this field, so my questions may be a bit silly.

I can't seem to find some of the experimental settings used in "Comparison with existing methods", such as the resampling rate of each dataset. Would you mind providing more specific settings so that I can reproduce the results?

I am also confused by some of the figure labels, such as the '50%', '6Hz' and '10s' in Fig. 4a. What do they mean?

Thanks in advance.

Any benchmarking with nonnegative least squares?

Hi again!

I just wanted to check whether Cascade has been benchmarked against non-negative least squares. Is there a particular reason to use least squares specifically?

If you were interested in discussing further offline, I would be interested in suggesting Cascade as a post-processing pipeline to EXTRACT (https://github.com/schnitzer-lab/EXTRACT-public) after we can figure out a way to perform further benchmarking. Thanks!

New model trained at 7 Hz

New model request by user:

  • Global_EXC_7.0Hz_smoothing200ms
  • Global_EXC_7.0Hz_smoothing200ms_causalkernel

Interpretation of spike "prob"

Hi again.

Since we last spoke, I have been applying CASCADE to my two-photon Ca2+ imaging datasets and generally have been very pleased with the transformation of Ca2+ fluorescence traces into inferred spike rate traces. CASCADE captures many of the intuitions that I have with regards to the dynamics of GECIs and how they relate to underlying electrical activity of the neuron.

This time, I wanted to ask about the interpretation of the spike_prob output of CASCADE. The FAQ of the "Calibrated_spike_inference_with_Cascade" demo script states:

The output spike_prob is the estimated probability of action potentials (spikes), at the same resolution as the original calcium recording. If you sum over the trace in time, you will get the estimated number of spikes. If you multiply the trace with the frame rate, you will get an estimate of the instantaneous spike rate. Spike probability and spike raes can therefore be converted by multiplication with the frame rate.

Thus, my initial interpretation was that the value of spike_prob for a frame was the "probability" that a spike had occurred within that frame. As a "probability", I expected that the value would be constrained to be between 0 and 1.

However, during my use of CASCADE on my data (technical details at the end of the post), I observed that it's possible for the spike_prob value to be greater than 1 for some frames. Here's an example:
[image: example trace in which spike_prob exceeds 1 for some frames]

So first, I wanted to ask whether you think my interpretation of spike_prob is wrong, and whether values of spike_prob that exceed 1 are due to an error in my use of the algorithm.

I then started thinking about the interpretation of spike_prob. The FAQ states that:

If you multiply the trace with the frame rate, you will get an estimate of the instantaneous spike rate. Spike probability and spike raes can therefore be converted by multiplication with the frame rate.

(By the way, typo on "rates".)

If spike_prob, as a probability, is constrained to be between 0 and 1, then this implies that the maximum spike rate that can be predicted by CASCADE is the imaging frame rate (since it would be frame rate multiplied by prob=1). This, however, didn't seem right to me, since a neuron could easily spike more rapidly than the imaging frame rate. For example, if one were imaging at 5 Hz, i.e. with 200 ms frame periods, a neuron can definitely spike more than once within that frame.

Thus, I started to wonder whether the spike_prob variable should be interpreted not as a "probability" that a spike occurred within that frame, but instead the expected number of spikes in that frame. This way, I think the output retains the stated properties of spike_prob (e.g. sum the trace in time to get total # of spikes; multiply by the frame rate to get the instantaneous spike rate), but without the intuition that the values should be constrained to lie between 0 and 1. I wonder what you think?
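
To make the conversions concrete, here is how I am currently post-processing the output (a sketch based on my reading of the FAQ; the random array stands in for the output of cascade.predict):

import numpy as np

frame_rate = 15.0  # Hz, after binning
spike_prob = np.abs(np.random.randn(5, 9000)) * 0.2  # placeholder for cascade.predict() output

# expected number of spikes per neuron over the whole recording
expected_spike_counts = np.nansum(spike_prob, axis=1)

# instantaneous spike rate in Hz; note this can exceed the frame rate
# whenever spike_prob > 1, i.e. more than one expected spike per frame
spike_rates = spike_prob * frame_rate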

Here are some technical details on my use of CASCADE (i.e. in the image above):

  • Imaging medium spiny neurons in the striatum expressing jGCaMP7f;
  • Ca2+ movie was acquired at 30 Hz (using a standard 8 kHz resonant scanner);
  • I binned every pair of frames to obtain Ca2+ traces at 15 Hz (for SNR purposes and other technical reasons);
  • I used the "Global_EXC_15Hz_smoothing200ms" model.

I'd be happy to share the DFF traces with you if that would help the discussion.

Thanks again for sharing CASCADE!

Request for model training

Hi Peter,
Thanks for developing this tool, it's really useful!
Would it be possible to train a model for excitatory and inhibitory cells combined at 5 Hz?
I am also wondering: since we have been using GCaMP7f and this indicator is not included in the ground truth, do you have any idea of the expected performance?
Thanks!

Laura

Same neuron having different noise levels in two trials

Hi Peter,
I have recorded odor-evoked activity from ~1000 neurons in the olfactory bulb of a zebrafish.
I noticed that ~300 of those neurons had different noise levels in two different trials (as indicated by the trace_noise_levels variable in cascade.py).
May I ask whether this is a normal phenomenon?
In addition, for the same neuron, should I then use models with different noise levels to predict its spike probabilities in the different trials?

Model request

Train a model for excitatory neurons in mouse cortex at a frame rate of 2 Hz.

Problem when generating ground truth for low sampling rate and low noise models

There is a problem in the function calibrated_ground_truth_artificial_noise() in cascade2p\utils.py. This does not affect any function that predicts spikes from raw traces, but it does affect, for example, the script to evaluate model performance when applied to low-frequency (< 5 Hz) models (Demo_benchmark_model.py).

The problem: To generate ground truth for low-sampling-rate models (e.g., 3 Hz), the available ground truth recordings at e.g. 30 Hz need to be temporally downsampled. Previously, the noise level was estimated from the original recordings, and the noise level of the downsampled recording was extrapolated based on theoretical considerations. It turns out that these considerations (the assumption of a Gaussian distribution of dF/F values) are not sufficient.

The solution: To address the problem, the noise level of the downsampled recording is now checked upon resampling, and a function based on gradient ascent iteratively corrects the additive noise until the noise level reaches the target value.
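
A minimal sketch of that iterative correction (simplified; the actual implementation in cascade2p/utils.py may differ in its noise metric and update rule):

import numpy as np

def match_noise_level(trace, target_noise, frame_rate, lr=0.3, n_iter=100):
    # iteratively adjust the amplitude of additive Gaussian noise until
    # the measured noise level of the trace reaches the target value
    rng = np.random.default_rng(0)
    noise_amp = 0.0
    for _ in range(n_iter):
        noisy = trace + noise_amp * rng.standard_normal(trace.shape)
        measured = np.median(np.abs(np.diff(noisy))) / np.sqrt(frame_rate)
        noise_amp = max(noise_amp + lr * (target_noise - measured), 0.0)
    return trace + noise_amp * rng.standard_normal(trace.shape)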

Prediction variable name in `Calibrated_spike_inference_with_Cascade.ipynb`

First of all, thanks for sharing this exciting tool for Ca2+ data analysis.

I have a minor question/comment regarding a variable name in the demo script Calibrated_spike_inference_with_Cascade.ipynb.

In Step 8, the result of cascade.predict(...) is stored in a variable called spike_rates. In Step 11, this variable is stored into a MAT file with the variable name spike_rates as well.

However, the first entry of the FAQ indicates that the output of the algorithm is the probability of spikes at the same resolution as the Ca2+ recording. It states that we can convert the output into the instantaneous spike rate by multiplying with the frame rate.

So, I wanted to confirm that the variable spike_rates returned by cascade.predict(...) indeed contains spike probabilities, and not rates (i.e. the predict function does not multiply by the frame rate internally). I think this is the case, but I wanted to double-check. If so, it might be clearer for users to label the output of cascade.predict as spike_prob (or something similar).

Inconsistent verbose

Hi Again!

I believe the variable "verbose" is inconsistently labeled, and sometimes missing entirely (e.g. in infer_discrete_spikes). It would be very nice if there were a single global variable that could completely silence the output of Cascade, especially for batch processing. Thanks!
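
As a stop-gap for batch processing, I am silencing stdout around the calls with the standard library, e.g.:

import io
import contextlib
from cascade2p import cascade

with contextlib.redirect_stdout(io.StringIO()):
    # model_name and dff_traces defined as usual
    spike_prob = cascade.predict(model_name, dff_traces)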

Problem running Demo_benchmark_model.py

Hi! I have used the toolbox to predict the spike probabilities of my 2P recordings (jRCaMP1a, 30 Hz). On average the noise levels are around 3-4, so I decided to use 'Global_EXC_30Hz_smoothing50ms_causalkernel' (maybe the 100 ms smoothing would be better?). Since the data is rather noisy, I decided not to infer discrete spikes, as suggested.
When I tried to quantify the expected performance of the model using the demo script, I encountered some problems (or maybe I'm doing something wrong):

  1. The names of the training datasets in the config file of the pre-trained model and in the Ground Truth folder differ, so cascade.train gives an error because it cannot find the correct dataset folder. I managed to fix this by changing the dataset names in the config file.
  2. I get empty 'calcium' and 'ground truth' arrays from the utils.preprocess_groundtruth_artificial_noise_balanced function, together with the warning '…/Documents/Cascade-master/cascade2p/utils.py:403: RuntimeWarning: All-NaN slice encountered'.

I only changed the model to 'Global_EXC_30Hz_smoothing50ms_causalkernel' and set the noise level to 4 in the script.

Thanks in advance

df/f scaling variation with laserpower

Dear Peter,
first of all congrats on this fantastic work.
I have now run Cascade on all sorts of different datasets. While it generally seems to perform well, I do have a suggestion. Cascade relies, at least implicitly, on absolute dF/F values. IIRC, the solution to this is dividing the trace by 100, depending on how dF/F is calculated. However, in my experience dF/F still depends on laser power, probably due to different response functions of the indicator and the surrounding tissue. This is only partially captured by the noise model. In practice, the traces of cells can be scaled quite differently depending on laser power (0 to 0.7 at low laser power vs. 0 to 15 at high laser power; actual values may differ) while exhibiting a similar noise level. Rescaling with an exponential function before passing the traces to Cascade seems to work, but I am unsure about its reliability.
Do you think it would be possible to add, next to the noise, an exponential function or something similar that varies systematically between certain parameters to the data during training?
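
To make the suggestion concrete, I mean something along these lines during ground-truth resampling (purely a sketch of the idea, not existing Cascade code):

import numpy as np

rng = np.random.default_rng()

def augment_response(dff_trace):
    # apply a random exponential distortion to a dF/F trace;
    # b -> 0 recovers the identity, larger |b| mimics stronger
    # compression/expansion of the effective response function
    b = rng.uniform(-1.0, 1.0)
    if abs(b) < 1e-6:
        return dff_trace
    return (np.exp(b * dff_trace) - 1.0) / b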

Going through traces 1 by 1

Hey,

I want to implement the toolbox in our imaging pipeline. For the noise-level estimation and/or the inference, is it important to give Cascade all traces at once, or can I infer them one by one with the same results? Our pipeline usually goes through the data that way, which is why I am asking.
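
Concretely, would the following per-trace loop give the same result as a single batched call? (A sketch; I am assuming dff_traces is a 2D array with one row per neuron.)

import numpy as np
from cascade2p import cascade

model_name = 'Global_EXC_30Hz_smoothing100ms'  # example model
dff_traces = np.random.randn(10, 3000) * 0.05  # placeholder dF/F traces

spike_prob_rows = [cascade.predict(model_name, dff_traces[i:i + 1, :])
                   for i in range(dff_traces.shape[0])]
spike_prob = np.vstack(spike_prob_rows)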

Thank you in advance,

martin

Smoothing of dF/F traces for prediction

Hi Peter,
thanks for creating this nice tool.
I have a question regarding whether I need to smooth my dF/F traces before using them as input to the model for spike inference.
The README.md mentions that the ground-truth traces were slightly smoothed for training the model. If, for example, I wanted to use the model Global_EXC_7.5Hz_smoothing200ms to predict spike probabilities from my own dF/F traces, would I need to run

from scipy.ndimage import gaussian_filter
smoothed_traces = gaussian_filter(dF_traces, sigma=0.2 * 7.5)  # 200 ms at 7.5 Hz = 1.5 frames

before I run

spike_prob = cascade.predict('Global_EXC_7.5Hz_smoothing200ms', smoothed_traces)

?

Best wishes,
Bo

Problem with ruamel.yaml

When using ruamel.yaml to load config files or the list of available pretrained models, an error appears for newer versions of ruamel.yaml:

"load()" has been removed

config.yaml wrong for Universal models

Hey,

it seems there's a typo in all the config.yaml files for the Universal models; line 13 says they were trained on

-DS07-GCaMP6f-zf-dD

but the provided ground truth is

DS07-GCaMP6f-zf-OB

Which one is right?

Non-verbose output during inference

During inference, the output is relatively verbose, which is annoying for daily use of the algorithm. Add an option to make inference non-verbose.

Conflicts in version of packages for local installation (for CPU)

Dear Peter,

First of all, thanks for sharing CASCADE with the world. It will help a great number of neuroscientists in their work around the world! :)

I am trying to incorporate CASCADE into my pipeline for peak detection and thus need to install it locally. Unfortunately, I can't manage to install it using the following command in my Anaconda Prompt: conda create -n Cascade python=3.6 tensorflow==2.3 keras==2.3.1 h5py==2.10.0 numpy scipy matplotlib seaborn ruamel.yaml spyder

There seem to be (a lot of) conflicts between the versions of the various packages needed.

I have tried reinstalling my version of Anaconda (and also updating a few packages) without managing to find a way to make everything work together. Could you please give the exact version of each package you are using, so I can find a combination of versions that allows a smooth installation?

Thanks a lot for your help!

Best wishes,

Simon Zamora

PS: Here is the output with all the conflicts, in case you want to go through them:

(base) C:\Users\simon>conda create -n Cascade python=3.6 tensorflow==2.3 keras==2.3.1 h5py==2.10.0 numpy scipy matplotlib seaborn ruamel.yaml spyder
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment:
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
Examining tensorflow==2.3 ... Examining seaborn ... Examining matplotlib ... Examining conflict for python numpy tensorflow h5py seaborn ruamel.yaml matplotlib scipy spyder ... Examining conflict for python tensorflow ... failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package openssl conflicts for:
seaborn -> python[version='>=3.6'] -> openssl[version='>=1.1.1a,<1.1.2a|>=1.1.1b,<1.1.2a|>=1.1.1c,<1.1.2a|>=1.1.1d,<1.1.2a|>=1.1.1e,<1.1.2a|>=1.1.1f,<1.1.2a|>=1.1.1g,<1.1.2a|>=1.1.1h,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a']
matplotlib -> python[version='>=3.10,<3.11.0a0'] -> openssl[version='>=1.1.1a,<1.1.2a|>=1.1.1b,<1.1.2a|>=1.1.1c,<1.1.2a|>=1.1.1d,<1.1.2a|>=1.1.1e,<1.1.2a|>=1.1.1f,<1.1.2a|>=1.1.1g,<1.1.2a|>=1.1.1h,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a']
ruamel.yaml -> python[version='>=3.9,<3.10.0a0'] -> openssl[version='>=1.1.1a,<1.1.2a|>=1.1.1b,<1.1.2a|>=1.1.1c,<1.1.2a|>=1.1.1d,<1.1.2a|>=1.1.1e,<1.1.2a|>=1.1.1f,<1.1.2a|>=1.1.1g,<1.1.2a|>=1.1.1h,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a']
tensorflow==2.3 -> python=3.7 -> openssl[version='>=1.1.1a,<1.1.2a|>=1.1.1b,<1.1.2a|>=1.1.1c,<1.1.2a|>=1.1.1d,<1.1.2a|>=1.1.1e,<1.1.2a|>=1.1.1f,<1.1.2a|>=1.1.1g,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a']
numpy -> python[version='>=3.10,<3.11.0a0'] -> openssl[version='>=1.1.1a,<1.1.2a|>=1.1.1b,<1.1.2a|>=1.1.1c,<1.1.2a|>=1.1.1d,<1.1.2a|>=1.1.1e,<1.1.2a|>=1.1.1f,<1.1.2a|>=1.1.1g,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1h,<1.1.2a']
spyder -> python[version='>=3.10,<3.11.0a0'] -> openssl[version='>=1.1.1a,<1.1.2a|>=1.1.1b,<1.1.2a|>=1.1.1c,<1.1.2a|>=1.1.1d,<1.1.2a|>=1.1.1e,<1.1.2a|>=1.1.1f,<1.1.2a|>=1.1.1g,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1h,<1.1.2a']
h5py==2.10.0 -> python[version='>=3.9,<3.10.0a0'] -> openssl[version='>=1.1.1a,<1.1.2a|>=1.1.1b,<1.1.2a|>=1.1.1c,<1.1.2a|>=1.1.1d,<1.1.2a|>=1.1.1e,<1.1.2a|>=1.1.1f,<1.1.2a|>=1.1.1g,<1.1.2a|>=1.1.1h,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a']
scipy -> python[version='>=3.10,<3.11.0a0'] -> openssl[version='>=1.1.1a,<1.1.2a|>=1.1.1b,<1.1.2a|>=1.1.1c,<1.1.2a|>=1.1.1d,<1.1.2a|>=1.1.1e,<1.1.2a|>=1.1.1f,<1.1.2a|>=1.1.1g,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1h,<1.1.2a']

Package python conflicts for:
scipy -> python[version='>=2.7,<2.8.0a0|>=3.10,<3.11.0a0|>=3.8,<3.9.0a0|>=3.7,<3.8.0a0|>=3.9,<3.10.0a0|>=3.6,<3.7.0a0|>=3.5,<3.6.0a0']
python=3.6
matplotlib -> python[version='>=2.7,<2.8.0a0|>=3.10,<3.11.0a0|>=3.9,<3.10.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0|>=3.6,<3.7.0a0|>=3.5,<3.6.0a0']
seaborn -> python[version='>=2.7,<2.8.0a0|>=3.6|>=3.7,<3.8.0a0|>=3.5,<3.6.0a0|>=3.6,<3.7.0a0']
h5py==2.10.0 -> python[version='>=3.6,<3.7.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0|>=3.7,<3.8.0a0']
numpy -> python[version='>=2.7,<2.8.0a0|>=3.10,<3.11.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0|>=3.6,<3.7.0a0|>=3.5,<3.6.0a0']
seaborn -> matplotlib[version='>=2.2'] -> python[version='>=3.10,<3.11.0a0|>=3.9,<3.10.0a0|>=3.8,<3.9.0a0|>=3.7.1,<3.8.0a0']
keras==2.3.1 -> keras-base=2.3.1 -> python[version='3.5.*|3.6.*|3.7.*|3.8.*|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0|3.9.*']
ruamel.yaml -> python[version='>=3.10,<3.11.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0']
matplotlib -> cycler[version='>=0.10'] -> python[version='>=3.6']
spyder -> python[version='>=2.7,<2.8.0a0|>=3.10,<3.11.0a0|>=3.8,<3.9.0a0|>=3.7,<3.8.0a0|>=3.9,<3.10.0a0|>=3.6,<3.7.0a0|>=3.5,<3.6.0a0']
h5py==2.10.0 -> numpy[version='>=1.16.6,<2.0a0'] -> python[version='>=2.7,<2.8.0a0|>=3.10,<3.11.0a0|>=3.5,<3.6.0a0']
ruamel.yaml -> ruamel.yaml.clib[version='>=0.1.2'] -> python[version='>=2.7,<2.8.0a0|>=3.5,<3.6.0a0']
spyder -> atomicwrites[version='>=1.2.0'] -> python[version='>=2.7|>=3.7|>=3|>=3.5|>=3.6']

Package certifi conflicts for:
ruamel.yaml -> setuptools -> certifi[version='>=2016.09|>=2016.9.26']
matplotlib -> tornado -> certifi[version='>=2016.09|>=2016.9.26|>=2020.06.20']
spyder -> setuptools[version='>=49.6.0'] -> certifi[version='>=2016.09|>=2016.9.26']

Package numpy conflicts for:
numpy
matplotlib -> numpy[version='>=1.14.6,<2.0a0']
keras==2.3.1 -> keras-base=2.3.1 -> numpy[version='>=1.9.1']
scipy -> numpy[version='>=1.11.3,<2.0a0|>=1.14.6,<2.0a0|>=1.16.6,<2.0a0|>=1.21.2,<2.0a0|>=1.15.1,<2.0a0']
matplotlib -> matplotlib-base[version='>=3.5.0,<3.5.1.0a0'] -> numpy[version='>=1.15.4,<2.0a0|>=1.16.6,<2.0a0|>=1.19.2,<2.0a0|>=1.21.2,<2.0a0']
seaborn -> numpy[version='>=1.13.3|>=1.15|>=1.9.3']
seaborn -> matplotlib[version='>=2.2'] -> numpy[version='>=1.11|>=1.11.3,<1.12.0a0|>=1.11.3,<2.0a0|>=1.12.1,<2.0a0|>=1.13.3,<2.0a0|>=1.14.6,<2.0a0|>=1.19.2,<2.0a0|>=1.20.3,<2.0a0|>=1.20.2,<2.0a0|>=1.16.6,<2.0a0|>=1.15.4,<2.0a0|>=1.21.2,<2.0a0|>=1.15.1,<2.0a0|>=1.9|>=1.17.0,<2.0a0|>=1.19.1,<2.0a0|>=1.4.0']
h5py==2.10.0 -> numpy[version='>=1.11.3,<2.0a0|>=1.16.6,<2.0a0']

Package zlib conflicts for:
python=3.6 -> sqlite[version='>=3.33.0,<4.0a0'] -> zlib[version='>=1.2.11,<1.3.0a0']
tensorflow==2.3 -> tensorflow-base==2.3.0=eigen_py37h17acbac_0 -> zlib[version='>=1.2.11,<1.3.0a0']
scipy -> python[version='>=3.10,<3.11.0a0'] -> zlib[version='>=1.2.11,<1.3.0a0']
seaborn -> matplotlib[version='>=2.2'] -> zlib[version='>=1.2.11,<1.3.0a0']
numpy -> python[version='>=3.10,<3.11.0a0'] -> zlib[version='>=1.2.11,<1.3.0a0']
ruamel.yaml -> python[version='>=3.10,<3.11.0a0'] -> zlib[version='>=1.2.11,<1.3.0a0']
matplotlib -> zlib[version='>=1.2.11,<1.3.0a0']
spyder -> python[version='>=3.10,<3.11.0a0'] -> zlib[version='>=1.2.11,<1.3.0a0']
h5py==2.10.0 -> hdf5[version='>=1.10.6,<1.10.7.0a0'] -> zlib[version='>=1.2.11,<1.3.0a0']

Package pyqt conflicts for:
seaborn -> matplotlib[version='>=2.2'] -> pyqt[version='5.*|5.6.*|5.9.*|>=5.6,<6.0a0|>=5.9.2,<5.10.0a0']
matplotlib -> pyqt[version='5.*|5.6.*|5.9.*|>=5.6,<6.0a0|>=5.9.2,<5.10.0a0']
spyder -> qtconsole[version='>=4.2'] -> pyqt[version='>=5.9.2,<5.10.0a0']
spyder -> pyqt[version='5.*|>=5.6,<5.13']

Package vs2015_runtime conflicts for:
python=3.6 -> vs2015_runtime[version='>=14.16.27012,<15.0a0']
python=3.6 -> vc[version='>=14.1,<15.0a0'] -> vs2015_runtime[version='>=14.0.25123,<15.0a0|>=14.0.25420|>=14.15.26706|>=14.27.29016|>=14.16.27012']

Package tzdata conflicts for:
spyder -> python[version='>=3.10,<3.11.0a0'] -> tzdata
seaborn -> python[version='>=3.6'] -> tzdata
ruamel.yaml -> python[version='>=3.9,<3.10.0a0'] -> tzdata
h5py==2.10.0 -> python[version='>=3.9,<3.10.0a0'] -> tzdata
matplotlib -> python[version='>=3.10,<3.11.0a0'] -> tzdata
numpy -> python[version='>=3.10,<3.11.0a0'] -> tzdata
scipy -> python[version='>=3.10,<3.11.0a0'] -> tzdata

Package tensorflow conflicts for:
tensorflow==2.3
keras==2.3.1 -> tensorflow

Package six conflicts for:
keras==2.3.1 -> keras-base=2.3.1 -> six[version='>=1.9.0']
seaborn -> patsy -> six
spyder -> cookiecutter[version='>=1.6.0'] -> six[version='>=1.10|>=1.11.0|>=1.5']
numpy -> mkl-service[version='>=2.3.0,<3.0a0'] -> six
scipy -> mkl-service[version='>=2.3.0,<3.0a0'] -> six
h5py==2.10.0 -> six
matplotlib -> cycler[version='>=0.10'] -> six[version='>=1.5']
tensorflow==2.3 -> tensorboard[version='>=2.3.0'] -> six[version='>=1.10.0|>=1.12|>=1.12.0|>=1.14.0']

Package colorama conflicts for:
python=3.6 -> pip -> colorama
spyder -> ipython[version='>=7.6.0'] -> colorama[version='>=0.3.5']

Package h5py conflicts for:
h5py==2.10.0
keras==2.3.1 -> keras-base=2.3.1 -> h5py

Package matplotlib conflicts for:
seaborn -> matplotlib[version='>=1.4.3|>=2.1.2|>=2.2']
matplotlib

Package numpy-base conflicts for:
numpy -> numpy-base[version='1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.11.3|1.14.3|1.14.3|1.14.3|1.14.4|1.14.4|1.14.4|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.5|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.14.6|1.15.0|1.15.0|1.15.0|1.15.0|1.15.1|1.15.1|1.15.1|1.15.1|1.15.2|1.15.2|1.15.2|1.15.2|1.15.2|1.15.3|1.15.3|1.15.3|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.15.4|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.0|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.1|1.16.2|1.16.2|1.16.2|1.16.3|1.16.3|1.16.3|1.16.4|1.16.4|1.16.5|1.16.5|1.16.5|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.16.6|1.17.2.|1.17.3.|1.17.4.|1.18.1.|1.18.5.*|1.19.1|1.19.1|1.19.1|1.19.2|1.19.2|1.19.2|1.19.2|1.20.1|1.20.1|1.20.1|1.20.2|1.20.2|1.20.2|1.20.3|1.20.3|1.20.3|1.21.2|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|1.9.3|>=1.9.3,<2.0a0|1.17.0|1.17.0',build='py36hc3f5095_0|py37h5c71026_6|py36h5c71026_6|py27h0bb1d87_6|py36h5c71026_7|py36h5c71026_7|py37h5c71026_7|py36h5c71026_8|py35h4a99626_8|py35h4a99626_9|py37h4a99626_9|py37h8128ebf_9|py36h8128ebf_9|py35h8128ebf_9|py35h8128ebf_10|py36h8128ebf_10|py27h2753ae9_10|py27hb1d0314_11|py36h555522e_1|py27h917549b_1|py35h5c71026_0|py36h5c71026_0|py35h5c71026_0|py36h5c71026_0|py37h5c71026_0|py27h0bb1d87_1|py36h5c71026_1|py36h5c71026_2|py37h5c71026_2|py27h0bb1d87_2|py36h5c71026_3|py27h0bb1d87_4|py37h5c71026_4|py36h5c71026_4|py35h4a99626_4|py36h8128ebf_4|py27h2753ae9_4|py35h8128ebf_4|py27hb1d0314_5|py35h4a99626_0|py37h4a99626_0|py35h8128ebf_0|py27h2753ae9_0|py35h8128ebf_0|py27h2753ae9_0|py36h8128ebf_0|py27h2753ae9_1|py27h2753ae9_0|py36hc3f5095_0|py36hc3f5095_0|py27hb1d0314_0|py36hc3f5095_0|py36hc3f5095_0|py27hb1d0314_0|py36hc3f5095_0|py36hc3f5095_0|py36hc3f5095_0|py38hc3f5095_0|py39h2e04a8b_1|py37h5bb6eb2_3|py38ha3acd2a_0|py38ha3acd2a_0|py39hbd0edd7_0|py38haf7ebc8_0|py37hc2deb75_0|py39hc2deb75_0|py37hc2deb75_0|py37h0829f74_0|py310h0829f74_0|py38h0829f74_0|py39h0829f74_0|py39hc2deb75_0|py38hc2deb75_0|py38hc2deb75_0|py39haf7ebc8_0|py37haf7ebc8_0|py36ha3acd2a_0|py37ha3acd2a_0|py36ha3acd2a_0|py37ha3acd2a_0|py38h5bb6eb2_3|py36h5bb6eb2_3|py39h5bb6eb2_3|py27hb1d0314_0|py37hc3f5095_0|py37hc3f5095_0|py27hb1d0314_0|py37hc3f5095_0|py37hc3f5095_0|py36hc3f5095_0|py37hc3f5095_0|py27hb1d0314_0|py27hb1d0314_1|py36hc3f5095_1|py37hc3f5095_1|py37hc3f5095_0|py27hb1d0314_1|py36hc3f5095_1|py37hc3f5095_1|py27hb1d0314_0|py37hc3f5095_0|py37hc3f5095_0|py27hb1d0314_0|py27h2753ae9_0|py36h8128ebf_0|py37h8128ebf_0|py37h8128ebf_0|py36h8128ebf_0|py37h8128ebf_0|py36h8128ebf_0|py37h8128ebf_0|py27hfef472a_0|py36h4a99626_0|py36hc3f5095_5|py37hc3f5095_5|py38hc3f5095_4|py37h8128ebf_4|py37h5c71026_3|py27h0bb1d87_3|py37h5c71026_1|py27h0bb1d87_0|py27h0bb1d87_0|py35h555522e_1|py38hc3f5095_12|py36hc3f5095_12|py27hb1d0314_12|py37hc3f5095_12|py36h2a9b21d_11|py37h2a9b21d_11|py37h8128ebf_11|py36h8128ebf_11|py37h8128ebf_10|py27h2753ae9_9|py27hfef472a_9|py36h4a99626_9|py27h0bb1d87_8|py37h5c71026_8|py27h0bb1d87_7|py35h5c71026_7|py37h5c71026_7|py27h0bb1d87_7|py37hc3f5095_0']
numpy -> mkl_fft -> numpy-base[version='>=1.0.14,<2.0a0|>=1.0.6,<2.0a0|>=1.0.2,<2.0a0|>=1.0.4,<2.0a0']

Package vc conflicts for:
python=3.6 -> vc[version='14.*|>=14.1,<15.0a0']
python=3.6 -> sqlite[version='>=3.30.1,<4.0a0'] -> vc=9

Package setuptools conflicts for:
ruamel.yaml -> setuptools
spyder -> ipython[version='>=7.6.0'] -> setuptools[version='>=18.5']
matplotlib -> setuptools
tensorflow==2.3 -> tensorboard[version='>=2.3.0'] -> setuptools[version='>=41.4']
spyder -> setuptools[version='>=39.0.0|>=49.6.0']
seaborn -> matplotlib[version='>=2.2'] -> setuptools
python=3.6 -> pip -> setuptools

Package sip conflicts for:
matplotlib -> pyqt -> sip[version='4.18.*|4.19.13.*|>=4.19.13,<=4.19.14|>=4.19.4,<=4.19.8']
spyder -> pyqt[version='>=5.6,<5.13'] -> sip[version='4.18.*|4.19.13.*|>=4.19.13,<=4.19.14|>=4.19.4,<=4.19.8']

Package vs2008_runtime conflicts for:
seaborn -> python -> vs2008_runtime
scipy -> python[version='>=2.7,<2.8.0a0'] -> vs2008_runtime
matplotlib -> python[version='>=2.7,<2.8.0a0'] -> vs2008_runtime[version='>=9.0.30729.1,<10.0a0']
numpy -> python[version='>=2.7,<2.8.0a0'] -> vs2008_runtime[version='>=9.0.30729.1,<10.0a0']
spyder -> python[version='>=2.7,<2.8.0a0'] -> vs2008_runtime

Package mkl conflicts for:
numpy -> mkl_random -> mkl[version='>=2020.1,<2021.0a0']
numpy -> mkl[version='>=2018.0.0,<2019.0a0|>=2018.0.1,<2019.0a0|>=2018.0.2,<2019.0a0|>=2018.0.3,<2019.0a0|>=2019.1,<2021.0a0|>=2019.3,<2021.0a0|>=2019.4,<2021.0a0|>=2021.2.0,<2022.0a0|>=2021.3.0,<2022.0a0|>=2021.4.0,<2022.0a0|>=2019.4,<2020.0a0']

Package pyyaml conflicts for:
spyder -> watchdog[version='>=0.10.3'] -> pyyaml[version='>=3.10']
keras==2.3.1 -> keras-base=2.3.1 -> pyyaml

Package tk conflicts for:
ruamel.yaml -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0']
spyder -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0']
seaborn -> python[version='>=3.6'] -> tk[version='>=8.6.11,<8.7.0a0']
matplotlib -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0']
scipy -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0']
numpy -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.11,<8.7.0a0']

Package scipy conflicts for:
scipy
seaborn -> statsmodels[version='>=0.5.0'] -> scipy[version='>=0.14|>=1.3']
keras==2.3.1 -> keras-base=2.3.1 -> scipy[version='>=0.14']
seaborn -> scipy[version='>=0.15.2|>=1.0|>=1.0.1']

Package blas conflicts for:
scipy -> blas[version='*|1.0',build=mkl]
h5py==2.10.0 -> numpy[version='>=1.16.6,<2.0a0'] -> blas[version='*|1.0',build=mkl]
matplotlib -> numpy[version='>=1.14.6,<2.0a0'] -> blas[version='*|1.0',build=mkl]
seaborn -> numpy[version='>=1.15'] -> blas[version='*|1.0',build=mkl]
numpy -> blas[version='*|1.0',build=mkl]

Package packaging conflicts for:
matplotlib -> matplotlib-base[version='>=3.5.0,<3.5.1.0a0'] -> packaging[version='>=20.0']
python=3.6 -> pip -> packaging
spyder -> sphinx[version='>=0.6.6'] -> packaging

Package intel-openmp conflicts for:
numpy -> mkl[version='>=2021.4.0,<2022.0a0'] -> intel-openmp[version='2021.*|2022.*']
scipy -> mkl[version='>=2021.4.0,<2022.0a0'] -> intel-openmp=2021

Package wheel conflicts for:
python=3.6 -> pip -> wheel
tensorflow==2.3 -> tensorboard[version='>=2.3.0'] -> wheel[version='>=0.26']

Package requests conflicts for:
spyder -> cookiecutter[version='>=1.6.0'] -> requests[version='>2.0.0|>=2.0.0|>=2.23.0|>=2.5.0']
tensorflow==2.3 -> tensorboard[version='>=2.3.0'] -> requests[version='>=2.21.0|>=2.21.0,<3']
python=3.6 -> pip -> requests

Pretrained model download fails

Currently, downloading the pretrained models does not work. The models are stored on a specific Swiss-based server (SwitchDrive), which is currently down. The current state of the server can be seen here: https://twitter.com/SWITCHdrive_op

We hope that this issue gets resolved quickly.

Local installations of Cascade with already downloaded models will continue to work as before.

Apple M1 Chip problem

It appears that there is a problem with the current version of TensorFlow used by Cascade on Apple M1 chips. There seem to be solutions to the problem, but they would require updating both the TensorFlow and Python versions used by Cascade. I would really appreciate it if you could consider addressing this issue in the future. Thanks!
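
For what it's worth, Apple's own TensorFlow builds might be one route, though this is untested by me and would require a newer Python/TensorFlow than Cascade currently pins:

# untested suggestion for Apple Silicon, in a fresh environment
pip install tensorflow-macos tensorflow-metal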

Train new models

Train new global models with a frame rate of 3 Hz and 4.25 Hz.

Problem using h5py (load models)

There is a compatibility problem with the h5py package: h5py version 3.0 and above is incompatible with TensorFlow/Keras <= 2.3. This should be mentioned in the install instructions and be tested.
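
A pin that avoids the problem (h5py 3.x changed how strings are read from HDF5 files, which breaks loading Keras models under TensorFlow/Keras <= 2.3):

conda install "h5py=2.10.0"
# or, with pip:
pip install "h5py<3.0"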
