
deeplabcut-core's Introduction

DeepLabCut-core

JUNE 2021: THIS CODE IS NOW DEPRECATED! DeepLabCut 2.2 now supports standard, multi-animal, and DeepLabCut-Live workflows! See the main repo for details!

UPDATE JAN 2021: We will be using this space as the TensorFlow 2 test bed, but then all of DeepLabCut will live within the main package. The headless version will be pip install deeplabcut, while the full GUI-supported version will be pip install deeplabcut[gui]. This means deeplabcutcore will be deprecated once TF2 support is merged into the main repo.

Currently up to date with DeepLabCut v2.1.8.1, and uses TensorFlow 2.x.

Core functionalities of DeepLabCut, excluding all GUI functions.

Please be aware that you can create projects, etc. with the full deeplabcut package. Here, you will need to create the training set, then train, evaluate, etc. without inter-mixing with the deeplabcut package (which currently supports TensorFlow 1.x). We recommend looking at this Google Colab notebook to help you, and this blog post about our transition to TensorFlow 2.
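For orientation, here is a minimal sketch of that headless workflow (the config and video paths are placeholders; it assumes the project and labels were already created with the full deeplabcut package):

    import deeplabcutcore as dlc

    config_path = '/path/to/your/project/config.yaml'  # placeholder path

    # Build the training set from labels made with the full deeplabcut package,
    # then train and evaluate headlessly, and finally analyze new videos.
    dlc.create_training_dataset(config_path)
    dlc.train_network(config_path)
    dlc.evaluate_network(config_path)
    dlc.analyze_videos(config_path, ['/path/to/video.avi'], videotype='avi')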


Install from GitHub: pip install git+https://github.com/DeepLabCut/DeepLabCut-core

PyPI: pip install deeplabcutcore

Documentation is located at DeepLabCut's main GitHub page.

deeplabcut-core's People

Contributors

alexemg, alyetama, jpellman, maflister, mmathislab, stes


deeplabcut-core's Issues

deeplabcutcore import error (RTX 3070)

Hello everyone,
I have an issue.
I installed deeplabcutcore and its packages, but when I import deeplabcutcore I get a ModuleNotFoundError: No module named 'tensorflow.contrib'.
What should I do next?

(DLC-GPU) C:\Windows\system32>pip list
Package Version


tensorflow-gpu 2.3.0
absl-py 0.11.0
argon2-cffi 20.1.0
astor 0.8.1
astunparse 1.6.3
async-generator 1.10
attrs 20.3.0
backcall 0.2.0
bayesian-optimization 1.2.0
bleach 3.2.1
cachetools 4.2.0
certifi 2020.12.5
cffi 1.14.4
chardet 4.0.0
click 7.1.2
colorama 0.4.4
cycler 0.10.0
Cython 0.29.21
decorator 4.4.2
deeplabcut-core 0.0b0
deeplabcutcore 0.0b3
defusedxml 0.6.0
easydict 1.9
entrypoints 0.3
filterpy 1.4.5
gast 0.3.3
google-auth 1.24.0
google-auth-oauthlib 0.4.2
google-pasta 0.2.0
grpcio 1.31.0
h5py 2.10.0
idna 2.10
imageio 2.9.0
imageio-ffmpeg 0.4.3
imgaug 0.4.0
importlib-metadata 2.0.0
intel-openmp 2021.1.2
ipykernel 5.3.4
ipython 7.19.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
Jinja2 2.11.2
joblib 1.0.0
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.7
jupyter-console 6.2.0
jupyter-core 4.7.0
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
kiwisolver 1.3.1
llvmlite 0.34.0
Markdown 3.3.3
MarkupSafe 1.1.1
matplotlib 3.0.3
mistune 0.8.4
mkl-fft 1.2.0
mkl-random 1.1.1
mkl-service 2.3.0
mock 4.0.3
moviepy 1.0.1
msgpack 1.0.2
msgpack-numpy 0.4.7.1
nb-conda 2.2.1
nb-conda-kernels 2.3.1
nbclient 0.5.1
nbconvert 6.0.7
nbformat 5.0.8
nest-asyncio 1.4.3
networkx 2.5
notebook 6.1.6
numba 0.51.1
numexpr 2.7.2
numpy 1.16.4
oauthlib 3.1.0
opencv-python 3.4.13.47
opencv-python-headless 4.5.1.48
opt-einsum 3.3.0
packaging 20.8
pandas 1.1.5
pandocfilters 1.4.3
parso 0.8.1
patsy 0.5.1
pickleshare 0.7.5
Pillow 8.1.0
pip 20.3.3
proglog 0.1.9
prometheus-client 0.9.0
prompt-toolkit 3.0.8
protobuf 3.13.0
psutil 5.8.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.20
Pygments 2.7.4
pyparsing 2.4.7
pyreadline 2.1
pyrsistent 0.17.3
python-dateutil 2.8.1
pytz 2020.5
PyWavelets 1.1.1
pywin32 227
pywinpty 0.5.7
PyYAML 5.3.1
pyzmq 20.0.0
qtconsole 4.7.7
QtPy 1.9.0
requests 2.25.1
requests-oauthlib 1.3.0
rsa 4.7
ruamel.yaml 0.16.12
ruamel.yaml.clib 0.2.2
scikit-image 0.17.2
scikit-learn 0.24.0
scipy 1.4.1
Send2Trash 1.5.0
setuptools 51.1.2.post20210112
Shapely 1.7.1
six 1.15.0
statsmodels 0.12.1
tables 3.6.1
tabulate 0.8.7
tb-nightly 1.14.0a20190301
tensorboard 2.4.1
tensorboard-plugin-wit 1.7.0
tensorflow 2.3.0
tensorflow-estimator 2.3.0
tensorflow-gpu-estimator 2.3.0
tensorpack 0.9.8
termcolor 1.1.0
terminado 0.9.2
testpath 0.4.4
tf-estimator-nightly 1.14.0.dev2019030115
tf-slim 1.1.0
threadpoolctl 2.1.0
tifffile 2021.1.11
tornado 6.1
tqdm 4.56.0
traitlets 5.0.5
urllib3 1.26.2
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.36.2
widgetsnbextension 3.5.1
wincertstore 0.2
wrapt 1.12.1
wxPython 4.0.4
zipp 3.4.0

(DLC-GPU) C:\Windows\system32>python
Python 3.7.9 (default, Aug 31 2020, 17:10:11) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
Failed calling sys.__interactivehook__
Traceback (most recent call last):
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site.py", line 408, in register_readline
import readline
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\readline.py", line 6, in
from pyreadline.rlmain import Readline
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\pyreadline_init_.py", line 12, in
from . import logger, clipboard, lineeditor, modes, console
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\pyreadline\modes_init_.py", line 3, in
from . import emacs, notemacs, vi
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\pyreadline\modes\emacs.py", line 15, in
import pyreadline.lineeditor.history as history
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\pyreadline\lineeditor\history.py", line 257
q.add_history(RL("aaaa"),encoding='utf8'))
^
SyntaxError: invalid syntax

import deeplabcutcore
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore_init_.py", line 20, in
from deeplabcutcore.create_project import create_new_project, create_new_project_3d, add_new_videos, load_demo_data
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\create_project_init_.py", line 4, in
from deeplabcutcore.create_project.demo_data import load_demo_data
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\create_project\demo_data.py", line 14, in
from deeplabcutcore.utils import auxiliaryfunctions
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\utils_init_.py", line 1, in
from deeplabcutcore.utils.make_labeled_video import *
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\utils\make_labeled_video.py", line 38, in
from deeplabcutcore.pose_estimation_tensorflow.config import load_config
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow_init_.py", line 13, in
from deeplabcutcore.pose_estimation_tensorflow.nnet import *
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\nnet_init_.py", line 16, in
from deeplabcutcore.pose_estimation_tensorflow.nnet.pose_net import *
File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\nnet\pose_net.py", line 9, in
import tensorflow.contrib.slim as slim
ModuleNotFoundError: No module named 'tensorflow.contrib'
^Z

(DLC-GPU) C:\Windows\system32>ipython
Python 3.7.9 (default, Aug 31 2020, 17:10:11) [MSC v.1916 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import deeplabcutcore

ModuleNotFoundError Traceback (most recent call last)
in
----> 1 import deeplabcutcore

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\__init__.py in
18
19
---> 20 from deeplabcutcore.create_project import create_new_project, create_new_project_3d, add_new_videos, load_demo_data
21 from deeplabcutcore.create_project import create_pretrained_project, create_pretrained_human_project
22 from deeplabcutcore.generate_training_dataset import extract_frames, select_cropping_area

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\create_project\__init__.py in
2 from deeplabcutcore.create_project.new_3d import create_new_project_3d
3 from deeplabcutcore.create_project.add import add_new_videos
----> 4 from deeplabcutcore.create_project.demo_data import load_demo_data
5 from deeplabcutcore.create_project.modelzoo import create_pretrained_human_project, create_pretrained_project

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\create_project\demo_data.py in
12 from pathlib import Path
13 import deeplabcutcore
---> 14 from deeplabcutcore.utils import auxiliaryfunctions
15
16 def load_demo_data(config,createtrainingset=True):

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\utils\__init__.py in
----> 1 from deeplabcutcore.utils.make_labeled_video import *
2 from deeplabcutcore.utils.auxiliaryfunctions import *
3 from deeplabcutcore.utils.video_processor import *
4 from deeplabcutcore.utils.plotting import *
5

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\utils\make_labeled_video.py in
36
37 from deeplabcutcore.utils import auxiliaryfunctions
---> 38 from deeplabcutcore.pose_estimation_tensorflow.config import load_config
39 from skimage.util import img_as_ubyte
40 from skimage.draw import circle_perimeter, circle, line,line_aa

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\__init__.py in
11 from deeplabcutcore.pose_estimation_tensorflow.dataset import *
12 from deeplabcutcore.pose_estimation_tensorflow.models import *
---> 13 from deeplabcutcore.pose_estimation_tensorflow.nnet import *
14 from deeplabcutcore.pose_estimation_tensorflow.util import *
15

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\nnet\__init__.py in
14 from deeplabcutcore.pose_estimation_tensorflow.nnet.losses import *
15 from deeplabcutcore.pose_estimation_tensorflow.nnet.net_factory import *
---> 16 from deeplabcutcore.pose_estimation_tensorflow.nnet.pose_net import *
17 from deeplabcutcore.pose_estimation_tensorflow.nnet.predict import *

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\nnet\pose_net.py in
7 import re
8 import tensorflow as tf
----> 9 import tensorflow.contrib.slim as slim
10 from tensorflow.contrib.slim.nets import resnet_v1
11 from deeplabcutcore.pose_estimation_tensorflow.dataset.pose_dataset import Batch

ModuleNotFoundError: No module named 'tensorflow.contrib'

WIP tf 2.2+ migration

I started a branch called TF2.2alpha as a first pass at migrating to TF2.

I followed the guide here:
https://www.tensorflow.org/guide/migrate
and am utilizing pip install tf_slim;
also see issue DeepLabCut/DeepLabCut#601.
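For reference, the sort of change this involves, using the slim import from pose_net.py as the example (assuming tf_slim exposes the same slim and resnet_v1 entry points, which is what the package is meant to provide):

    # Before (TF 1.x, via tf.contrib):
    #   import tensorflow.contrib.slim as slim
    #   from tensorflow.contrib.slim.nets import resnet_v1

    # After (TF 2.x, with `pip install tf_slim`):
    import tf_slim as slim
    from tf_slim.nets import resnet_v1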

Here is the log of the outstanding issues; some are resolved, and some need more work. Zero rush; I just did this for a bit of fun.

short list:

Converted 187 files
Detected 3 issues that require attention
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
File: DeepLabCut-core/build/lib/deeplabcutcore/pose_estimation_tensorflow/train 2.py
--------------------------------------------------------------------------------
DeepLabCut-core/build/lib/deeplabcutcore/pose_estimation_tensorflow/train 2.py:205:12: WARNING: *.save requires manual check. (This warning is only applicable if the code saves a tf.Keras model) Keras model.save now saves to the Tensorflow SavedModel format by default, instead of HDF5. To continue saving to HDF5, add the argument save_format='h5' to the save() function.
--------------------------------------------------------------------------------
File: DeepLabCut-core/build/lib/deeplabcutcore/pose_estimation_tensorflow/train.py
--------------------------------------------------------------------------------
DeepLabCut-core/build/lib/deeplabcutcore/pose_estimation_tensorflow/train.py:207:12: WARNING: *.save requires manual check. (This warning is only applicable if the code saves a tf.Keras model) Keras model.save now saves to the Tensorflow SavedModel format by default, instead of HDF5. To continue saving to HDF5, add the argument save_format='h5' to the save() function.
--------------------------------------------------------------------------------
File: DeepLabCut-core/deeplabcutcore/pose_estimation_tensorflow/train.py
--------------------------------------------------------------------------------
DeepLabCut-core/deeplabcutcore/pose_estimation_tensorflow/train.py:207:12: WARNING: *.save requires manual check. (This warning is only applicable if the code saves a tf.Keras model) Keras model.save now saves to the Tensorflow SavedModel format by default, instead of HDF5. To continue saving to HDF5, add the argument save_format='h5' to the save() function.

report.txt
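If the flagged .save call really were a tf.keras model (the converter notes the warning may not apply otherwise), the fix would just be the extra argument; a hypothetical sketch:

    import tensorflow as tf

    # Hypothetical stand-in model; the real train.py saves its own network object.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

    # TF2 defaults model.save() to the SavedModel format; pass save_format='h5'
    # to keep writing HDF5 files as before.
    model.save('model.h5', save_format='h5')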

remaining issues:

(1) The test script is failing because there is no video to load into the project:

  • CREATING PROJECT
    Created "/Users/mwmathis/Documents/DeepLabCut-core_v3/Testcore-Alex-2020-06-24/videos"
    Created "/Users/mwmathis/Documents/DeepLabCut-core_v3/Testcore-Alex-2020-06-24/labeled-data"
    Created "/Users/mwmathis/Documents/DeepLabCut-core_v3/Testcore-Alex-2020-06-24/training-datasets"
    Created "/Users/mwmathis/Documents/DeepLabCut-core_v3/Testcore-Alex-2020-06-24/dlc-models"
    Copying the videos
    WARNING: No valid videos were found. The project was not created ... Verify the video files and re-create the project.
    Traceback (most recent call last):
      File "testscript.py", line 57, in <module>
        cfg=dlc.auxiliaryfunctions.read_config(path_config_file)
      File "/Users/mwmathis/Documents/DeepLabCut-core_v3/deeplabcutcore/utils/auxiliaryfunctions.py", line 132, in read_config
        "Config file is not found. Please make sure that the file exists and/or that you passed the path of the config file correctly!")
    FileNotFoundError: Config file is not found. Please make sure that the file exists and/or that you passed the path of the config file correctly!

  • dlccore.train_network() fails with:
    pose_estimation_tensorflow/nnet/pose_net.py", line 69, in prediction_layers
      with tf.variable_scope('pose', reuse=reuse):
    AttributeError: module 'tensorflow' has no attribute 'variable_scope'
    (see the compat sketch below)
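A minimal sketch of the usual shim for that last error, assuming the graph-building code otherwise stays in TF1 style; tf.compat.v1 keeps variable_scope available under TF 2.x:

    import tensorflow as tf

    # Run the old graph-mode code through the compatibility module.
    tf.compat.v1.disable_eager_execution()

    reuse = None  # placeholder for the reuse flag passed into prediction_layers
    with tf.compat.v1.variable_scope('pose', reuse=reuse):
        pass  # ... build the prediction layers here ...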

UnboundLocalError: local variable 'cfg' referenced before assignment

Hi all,

I am having this issue when I attempt dlc.extract_frames(config_path, crop=True).
I am using Visual Studio Code as a text editor instead of Atom; could this be the source of the problem? The issue seems to be in the config.yaml, and I even tried an old config file that used to work.

Hope to remove the Intel-openmp dependency

I tried to install deeplabcut-core to download a pretrained model on my Jetson Nano. However, I found that DLC requires intel-openmp. ARM CPUs are becoming more and more popular (like the Apple M1), so maybe you can consider this suggestion. Also, I wonder how you test your models on the Jetson Xavier: if I train a model on x86, can it be deployed on ARM directly with TensorRT support? Thanks!

Multi-animal DLC Compatibility with RTX 3070 GPU

OS: Win 10
DeepLabCut Version: 2.2rc1
Anaconda env used: DLC Core
Tensorflow version: 2.4
Cuda version: 11.3
driver: 466.27
cuDNN: 8.2

Hi folks,

Using DLC-core, I have managed to train a single-animal dataset using my RTX 3070 GPU. However, I am having issues running maDLC on the same setup. My understanding is that DLC-core does not include the code to run maDLC, but has someone found a way around this, or is this yet to be released?

Attributes

Hi! Thank you for making such a great tool!

Your Operating system and DeepLabCut version

OS: Windows 10
DeepLabCut Version: DeepLabCut 2.1.8.2
Anaconda env used: DeepLabCut & TF 1.13.1

OS: Linux (High Performance Cluster)
DeepLabCut Version: DeepLabCut 2.1.8.1
Virtual env used: DeepLabCut-core & TF 1.13.1 (GPU)

Describe the problem

I created a project with the full package (DeepLabCut 2.1.8.2) and labelled on Windows. Everything was transferred to the cluster and the paths were adapted in the config.yaml from Windows to Linux. However, since I did not label all images I wanted to call dropimagesduetolackofannotation with DeepLabCut-core. I got an AttributeError: module 'deeplabcutcore' has no attribute 'dropimagesduetolackofannotation'.

On Windows and Linux, dir() of the respective packages gives different outputs: the Windows (deeplabcut) output includes dropimagesduetolackofannotation, whereas the core-version output does not.

After I checked trainingsetmanipulation.py and saw that the function was included there, I thought something might be going on, since create_training_dataset works just fine. I am planning on doing some more training-set manipulation (selecting my own train & test sets, etc.), so I thought I would check. I am not sure whether this was intended or not.

Traceback

unload python 3.7.4.
load python 3.6.8.
load cuda 10.0 library and binaries.
load cudnn 7.5.0.56 library and binaries.
Traceback (most recent call last):
File "DLC_CustomTrainingSet.py", line 9, in
deeplabcut.dropimagesduetolackofannotation
AttributeError: module 'deeplabcutcore' has no attribute 'dropimagesduetolackofannotation'

How to Reproduce the problem
Steps to reproduce the behavior:
Run the following script:

    import deeplabcutcore as deeplabcut

    # Set config path
    config_path = '/home/.../..../.../.../config.yaml'

    # Remove images without annotations
    deeplabcut.dropimagesduetolackofannotation(config_path)
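A possible workaround, purely as a sketch: since the function was found in trainingsetmanipulation.py, importing it directly from that submodule might work. The exact module path below is an assumption (mirroring the layout of the full deeplabcut package), not something confirmed here:

    # Hypothetical direct import; assumes the function lives at
    # deeplabcutcore.generate_training_dataset.trainingsetmanipulation.
    from deeplabcutcore.generate_training_dataset.trainingsetmanipulation import (
        dropimagesduetolackofannotation,
    )

    config_path = '/home/.../..../.../.../config.yaml'
    dropimagesduetolackofannotation(config_path)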

Additional context
A successful (training, evaluation and analyzing) transition between Windows and Linux was already achieved.

Output dir(deeplabcutcore)

['CropVideo', 'DEBUG', 'DownSampleVideo', 'ShortenVideo', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'add_new_videos', 'analyze_time_lapse_frames', 'analyze_videos', 'analyze_videos_converth5_to_csv', 'analyzeskeleton', 'auxfun_videos', 'auxiliaryfunctions', 'calibrate_cameras', 'check_labels', 'check_undistortion', 'convertannotationdata_fromwindows2unixstyle', 'convertcsv2h5', 'create_labeled_video', 'create_labeled_video_3d', 'create_new_project', 'create_new_project_3d', 'create_pretrained_human_project', 'create_pretrained_project', 'create_project', 'create_training_dataset', 'create_training_model_comparison', 'evaluate_network', 'export_model', 'extract_frames', 'extract_outlier_frames', 'filterpredictions', 'generate_training_dataset', 'load_demo_data', 'merge_datasets', 'mergeandsplit', 'os', 'platform', 'plot_trajectories', 'pose_estimation_3d', 'pose_estimation_tensorflow', 'post_processing', 'refine_training_dataset', 'return_evaluate_network_data', 'return_train_network_path', 'select_cropping_area', 'train_network', 'triangulate', 'utils']

Output dir(deeplabcut)

['CropVideo', 'DEBUG', 'DownSampleVideo', 'ShortenVideo', 'VERSION', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', 'add_new_videos', 'adddatasetstovideolistandviceversa', 'analyze_time_lapse_frames', 'analyze_videos', 'analyze_videos_converth5_to_csv', 'analyzeskeleton', 'auxfun_videos', 'auxiliaryfunctions', 'calibrate_cameras', 'check_labels', 'check_undistortion', 'comparevideolistsanddatafolders', 'convertannotationdata_fromwindows2unixstyle', 'convertcsv2h5', 'create_labeled_video', 'create_labeled_video_3d', 'create_new_project', 'create_new_project_3d', 'create_pretrained_human_project', 'create_pretrained_project', 'create_project', 'create_training_dataset', 'create_training_model_comparison', 'dropannotationfileentriesduetodeletedimages', 'dropduplicatesinannotatinfiles', 'dropimagesduetolackofannotation', 'evaluate_network', 'export_model', 'extract_frames', 'extract_outlier_frames', 'filterpredictions', 'generate_training_dataset', 'gui', 'label_frames', 'launch_dlc', 'load_demo_data', 'merge_datasets', 'mergeandsplit', 'mpl', 'multiple_individuals_labeling_toolbox', 'os', 'platform', 'plot_trajectories', 'pose_estimation_3d', 'pose_estimation_tensorflow', 'post_processing', 'refine_labels', 'refine_training_dataset', 'return_evaluate_network_data', 'return_train_network_path', 'select_crop_parameters', 'select_cropping_area', 'train_network', 'triangulate', 'utils', 'version']

Compatibility with RTX 3080

OS: Win 10
DeepLabCut Version: DeepLabCut-core tf 2.2 alpha
Anaconda env used: DLC-GPU (cloned the DLC-GPU env and uninstalled CUDA and cuDNN)
Tensorflow Version: TF2.3, TF2.4, or tf-nightly, installed with pip (see below)
Cuda version: 11.0 and 11.1 (see below)

Hi everyone,
First of all, I want to say thank you to the DeepLabCut team! I have been using DLC for whisker tracking on an RTX 2060 for a while, and it significantly facilitates my project.
Recently, I got an RTX 3080 in the lab. However, I had a hard time setting it up for DLC due to compatibility issues. First, I noticed that the RTX 3000 series does not support CUDA 10.x or earlier versions, so I installed CUDA 11.0 or CUDA 11.1 with the corresponding cuDNN on my Windows machine. I also cloned the DLC-GPU conda environment and uninstalled the original CUDA and cuDNN in the environment to prevent conflicts.
TensorFlow starts to support CUDA 11.0 from TensorFlow 2.4, so I installed TensorFlow 2.4 or tf-nightly-2.5 in the conda environment (via pip). I also tried TF 2.3 to check whether it is indeed incompatible with CUDA 11.x. I followed
https://github.com/DeepLabCut/DeepLabCut-core/blob/tf2.2alpha/Colab_TrainNetwork_VideoAnalysis_TF2.ipynb
to install DeepLabCut-core tf 2.2 alpha and tf-slim and to run deeplabcut-core. However, I could not get it to start training in any of the settings.
Here is the summary
CUDA 11.0 | TF-2.3 | TF cannot recognize GPU as it is looking for .dll files that only exist in CUDA10.x
CUDA 11.0 | TF-2.4 | TF can recognize GPU smoothly, cannot start training with an error message (see Notes 1)
CUDA 11.0 | TF-nightly | TF can recognize GPU smoothly, cannot start training with an error message (see Notes 1)
CUDA 11.1 | TF-2.4| TF can recognize GPU with a trick (see Notes 2), cannot start training with no error message
CUDA 11.1 | TF-nightly | TF can recognize GPU with a trick (see Notes 2), cannot start training with no error message
I tested some simple TensorFlow script (https://www.tensorflow.org/tutorials/quickstart/advanced), they seemed to work fine on GPU in the last 4 configurations that I listed above.

Notes 1: Error message: "failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED". I also saw VRAM usage explode in the Windows Task Manager after I started training. I tried to restrict memory use with "config.gpu_options.per_process_gpu_memory_fraction = 0.6", but it did not help, unfortunately.

Notes 2: TF could not recognize the GPU because it could not find "cusolver64_10.dll", which exists in CUDA 11.0 but was replaced by "cusolver64_11.dll" in CUDA 11.1. So I copied "cusolver64_11.dll" and renamed it "cusolver64_10.dll". Although TF can recognize the GPU after that, it cannot start training. I saw VRAM usage increase (but not explode) in Task Manager after training started, and after ~30 seconds ipython or python just closed itself without any error message.
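For what it's worth, the TF2-style way to express the memory setting from Notes 1 (and to confirm the card is visible at all) is via tf.config; a minimal sketch, with no claim that it avoids the crash described above:

    import tensorflow as tf

    # List the GPUs TensorFlow can see (should include the RTX 3080 if the
    # CUDA 11 / cuDNN DLLs are found).
    gpus = tf.config.list_physical_devices('GPU')
    print(gpus)

    # Allocate VRAM on demand instead of grabbing it all up front; this is the
    # TF2 replacement for per_process_gpu_memory_fraction and must be called
    # before anything touches the GPU.
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)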

I also carefully followed the suggestions in DeepLabCut/DeepLabCut#944. They are very useful suggestions. However, I still cannot get my RTX 3080 to work.

Do you have any more suggestions that I could try?
Does anyone have a guide to set DLC-Core on RTX 3000 Series?

Thank you in advance

Issues with installing matplotlib

Specs:
OS: Windows 10
Graphics card: RTX3070
CUDA: 9.0
Python: 3.9

Because 3000-series cards do not work with TensorFlow 1.x, I'm trying to run headless DeepLabCut with TensorFlow 2.x.

Issue:
When I make a fresh Anaconda environment and run pip install git+https://github.com/DeepLabCut/DeepLabCut-core.git@tf2.2alpha
(a command I retrieved from the Colab notebook), I am unable to install matplotlib.

>>>pip install git+https://github.com/DeepLabCut/DeepLabCut-core.git@tf2.2alpha
Collecting git+https://github.com/DeepLabCut/DeepLabCut-core.git@tf2.2alpha
  Cloning https://github.com/DeepLabCut/DeepLabCut-core.git (to revision tf2.2alpha) to c:\users\jc\appdata\local\temp\pip-req-build-r3dhdv6n
Collecting certifi
  Using cached certifi-2020.12.5-py2.py3-none-any.whl (147 kB)
Collecting chardet
  Using cached chardet-4.0.0-py2.py3-none-any.whl (178 kB)
Collecting click
  Using cached click-7.1.2-py2.py3-none-any.whl (82 kB)
Collecting easydict
  Using cached easydict-1.9.tar.gz (6.4 kB)
Collecting h5py~=2.7
  Using cached h5py-2.10.0.tar.gz (301 kB)
Collecting intel-openmp
  Using cached intel_openmp-2021.1.2-py2.py3-none-win_amd64.whl (3.3 MB)
Collecting imgaug
  Using cached imgaug-0.4.0-py2.py3-none-any.whl (948 kB)
Collecting ipython
  Using cached ipython-7.19.0-py3-none-any.whl (784 kB)
Collecting ipython-genutils
  Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)
Collecting matplotlib==3.0.3
  Using cached matplotlib-3.0.3.tar.gz (36.6 MB)
    ERROR: Command errored out with exit status 1:
     command: 'C:\Users\JC\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\JC\\AppData\\Local\\Temp\\pip-install-cy4dervr\\matplotlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\JC\\AppData\\Local\\Temp\\pip-install-cy4dervr\\matplotlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\JC\AppData\Local\Temp\pip-pip-egg-info-wqn2llym'
         cwd: C:\Users\JC\AppData\Local\Temp\pip-install-cy4dervr\matplotlib\
    Complete output (47 lines):
    ============================================================================
    Edit setup.cfg to change the build options

    BUILDING MATPLOTLIB
                matplotlib: yes [3.0.3]
                    python: yes [3.9.1 (tags/v3.9.1:1e5d33e, Dec  7 2020,
                            17:08:21) [MSC v.1927 64 bit (AMD64)]]
                  platform: yes [win32]

    REQUIRED DEPENDENCIES AND EXTENSIONS
                     numpy: yes [not found. pip may install it below.]
          install_requires: yes [handled by setuptools]
                    libagg: yes [pkg-config information for 'libagg' could not
                            be found. Using local copy.]
                  freetype: no  [The C/C++ header for freetype
                            (freetype2\ft2build.h) could not be found.  You may
                            need to install the development package.]
                       png: no  [The C/C++ header for png (png.h) could not be
                            found.  You may need to install the development
                            package.]
                     qhull: yes [pkg-config information for 'libqhull' could not
                            be found. Using local copy.]

    OPTIONAL SUBPACKAGES
               sample_data: yes [installing]
                  toolkits: yes [installing]
                     tests: no  [skipping due to configuration]
            toolkits_tests: no  [skipping due to configuration]

    OPTIONAL BACKEND EXTENSIONS
                       agg: yes [installing]
                     tkagg: yes [installing; run-time loading from Python Tcl /
                            Tk]
                    macosx: no  [Mac OS-X only]
                 windowing: yes [installing]

    OPTIONAL PACKAGE DATA
                      dlls: no  [skipping due to configuration]

    ============================================================================
                            * The following required packages can not be built:
                            * freetype, png
                            * Please check http://gnuwin32.sourceforge.net/packa
                            * ges/freetype.htm for instructions to install
                            * freetype
                            * Please check http://gnuwin32.sourceforge.net/packa
                            * ges/libpng.htm for instructions to install png
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

I've tried installing freetype and libpng libraries. However, I only run into more errors once I have done that. Any idea what the issue is here?

Joe

List of functions needing to be updated from dlc to dlc-core

unable to triangulate

I have an RTX 3090 GPU, so I chose to use deeplabcutcore.

I got a TypeError: unhashable type: 'CommentedMap' while running deeplabcut.triangulate(config3d_path, video_path, videotype='avi', gputouse=0, filterpredictions=True) (having already run import deeplabcutcore as deeplabcut).

I also found that if I set filterpredictions=False, I get a different error: IndexError: list index out of range.

If I use import deeplabcut instead, it works well, but really slowly!

Hope you can help.
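One hedged debugging step that might narrow this down: the TypeError comes from pandas trying to hash the entries of cfg['bodyparts'] inside filterpredictions, so checking what the 2D config actually contains (plain strings vs. ruamel CommentedMap objects) could show where the bad entries come from. The config path below is a placeholder:

    from deeplabcutcore.utils import auxiliaryfunctions

    # Placeholder path to the 2D project's config.yaml (not the 3D config).
    cfg = auxiliaryfunctions.read_config('/path/to/2d-project/config.yaml')

    # Each entry should be a plain string; a CommentedMap here would explain
    # the "unhashable type: 'CommentedMap'" error during filtering.
    print([type(bp) for bp in cfg['bodyparts']])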

IndexError: list index out of range

Analyzing video D:\deeplabcut-video\3dvideos\finger-camera-1.avi using config_file_camera-1
Using snapshot-2000 for model D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1
Initializing ResNet
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1\train\snapshot-2000
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1\train\snapshot-2000
Starting to analyze %  D:\deeplabcut-video\3dvideos\finger-camera-1.avi
Video already analyzed! D:\deeplabcut-video\3dvideos\finger-camera-1DLC_resnet50_finger3d-camera1Mar5shuffle1_2000.h5
The videos are analyzed. Now your research can truly start! 
 You can create labeled videos with 'create_labeled_video'.
If the tracking is not satisfactory for some videos, consider expanding the training set. You can use the function 'extract_outlier_frames' to extract any outlier frames!
D:\deeplabcut-video\3dvideos finger-camera-1 DLC_resnet50_finger3d-camera1Mar5shuffle1_2000
Analyzing video D:\deeplabcut-video\3dvideos\finger-camera-5.avi using config_file_camera-5
Snapshotindex is set to 'all' in the config.yaml file. Running video analysis with all snapshots is very costly! Use the function 'evaluate_network' to choose the best the snapshot. For now, changing snapshot index to -1!
Using snapshot-2000 for model D:/deeplabcut-video/finger3d-camera5-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera5Mar5-trainset95shuffle1
Initializing ResNet
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera5-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera5Mar5-trainset95shuffle1\train\snapshot-2000
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera5-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera5Mar5-trainset95shuffle1\train\snapshot-2000
Starting to analyze %  D:\deeplabcut-video\3dvideos\finger-camera-5.avi
Video already analyzed! D:\deeplabcut-video\3dvideos\finger-camera-5DLC_resnet50_finger3d-camera5Mar5shuffle1_2000.h5
The videos are analyzed. Now your research can truly start! 
 You can create labeled videos with 'create_labeled_video'.
If the tracking is not satisfactory for some videos, consider expanding the training set. You can use the function 'extract_outlier_frames' to extract any outlier frames!
D:\deeplabcut-video\3dvideos finger-camera-5 DLC_resnet50_finger3d-camera5Mar5shuffle1_2000
Undistorting...
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-24-682fd20e3c04> in <module>
      4 video_path = 'D:\\deeplabcut-video\\3dvideos'
      5 
----> 6 deeplabcut.triangulate(config3d_path, video_path, videotype='avi', gputouse=0, filterpredictions=False)

~\.conda\envs\deeplabcutcore\lib\site-packages\deeplabcutcore\pose_estimation_3d\triangulation.py in triangulate(config, video_path, videotype, filterpredictions, filtertype, gputouse, destfolder, save_as_csv)
    212             #undistort points for this pair
    213             print("Undistorting...")
--> 214             dataFrame_camera1_undistort,dataFrame_camera2_undistort,stereomatrix,path_stereo_file = undistort_points(config,dataname,str(cam_names[0]+'-'+cam_names[1]),destfolder)
    215             if len(dataFrame_camera1_undistort) != len(dataFrame_camera2_undistort):
    216                 import warnings

~\.conda\envs\deeplabcutcore\lib\site-packages\deeplabcutcore\pose_estimation_3d\triangulation.py in undistort_points(config, dataframe, camera_pair, destfolder)
    314     if True:
    315         # Create an empty dataFrame to store the undistorted 2d coordinates and likelihood
--> 316         dataframe_cam1 = pd.read_hdf(dataframe[0])
    317         dataframe_cam2 = pd.read_hdf(dataframe[1])
    318         scorer_cam1 = dataframe_cam1.columns.get_level_values(0)[0]

IndexError: list index out of range

TypeError: unhashable type: 'CommentedMap'

Analyzing video D:\deeplabcut-video\3dvideos\finger-camera-1.avi using config_file_camera-1
Using snapshot-2000 for model D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1
Initializing ResNet
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1\train\snapshot-2000
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1\train\snapshot-2000
0it [00:00, ?it/s]
Starting to analyze %  D:\deeplabcut-video\3dvideos\finger-camera-1.avi
Video already analyzed! D:\deeplabcut-video\3dvideos\finger-camera-1DLC_resnet50_finger3d-camera1Mar5shuffle1_2000.h5
The videos are analyzed. Now your research can truly start! 
 You can create labeled videos with 'create_labeled_video'.
If the tracking is not satisfactory for some videos, consider expanding the training set. You can use the function 'extract_outlier_frames' to extract any outlier frames!
D:\deeplabcut-video\3dvideos finger-camera-1 DLC_resnet50_finger3d-camera1Mar5shuffle1_2000
Filtering with median model D:\deeplabcut-video\3dvideos\finger-camera-1.avi

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\arrays\categorical.py in __init__(self, values, categories, ordered, dtype, fastpath)
    342             try:
--> 343                 codes, categories = factorize(values, sort=True)
    344             except TypeError as err:

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\algorithms.py in factorize(values, sort, na_sentinel, size_hint)
    677         codes, uniques = _factorize_array(
--> 678             values, na_sentinel=na_sentinel, size_hint=size_hint, na_value=na_value
    679         )

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\algorithms.py in _factorize_array(values, na_sentinel, size_hint, na_value, mask)
    500     uniques, codes = table.factorize(
--> 501         values, na_sentinel=na_sentinel, na_value=na_value, mask=mask
    502     )

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.factorize()

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable._unique()

TypeError: unhashable type: 'CommentedMap'

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-25-3fd320d1d100> in <module>
      4 video_path = 'D:\\deeplabcut-video\\3dvideos'
      5 
----> 6 deeplabcut.triangulate(config3d_path, video_path, videotype='avi', gputouse=0, filterpredictions=True)

~\.conda\envs\deeplabcutcore\lib\site-packages\deeplabcutcore\pose_estimation_3d\triangulation.py in triangulate(config, video_path, videotype, filterpredictions, filtertype, gputouse, destfolder, save_as_csv)
    205                     print(destfolder, vname , DLCscorer)
    206                     if filterpredictions:
--> 207                         filtering.filterpredictions(config_2d,[video],videotype=videotype,shuffle=shuffle,trainingsetindex=trainingsetindex,filtertype=filtertype,destfolder=destfolder)
    208                         dataname.append(os.path.join(destfolder,vname + DLCscorer + '.h5'))
    209 

~\.conda\envs\deeplabcutcore\lib\site-packages\deeplabcutcore\post_processing\filtering.py in filterpredictions(config, video, videotype, shuffle, trainingsetindex, filtertype, windowlength, p_bound, ARdegree, MAdegree, alpha, save_as_csv, destfolder)
    108                     Dataframe = pd.read_hdf(sourcedataname,'df_with_missing')
    109                     for bpindex,bp in tqdm(enumerate(cfg['bodyparts'])):
--> 110                         pdindex = pd.MultiIndex.from_product([[scorer], [bp], ['x', 'y','likelihood']],names=['scorer', 'bodyparts', 'coords'])
    111                         x,y,p=Dataframe[scorer][bp]['x'].values,Dataframe[scorer][bp]['y'].values,Dataframe[scorer][bp]['likelihood'].values
    112 

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\indexes\multi.py in from_product(cls, iterables, sortorder, names)
    558             iterables = list(iterables)
    559 
--> 560         codes, levels = factorize_from_iterables(iterables)
    561         if names is lib.no_default:
    562             names = [getattr(it, "name", None) for it in iterables]

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\arrays\categorical.py in factorize_from_iterables(iterables)
   2723         # For consistency, it should return a list of 2 lists.
   2724         return [[], []]
-> 2725     return map(list, zip(*(factorize_from_iterable(it) for it in iterables)))

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\arrays\categorical.py in <genexpr>(.0)
   2723         # For consistency, it should return a list of 2 lists.
   2724         return [[], []]
-> 2725     return map(list, zip(*(factorize_from_iterable(it) for it in iterables)))

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\arrays\categorical.py in factorize_from_iterable(values)
   2695         # but only the resulting categories, the order of which is independent
   2696         # from ordered. Set ordered to False as default. See GH #15457
-> 2697         cat = Categorical(values, ordered=False)
   2698         categories = cat.categories
   2699         codes = cat.codes

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\arrays\categorical.py in __init__(self, values, categories, ordered, dtype, fastpath)
    343                 codes, categories = factorize(values, sort=True)
    344             except TypeError as err:
--> 345                 codes, categories = factorize(values, sort=False)
    346                 if dtype.ordered:
    347                     # raise, as we don't have a sortable data structure and so

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\algorithms.py in factorize(values, sort, na_sentinel, size_hint)
    676 
    677         codes, uniques = _factorize_array(
--> 678             values, na_sentinel=na_sentinel, size_hint=size_hint, na_value=na_value
    679         )
    680 

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\algorithms.py in _factorize_array(values, na_sentinel, size_hint, na_value, mask)
    499     table = hash_klass(size_hint or len(values))
    500     uniques, codes = table.factorize(
--> 501         values, na_sentinel=na_sentinel, na_value=na_value, mask=mask
    502     )
    503 

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.factorize()

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable._unique()

TypeError: unhashable type: 'CommentedMap'
