
dlcutils's Introduction

Welcome! 👋

DeepLabCut™️ is a toolbox for state-of-the-art markerless pose estimation of animals performing various behaviors. As long as you can see (label) what you want to track, you can use this toolbox, as it is animal and object agnostic. Read a short development and application summary below.

Please see the documentation for all the information you need to get started! Please note that we currently support only Python 3.10+ (see the conda files for guidance).

Developers Stable Release:

  • Very quick start: you need to have TensorFlow installed (up to v2.10 is supported across platforms). Then run pip install "deeplabcut[gui,tf]", which includes all functions plus the GUIs, or pip install "deeplabcut[tf]" for the headless version (with PyTorch and TensorFlow); a quick import check follows below.
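A quick way to confirm the install works is to import the package and print its version (a minimal sketch; it assumes the pip install above succeeded):

    import deeplabcut

    # If this prints a version string, the package is importable.
    print(deeplabcut.__version__)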

Developers Alpha Release:

We recommend using our conda file (see here) or the new deeplabcut-docker package.

Our docs walk you through using DeepLabCut and key API points. For an overview of the toolbox and the workflow for project management, see our step-by-step Nature Protocols paper.

For a deeper understanding and more resources for you to get started with Python and DeepLabCut, please check out our free online course! http://DLCcourse.deeplabcut.org

🐭 pose tracking of single animals demo Open in Colab

🐭🐭🐭 pose tracking of multiple animals demo Open in Colab

  • See more demos here. We provide data and several Jupyter Notebooks: one that walks you through a demo dataset to test your installation, and another that runs DeepLabCut from the beginning on your own data. We also show you how to use the code in Docker and on Google Colab. A minimal sketch of the overall workflow follows below.
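For orientation, here is a minimal sketch of the high-level Python workflow the notebooks walk through; the project name, experimenter, and video paths are placeholders, and all optional arguments are omitted:

    import deeplabcut

    # Create a project from one or more videos (placeholder paths).
    config_path = deeplabcut.create_new_project(
        "demo-project", "your-name", ["/path/to/video.mp4"]
    )

    deeplabcut.extract_frames(config_path)           # select frames to label
    deeplabcut.label_frames(config_path)             # opens the labeling GUI
    deeplabcut.create_training_dataset(config_path)
    deeplabcut.train_network(config_path)
    deeplabcut.evaluate_network(config_path)
    deeplabcut.analyze_videos(config_path, ["/path/to/video.mp4"])
    deeplabcut.create_labeled_video(config_path, ["/path/to/video.mp4"])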

Why use DeepLabCut?

In 2018, we demonstrated the capabilities for trail tracking, reaching in mice, and various Drosophila behaviors during egg-laying (see Mathis et al. for details). There is, however, nothing specific that makes the toolbox only applicable to these tasks and/or species: it has already been successfully applied (by us and others) to rats, humans, various fish species, bacteria, leeches, various robots, cheetahs, mouse whiskers, and race horses.

DeepLabCut utilized the feature detectors (ResNets + readout layers) of one of the state-of-the-art algorithms for human pose estimation, DeeperCut by Insafutdinov et al., which inspired the name of our toolbox (see references below). Since then, the package has changed substantially. The code was re-tooled and re-factored for 2.1+: we added faster, higher-performance variants with MobileNetV2, EfficientNet, and our own DLCRNet backbones (see Pretraining boosts out-of-domain robustness for pose estimation and Lauer et al. 2022). We also improved inference speed, provided additional and novel augmentation methods, and added real-time and multi-animal support.

In v3.0+ we changed the backend to support PyTorch. This brings not only an easier installation process for users, but also performance gains, developer flexibility, and a lot of new tools! Importantly, the high-level API stays the same, so it will be a seamless transition for users 💜! We currently provide state-of-the-art performance for animal pose estimation, and the labs (M. Mathis Lab and A. Mathis Group) have both top journal and computer vision conference papers.

Left: Due to transfer learning it requires little training data for multiple, challenging behaviors (see Mathis et al. 2018 for details). Mid Left: The feature detectors are robust to video compression (see Mathis/Warren for details). Mid Right: It allows 3D pose estimation with a single network and camera (see Mathis/Warren). Right: It allows 3D pose estimation with a single network trained on data from multiple cameras together with standard triangulation methods (see Nath* and Mathis* et al. 2019).

DeepLabCut is embedded in a larger open-source ecosystem, providing behavioral tracking for neuroscience, ecology, medical, and technical applications. Moreover, many new tools are being actively developed. See DLC-Utils for some helper code.

Code contributors:

DLC code was originally developed by Alexander Mathis & Mackenzie Mathis, and was extended in 2.0 with the core dev team consisting of Tanmay Nath (2.0-2.1), and currently (2.1+) with Jessy Lauer and (2.3+) Niels Poulsen. DeepLabCut is an open-source tool and has benefited from suggestions and edits by many individuals including Mert Yuksekgonul, Tom Biasi, Richard Warren, Ronny Eichler, Hao Wu, Federico Claudi, Gary Kane and Jonny Saunders as well as the 100+ contributors. Please see AUTHORS for more details!

This is an actively developed package and we welcome community development and involvement.

Get Assistance & be part of the DLC Community✨:

| 🚉 Platform | 🎯 Goal | ⏱️ Estimated Response Time | 📢 Support Squad |
| --- | --- | --- | --- |
| Image.sc forum (🐭 tag: DeepLabCut) | To ask help and support questions 👋 | Promptly 🔥 | DLC Team and the DLC Community |
| GitHub DeepLabCut/Issues | To report bugs and code issues 🐛 (we encourage you to search existing issues first) | 2-3 days | DLC Team |
| Gitter | To discuss with other users, share ideas, and collaborate 💡 | 2 days | The DLC Community |
| GitHub DeepLabCut/Contributing | To contribute your expertise and experience 🙏💯 | Promptly 🔥 | DLC Team |
| 🚧 GitHub DeepLabCut/Roadmap | To learn more about our journey ✈️ | N/A | N/A |
| Twitter | To keep up with our latest news and updates 📢 | Daily | DLC Team |
| The DeepLabCut AI Residency Program | To come and work with us next summer 👏 | Annually | DLC Team |

References:

If you use this code or data, we kindly ask that you please cite Mathis et al., 2018, and, if you use the Python package (DeepLabCut 2.x), please also cite Nath, Mathis et al., 2019. If you utilize the MobileNetV2s or EfficientNets, please cite Mathis, Biasi et al., 2021. If you use versions 2.2beta+ or 2.2rc1+, please cite Lauer et al., 2022.

DOIs (#ProTip: to help you find citations for software, check out CiteAs.org!):

Please check out the following references for more details:

@article{Mathisetal2018,
    title = {DeepLabCut: markerless pose estimation of user-defined body parts with deep learning},
    author = {Alexander Mathis and Pranav Mamidanna and Kevin M. Cury and Taiga Abe  and Venkatesh N. Murthy and Mackenzie W. Mathis and Matthias Bethge},
    journal = {Nature Neuroscience},
    year = {2018},
    url = {https://www.nature.com/articles/s41593-018-0209-y}}

@article{NathMathisetal2019,
    title = {Using DeepLabCut for 3D markerless pose estimation across species and behaviors},
    author = {Nath*, Tanmay and Mathis*, Alexander and Chen, An Chi and Patel, Amir and Bethge, Matthias and Mathis, Mackenzie W},
    journal = {Nature Protocols},
    year = {2019},
    url = {https://doi.org/10.1038/s41596-019-0176-0}}
    
@InProceedings{Mathis_2021_WACV,
    author    = {Mathis, Alexander and Biasi, Thomas and Schneider, Steffen and Yuksekgonul, Mert and Rogers, Byron and Bethge, Matthias and Mathis, Mackenzie W.},
    title     = {Pretraining Boosts Out-of-Domain Robustness for Pose Estimation},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {1859-1868}}
    
@article{Lauer2022MultianimalPE,
    title={Multi-animal pose estimation, identification and tracking with DeepLabCut},
    author={Jessy Lauer and Mu Zhou and Shaokai Ye and William Menegas and Steffen Schneider and Tanmay Nath and Mohammed Mostafizur Rahman and Valentina Di Santo and Daniel Soberanes and Guoping Feng and Venkatesh N. Murthy and George Lauder and Catherine Dulac and M. Mathis and Alexander Mathis},
    journal={Nature Methods},
    year={2022},
    volume={19},
    pages={496 - 504}}

@article{insafutdinov2016eccv,
    title = {DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model},
    author = {Eldar Insafutdinov and Leonid Pishchulin and Bjoern Andres and Mykhaylo Andriluka and Bernt Schiele},
    booktitle = {ECCV'16},
    url = {http://arxiv.org/abs/1605.03170}}

Review & Educational articles:

@article{Mathis2020DeepLT,
    title={Deep learning tools for the measurement of animal behavior in neuroscience},
    author={Mackenzie W. Mathis and Alexander Mathis},
    journal={Current Opinion in Neurobiology},
    year={2020},
    volume={60},
    pages={1-11}}

@article{Mathis2020Primer,
    title={A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives},
    author={Alexander Mathis and Steffen Schneider and Jessy Lauer and Mackenzie W. Mathis},
    journal={Neuron},
    year={2020},
    volume={108},
    pages={44-65}}

Other open-access pre-prints related to our work on DeepLabCut:

@article{MathisWarren2018speed,
    author = {Mathis, Alexander and Warren, Richard A.},
    title = {On the inference speed and video-compression robustness of DeepLabCut},
    year = {2018},
    doi = {10.1101/457242},
    publisher = {Cold Spring Harbor Laboratory},
    URL = {https://www.biorxiv.org/content/early/2018/10/30/457242},
    eprint = {https://www.biorxiv.org/content/early/2018/10/30/457242.full.pdf},
    journal = {bioRxiv}}

License:

This project is primarily licensed under the GNU Lesser General Public License v3.0. Note that the software is provided "as is", without warranty of any kind, express or implied. If you use the code or data, please cite us! Note that the artwork (DeepLabCut logo) and images are copyrighted; please do not take or use these images without written permission.

SuperAnimal models are provided for research use only (non-commercial use).

Major Versions:

  • For all versions, please see here.

VERSION 3.0: A whole new experience with PyTorch🔥. While the high-level API remains the same, the backend and developer friendliness have greatly improved, along with performance gains!

VERSION 2.3: Model Zoo SuperAnimals, and a whole new GUI experience.

VERSION 2.2: Multi-animal pose estimation, identification, and tracking with DeepLabCut is supported (as well as single-animal projects).

VERSION 2.0-2.1: This is the Python package of DeepLabCut that was originally released in Oct 2018 with our Nature Protocols paper (preprint here). This package includes graphical user interfaces to label your data and take you from dataset creation to automatic behavioral analysis. It also introduces an active learning framework to efficiently use DeepLabCut on large experimental projects, and data augmentation tools that improve network performance, especially in challenging cases.

VERSION 1.0: The initial, Nature Neuroscience version of DeepLabCut can be found in the history of git, or here: https://github.com/DeepLabCut/DeepLabCut/releases/tag/1.11

News (and in the news):

💜 We released a major update, moving from 2.x --> 3.x with the backend change to PyTorch

💜 The DeepLabCut Model Zoo launches SuperAnimals, see more here.

💜 DeepLabCut supports multi-animal pose estimation! maDLC is out of beta/rc mode, and beta is deprecated; thanks to all the testers out there for feedback! Your labeled data will be backwards compatible, but not all other steps will be. Please see the 2.2+ releases for what's new and how to install, see our new paper, Lauer et al. 2022, and the new docs on how to use it!

💜 We support multi-animal re-identification, see Lauer et al 2022.

💜 We have a real-time package available! http://DLClive.deeplabcut.org

dlcutils's People

Contributors

alexemg, alyetama, fedeclaudi, jakeshirey, mmathislab, polarbean


dlcutils's Issues

Running MotionMapper for multiple DLC csvs and mapping onto same behavioral space

Hi,

I just had two questions: (1) Can I use MotionMapper to map multiple CSVs from DLC onto one single postural map? Do I just put multiple file names into deeplabcut_into_motionMapper.m for the variable deeplabcutfile (e.g. deeplabcutfile = 'data1.csv','data2.csv')? Will this do what I described above?

(2) How do I map one dataset's postural time series data onto another dataset's? This was done in the original paper (section 4.4 of Berman et al.) for male and female flies.

Thanks,
Annie
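On question (1), one possible pre-processing route, sketched in Python rather than MATLAB (file names are placeholders, and whether deeplabcut_into_motionMapper.m accepts several files directly is exactly the open question above): concatenate the recordings into one CSV first, then embed that single file so all recordings share one postural map.

    import pandas as pd

    # DLC CSVs carry a three-row header: scorer, bodyparts, coords.
    files = ["data1.csv", "data2.csv"]
    frames = [pd.read_csv(f, header=[0, 1, 2], index_col=0) for f in files]

    # Stack the recordings along the time axis and write one merged file.
    merged = pd.concat(frames, ignore_index=True)
    merged.to_csv("merged_for_motionmapper.csv")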

Times between frames and total length of movement

Greetings

We have a trained 3D DLC system with which we want to analyze the speed and response time in short macaque monkey videos. I went through the steps and obtained a trained 3D DLC network and the per-video data from DLC.
First, how do I know how much time elapses between two frames (we have the Excel table for a video that DLC analyzed)?
Second, I would love to know whether there is existing code I can use to get the total length of movement of a particular limb (e.g. a finger) in one video.

(screenshot attached)

Many thanks in advance
Dvir
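Both quantities can be computed from the video metadata and the DLC output. A minimal Python sketch, assuming the video reports its true frame rate, that the file is a 3D output whose coords are x, y, z (for a 2D file, select only the x and y columns first), and that "finger" is a placeholder bodypart name:

    import cv2
    import numpy as np
    import pandas as pd

    # Time between consecutive frames is 1 / fps.
    cap = cv2.VideoCapture("video.mp4")      # placeholder file name
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()
    dt = 1.0 / fps

    # Total movement of one limb: sum of per-frame displacements.
    df = pd.read_hdf("video_DLC_3D.h5")      # placeholder file name
    scorer = df.columns.get_level_values(0)[0]
    xyz = df[scorer]["finger"].to_numpy()    # (nframes, 3) array
    steps = np.linalg.norm(np.diff(xyz, axis=0), axis=1)
    total_length = np.nansum(steps)
    print(f"dt = {dt:.4f} s, total movement = {total_length:.1f} (calibration units)")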

Time Spent in ROI Issue

Hi there. I am using time_in_ROI.py to analyze open-field behavior of rats with DLC markers. Specifically, I am looking at the time spent in the center of the open field; my only ROI is the "center" box. When I run the code that outputs the time spent in the ROIs, I get odd numbers for the [avg_time_in_roi_sec] variable. The numbers for time spent in the center are unrealistically high and do not match the animal's movement. I attached a plot of the animal's "back" marker distance from the center: you can see that [avg_time_in_roi_sec] outputs about 145 seconds, while the animal barely enters the center (the "center" box extends about 80 pixels in either direction of the center). All other output variables from time_in_ROI.py match the animal's behavior. I appreciate the help! - Victoria

(two screenshots attached)

ROI analysis on Colab

Hey, thanks for DLC and DLC-Utils, they're amazing :) I'm trying to run this piece of code for ROI analysis on Colab:

bodyparts=Dataframe.columns.get_level_values(1) #you can read out the header to get body part names!
bodyparts2plot=bodyparts #you could also take a subset, i.e. =['snout']
%matplotlib inline
PlottingResults(Dataframe,bodyparts2plot,alphavalue=.2,pcutoff=.5,fs=(8,4))

But I keep getting "KeyError: 'likelihood'". Can you help me, please? Thanks in advance!
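A first debugging step (a hedged sketch; a usual culprit is a column index without the expected three levels, e.g. a 3D or re-exported file, or a CSV read without its multi-row header):

    import pandas as pd

    Dataframe = pd.read_hdf("your_video_DLC.h5")  # placeholder file name
    print(Dataframe.columns.nlevels)        # expect 3: scorer, bodyparts, coords
    print(Dataframe.columns.get_level_values(-1).unique())  # expect x, y, likelihood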

Error when running code

I have been following all instructions, but I still got the following error:

Error in deeplabcut_into_motionMapper (line 119)
[embeddingValues{i},~] = run_tSne(data,parameters);

Could you help me out with this?

Need help with code

Hello! I've been following the code step by step, and it has been working up to this point:

(screenshot attached)

After that, I don't see any output (X or Y coordinates or anything related to the video). What should I do next?

Issue with file

When I try to execute:

import os
from pathlib import Path
import pandas as pd

video = 'IIR1DIS75T1.mp4'
DLCscorer = 'IIR1DIS75T1DLC_resnet50_APNov2shuffle1_190000_filtered'
dataname = str(Path(video).stem) + DLCscorer + '.h5'

# loading output of DLC
Dataframe = pd.read_hdf(os.path.join(dataname))

this error is shown:


FileNotFoundError                         Traceback (most recent call last)
<ipython-input> in <module>
      4
      5 # loading output of DLC
----> 6 Dataframe = pd.read_hdf(os.path.join(dataname))

/usr/local/lib/python3.7/dist-packages/pandas/io/pytables.py in read_hdf(path_or_buf, key, mode, errors, where, start, stop, columns, iterator, chunksize, **kwargs)
    425
    426     if not exists:
--> 427         raise FileNotFoundError(f"File {path_or_buf} does not exist")
    428
    429     store = HDFStore(path_or_buf, mode=mode, errors=errors, **kwargs)

FileNotFoundError: File IIR1DIS75T1IIR1DIS75T1DLC_resnet50_APNov2shuffle1_190000_filtered.h5 does not exist
(screenshot attached)
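Reading the final error message, the video stem appears twice in the resolved name (IIR1DIS75T1IIR1DIS75T1DLC_...), which suggests the DLCscorer string already begins with the video stem. A hedged sketch of a fix, keeping only the scorer suffix and failing early with the resolved path:

    import os
    from pathlib import Path
    import pandas as pd

    video = 'IIR1DIS75T1.mp4'
    DLCscorer = 'DLC_resnet50_APNov2shuffle1_190000_filtered'  # suffix only
    dataname = str(Path(video).stem) + DLCscorer + '.h5'

    # Surface the absolute path if the file is missing or in another folder.
    assert os.path.isfile(dataname), f"not found: {os.path.abspath(dataname)}"
    Dataframe = pd.read_hdf(dataname)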

DLC Motionmapper parallel pool paused?

I ran deeplabcut_into_motionMapper.m using data I got from DLC, and for some reason it stops once it generates figure 7 (see picture below). I have tried running this several times, and it always pauses there. It also says "Parallel pool using the 'local' profile is shutting down."

(screenshot attached)

time_in_each_roi BUG: counting time when point not in ROI

As reported by a user, time_in_each_roi considers the tracked point to be in the closest ROI even if it is, in fact, outside the specified ROI box. The desired behaviour is that only the time spent while the point is inside the box should be counted.

I'm reporting this here for reference and will fix it ASAP; I will post here when it is fixed.
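A minimal sketch of the intended behaviour (an illustration, not the shipped code): a frame is assigned to an ROI only when the tracked point actually lies inside that ROI's box, and frames outside every box stay unassigned, so they add no time to any ROI.

    import numpy as np

    def roi_at_each_frame(bp_data, rois):
        # bp_data: (nframes, 2) array of x, y; rois: {name: ((x0, y0), (x1, y1))}
        labels = np.full(len(bp_data), None, dtype=object)
        for name, ((x0, y0), (x1, y1)) in rois.items():
            inside = ((bp_data[:, 0] >= x0) & (bp_data[:, 0] <= x1)
                      & (bp_data[:, 1] >= y0) & (bp_data[:, 1] <= y1))
            labels[inside] = name  # overlapping ROIs: later entries win
        return labels  # None means "in no ROI", so that time is not counted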

Running the Windows 2 Unix converter on cluster

I'm running the DLC-Utils Windows-to-Unix converter on the cluster and I get the following error:

import deeplabcut
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/deeplabcut/__init__.py", line 33, in <module>
    from deeplabcut import pose_estimation_tensorflow
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/deeplabcut/pose_estimation_tensorflow/__init__.py", line 17, in <module>
    from deeplabcut.pose_estimation_tensorflow.nnet import *
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/deeplabcut/pose_estimation_tensorflow/nnet/__init__.py", line 14, in <module>
    from deeplabcut.pose_estimation_tensorflow.nnet.losses import *
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/deeplabcut/pose_estimation_tensorflow/nnet/losses.py", line 5, in <module>
    import tensorflow as tf
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tensorflow/__init__.py", line 22, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.17' not found (required by /icm/hydra/home/grid/plgmirandeitor/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so)

Any idea what to do to solve the problem?

AttributeError: 'tuple' object has no attribute 'topleft'

I have set up the rois dictionary with 'middle' as a key and a named tuple as the value, as follows:

position = namedtuple('position', ['topleft', 'bottomright'])
rois = {'middle': position((300, 400), (500, 800))}

bp_data.shape is (23188, 2); it is data from DLC2, where each of the 23188 frames contains one body-part label.

When I execute the get_roi_at_each_frame(bp_data, rois) function, I get the following error:


ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 get_roi_at_each_frame(bp_data, rois)

<ipython-input> in get_roi_at_each_frame(bp_data, rois)
     57     for idx, center in enumerate(centers):
     58         cnt = np.tile(center, data_length).reshape((data_length, 2))
---> 59         dist = np.hypot(np.subtract(cnt[:, 0], bp_data[:, 0]), np.subtract(cnt[:, 1], bp_data[:, 1]))
     60         distances[:, idx] = dist
     61

ValueError: operands could not be broadcast together with shapes (2,) (23188,)

Any help would be greatly appreciated.

DLC to MotionMapper: Error using scatter - CData must be an RGB triplet

Hello,
I am trying to use the deeplabcut_into_motionMapper.m function but got the following error:

Mean value of sigma: 0.010111
Minimum value of sigma: 0.0022873
Maximum value of sigma: 0.087411
Iteration 1: error is 99.4207, change is 1
Error using scatter (line 110)
CData must be an RGB triplet, an M-by-1 vector of the same length as X, or an M-by-3 matrix.

Error in tsne_p_sparse (line 149)
scatter(ydata(:,1),ydata(:,2),[],parameters.signalLabels,'filled')

Error in tsne_d (line 60)
[yData,errors] = tsne_p_sparse(P, parameters, no_dims, relTol);

Error in run_tSne (line 39)
[yData,betas,P,errors] = tsne_d(D,parameters);

Error in deeplabcut_into_motionMapper (line 106)
[embeddingValues{i},~] = run_tSne(BRAC36002e,parameters);

Any idea why this happened?

time spent in ROI questions

I am looking to quantify food-cup behavior in a rat -- specifically, how much time its head spends in a recessed food cup in a wall. I see that there are currently two utils to calculate time spent in a region of interest: time_in_ROI.py and polarbean's DLC_ROI tool. I have some basic questions that I have been thinking about...

  • Which of the two would be better suited to my specific task?
  • If I am only looking to quantify the food-cup behavior, do I still need to run certain DLC functions after training/evaluating the network, such as analyze videos, plot trajectories, filter predictions, etc.? I know that I should probably create the labeled videos, but I am not sure about the others.
  • I have been going through this page for time_spent_in_ROI.py and trying to understand the code. For import time_in_each_roi it says that "the function needs to be in the same folder as the notebook". What is the Jupyter notebook? Is it required for the ROI analysis? I am running DLC through my university's Linux cluster, so I am not sure how to incorporate the notebook into that. Do I need to run this on a GPU, or do I just type all the code out in a terminal using ipython? (See the sketch after this list.)

Thanks in advance.
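On the notebook question: a Jupyter notebook is not required, and this post-hoc analysis of tracked coordinates should not need a GPU. A minimal sketch of the same import from a plain ipython session on the cluster (the folder path is a placeholder):

    import sys

    # Make the folder containing time_in_each_roi.py importable.
    sys.path.append("/path/to/DLCutils")
    import time_in_each_roi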

ValueError: unsupported pickle protocol: 5

This can happen when loading a DLC pickle file across systems with different python versions.

You can convert it:

!pip3 install pickle5

import pickle
import pickle5 as p

path_to_protocol5 = '/Users/alex/Code/dlc_playingdata/MultiMouse-Daniel-2019-12-16/training-datasets/iteration-1/UnaugmentedDataSet_MultiMouseDec16/Documentation_data-MultiMouse_95shuffle0.pickle'

# Load the protocol-5 pickle with the pickle5 backport.
with open(path_to_protocol5, "rb") as fh:
    data = p.load(fh)

# Re-pickle the data using the highest protocol available to this Python.
with open(path_to_protocol5, "wb") as f:
    pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)

See https://stackoverflow.com/questions/63329657/python-3-7-error-unsupported-pickle-protocol-5
