
STAF's Introduction



OpenPose is the first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (135 keypoints in total) on single images.

It is authored by Ginés Hidalgo, Zhe Cao, Tomas Simon, Shih-En Wei, Yaadhav Raaj, Hanbyul Joo, and Yaser Sheikh. It is maintained by Ginés Hidalgo and Yaadhav Raaj. OpenPose would not be possible without the CMU Panoptic Studio dataset. We would also like to thank all the people who have helped OpenPose in any way.


Authors Ginés Hidalgo (left) and Hanbyul Joo (right) in front of the CMU Panoptic Studio

Contents

  1. Results
  2. Features
  3. Related Work
  4. Installation
  5. Quick Start Overview
  6. Send Us Feedback!
  7. Citation
  8. License

Results

Whole-body (Body, Foot, Face, and Hands) 2D Pose Estimation


Testing OpenPose: (Left) Crazy Uptown Funk flashmob in Sydney video sequence. (Center and right) Authors Ginés Hidalgo and Tomas Simon testing face and hands

Whole-body 3D Pose Reconstruction and Estimation


Tianyi Zhao testing the OpenPose 3D Module

Unity Plugin


Tianyi Zhao and Ginés Hidalgo testing the OpenPose Unity Plugin

Runtime Analysis

We show an inference-time comparison between three pose estimation libraries under the same hardware and conditions: OpenPose, Alpha-Pose (fast PyTorch version), and Mask R-CNN. The OpenPose runtime is constant, while the runtimes of Alpha-Pose and Mask R-CNN grow linearly with the number of people. More details here.

Features

  • Main Functionality:
    • 2D real-time multi-person keypoint detection:
      • 15-, 18-, or 25-keypoint body/foot keypoint estimation, including 6 foot keypoints. Runtime invariant to the number of detected people.
      • 2x21-keypoint hand keypoint estimation. Runtime depends on the number of detected people. See OpenPose Training for a runtime-invariant alternative.
      • 70-keypoint face keypoint estimation. Runtime depends on the number of detected people. See OpenPose Training for a runtime-invariant alternative.
    • 3D real-time single-person keypoint detection:
      • 3D triangulation from multiple single views.
      • Synchronization of Flir cameras handled.
      • Compatible with Flir/Point Grey cameras.
    • Calibration toolbox: Estimation of distortion, intrinsic, and extrinsic camera parameters.
    • Single-person tracking for further speedup or visual smoothing.
  • Input: Image, video, webcam, Flir/Point Grey, IP camera, and support to add your own custom input source (e.g., depth camera).
  • Output: Basic image + keypoint display/saving (PNG, JPG, AVI, ...), keypoint saving (JSON, XML, YML, ...), keypoints as array class, and support to add your own custom output code (e.g., some fancy UI).
  • OS: Ubuntu (20, 18, 16, 14), Windows (10, 8), Mac OSX, Nvidia TX2.
  • Hardware compatibility: CUDA (Nvidia GPU), OpenCL (AMD GPU), and non-GPU (CPU-only) versions.
  • Usage Alternatives:
    • Command-line demo for built-in functionality.
    • C++ API and Python API for custom functionality. E.g., adding your custom inputs, pre-processing, post-processing, and output steps.

For further details, check the major released features and release notes docs.

Related Work

Installation

If you want to use OpenPose without installing or writing any code, simply download and use the latest Windows portable version of OpenPose!

Otherwise, you could build OpenPose from source. See the installation doc for all the alternatives.

Quick Start Overview

Simply use the OpenPose Demo from your favorite command-line tool (e.g., Windows PowerShell or the Ubuntu terminal). E.g., the first (Ubuntu) command below runs OpenPose on your webcam and displays the body keypoints, while the Windows portable demo command runs it on the bundled demo video:

# Ubuntu
./build/examples/openpose/openpose.bin
:: Windows - Portable Demo
bin\OpenPoseDemo.exe --video examples\media\video.avi

You can also add any of the available flags in any order. E.g., the following example runs on a video (--video {PATH}), enables face (--face) and hand (--hand) keypoints, and saves the output keypoints as JSON files on disk (--write_json {PATH}).

# Ubuntu
./build/examples/openpose/openpose.bin --video examples/media/video.avi --face --hand --write_json output_json_folder/
:: Windows - Portable Demo
bin\OpenPoseDemo.exe --video examples\media\video.avi --face --hand --write_json output_json_folder/
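
If you use --write_json, each frame produces a JSON file whose "people" array carries a "pose_keypoints_2d" field holding a flat list of (x, y, confidence) triples (the same layout as the sample output quoted in the issues further below). The following is a minimal sketch of how such a file could be parsed in Python; the file name is hypothetical and depends on your input and output settings:

# Minimal sketch (not part of OpenPose): parse one --write_json output file.
# The file name below is hypothetical; OpenPose names the files after the input
# frame/image, so adjust it to whatever appears in your output folder.
import json

with open("output_json_folder/000000000000_keypoints.json") as f:
    frame = json.load(f)

for person in frame["people"]:
    flat = person["pose_keypoints_2d"]
    # Regroup the flat list into (x, y, confidence) triples, one per body keypoint.
    keypoints = [flat[i:i + 3] for i in range(0, len(flat), 3)]
    print(len(keypoints), "keypoints; first:", keypoints[0])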

Optionally, you can also extend OpenPose's functionality from its Python and C++ APIs. After installing OpenPose, check its official doc for a quick overview of all the alternatives and tutorials.
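
As a pointer for the Python route, here is a minimal sketch adapted from the longer user script quoted in the issues further below; the model folder path is a placeholder, and the exact import path depends on where pyopenpose was built or installed:

# Minimal Python API sketch (assumes OpenPose was built with BUILD_PYTHON and
# that pyopenpose is importable, e.g., from /usr/local/python after `make install`).
import cv2
from openpose import pyopenpose as op

params = {"model_folder": "models/"}          # placeholder path to the models folder

opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("examples/media/COCO_val2014_000000000192.jpg")
opWrapper.emplaceAndPop([datum])              # run body keypoint detection on the image

print(datum.poseKeypoints)                    # people x keypoints x (x, y, confidence)
cv2.imwrite("result.png", datum.cvOutputData) # rendered image with the skeleton overlay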

Send Us Feedback!

Our library is open source for research purposes, and we want to improve it! So let us know (create a new GitHub issue or pull request, email us, etc.) if you...

  1. Find/fix any bug (in functionality or speed) or know how to speed up or improve any part of OpenPose.
  2. Want to add/show some cool functionality/demo/project made on top of OpenPose. We can add your project link to our Community-based Projects section or even integrate it with OpenPose!

Citation

Please cite these papers in your publications if OpenPose helps your research. All of OpenPose is based on OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, while the hand and face detectors also use Hand Keypoint Detection in Single Images using Multiview Bootstrapping (the face detector was trained using the same procedure as the hand detector).

@article{8765346,
  author = {Z. {Cao} and G. {Hidalgo Martinez} and T. {Simon} and S. {Wei} and Y. A. {Sheikh}},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title = {OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  year = {2019}
}

@inproceedings{simon2017hand,
  author = {Tomas Simon and Hanbyul Joo and Iain Matthews and Yaser Sheikh},
  booktitle = {CVPR},
  title = {Hand Keypoint Detection in Single Images using Multiview Bootstrapping},
  year = {2017}
}

@inproceedings{cao2017realtime,
  author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
  booktitle = {CVPR},
  title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  year = {2017}
}

@inproceedings{wei2016cpm,
  author = {Shih-En Wei and Varun Ramakrishna and Takeo Kanade and Yaser Sheikh},
  booktitle = {CVPR},
  title = {Convolutional pose machines},
  year = {2016}
}


License

OpenPose is freely available for non-commercial use, and may be redistributed under these conditions. Please see the license for further details. Interested in a commercial license? Check this FlintBox link. For commercial queries, use the Contact section from the FlintBox link and also send a copy of that message to Yaser Sheikh.


STAF's Issues

Feature selection

1.v

TOP_5 joints variance
1 4 0.040981022197503907955606905488821212202310562134
2 3 0.040624188485369222556542467827966902405023574829
3 21 0.040333394984267481597761673128843540325760841370
4 19 0.039929118386592181433325521311417105607688426971
5 5 0.039016170265210169121328220853683887980878353119

2.a

TOP_5 joints variance
1 4 0.079356693239538089734708137257257476449012756348
2 19 0.079313491536386213076603723948210244998335838318
3 3 0.078303607239022327002331280709768179804086685181
4 21 0.078045254622060661331417463770776521414518356323
5 15 0.076616159942793610193589870505093131214380264282

3.joint distance

TOP_5 joints couple variance
1 4,20 0.659489312195387644699451357155339792370796203613
2 4,16 0.654281662051314549799485575931612402200698852539
3 4,19 0.583666011175153376377977565425680950284004211426
4 4,15 0.576293365050162931240151920064818114042282104492
5 3,20 0.495535929465969027241101230174535885453224182129

4.frame distance

TOP_5 choose_joints joints variance
1 4 20 0.687899460715868937832340179738821461796760559082
2 20 4 0.683161521189947218424265429348452016711235046387
3 4 16 0.682662311075556083039828081382438540458679199219
4 16 4 0.677066758837093107814553150092251598834991455078
5 4 19 0.612921060118330363053473774925805628299713134766

5.plane distance

TOP_5 choose_plane joints variance
1 4,23,25 16 0.072560544146205760429602094063739059492945671082
2 4,23,25 20 0.072258584290735186628218400528567144647240638733
3 4,23,25 15 0.069762446753528356557794154468865599483251571655
4 4,23,25 19 0.069232650862826802806715420501859625801444053650
5 1,23,25 20 0.057242597558232397036981353721785126253962516785

6.normal plane distance

TOP_5 plane normal joints variance
1 16 1,20 4 0.247866987153226664419847224962722975760698318481
1 4 1,20 16 0.247866987153226664419847224962722975760698318481
2 20 1,16 4 0.245400962629865659891947871074080467224121093750
2 4 1,16 20 0.245400962629865659891947871074080467224121093750
3 4 1,20 15 0.235008377098905235635939448002318385988473892212
4 4 1,16 19 0.233195035892368757179937688306381460279226303101
5 4 20,23 4 0.230468054023701157673187367436185013502836227417
5 16 20,23 16 0.230468054023701157673187367436185013502836227417

What does STAF Mean?

What's the difference from the original version of OpenPose?

Does STAF mean Spatio-Temporal Affinity Fields?
([CVPR 2019] Efficient Online Multi-Person 2D Pose Tracking with Recurrent Spatio-Temporal Affinity Fields)

cuDNN problems?

I have an error when building:

Found cuDNN: ver. ??? found (include: /usr/local/cuda/include, library: /usr/lib/x86_64-linux-gnu/libcudnn.so)
CMake Error at cmake/Modules/FindCuDNN.cmake:43 (message):
cuDNN version >3 is required.
Call Stack (most recent call first):
CMakeLists.txt:427 (find_package)

Your System Configuration

- **CUDA version**: 10.2
- **cuDNN version**: 8.0.5
- **GPU model**: 2080Ti 11GB

BODY 21A not found

I can't run the demo. With the command build/examples/openpose/openpose.bin --model_pose BODY_21A --tracking 1 --render_pose 1 I obtain:

Error:
String (`BODY_21A`) does not correspond to any model (BODY_25, COCO, MPI, MPI_4_layers).

If I try with BODY_25:

Error:
Person tracking (`--tracking` flag) is in experimental phase and only allows tracking of up to 1 person at the time. Please, also include the `--number_people_max 1` flag when using the `--tracking` flag. Tracking more than one person at the time is not expected as short- nor medium-term goal.

RTX 3080 compatibility?

I can't get STAF to work on my new PC with an RTX 3080. According to another issue in the main OpenPose repository, they made some changes to make it compatible, but I couldn't figure out exactly what.

Thanks in advance!

Result Flag?

Hi! There is no issue here, but is it possible to extract the tracking results by using any flag?

Openpose models unavailable =(

Hi! I am trying to download the models via getModels.sh and directly by the links, but I always get a 502 Bad Gateway error. Am I the only one who can't access the models now?

Evaluate MPII dataset?

Posting rules

  1. Duplicated posts will not be answered. Check the FAQ section, other GitHub issues, and general documentation before posting (e.g., low speed, out of memory, output format, 0 people detected, installation issues, ...).
  2. Fill the Your System Configuration section (all of it or it will not be answered!) if you are facing an error or unexpected behavior. Feature requests or some other type of posts might not require it.
  3. No questions about training or 3rd party libraries:
    • OpenPose only implements testing.
    • Caffe errors/issues, check Caffe documentation.
    • CUDA check failed errors: They are usually fixed by re-installing CUDA, then re-installing the proper cuDNN version, and then re-compiling (or re-installing) OpenPose. Otherwise, check for help in CUDA forums.
    • OpenCV errors: Install the default/pre-compiled OpenCV or check for online help.
  4. Set a proper issue title: add the Ubuntu/Windows keyword and be specific (e.g., do not call it: Error).
  5. Only English comments.
    Posts which do not follow these rules will be ignored, closed, or reported with no further clarification.

Issue Summary

Executed Command (if any)

Note: add --logging_level 0 --disable_multi_thread to get higher debug information.

OpenPose Output (if any)

Errors (if any)

Type of Issue

You might select multiple topics, delete the rest:

  • Compilation/installation error
  • Execution error
  • Help wanted
  • Question
  • Enhancement / offering possible extensions / pull request / etc
  • Other (type your own type)

Your System Configuration

  1. Whole console output (if errors appeared), paste the error to PasteBin and then paste the link here: LINK

  2. OpenPose version: Latest GitHub code? Or specific commit (e.g., d52878f)? Or specific version from Release section (e.g., 1.2.0)?

  3. General configuration:

    • Installation mode: CMake, sh script, manual Makefile installation, ... (Ubuntu); CMake, ... (Windows); ...?
    • Operating system (lsb_release -a in Ubuntu):
    • Operating system version (e.g., Ubuntu 16, Windows 10, ...):
    • Release or Debug mode? (by default: release):
    • Compiler (gcc --version in Ubuntu or VS version in Windows): 5.4.0, ... (Ubuntu); VS2015 Enterprise Update 3, VS2017 community, ... (Windows); ...?
  4. Non-default settings:

    • 3-D Reconstruction module added? (by default: no):
    • Any other custom CMake configuration with respect to the default version? (by default: no):
  5. 3rd-party software:

    • Caffe version: Default from OpenPose, custom version, ...?
    • CMake version (cmake --version in Ubuntu):
    • OpenCV version: pre-compiled apt-get install libopencv-dev (only Ubuntu); OpenPose default (only Windows); compiled from source? If so, 2.4.9, 2.4.12, 3.1, 3.2?; ...?
  6. If GPU mode issue:

    • CUDA version (cat /usr/local/cuda/version.txt in most cases):
    • cuDNN version:
    • GPU model (nvidia-smi in Ubuntu):
  7. If CPU-only mode issue:

    • CPU brand & model:
    • Total RAM memory available:
  8. If Python API:

    • Python version: 2.7, 3.7, ...?
    • Numpy version (python -c "import numpy; print numpy.version.version" in Ubuntu):
  9. If Windows system:

    • Portable demo or compiled library?
  10. If speed performance issue:

    • Report OpenPose timing speed based on this link.

PoseIds Python API

Hi!

Thanks for the amazing work on tracking for OpenPose. I'm trying to get poseIds from the Python API for tracking, but when I try to access this field it returns an error.

Is it possible to get back the IDs?

I can work on implementing it myself; could I have some guidance on how to output this data?

Thanks so much
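
A hedged note rather than an official answer: in the C++ API, op::Datum has a poseIds member, and whether this fork's Python bindings expose it can be checked directly from Python, for example:

# Quick check (sketch only): does this build's pyopenpose Datum expose poseIds?
from openpose import pyopenpose as op

datum = op.Datum()
print([name for name in dir(datum) if "id" in name.lower()])
# If 'poseIds' is listed, it can be read after opWrapper.emplaceAndPop([datum]),
# provided tracking was enabled when the wrapper was configured.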

Way to evaluate MPII dataset

Posting rules

  1. Duplicated posts will not be answered. Check the FAQ section, other GitHub issues, and general documentation before posting (e.g., low speed, out of memory, output format, 0 people detected, installation issues, ...).
  2. Fill the Your System Configuration section (all of it or it will not be answered!) if you are facing an error or unexpected behavior. Feature requests or some other type of posts might not require it.
  3. No questions about training or 3rd party libraries:
    • OpenPose only implements testing.
    • Caffe errors/issues, check Caffe documentation.
    • CUDA check failed errors: They are usually fixed by re-installing CUDA, then re-installing the proper cuDNN version, and then re-compiling (or re-installing) OpenPose. Otherwise, check for help in CUDA forums.
    • OpenCV errors: Install the default/pre-compiled OpenCV or check for online help.
  4. Set a proper issue title: add the Ubuntu/Windows keyword and be specific (e.g., do not call it: Error).
  5. Only English comments.
    Posts which do not follow these rules will be ignored, closed, or reported with no further clarification.

Issue Summary

Is there a way to evaluate the MPII dataset?

Executed Command (if any)

Note: add --logging_level 0 --disable_multi_thread to get higher debug information.

OpenPose Output (if any)

Errors (if any)

Type of Issue

You might select multiple topics, delete the rest:

  • Compilation/installation error
  • Execution error
  • Help wanted
  • Question
  • Enhancement / offering possible extensions / pull request / etc
  • Other (type your own type)

Your System Configuration

  1. Whole console output (if errors appeared), paste the error to PasteBin and then paste the link here: LINK

  2. OpenPose version: Latest GitHub code? Or specific commit (e.g., d52878f)? Or specific version from Release section (e.g., 1.2.0)?

  3. General configuration:

    • Installation mode: CMake, sh script, manual Makefile installation, ... (Ubuntu); CMake, ... (Windows); ...?
    • Operating system (lsb_release -a in Ubuntu):
    • Operating system version (e.g., Ubuntu 16, Windows 10, ...):
    • Release or Debug mode? (by default: release):
    • Compiler (gcc --version in Ubuntu or VS version in Windows): 5.4.0, ... (Ubuntu); VS2015 Enterprise Update 3, VS2017 community, ... (Windows); ...?
  4. Non-default settings:

    • 3-D Reconstruction module added? (by default: no):
    • Any other custom CMake configuration with respect to the default version? (by default: no):
  5. 3rd-party software:

    • Caffe version: Default from OpenPose, custom version, ...?
    • CMake version (cmake --version in Ubuntu):
    • OpenCV version: pre-compiled apt-get install libopencv-dev (only Ubuntu); OpenPose default (only Windows); compiled from source? If so, 2.4.9, 2.4.12, 3.1, 3.2?; ...?
  6. If GPU mode issue:

    • CUDA version (cat /usr/local/cuda/version.txt in most cases):
    • cuDNN version:
    • GPU model (nvidia-smi in Ubuntu):
  7. If CPU-only mode issue:

    • CPU brand & model:
    • Total RAM memory available:
  8. If Python API:

    • Python version: 2.7, 3.7, ...?
    • Numpy version (python -c "import numpy; print numpy.version.version" in Ubuntu):
  9. If Windows system:

    • Portable demo or compiled library?
  10. If speed performance issue:

    • Report OpenPose timing speed based on this link.

BODY_21A does not correspond to any model error

Hi @soulslicer, I am able to build OpenPose on an Ubuntu machine without any errors, but when I run the command:
build/examples/openpose/openpose.bin --model_pose BODY_21A --tracking 1 --render_pose 1
I get the error
String ('BODY_21A') does not correspond to any model (BODY_25, COCO, MPI, MPI_4_layers)
whereas it works for the BODY_25 model with tracking 0. Can you please help with what is going wrong?
Thanks in advance.

BODY_25B model

Could you provide the link to download the body_25_video3 model and the corresponding deploy.prototxt file?

would openpose+STAF run on Jetson AGX Xavier + Nvidia Isaac?

Hello!

I am super excited about your OpenPose+STAF implementation.

We are trying to run it on Nvidia Isaac on a Jetson AGX Xavier for an autonomous driving project.

Is there anything we should keep in mind version-wise (versions of OpenCV / OpenPose) in order to get your work with STAF running on the Jetson AGX Xavier + Isaac? Do you think it would work?

Thank you and all the best

Running inference doesn't produce person_ID's when running STAF

Issue Summary

I ran inference using the example code, and no person_id tracking happens.

Executed Command (if any)

Note: add --logging_level 0 --disable_multi_thread to get higher debug information.

./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_video output/makeithot.avi --write_json output/ --model_pose BODY_25 --render_pose 2 --tracking 0

OpenPose Output (if any)

{"version":1.3,"people":[{"person_id":[],"pose_keypoints_2d":[762.894,341.857,0.9845,727.531,436.005,0.859083,647.967,433.189,0.808931,645.041,550.937,0.807426,756.881,598.082,0.847462,812.774,438.999,0.850725,871.685,562.654,0.453353,883.382,556.819,0.714393,789.368,633.314,0.639624,736.262,645.098,0.608079,859.943,695.053,0.878571,871.723,907.091,0.592937,851.103,621.622,0.639177,1068.91,668.458,0.872738,1118.97,904.126,0.713615,742.26,321.305,0.941923,774.594,327.235,0.834386,695.1,335.955,0.884205,0,0,0,1183.71,912.903,0.352292,1180.73,907.05,0.352431,1115.98,927.514,0.442241,907.035,927.523,0.117059,898.148,933.516,0.113883,862.88,924.612,0.341001],"face_keypoints_2d":[],"hand_left_keypoints_2d":[],"hand_right_keypoints_2d":[],"pose_keypoints_3d":[],"face_keypoints_3d":[],"hand_left_keypoints_3d":[],"hand_right_keypoints_3d":[]}]}

Errors (if any)

No errors, but I was not able to reproduce the paper's results from running inference. When BODY_21A was passed, the error that the model could not be found was produced.

Type of Issue

You might select multiple topics, delete the rest:

  • Help wanted
  • Question
  • Enhancement / offering possible extensions / pull request / etc
  • Other (type your own type)
    Was not able to replicate results in README.md

Your System Configuration

  1. Whole console output (if errors appeared), paste the error to PasteBin and then paste the link here: LINK

Running regular OpenPose through Docker on GCP. There are no errors.

  1. OpenPose version: Latest GitHub code? Or specific commit (e.g., d52878f)? Or specific version from Release section (e.g., 1.2.0)?
    openpose/staf latest version

  2. General configuration:

    • Installation mode: CMake, sh script, manual Makefile installation, ... (Ubuntu); CMake, ... (Windows); ...?
    • Operating system (lsb_release -a in Ubuntu):
    • Operating system version (e.g., Ubuntu 16, Windows 10, ...):
    • Release or Debug mode? (by default: release):
    • Compiler (gcc --version in Ubuntu or VS version in Windows): 5.4.0, ... (Ubuntu); VS2015 Enterprise Update 3, VS2017 community, ... (Windows); ...?

Ubuntu deep learning machine. CUDA 10.0. OP/master runs fine.

BODY_21A not available and only one person tracking?

I've already run the .sh file, but when I run the command build/examples/openpose/openpose.bin --model_pose BODY_21A --tracking 1 --render_pose 1, the error message String (BODY_21A) does not correspond to any model (BODY_25, COCO, MPI, MPI_4_layers) appears.

And if I delete the --model_pose BODY_21A flag or run with --model_pose BODY_25, the following error message appears:

Person tracking (`--tracking` flag) is in experimental phase and only allows tracking of up to 1 person at the time. Please, also include the `--number_people_max 1` flag when using the `--tracking` flag. Tracking more than one person at the time is not expected as short- nor medium-term goal.

Is there anything I can do to run the demo with multi-person tracking? Or is it really the case that only single-person tracking is available at the moment?

Thanks!

Not support local media file????

I have finally managed to make OpenPose read video on a CentOS server, but with the following command:
build/examples/openpose/openpose.bin --model_pose BODY_21A --tracking 1 --render_pose 1 --display=0 --write_video=result/render4/result.avi --write_json=result/keypoint4/ --video example/video.avi
it only produces a 1-frame result.
I assume that since your producer supports webcams, it should also support local .avi media files, shouldn't it?

About track id

Sorry to bother you, but I wonder how to keep the track IDs when I use '--write_video' to save the processed videos. The IDs show up while the program runs, but disappear in the saved video.


Windows 10: BODY_21A cannot find caffemodel

build/bin/OpenPoseDemo.exe --model_pose BODY_21A --video C:/Users/richa/OneDrive/sample_data/011320_120906PM.MP4
Running STAF with BODY_21A keeps giving me this error:
Caffe trained model file not found: models\pose/body_21a/pose_iter_XXXXXX.caffemodel.

I have the pose_iter_264000.caffemodel and pose_deploy.prototxt in the body_21a folder within models but it still cannot properly find the caffemodel.

Trying STAF with BODY_25 works perfectly but I would like to use BODY_21A

Is multi-person tracking still in the experimental phase?

Hi, I downloaded this STAF repository. After compilation, I ran the command build/examples/openpose/openpose.bin --model_pose COCO --tracking 1 --render_pose 1 --net_resolution="320x-1" but I got the error "Error: Person tracking (--tracking flag) is in experimental phase and only allows tracking of up to 1 person at the time. Please, also include the --number_people_max 1 flag when using the --tracking flag. Tracking more than one person at the time is not expected as short- nor medium-term goal". Does this mean that multi-person tracking is not yet finished? Thank you.

CPU Version does not work

I get an error when building the CPU_ONLY build. Does it really support CPU-only mode?

Here is the build error:

poseExtractorCaffeStaf.cpp:12:18: fatal error: cuda.h: No such file or directory
compilation terminated.

Python API support?

Hi, I am wondering if there is a way to run it in Python. I tried to build the Python wrapper, and it turned out I had to modify some code to make the tracking argument pass to the correct place. However, the Python API seems to only return the result of a single image, instead of performing tracking.

Here is the code I modified to make the Python API work:

diff --git a/python/openpose/openpose_python.cpp b/python/openpose/openpose_python.cpp
index 655ca89b..af8d5616 100644
--- a/python/openpose/openpose_python.cpp
+++ b/python/openpose/openpose_python.cpp
@@ -151,8 +151,11 @@ public:
                 (float)FLAGS_hand_alpha_heatmap, (float)FLAGS_hand_render_threshold};
             opWrapper->configure(wrapperStructHand);
             // Extra functionality configuration (use WrapperStructExtra{} to disable it)
+            const op::WrapperStructTracking wrapperStructTracking{
+                FLAGS_tracking}; // Raaj: Add your flags in here
+            opWrapper->configure(wrapperStructTracking);
             const WrapperStructExtra wrapperStructExtra{
-                FLAGS_3d, FLAGS_3d_min_views, FLAGS_identification, FLAGS_tracking, FLAGS_ik_threads};
+                FLAGS_3d, FLAGS_3d_min_views, FLAGS_identification, FLAGS_ik_threads};
             opWrapper->configure(wrapperStructExtra);
             // Output (comment or use default argument to disable any output)
             const WrapperStructOutput wrapperStructOutput{

Here is my Python script:

# From Python
# It requires OpenCV installed for Python
import sys
import cv2
import os
import glob
import numpy as np
from sys import platform
from tqdm import tqdm
import argparse
import pdb

try:
    # Import Openpose (Windows/Ubuntu/OSX)
    dir_path = os.path.dirname(os.path.realpath(__file__))
    try:
        # Windows Import
        if platform == "win32":
            # Change these variables to point to the correct folder (Release/x64 etc.)
            sys.path.append(dir_path + '/../../python/openpose/Release');
            os.environ['PATH']  = os.environ['PATH'] + ';' + dir_path + '/../../x64/Release;' +  dir_path + '/../../bin;'
            import pyopenpose as op
        else:
            # Change these variables to point to the correct folder (Release/x64 etc.)
            sys.path.append('../../python');
            # If you run `make install` (default path is `/usr/local/python` for Ubuntu), you can also access the OpenPose/python module from there. This will install OpenPose and the python library at your desired installation path. Ensure that this is in your python path in order to use it.
            # sys.path.append('/usr/local/python')
            from openpose import pyopenpose as op
    except ImportError as e:
        print('Error: OpenPose library could not be found. Did you enable `BUILD_PYTHON` in CMake and have this Python script in the right folder?')
        raise e

    # Flags
    parser = argparse.ArgumentParser()
    args = parser.parse_known_args()

    # Custom Params (refer to include/openpose/flags.hpp for more parameters)
    params = dict()
    params["model_pose"] = "BODY_21A"
    params["tracking"] = 1
    params["render_pose"] = 1
    params["model_folder"] = "/xxx/openpose_multi/openpose/models/"

    # Add others in path?
    for i in range(0, len(args[1])):
        curr_item = args[1][i]
        if i != len(args[1])-1: next_item = args[1][i+1]
        else: next_item = "1"
        if "--" in curr_item and "--" in next_item:
            key = curr_item.replace('-','')
            if key not in params:  params[key] = "1"
        elif "--" in curr_item and "--" not in next_item:
            key = curr_item.replace('-','')
            if key not in params: params[key] = next_item

    # Construct it from system arguments
    # op.init_argv(args[1])
    # oppython = op.OpenposePython()

    # Starting OpenPose
    opWrapper = op.WrapperPython()
    opWrapper.configure(params)
    opWrapper.start()

    image_root = "/xxx/"
    save_root = "/xxx/"
    if not os.path.exists(save_root):
        os.makedirs(save_root)
    image_paths = glob.glob(image_root + "*.jpg")
    for image_path in tqdm(image_paths):
        # Process Image
        datum = op.Datum()
        imageToProcess = cv2.imread(image_path)
        datum.cvInputData = imageToProcess
        opWrapper.emplaceAndPop([datum])
        image_output = datum.cvOutputData
        pdb.set_trace()
        landmarks = datum.poseKeypoints
        cv2.imwrite(os.path.join(save_root, os.path.basename(image_path)[:-4] + "_pose.png"), image_output)
        np.save(os.path.join(save_root, os.path.basename(image_path)[:-4] + "_pose.npy"), landmarks)

except Exception as e:
    print(e)
    sys.exit(-1)

It seems to me that I have to pass a video into the Python wrapper to make tracking work, but I have no idea how. It would be of great help if you could provide some instructions on that. I am expecting to get tracking IDs from the detected landmarks.
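
One possible way to wire this up, sketched under the assumption that the tracker needs consecutive frames of the same video and that this build's pyopenpose exposes Datum::poseIds (neither verified here): keep the opWrapper configuration from the script above (model_pose BODY_21A, tracking enabled) and feed the video frames to it in order with OpenCV.

# Sketch only: feed consecutive video frames so the tracker has temporal context.
# Assumes opWrapper was configured and started as in the script above, and that
# pyopenpose exposes Datum::poseIds in this build.
import cv2

cap = cv2.VideoCapture("/xxx/video.avi")   # placeholder video path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    datum = op.Datum()
    datum.cvInputData = frame
    opWrapper.emplaceAndPop([datum])       # same call as for single images
    print(datum.poseIds)                   # expected to hold per-person tracking IDs
cap.release()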

Not support VIDEO?

build/examples/openpose/openpose.bin --model_pose BODY_21A --tracking 1 --render_pose 1 --video examples/media/video.avi

Starting OpenPose demo...
Configuring OpenPose...
Starting thread(s)...

Error:
VideoCapture (IP camera/video) could not be opened for path: 'examples/media/video.avi'. If it is a video path, is the path correct?

Coming from:

  • /data1/tfplus/powerchen/SMPL_X/openpose-staf/src/openpose/producer/videoCaptureReader.cpp:VideoCaptureReader():38
  • /data1/tfplus/powerchen/SMPL_X/openpose-staf/src/openpose/producer/videoCaptureReader.cpp:VideoCaptureReader():42
  • /data1/tfplus/powerchen/SMPL_X/openpose-staf/src/openpose/producer/producer.cpp:createProducer():471
  • /data1/tfplus/powerchen/SMPL_X/openpose-staf/include/openpose/wrapper/wrapperAuxiliary.hpp:configureThreadManager():1219
  • /data1/tfplus/powerchen/SMPL_X/openpose-staf/include/openpose/wrapper/wrapper.hpp:exec():444
