
monocularrgb_3d_handpose_wacv18's Introduction

Monocular RGB, real time 3D hand pose estimation in the wild

This repository contains scripts for testing the work: "Using a single RGB frame for real time 3D hand pose estimation in the wild"

You can download the full paper from here

3D Hand pose estimation on a wild youtube video

Overview

Our method enables the real-time estimation of the full 3D pose of one or more human hands using a single commodity RGB camera. Recent work in the area has shown impressive progress using RGBD input. However, since the introduction of RGBD sensors, there has been little progress for the case of monocular color input.

We capitalize on the latest advancements of deep learning, combining them with the power of generative hand pose estimation techniques to achieve real-time monocular 3D hand pose estimation in unrestricted scenarios. More specifically, given an RGB image and the relevant camera calibration information, we employ a state-of-the-art detector to localize hands.

Subsequently, we run a pretrained network that estimates the 2D locations of the hand joints (e.g., the network by Gouidis et al or by Simon et al). In the final step, non-linear least-squares minimization fits a 3D model of the hand to the estimated 2D joint positions, recovering the 3D hand pose.

Pipeline (figure)
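
To make the final step concrete, here is a minimal, self-contained sketch of fitting a pose to 2D joint observations with non-linear least squares. It is only an illustration of the idea: the rigid toy template, the pose parameterization (translation plus axis-angle rotation), and the intrinsics K are all invented for the example, and the repository's actual solver (PyCeresIK, built on Ceres) additionally optimizes finger articulation under joint limits.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Toy rigid "skeleton": five 3D points standing in for hand joints (metres).
TEMPLATE = np.array([[0.00, 0.00, 0.00], [0.02, 0.05, 0.00], [0.04, 0.09, 0.01],
                     [-0.02, 0.05, 0.00], [-0.04, 0.09, 0.01]])
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])

def project(points_3d, K):
    # Pinhole projection: divide homogeneous image coordinates by depth.
    p = points_3d @ K.T
    return p[:, :2] / p[:, 2:3]

def residuals(pose, joints_2d):
    # pose = (tx, ty, tz, rx, ry, rz): translation + axis-angle rotation.
    pts = Rotation.from_rotvec(pose[3:6]).apply(TEMPLATE) + pose[:3]
    return (project(pts, K) - joints_2d).ravel()

# Synthesize 2D observations from a ground-truth pose, then recover it.
gt = np.array([0.01, -0.02, 0.50, 0.10, 0.20, -0.10])
obs = project(Rotation.from_rotvec(gt[3:6]).apply(TEMPLATE) + gt[:3], K)
fit = least_squares(residuals, np.array([0.0, 0.0, 0.4, 0.0, 0.0, 0.0]), args=(obs,))
print(fit.x)  # should be close to gt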

Requirements

This work depends on a set of (currently) closed-source C++ libraries developed at CVRL-FORTH. We provide Ubuntu 16.04 binaries for these libraries. Follow the instructions here to download them and set up your environment properly.

You will need Python 3.x to run the scripts. The following python libraries are required:

sudo pip3 install numpy opencv-python

If you use the provided pretrained network for 2D joint estimation (by Gouidis et al) you will also need to install TensorFlow.

pip3 install tensorflow-gpu

NOTE: The script was tested with tensorflow 1.12.0 and CUDA 9.0

If you use the 2D joint estimator of Simon et al you will need to install Openpose and PyOpenPose. Follow the installation instructions on these projects.

Hand detector

In our paper we use a retrained YOLO detector to detect hands (left, right) and heads in the input image. The codebase in this project does not include that part of the pipeline. Instead, the example scripts use an initial bounding box and tracking to crop the user's hand in each frame and pass it to the 2D joint estimator, as sketched below.
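
A rough sketch of that crop-and-track loop (not the repository's actual code; crop, update_box and the stubbed joints below are invented for illustration): crop the current hand box, run the 2D joint estimator on the patch, then refit the box around the detected joints with some padding so it follows the hand into the next frame.

import numpy as np

def crop(frame, box):
    # Extract the (x, y, w, h) region fed to the 2D joint estimator.
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

def update_box(keypoints, pad=0.3, frame_shape=(480, 640)):
    # Refit the box around Nx2 keypoints, padded by `pad` on each side.
    x0, y0 = keypoints.min(axis=0)
    x1, y1 = keypoints.max(axis=0)
    w, h = x1 - x0, y1 - y0
    x = max(0, int(x0 - pad * w))
    y = max(0, int(y0 - pad * h))
    w = min(frame_shape[1] - x, int(w * (1 + 2 * pad)))
    h = min(frame_shape[0] - y, int(h * (1 + 2 * pad)))
    return (x, y, w, h)

frame = np.zeros((480, 640, 3), np.uint8)      # stand-in camera frame
box = (200, 150, 160, 160)                     # initial hand box (x, y, w, h)
patch = crop(frame, box)                       # would go to the 2D estimator
fake_joints = np.array([[230., 180.], [300., 260.], [260., 290.]])
box = update_box(fake_joints)                  # box for the next frame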

Usage

You can use the 3D hand pose estimation with any 2D joint estimator. We provide two different example scripts:

handpose.py

The handpose.py script uses the 2D hand joint estimator of Gouidis et al.

handpose_simon_backend.py

This script uses the 2D hand joint estimator by Simon et al. You will need to properly install Openpose and PyOpenPose before running this script.

Docker

Since this is a (very) old project, the only practical way to test it on modern Linux distros is using Docker.

You need to download the cuDNN 7 deb packages from NVIDIA (requires registration) and place them in the cudnn folder. See here for details.

Finally, go to the openpose_models folder and run the getModels.sh script to download the required OpenPose models.

You can use the devcontainer with VS Code or build it on the CLI with docker-compose. This will create an image with Ubuntu 16.04 and all the libraries required to test the project. You can build and run it from the CLI using the following commands:

docker-compose build
docker-compose up -d
xhost + 
docker exec -it devcontainer_dev_1  python3 handpose_simon_backend.py 

monocularrgb_3d_handpose_wacv18's People

Contributors

gstavrinos, padeler


monocularrgb_3d_handpose_wacv18's Issues

description on jt.FindPeaks

It looks like the source of PyJointTools is not open, and I couldn't make it work on my Python 3.7 platform. Since in my case I only care about 2D estimation, I am interested in how the FindPeaks function works. Could you describe it a little? From my understanding, the simplest idea is, for each joint, to find the maximum value and its index in the heat-map, then compute the coordinates. Is there anything else tricky happening? Thank you.
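
For reference, a minimal sketch of the argmax idea described above (an assumption about what jt.FindPeaks does, since its source is not available): for each joint channel, take the location of the maximum response and keep it only if it exceeds a confidence threshold.

import numpy as np

def find_peaks(heatmaps, threshold=0.1):
    # heatmaps: (H, W, J) array -> list of (x, y, score) or None per joint.
    peaks = []
    for j in range(heatmaps.shape[2]):
        hm = heatmaps[:, :, j]
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        score = hm[y, x]
        peaks.append((x, y, score) if score > threshold else None)
    return peaks

demo = np.random.rand(46, 46, 21)  # e.g. 21 hand-joint heatmaps
print(find_peaks(demo)[:3])

Common refinements are a light Gaussian blur before the argmax and a small sub-pixel offset toward the stronger neighbouring pixel, as OpenPose does, but the plain argmax above is the core of it.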

Python argument types did not match C++ signature

Hello,
when I run handpose.py I get:

Traceback (most recent call last):
File "handpose.py", line 214, in
mono_hand_loop(acq, (640,480), config, track=True, with_renderer=True)
File "handpose.py", line 113, in mono_hand_loop
rgbKp = IK.Observations(IK.ObservationType.COLOR, clb, keypoints)
Boost.Python.ArgumentError: Python argument types in
Observations.init(Observations, ObservationType, CameraMeta, numpy.ndarray)
did not match C++ signature:
init(_object*, ObservationType, MBV::Core::CameraMeta, cv::Mat, float, int)
init(_object*, ObservationType, MBV::Core::CameraMeta, cv::Mat)
init(_object*)
init(_object*)

my environment:
python 3.5
numpy 1.16
opencv-python 3.4.3
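
A guess at a workaround, not verified against the PyCeresIK sources: Boost.Python converters for cv::Mat are often registered only for a specific dtype and memory layout, so coercing the keypoints to a contiguous float32 array before the call may satisfy the signature.

import numpy as np

# Hypothetical fix: force dtype/layout before calling the binding.
keypoints = np.ascontiguousarray(np.asarray(keypoints, dtype=np.float32))
rgbKp = IK.Observations(IK.ObservationType.COLOR, clb, keypoints)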

Error in compiling

I'm getting this error when I run handpose.py

(error screenshot)

My PC configuration is

  • Ubuntu 16.04

  • CUDA 9.0

  • Python 3.5 and TensorFlow 1.9

Support for python3.6

I set up the library and the necessary packages on Ubuntu 18.

My environment is Python 3.6.

However, it reports:

Traceback (most recent call last):
File "handpose.py", line 18, in <module>
import PyCeresIK as IK
ImportError: libpython3.5m.so.1.0: cannot open shared object file: No such file or director

It looks like it cannot be run.

I am not sure whether support for higher versions is enabled.

To run the Boost library, I have installed the older boost 1.58.

I am not sure whether Python 3.6 and 3.7 are supported, since Python 3.5 is an older version.

Could a new library package be made available?

Thanks

Feature Request - Docker Image

Since the libraries you require were built and tested only on one version of Ubuntu, and one version of TensorFlow and CUDA, would you mind writing a Dockerfile with all the necessary commands to make the model work?

Windows for 2D

I'm looking to check out the 2D work here without the IK and 3D projection. How challenging do you think it would be to get this going on Windows, just in 2D?

Can I implement this project on centos system?

Hello, I'm very interested in this project. You provide Ubuntu 16.04 binaries for these libraries. Could you please tell me whether this project can run on a CentOS 7 system? I could not download the dependencies when I followed the instructions.
By the way, when I run the script, the error is:
(error screenshot)

I don't know what that means. I sincerely hope to get your reply.

Details about how to solve IK

I want to know how the Jacobian of the loss function is computed (the rotation matrix may not stay orthogonal after optimization), and how the finger rotation angle limits are enforced.
It would be nice to have some details on self.ba.solve(obs_vec, Core.ParamVector(self.model.init_pose)) (ideally the source code of this part).
Thanks.
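
Since ba.solve is closed source, here is only a toy sketch of how such limits are commonly enforced, using scipy box bounds on the angles of a planar two-joint finger (everything here, including the phalanx lengths, is invented for the example). The orthogonality concern is typically avoided by parameterizing the global rotation with a quaternion or an axis-angle vector rather than a raw rotation matrix.

import numpy as np
from scipy.optimize import least_squares

L1, L2 = 0.04, 0.03  # phalanx lengths in metres

def fingertip(angles):
    # Planar forward kinematics of a two-joint finger.
    a, b = angles
    return np.array([L1 * np.cos(a) + L2 * np.cos(a + b),
                     L1 * np.sin(a) + L2 * np.sin(a + b)])

target = np.array([0.05, 0.03])  # desired fingertip position
fit = least_squares(lambda q: fingertip(q) - target,
                    x0=[0.1, 0.1],
                    bounds=([0.0, 0.0], [np.pi / 2, np.pi / 2]))
print(fit.x)  # recovered angles stay inside [0, pi/2]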

problem with libCore.so

Traceback (most recent call last):
File "handpose.py", line 18, in
import PyCeresIK as IK
ImportError: /home/zy/code/MonocularRGB_3D_Handpose_WACV18/lib/libCore.so: undefined symbol: _ZN2cv3Mat20updateContinuityFlagEv

I ran into this problem with libCore.so. How can I solve it? Thanks!

How to change the hand model size?

Hello, I found some parameters such as low_bounds and high_bounds in hand_skinned.xml. The current hand model is too large for our case, and I want to know how to change the size of the hand model (e.g., finger length). If I change the parameters in the local files, will the code still work?

[MBV::Core::Message*] = error while compiling file : 0:1(10): error: GLSL 3.30 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES, and 3.00 ES

My system is Ubuntu 16.04, created on top of a virtual machine (CPU only). I ran into this problem while executing the code. How can I fix it?
Thank you for your reply.

When I run the command glxinfo | grep "OpenGL version", I get:
(command output screenshot)
Is OpenGL version 3.3 required for this project? Can I install OpenGL in my environment to make it work?

About the result_pose

First, thanks for sharing. These days I have been confused by the usage of result_pose, which the paper describes as having 27 parameters. Could you please help me figure out the order of result_pose? I really want to verify the effect of this algorithm in 3D space.
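
Not an authoritative answer, but the common convention for FORTH's 27-parameter hand models is 3 translation values, 4 quaternion components for the global orientation, and 20 finger joint angles. The sketch below assumes that layout, so verify the exact ordering against hand_skinned.xml before relying on it.

import numpy as np

pose = np.zeros(27)            # stand-in for result_pose
translation = pose[0:3]        # global hand position (x, y, z)
orientation = pose[3:7]        # global rotation as a quaternion
finger_angles = pose[7:27]     # assumed: 4 angles per finger x 5 fingers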

Some questions about the functions in the PyCeresIK

Hello, I have some questions about the functions in PyCeresIK.

# compute kp using model initial pose
points2d = pose_estimator.ba.decodeAndProject(pose_estimator.model.init_pose, clb)

In this function, the joint angles are fed to the forward kinematics model and the resulting 3D positions are projected onto the image. I want to know whether the forward kinematics model is included in PyCeresIK and how to use the forward/inverse kinematics model.

rgbKp = IK.Observations(IK.ObservationType.COLOR, clb, keypoints)
obsVec = IK.ObservationsVector([rgbKp, ])

Could you give a description of these two calls?
In your paper, the camera parameters are known and used. When you test your method on the YouTube video, generic calibration parameters are mentioned; what do the generic calibration parameters mean?
Thank you!
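
A minimal sketch of what decodeAndProject plausibly amounts to (an assumption, since PyCeresIK is closed source): forward kinematics turns the pose into 3D joint positions, which are then projected with the camera intrinsics. "Generic calibration" would then simply mean assumed intrinsics, e.g. the principal point at the image centre and a guessed focal length, rather than a per-camera calibration.

import numpy as np

def generic_intrinsics(width, height, focal=None):
    # Guessed pinhole intrinsics: focal in pixels, principal point at centre.
    f = focal or 0.9 * width
    return np.array([[f, 0.0, width / 2.0],
                     [0.0, f, height / 2.0],
                     [0.0, 0.0, 1.0]])

def project(points_3d, K):
    # Pinhole projection of Nx3 camera-space points to pixel coordinates.
    p = points_3d @ K.T
    return p[:, :2] / p[:, 2:3]

K = generic_intrinsics(640, 480)
joints_3d = np.array([[0.00, 0.00, 0.5],      # stand-in FK output
                      [0.02, 0.05, 0.5]])
print(project(joints_3d, K))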
