This repository contains the PyTorch implementation of TriHorn-Net: A Model for Accurate Depth-Based 3D Hand Pose Estimation, published in the Expert Systems with Applications (ESWA) journal. It provides step-by-step instructions to replicate the results reported in the paper.
Download the repository:
makeReposit=[/the/directory/as/you/wish]
mkdir -p $makeReposit/; cd $makeReposit/
git clone https://github.com/mrezaei92/infrustructure_HPE.git
-
NYU dataset
Download and extract the dataset from the link provided below
Copy the contents of the folder data/NYU to where the dataset is located
-
ICVL dataset
Download the file test.pickle from here
Download and extract the training set from the link provided below
Navigate to the folder data/ICVL. Run the following command to get a file named train.pickle:
python prepareICVL_train.py ICVLpath/Training
Here, ICVLpath is the path where the training set was extracted. Place both test.pickle and train.pickle in one folder. This folder will serve as the ICVL dataset folder.
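If you want to verify that both files were produced correctly, a minimal sketch such as the one below can be used. It only inspects the top-level objects (the exact internal structure of the pickles is not documented here) and assumes it is run from the ICVL dataset folder:

```python
import pickle

# Quick sanity check of the prepared ICVL files; prints only the top-level
# type and length, since the pickle contents are not specified here.
for name in ("train.pickle", "test.pickle"):
    with open(name, "rb") as f:
        data = pickle.load(f)
    size = len(data) if hasattr(data, "__len__") else "n/a"
    print(f"{name}: type={type(data).__name__}, len={size}")
```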
-
MSRA dataset
Download and extract the dataset from the link provided below (hosted by the dataset's original authors). Extract P1, ..., P8 directly into data/MSRA/ (do not create a separate folder).
Extract data/MSRA.tar.xz and copy its contents to where the dataset is located. Update: use the text files from issue mrezaei92#2, available at https://drive.proton.me/urls/87MJVDWANW#GhV94ErapWsh
Before running an experiment, first set the "datasetpath" value in the corresponding .yaml file located in the configs folder to the path of the corresponding dataset. Then open a terminal and run the corresponding command.
Also set the environment variables ICVL_PATH, NYU_PATH, and MSRA_PATH, which determine where checkpoints are saved, with export VARNAME="my value".
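As an illustration, the dataset path can also be set programmatically. The sketch below assumes PyYAML is installed and that the NYU config file is named configs/NYU.yaml (a hypothetical filename; check the configs folder for the actual one). It simply rewrites the "datasetpath" entry mentioned above:

```python
import yaml  # PyYAML, assumed to be available

cfg_path = "configs/NYU.yaml"  # hypothetical filename; adjust to your config
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

# Point "datasetpath" to where the dataset is located
cfg["datasetpath"] = "/path/to/NYU"

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```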
Each command first runs training and then evaluates the resulting model on the corresponding test set.
The results are saved in a file named "results.txt".
-
NYU
bash train_eval_NYU.bash
-
ICVL
bash train_eval_ICVL.bash
-
MSRA
bash train_eval_MSRA.bash
This repo supports using the following datasets for training and testing:
- ICVL Hand Posture Dataset [link] [paper]
- NYU Hand Pose Dataset [link] [paper]
- MSRA Hand Pose Dataset [link] [paper]
The table below provides the predicted labels on the ICVL, NYU, and MSRA datasets. All labels are in (u, v, d) format, where u and v are pixel coordinates and d is the depth; see the back-projection sketch after the table for converting them to camera-space coordinates.
Dataset | Predicted Labels |
---|---|
ICVL | Download |
NYU | Download |
MSRA | Download |
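Since the labels are pixel coordinates plus depth, they can be back-projected into camera-space (x, y, z) with the pinhole camera model. The sketch below is illustrative only; the intrinsics shown (fx ≈ 588.04, fy ≈ 587.08, cx = 320, cy = 240) are commonly quoted values for the NYU Kinect sensor, so substitute the intrinsics of the dataset you are actually evaluating:

```python
import numpy as np

def uvd_to_xyz(uvd, fx=588.04, fy=587.08, cx=320.0, cy=240.0):
    """Back-project an (N, 3) array of (u, v, depth) labels to camera-space
    (x, y, z). Depth stays in whatever unit the labels use (typically mm)."""
    uvd = np.asarray(uvd, dtype=np.float64)
    x = (uvd[:, 0] - cx) * uvd[:, 2] / fx
    y = (uvd[:, 1] - cy) * uvd[:, 2] / fy
    return np.stack([x, y, uvd[:, 2]], axis=1)

# Example: a joint at the principal point (320, 240) with depth 500 mm
# back-projects to (0, 0, 500).
print(uvd_to_xyz([[320, 240, 500]]))
```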
Tweaking the config can yield slightly better results than those reported in the paper: 5.68 vs. 5.73 mm on ICVL, and 7.05 vs. 7.13 mm on MSRA.
This work is no longer SOTA (the long journal review process meant it was not accepted until 2023); as of 2024-03-14, the current SOTA is this one, which comes with source code.
Adaptive wing loss did not turn out to be helpful (possibly because it was not trained long enough).
If you use this work in your research or projects, please cite TriHorn-Net: A Model for Accurate Depth-Based 3D Hand Pose Estimation.
@article{rezaei2023trihorn,
title={TriHorn-Net: A model for accurate depth-based 3D hand pose estimation},
author={Rezaei, Mohammad and Rastgoo, Razieh and Athitsos, Vassilis},
journal={Expert Systems with Applications},
pages={119922},
year={2023},
publisher={Elsevier}
}