xinghaochen / awesome-hand-pose-estimation
Awesome work on hand pose estimation/tracking
Home Page: https://xinghaochen.github.io/awesome-hand-pose-estimation/
Hi Xinghao, I have found that there are a lot of papers from the Hands 2017 challenge. Would you be able to collect those papers? Thanks!
Thanks for your great work!
I need the dataset, but it requires an email request, and they don't accept requests from individual researchers or students.
Great work! And thanks a lot.
I want to ask whether there is any evaluation and comparison code for RGB hand datasets, like the code provided for the NYU/ICVL/MSRA datasets?
Hi,
thanks for your nice work. Do you know of any dataset for two interacting hands? Thanks!
Many of the papers use 3D depth images; are there any resources using 2D images?
Hello Xinghao,
Thanks for maintaining this useful repo. There seems to be a typo for SynHand5M dataset.
In the list of datasets, the number of joints for the SynHand5M dataset is listed as 21. However, upon closer inspection, I noticed they provide 1,193 3D hand mesh vertices, and I couldn't find a mapping to extract hand joint locations from the provided mesh vertices. Are you aware of a mapping between the two?
Thanks!
A dataset called the ObMan dataset is missing from the list.
Its properties are as follows:
Is there a method or dataset with visibility labels for annotated joints? There are many cases where joints are annotated but not visible (occluded by other objects). The COCO data format supports this with "not labeled", "labeled but not visible", and "labeled and visible" annotation types.
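For reference, the COCO keypoint convention mentioned above stores each joint as an (x, y, v) triplet, where v is the visibility flag. A minimal sketch for filtering by visibility (the helper function names here are my own, not from any dataset toolkit):

```python
# COCO-style keypoints are a flat list [x1, y1, v1, x2, y2, v2, ...],
# where the visibility flag v is:
#   0 = not labeled, 1 = labeled but not visible, 2 = labeled and visible.

def split_keypoints(keypoints):
    """Split a flat COCO keypoint list into (x, y, v) triplets."""
    return [tuple(keypoints[i:i + 3]) for i in range(0, len(keypoints), 3)]

def visible_joints(keypoints):
    """Return indices of joints that are labeled and visible (v == 2)."""
    return [j for j, (_, _, v) in enumerate(split_keypoints(keypoints)) if v == 2]

# Toy annotation with three joints: visible, occluded, and unlabeled.
ann = [120.0, 80.0, 2, 130.0, 95.0, 1, 0.0, 0.0, 0]
print(visible_joints(ann))  # -> [0]
```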
Hi, I think there is a bug in the MSRA evaluation code.
The ground-truth file and the REN result file have 76,375 lines, while there are 76,391 jpg files in the MSRA dataset.
Can you check?
Also, are there no additional results from other methods on the MSRA dataset? Only two results are provided.
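To reproduce the mismatch, a quick sketch like the following can compare the number of annotation lines against the number of image files (the paths are hypothetical placeholders; point them at your local copies):

```python
import glob
import os

def count_lines(path):
    """Count annotation lines in a ground-truth or result file."""
    with open(path) as f:
        return sum(1 for _ in f)

def count_images(root):
    """Recursively count .jpg frames under a dataset directory."""
    return len(glob.glob(os.path.join(root, '**', '*.jpg'), recursive=True))

# Hypothetical local paths -- adjust to your setup:
# count_lines('msra_test_groundtruth.txt') vs. count_images('MSRA_dataset/')
```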
Project page: https://zhengdiyu.github.io/ACR-page/
Code repo: https://github.com/ZhengdiYu/Arbitrary-Hands-3D-Reconstruction
arXiv: https://arxiv.org/abs/2303.05938
Thanks,
@xinghaochen Hello! Thank you very much for your sharing. Do you have an evaluation of the RGB datasets and a summary of the related papers?
Hi Xinghao,
Thanks for adding my paper entitled "3D Hand Pose Estimation using Simulation and Partial-Supervision with a Shared Latent Space" to your list.
Just wondering if you could please update it to Oral?
Cheers.
Sorry for this basic question, but I cannot open the website of the ICVL dataset, and I cannot find it anywhere else.
2019 CVPR
3D Hand Shape and Pose Estimation from a Single RGB Image
https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxnZWxpdWhhb250dXxneDo3ZjE0ZjY3OWUzYjJkYjA2
I think it would be helpful to mention each dataset's license, along with the license date and a reference to the license file.
What do you think?
I can start by adding licenses for the RGB datasets.
handpose_x
full demo in dpcas
Hi, thank you for your collection on hand pose estimation. I found your paper in the list of arXiv papers, "Pose Guided Structured Region Ensemble Network for Cascaded Hand Pose Estimation". Can you share the related code on GitHub? Thank you again for your kindness!
regards,
weiguo
ECCV2018
Weakly-supervised 3D Hand Pose Estimation from Monocular RGB Images
http://openaccess.thecvf.com/content_ECCV_2018/papers/Yujun_Cai_Weakly-supervised_3D_Hand_ECCV_2018_paper.pdf
CMU Panoptic HandDB provides 21 keypoints, not 20
The PDF of the paper "Weakly-supervised Domain Adaptation via GAN and Mesh Model for Estimating 3D Hand Poses Interacting Objects" is now openly available.
Hi, thank you for the collection of papers! This is more of a suggestion than an issue: could you also provide a comprehensive list of datasets and challenges available for hand pose estimation? Thanks again for this page; it is helpful!
Like the title suggests, I would like to propose splitting the dataset section into RGB / D / RGB+D, and adding a column indicating whether the dataset comes with mesh annotations.
see H2O
Hi,
My name is Ahmed Aboukhadra from the Augmented Vision group at DFKI, I recently published a WACV paper about hand object reconstruction. Could you please add our WACV23 paper to your amazing repo on hand pose estimation? The name of the paper is THOR-Net and it's published here:
There is a Github Repo for the code as well: https://github.com/ATAboukhadra/THOR-Net
Let me know if anything is missing.
Kind regards,
Ahmed
Hi, thanks for making such a repo.
I have one question:
Why do you mark "HOT-Net: Non-Autoregressive Transformer for 3D Hand-Object Pose Estimation" as an MM20 paper? I could not find the BibTeX citation on Google Scholar.
Could you explain? Thanks a lot.
Hello,
This paper will appear in the ICCVW proceedings and will be presented as an oral at the ACVR workshop.
Could you please add it to the ICCV 2023 list? The required links are below.
SHOWMe: Benchmarking Object-agnostic Hand-Object 3D Reconstruction
Anilkumar Swamy, Vincent Leroy, Philippe Weinzaepfel, Fabien Baradel, Salma Galaaoui, Romain Bregier, Matthieu Armando, Jean-Sebastien Franco, Gregory Rogez
[Paper] [Project Page] [Code] [Dataset]
Thank you in Advance!
Hi, here is the link to the dataset released for the paper "A Multi-View Video-Based 3D Hand Pose Estimation"; it would be great if you could add it.
https://github.com/LeylaKhaleghi/MuViHand
PeCLR: Self-Supervised 3D Hand Pose Estimation from Monocular RGB via Contrastive Learning
def get_param(dataset):
    """Return the depth-camera intrinsics (fx, fy, cx, cy) for a dataset."""
    if dataset == 'icvl':
        return 240.99, 240.96, 160, 120
    elif dataset == 'nyu':
        # Note: fy is negative for NYU in this convention.
        return 588.03, -587.07, 320, 240
    elif dataset == 'msra':
        return 241.42, 241.42, 160, 120
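As a usage sketch, these intrinsics back-project a depth pixel into camera space with the standard pinhole model. `pixel2world` is a hypothetical helper name, and `get_param` is repeated here only to keep the sketch self-contained:

```python
def get_param(dataset):
    """Depth-camera intrinsics (fx, fy, cx, cy) per dataset."""
    if dataset == 'icvl':
        return 240.99, 240.96, 160, 120
    elif dataset == 'nyu':
        return 588.03, -587.07, 320, 240
    elif dataset == 'msra':
        return 241.42, 241.42, 160, 120

def pixel2world(u, v, z, dataset):
    """Back-project a depth pixel (u, v) with depth z (in mm) to a 3D
    camera-space point using the pinhole model."""
    fx, fy, cx, cy = get_param(dataset)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# A pixel at the principal point maps to (0, 0, z) regardless of depth.
print(pixel2world(160, 120, 500.0, 'icvl'))  # -> (0.0, 0.0, 500.0)
```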
Hi Xinghao,
Thanks for maintaining this repo. It is very helpful for researchers. I want to correct two errors in your citation of my papers.
ECCV 2018: I typed the wrong title on my homepage. The correct title is: "Point-to-Point Regression PointNet for 3D Hand Pose Estimation".
CVPR 2018: The correct title and author list are as follows:
Hand PointNet: 3D Hand Pose Estimation using Point Sets
Liuhao Ge, Yujun Cai, Junwu Weng, Junsong Yuan
Thank you!
Hello! Thank you for sharing your great work! I am trying to run CrossInfoNet on a Kinect v2 in real time, but I don't know how to get these values.
Hello,
Can you release the code of the paper "SHPR-Net: Deep Semantic Hand Pose Regression From Point Clouds"? I am especially interested in the dataset preprocessing that converts hand depth maps to point cloud data. I am looking forward to your reply! Thank you very much!
Hi, "Depth-Based Hand Pose Estimation: Methods, Data, and Challenges" is not from IJCV 2018.
I found that "msra_test_list.txt" contains 76,375 files. Could you provide the train list? Which training files should I use? Many papers say they use the leave-one-subject-out method, so they train multiple models; which model should be used?
Hi, the results of the Hands 2017 competition are greatly improved now:
https://competitions.codalab.org/competitions/17356#results
Could you please help find the papers at the top of the leaderboard? Thank you!