
kypt_transformer's People

Contributors

shreyashampali


kypt_transformer's Issues

Error on the InterHand2.6M test

When I used your method to test on InterHand2.6M, I got the following results:
MRRPE: 29.422908
MPJPE for each joint:
r_thumb4: 13.68, r_thumb3: 11.06, r_thumb2: 8.14, r_thumb1: 6.28, r_index4: 12.91, r_index3: 11.43, r_index2: 10.24, r_index1: 8.82, r_middle4: 14.65, r_middle3: 11.72, r_middle2: 10.40, r_middle1: 8.52, r_ring4: 13.78, r_ring3: 11.41, r_ring2: 10.19, r_ring1: 8.35, r_pinky4: 13.48, r_pinky3: 11.12, r_pinky2: 9.98, r_pinky1: 8.43, r_wrist: 0.00, l_thumb4: 13.52, l_thumb3: 10.79, l_thumb2: 7.80, l_thumb1: 6.38, l_index4: 13.14, l_index3: 11.48, l_index2: 10.34, l_index1: 8.75, l_middle4: 15.83, l_middle3: 12.60, l_middle2: 10.66, l_middle1: 8.43, l_ring4: 14.11, l_ring3: 11.80, l_ring2: 10.48, l_ring1: 8.55, l_pinky4: 14.23, l_pinky3: 12.20, l_pinky2: 10.74, l_pinky1: 8.91, l_wrist: 0.00,
MPJPE for all hand sequences: 10.36

MPJPE for each joint:
r_thumb4: 11.46, r_thumb3: 9.42, r_thumb2: 7.17, r_thumb1: 5.36, r_index4: 10.95, r_index3: 9.80, r_index2: 8.94, r_index1: 7.74, r_middle4: 11.75, r_middle3: 10.38, r_middle2: 9.38, r_middle1: 7.55, r_ring4: 11.83, r_ring3: 10.37, r_ring2: 9.39, r_ring1: 7.44, r_pinky4: 11.82, r_pinky3: 10.06, r_pinky2: 8.95, r_pinky1: 7.43, r_wrist: 0.00, l_thumb4: 10.87, l_thumb3: 8.76, l_thumb2: 6.73, l_thumb1: 5.23, l_index4: 10.30, l_index3: 9.06, l_index2: 8.25, l_index1: 7.10, l_middle4: 11.43, l_middle3: 9.96, l_middle2: 8.77, l_middle1: 6.92, l_ring4: 11.18, l_ring3: 9.73, l_ring2: 8.76, l_ring1: 7.19, l_pinky4: 11.81, l_pinky3: 10.11, l_pinky2: 8.91, l_pinky1: 7.32, l_wrist: 0.00,
MPJPE for single hand sequences: 8.70

MPJPE for each joint:
r_thumb4: 15.48, r_thumb3: 12.38, r_thumb2: 9.21, r_thumb1: 7.35, r_index4: 14.54, r_index3: 12.75, r_index2: 11.28, r_index1: 9.68, r_middle4: 17.29, r_middle3: 12.82, r_middle2: 11.21, r_middle1: 9.27, r_ring4: 15.55, r_ring3: 12.24, r_ring2: 10.83, r_ring1: 9.08, r_pinky4: 14.96, r_pinky3: 11.98, r_pinky2: 10.79, r_pinky1: 9.21, r_wrist: 0.00, l_thumb4: 15.92, l_thumb3: 12.58, l_thumb2: 9.10, l_thumb1: 7.84, l_index4: 15.64, l_index3: 13.62, l_index2: 12.18, l_index1: 10.20,
"Why are both l_wrist and r_wrist 0?"
And the other results are different from the ones you provided
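For context, a hedged explanation (based on the standard InterHand2.6M evaluation protocol, not confirmed from this repository's code): per-joint errors are typically computed after root-aligning each hand at its wrist, so the wrist error is zero by construction. A minimal sketch of that convention:

import numpy as np

# Minimal sketch of root-relative per-joint error, assuming the standard
# InterHand2.6M convention of aligning each hand at its wrist (joint 20)
# before measuring distances; the wrist error is then zero by construction.
def root_relative_errors(pred, gt, root_idx=20):
    # pred, gt: (21, 3) joint arrays for one hand, in mm
    pred = pred - pred[root_idx]  # move the predicted wrist to the origin
    gt = gt - gt[root_idx]        # move the ground-truth wrist to the origin
    return np.linalg.norm(pred - gt, axis=1)

pred = np.random.rand(21, 3) * 100
gt = np.random.rand(21, 3) * 100
print(root_relative_errors(pred, gt)[20])  # always 0.0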

H2O-3D evaluation error on CodaLab

Hi. Thank you for your wonderful research.

When I evaluated the trained model on CodaLab, I got the following error:

WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Traceback (most recent call last):
  File "/tmp/codalab/tmpl1jR0O/run/program/evaluate.py", line 19, in <module>
    import open3d as o3d
ModuleNotFoundError: No module named 'open3d'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/codalab/tmpl1jR0O/run/program/evaluate.py", line 22, in <module>
    import open3d as o3d
  File "/opt/conda/lib/python3.9/site-packages/open3d/__init__.py", line 9, in <module>
    from open3d.linux import *
  File "/opt/conda/lib/python3.9/site-packages/open3d/linux/__init__.py", line 7, in <module>
    globals().update(importlib.import_module('open3d.linux.open3d').__dict__)
  File "/opt/conda/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: /opt/conda/lib/python3.9/site-packages/open3d/linux/open3d.so: undefined symbol: _Py_ZeroStruct

How can I solve this problem?
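One observation that may help (stated as an assumption about the environment, not something the organizers have confirmed): _Py_ZeroStruct exists only in the Python 2 C API, so an open3d.so referencing it was compiled for Python 2 and can never import under Python 3.9; the fix would be installing an open3d wheel built for the interpreter the evaluation image actually runs. A small diagnostic sketch:

import sys
import importlib.util

# Print the running interpreter version and where open3d would be loaded
# from, to confirm the interpreter/extension-module mismatch described above.
print(sys.version_info)
spec = importlib.util.find_spec("open3d")
print(spec.origin if spec else "open3d is not installed")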

Path in config.py

Dear Authors, thank you for such great work. I am trying to run your code for my research. In the config.py file, what path should be set for the 'object_models_dir' variable? It is not explained in the README. Thank you.
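For readers with the same question, one plausible but unconfirmed reading is that 'object_models_dir' points at the YCB object meshes used by the HO-3D/H2O-3D datasets; the path below is purely illustrative.

# Purely illustrative (not confirmed by the README): if object_models_dir is
# the directory of YCB object meshes used by HO-3D/H2O-3D, the entry in
# config.py might look like this, with one subfolder per object model.
object_models_dir = '/path/to/YCB_models/models'  # hypothetical path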

Cannot find file InterHand2.6M_test_MANO.json

Hello,
I am trying to run demo.py with the InterHand2.6M dataset, but I can't find the necessary JSON file, InterHand2.6M_test_MANO.json, on the InterHand2.6M homepage. Could you please tell me where I can find it?

Thank you

Custom dataset

Hi,

Thanks for the great work. Is it possible to use a custom dataset?

Demo error

Thank you for your excellent research.

When I ran demo.py with the checkpoint you provided (snapshot_4_2000.pth.tar), the following error occurred:

RuntimeError: Error(s) in loading state_dict for DataParallel:
	size mismatch for module.query_embed.weight: copying a param with shape torch.Size([34, 256]) from checkpoint, the shape in current model is torch.Size([43, 256]).

I think the problem is caused by a size mismatch between the loaded checkpoint and the current model, so I wonder how to solve it.
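A minimal diagnostic sketch (the checkpoint's top-level key is an assumption): load the checkpoint on CPU and inspect the stored query_embed shape. A [34, 256] tensor in the checkpoint versus [43, 256] in the model usually means demo.py was run with a config (e.g. hand_type or the number of decoder queries) different from the one used at training time.

import torch

# Load the checkpoint on CPU and print the stored query embedding shape, so
# it can be compared against the shape the current config builds.
ckpt = torch.load('snapshot_4_2000.pth.tar', map_location='cpu')
state = ckpt.get('network', ckpt)  # assumption: weights stored under 'network'
print(state['module.query_embed.weight'].shape)  # torch.Size([34, 256]) here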

Shouldn't the PoseLoss normalizer be initialized to 0 instead of 32?

Hi,
In loss.py, PoseLoss.forward() initializes normalizer to 32.
I see the logic: there are 16 joints per hand, so you want to add 16 to the normalizer for each hand.
However, since normalizer starts at 32, if 'left' is in cfg.hand_type you end up with 48 for the normalizer.
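A self-contained sketch of the arithmetic in question (names mirror the issue text, not loss.py verbatim): starting the accumulator at 32 and then adding 16 per hand double-counts.

# Toy reproduction of the reported accounting: with start=32, a single left
# hand yields 48 instead of the expected 16; start=0 gives the intended count.
def pose_normalizer(hand_types, start):
    normalizer = start
    for _ in hand_types:   # one entry per hand in cfg.hand_type
        normalizer += 16   # 16 joints per hand
    return normalizer

print(pose_normalizer(['left'], start=32))  # 48 (reported behaviour)
print(pose_normalizer(['left'], start=0))   # 16 (expected)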

Error for CodaLab

I used your code (python test.py --ckpt_path <path_to_h2o3d_ckpt>) and your checkpoint file for testing. Since evaluating the object error took too long, and I found from the code that the generated .json file does not need object information, I deleted the object code in test.py and kept only the code that generates the .json:

jointsNormalToManoMap = [20,
                         7, 6, 5,
                         11, 10, 9,
                         19, 18, 17,
                         15, 14, 13,
                         3, 2, 1,
                         0, 4, 8, 12, 16]
pred_verts_right_list = []
pred_verts_left_list = []
pred_joints_right_list = []
pred_joints_left_list = []
if np.sum(pred_verts) == 0:
    pred_verts = None
for i in range(num_samples):
    pred_joint_coord_cam = swap_coord_sys(pred_joints[i] / 1000)
    pred_joints_right_list.append(pred_joint_coord_cam[:21][jointsNormalToManoMap])
    pred_joints_left_list.append(pred_joint_coord_cam[21:][jointsNormalToManoMap] + swap_coord_sys(pred_rel_trans[i]))
    if pred_verts is not None:
        pred_verts_right_list.append(swap_coord_sys(pred_verts[i][:, :3] / 1000))
        pred_verts_left_list.append(swap_coord_sys(pred_verts[i][:, 3:] / 1000))
self.dump_for_challenge(osp.join(ckpt_dir, 'results_%s.json' % (ckpt_name)),
                        pred_joints_right_list, pred_joints_left_list,
                        pred_verts_right_list, pred_verts_left_list)
However, when I compressed the .json into a zip file and submitted it to CodaLab, an error occurred, as shown in the attached image.
[screenshot of the CodaLab error message]
Furthermore, I attempted to keep the object code without deletion, but the generated .json file remained unchanged, and the submission result remained the same.
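In case it helps anyone debugging the same submission failure, a hedged packaging sketch (both file names are assumptions; the exact required name is stated on the challenge page): CodaLab benchmarks in this family typically expect a single json at the top level of the zip, with no enclosing folder.

import os
import zipfile

# Hypothetical packaging step; both file names are assumptions -- check the
# challenge page for the exact name the scoring program expects.
os.rename('results_snapshot.json', 'pred.json')
with zipfile.ZipFile('submission.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.write('pred.json', arcname='pred.json')  # flat archive, no subfolder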

Can't replicate training configuration of your Interhand2.6M / angle model

Hello

I tried training the model without changing anything other than the file paths, but the MPJPE for the left hand and the right hand show a huge difference. Below is the resulting txt file. I used 1/100 of the test data to keep the evaluation short.

num samples: 8491
MRRPE: 0.047761
MPJPE for each joint: 
r_thumb4: 25.38, r_thumb3: 22.17, r_thumb2: 15.70, r_thumb1: 9.22, r_index4: 25.76, r_index3: 23.66, r_index2: 20.57, r_index1: 15.42, r_middle4: 26.69, r_middle3: 24.52, r_middle2: 20.08, r_middle1: 15.20, r_ring4: 26.70, r_ring3: 23.05, r_ring2: 20.34, r_ring1: 14.11, r_pinky4: 25.43, r_pinky3: 20.85, r_pinky2: 18.81, r_pinky1: 14.18, r_wrist: 0.00, l_thumb4: 114.42, l_thumb3: 90.29, l_thumb2: 64.40, l_thumb1: 39.50, l_index4: 140.77, l_index3: 125.55, l_index2: 108.85, l_index1: 79.17, l_middle4: 147.59, l_middle3: 127.54, l_middle2: 108.91, l_middle1: 79.32, l_ring4: 141.24, l_ring3: 116.13, l_ring2: 100.63, l_ring1: 72.92, l_pinky4: 117.32, l_pinky3: 103.12, l_pinky2: 89.21, l_pinky1: 67.93, l_wrist: 0.00, 
MPJPE for all hand sequences: 58.16
MPJPE for each joint: 
r_thumb4: 21.07, r_thumb3: 19.04, r_thumb2: 13.05, r_thumb1: 8.60, r_index4: 19.68, r_index3: 18.79, r_index2: 17.33, r_index1: 12.29, r_middle4: 21.44, r_middle3: 19.96, r_middle2: 16.78, r_middle1: 12.34, r_ring4: 21.12, r_ring3: 18.77, r_ring2: 17.27, r_ring1: 11.41, r_pinky4: 19.82, r_pinky3: 16.55, r_pinky2: 15.49, r_pinky1: 11.26, r_wrist: 0.00, l_thumb4: 178.92, l_thumb3: 141.20, l_thumb2: 101.19, l_thumb1: 55.72, l_index4: 223.16, l_index3: 200.08, l_index2: 174.57, l_index1: 127.71, l_middle4: 229.59, l_middle3: 204.86, l_middle2: 176.35, l_middle1: 128.81, l_ring4: 214.28, l_ring3: 189.22, l_ring2: 162.91, l_ring1: 119.30, l_pinky4: 185.26, l_pinky3: 167.21, l_pinky2: 145.44, l_pinky1: 110.57, l_wrist: 0.00, 
MPJPE for single hand sequences: 84.96
MPJPE for each joint: 
r_thumb4: 28.91, r_thumb3: 24.72, r_thumb2: 17.83, r_thumb1: 9.94, r_index4: 30.88, r_index3: 27.58, r_index2: 23.18, r_index1: 17.92, r_middle4: 31.67, r_middle3: 28.23, r_middle2: 22.74, r_middle1: 17.47, r_ring4: 31.85, r_ring3: 26.48, r_ring2: 22.80, r_ring1: 16.26, r_pinky4: 30.37, r_pinky3: 24.31, r_pinky2: 21.45, r_pinky1: 16.48, r_wrist: 0.00, l_thumb4: 58.33, l_thumb3: 46.70, l_thumb2: 33.18, l_thumb1: 19.06, l_index4: 70.49, l_index3: 61.24, l_index2: 52.54, l_index1: 37.89, l_middle4: 73.07, l_middle3: 61.44, l_middle2: 51.67, l_middle1: 37.21, l_ring4: 69.42, l_ring3: 55.75, l_ring2: 48.39, l_ring1: 33.79, l_pinky4: 59.49, l_pinky3: 51.00, l_pinky2: 43.32, l_pinky1: 32.75, l_wrist: 0.00, 
MPJPE for interacting hand sequences: 34.95

I'm not sure why I cannot reproduce your released snapshot result.

Also, in section 3.1 of the paper, you wrote

The keypoint detector is pre-trained before fine-tuning it jointly with the rest of the pipeline.

Is this in the code? I wasn't sure where to look for it.
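For reference, a minimal, self-contained sketch of the two-stage scheme the paper describes (toy modules, not the repository's classes): pre-train the keypoint detector alone, save its weights, then load them into the full pipeline before joint fine-tuning.

import torch
import torch.nn as nn

class KeypointDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 21, 3, padding=1)  # toy heatmap head

    def forward(self, x):
        return self.conv(x)

class FullPipeline(nn.Module):
    def __init__(self):
        super().__init__()
        self.detector = KeypointDetector()
        self.head = nn.Linear(21, 16)  # toy pose head

    def forward(self, x):
        h = self.detector(x)
        return self.head(h.mean(dim=(2, 3)))

# Stage 1: pre-train the detector in isolation (training loop omitted).
detector = KeypointDetector()
torch.save(detector.state_dict(), 'detector_pretrain.pth')

# Stage 2: load the pre-trained weights into the full model, then fine-tune
# everything jointly.
model = FullPipeline()
model.detector.load_state_dict(torch.load('detector_pretrain.pth'))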

Error on Codalab when testing on H2O-3D

Hi, when I submitted your checkpoint's result (the zipped json output), I got problems like this:

/opt/conda/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
  warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
Could not find a version that satisfies the requirement open3d-python (from versions: )
No matching distribution found for open3d-python
Traceback (most recent call last):
  File "/tmp/codalab/tmpkxNrRM/run/program/evaluate.py", line 22, in <module>
    import open3d as o3d
ImportError: No module named open3d

I think this problem might be caused by your server environment...? Thank you so much if you can help me solve this!

Evaluation on the datasets HO-3D and H2O-3D

Hello, thank you for your great work. I have the following questions:

  1. Could you please provide checkpoints and scripts for the demo and evaluation of HO-3D v3?
  2. When I run the evaluation on the H2O-3D dataset, I get the error message "KeyError: 'objCorners3D'". But I found that there is no 'objCorners3D' key in the xxxx.pkl files in the evaluation folder of the dataset. Is this a problem with the dataset? (A quick key-inspection sketch follows this list.)
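A quick way to check item 2 (the path is a placeholder, and the assumption that each .pkl holds a dict of annotations is unverified): print the keys one evaluation .pkl actually contains.

import pickle

# Load one annotation file from the evaluation split and list its keys, to
# confirm whether 'objCorners3D' is really absent from the released files.
with open('evaluation/SEQ/meta/0000.pkl', 'rb') as f:  # placeholder path
    anno = pickle.load(f)
print(sorted(anno.keys()))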

Thank you in advance.

Inference time is too long.

Thank you for your great work.

I trained on the InterHand2.6M dataset using the code you provided and then tested the resulting model.

However, inference takes very long, so I wonder whether this is expected.

I am attaching a capture of the inference process, so please refer to it.

100%|████████████████████████████████████████████████████████████████████████████████████████| 849160/849160 [07:07<00:00, 1988.37it/s]
Number of annotations in single hand sequences: 488968
Number of annotations in interacting hand sequences: 360053
07-18 21:59:19 Load checkpoint from main/output/model_dump/train/snapshot_29_7542.pth.tar
INFO - 2022-07-18 21:59:19,119 - logger - Load checkpoint from main/output/model_dump/train/snapshot_29_7542.pth.tar
07-18 21:59:19 Creating graph...
INFO - 2022-07-18 21:59:19,119 - logger - Creating graph...
BackboneNet No. of Params = 23508032
decoder_net No. of Params = 12085601
transformer No. of Params = 9470208
Init of peak detector took 0.103300 s
WARNING: You are using a MANO model, with only 10 shape coefficients.
WARNING: You are using a MANO model, with only 10 shape coefficients.
WARNING: You are using a MANO model, with only 10 shape coefficients.
WARNING: You are using a MANO model, with only 10 shape coefficients.
Total No. of Params = 48979329
  0%|                                                                                                       | 0/424511 [00:00<?, ?it/s]/home/juneho0108/kypt_transformer/main/../common/nets/position_encoding.py:45: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
/home/juneho0108/miniconda3/envs/kypt_trans/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1639180588308/work/aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
███████████████████▋                                                                | 99194/424511 [47:36:46<196:55:01,  2.18s/it]
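A back-of-the-envelope check of the progress bar above (assuming the displayed rate holds): 424,511 iterations is roughly half of the 849,160 annotations, consistent with a batch size of 2, and at ~2.18 s/iteration a full pass takes on the order of ten days.

# Rough total-runtime estimate from the numbers shown in the progress bar.
iters, sec_per_iter = 424511, 2.18
total_hours = iters * sec_per_iter / 3600
print(f"~{total_hours:.0f} hours (~{total_hours / 24:.1f} days)")  # ≈ 257 h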
