hukenovs / hagrid
HAnd Gesture Recognition Image Dataset
Home Page: https://arxiv.org/abs/2206.08219
Hello, I am trying to download this dataset, but the downloads now fail. It seems these URLs have expired?
Thank you
How to export SSDLite.pth to ONNX?
If I use the default config of the detector, no hand is detected but the landmarks are correct.
If I change the model.name to one of the pretrained detectors, e.g. FRCNNMobilenetV3LargeFPN
and set model.pretrained to True, I get the following error:
RuntimeError: Error(s) in loading state_dict for FasterRCNN:
size mismatch for roi_heads.box_predictor.cls_score.weight: copying a param with shape torch.Size([91, 1024]) from checkpoint, the shape in current model is torch.Size([19, 1024]).
size mismatch for roi_heads.box_predictor.cls_score.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([19]).
size mismatch for roi_heads.box_predictor.bbox_pred.weight: copying a param with shape torch.Size([364, 1024]) from checkpoint, the shape in current model is torch.Size([76, 1024]).
size mismatch for roi_heads.box_predictor.bbox_pred.bias: copying a param with shape torch.Size([364]) from checkpoint, the shape in current model is torch.Size([76]).
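The mismatch happens because the COCO-pretrained checkpoint has a 91-class head while the detector here is built with 19 classes. A common workaround (a sketch, not the repo's code) is to drop the incompatible keys and load the rest with strict=False, leaving the head randomly initialized for fine-tuning:

```python
# Sketch: keep only checkpoint entries that exist in the model with the
# same tensor shape, so mismatched head weights are simply skipped.
def filter_matching_keys(checkpoint_state, model_state):
    return {
        k: v for k, v in checkpoint_state.items()
        if k in model_state and tuple(v.shape) == tuple(model_state[k].shape)
    }

# usage (model = 19-class FasterRCNN, ckpt = 91-class COCO weights):
# model.load_state_dict(filter_matching_keys(ckpt, model.state_dict()), strict=False)
```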
Hello,
I would like to ask if skeleton data of the finger coordinates are available, as I'm conducting skeleton data analysis. I saw that the annotation files contain bounding-box fields; it would be very helpful to have another field with the finger keypoints. If they aren't available, may I ask how you plotted the keypoints in the README demo?
Thank you!
Hello,
I would very simply like to train a model with the subsample you provide. I could get it to run by following the README.
At the root of the project I have:
ann_subsample/
subsample/
which I simply downloaded from the README and extracted. From the root of the project, I am running:
python -m classifier.run --command train --path_to_config classifier/config/default.yaml
At this point, I am able to train.
However, I would also like to train n-shot models, where I take n < 5 samples per class, train on them, and then test the resulting model. What would be the procedure for this? From what I see, default.yaml provides no parameter to set a train/test percentage. Ideally, I would set the train percentage to 1 during training (where the dataset is a subsample of, say, 3 images per gesture class) and then test on another dataset (perhaps the test set you already provide).
How can I achieve this? Thanks!
I'm wondering if some annotations are missing. I wanted to train a hand detection network that outputs bboxes plus handedness classification (left vs. right hand). In the paper I read that the annotations include a leading_hand label.
I downloaded some of the annotation JSON files, but could not find the handedness or leading-hand annotation.
Hi,
Did you use any other dataset to train the hand detection model, besides your own HaGRID dataset?
Hello
I noticed that the detector model zoo does not include the YOLOv7-tiny model, but I see that an ONNX version of it is provided. How can I use it?
Hello,
Could you please tell us the difference between full-frame and non-full-frame models?
What is the input of the detectors? The 224x224 hand-crop image?
If the input of the full-frame classifiers is the whole image, can they handle all hands in that image, or just one hand?
"However, if you need a single gesture, you can use pre-trained full frame classifiers instead of detectors. To use full frame models, remove the no_gesture class" - could you give us an example with more detail so that we can understand it?
Thanks!
-Scott
When clicking the link, it just gives Access Denied
Are the xml files the same as the original ones?
Thank you for your timely answer. I trained only the three types of gesture data (marked with red boxes) in Figure 1 below: 30,000 images in total, 10,000 per type. The training result is shown in Figure 2.
Then we tested with the data of all gestures in Figure 1 above. The results contained many misjudgments. How can we distinguish similar gestures?
Figure 1
Figure 2
The misjudged data set is as shown below,
Hi,
I experienced the following error when I ran demo.py:
Exception has occurred: ValueError
The parameter 'num_classes' expected value 91 but got 20 instead.
File "C:\doc\code_python\AA\hukenovs_hagrid\detector\models\fasterrcnn.py", line 10, in __init__
torchvision_model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(
File "C:\doc\code_python\AA\hukenovs_hagrid\detector\utils.py", line 138, in build_model
"FasterRCNN_mobilenet_large": FasterRCNN_Mobilenet_large(pretrained=pretrained, num_classes=num_classes),
File "C:\doc\code_python\AA\hukenovs_hagrid\demo.py", line 149, in <module>
model = build_model(
ValueError: The parameter 'num_classes' expected value 91 but got 20 instead
I ran "python demo.py -p MobileNetV3_large.pth" and got an error. How can I fix it?
python demo.py -p MobileNetV3_large.pth
C:\anaconda\envs\gestures2\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
C:\anaconda\envs\gestures2\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
warnings.warn(msg)
Traceback (most recent call last):
File "C:\Ascend\code\hagrid-master\demo.py", line 204, in <module>
model = _load_model(os.path.expanduser(args.path_to_model), args.device)
File "C:\Ascend\code\hagrid-master\demo.py", line 165, in _load_model
ssd_mobilenet.load_state_dict(model_path, map_location=device)
File "C:\Ascend\code\hagrid-master\detector\ssd_mobilenetv3.py", line 67, in load_state_dict
self.torchvision_model.load_state_dict(torch.load(checkpoint_path, map_location=map_location))
File "C:\anaconda\envs\gestures2\lib\site-packages\torch\nn\modules\module.py", line 1667, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SSD:
Missing key(s) in state_dict: "backbone.features.0.0.0.weight", "backbone.features.0.0.1.weight", "backbone.features.0.0.1.bias", "backbone.features.0.0.1.running_mean", "backbone.features.0.0.1.running_var", "backbone.features.0.1.block.0.0.weight", "backbone.features.0.1.block.0.1.weight", "backbone.features.0.1.block.0.1.bias", "backbone.features.0.1.block.0.1.running_mean", "backbone.features.0.1.block.0.1.running_var", "backbone.features.0.1.block.1.0.weight", "backbone.features.0.1.block.1.1.weight", "backbone.features.0.1.block.1.1.bias", "backbone.features.0.1.block.1.1.running_mean", "backbone.features.0.1.block.1.1.running_var", "backbone.features.0.2.block.0.0.weight", "backbone.features.0.2.block.0.1.weight", "backbone.features.0.2.block.0.1.bias", "backbone.features.0.2.block.0.1.running_mean", "backbone.features.0.2.block.0.1.running_var", "backbone.features.0.2.block.1.0.weight", "backbone.features.0.2.block.1.1.weight", "backbone.features.0.2.block.1.1.bias", "backbone.features.0.2.block.1.1.running_mean", "backbone.features.0.2.block.1.1.running_var", "backbone.features.0.2.block.2.0.weight", "backbone.features.0.2.block.2.1.weight", "backbone.features.0.2.block.2.1.bias", "backbone.features.0.2.block.2.1.running_mean", "backbone.features.0.2.block.2.1.running_var", "backbone.features.0.3.block.0.0.weight", "backbone.features.0.3.block.0.1.weight", "backbone.features.0.3.block.0.1.bias", "backbone.features.0.3.block.0.1.running_mean", "backbone.features.0.3.block.0.1.running_var", "backbone.features.0.3.block.1.0.weight", "backbone.features.0.3.block.1.1.weight", "backbone.features.0.3.block.1.1.bias", "backbone.features.0.3.block.1.1.running_mean", "backbone.features.0.3.block.1.1.running_var", "backbone.features.0.3.block.2.0.weight", "backbone.features.0.3.block.2.1.weight", "backbone.features.0.3.block.2.1.bias", "backbone.features.0.3.block.2.1.running_mean", "backbone.features.0.3.block.2.1.running_var", 
"backbone.features.0.4.block.0.0.weight", "backbone.features.0.4.block.0.1.weight", "backbone.features.0.4.block.0.1.bias", "backbone.features.0.4.block.0.1.running_mean", "backbone.features.0.4.block.0.1.running_var", "backbone.features.0.4.block.1.0.weight", "backbone.features.0.4.block.1.1.weight", "backbone.features.0.4.block.1.1.bias", "backbone.features.0.4.block.1.1.running_mean", "backbone.features.0.4.block.1.1.running_var", "backbone.features.0.4.block.2.fc1.weight", "backbone.features.0.4.block.2.fc1.bias", "backbone.features.0.4.block.2.fc2.weight", "backbone.features.0.4.block.2.fc2.bias", "backbone.features.0.4.block.3.0.weight", "backbone.features.0.4.block.3.1.weight", "backbone.features.0.4.block.3.1.bias", "backbone.features.0.4.block.3.1.running_mean", "backbone.features.0.4.block.3.1.running_var", "backbone.features.0.5.block.0.0.weight", "backbone.features.0.5.block.0.1.weight", "backbone.features.0.5.block.0.1.bias", "backbone.features.0.5.block.0.1.running_mean", "backbone.features.0.5.block.0.1.running_var", "backbone.features.0.5.block.1.0.weight", "backbone.features.0.5.block.1.1.weight", "backbone.features.0.5.block.1.1.bias", "backbone.features.0.5.block.1.1.running_mean", "backbone.features.0.5.block.1.1.running_var", "backbone.features.0.5.block.2.fc1.weight", "backbone.features.0.5.block.2.fc1.bias", "backbone.features.0.5.block.2.fc2.weight", "backbone.features.0.5.block.2.fc2.bias", "backbone.features.0.5.block.3.0.weight", "backbone.features.0.5.block.3.1.weight", "backbone.features.0.5.block.3.1.bias", "backbone.features.0.5.block.3.1.running_mean", "backbone.features.0.5.block.3.1.running_var", "backbone.features.0.6.block.0.0.weight", "backbone.features.0.6.block.0.1.weight", "backbone.features.0.6.block.0.1.bias", "backbone.features.0.6.block.0.1.running_mean", "backbone.features.0.6.block.0.1.running_var", "backbone.features.0.6.block.1.0.weight", "backbone.features.0.6.block.1.1.weight", 
"backbone.features.0.6.block.1.1.bias", "backbone.features.0.6.block.1.1.running_mean", "backbone.features.0.6.block.1.1.running_var", "backbone.features.0.6.block.2.fc1.weight", "backbone.features.0.6.block.2.fc1.bias", "backbone.features.0.6.block.2.fc2.weight", "backbone.features.0.6.block.2.fc2.bias", "backbone.features.0.6.block.3.0.weight", "backbone.features.0.6.block.3.1.weight", "backbone.features.0.6.block.3.1.bias", "backbone.features.0.6.block.3.1.running_mean", "backbone.features.0.6.block.3.1.running_var", "backbone.features.0.7.block.0.0.weight", "backbone.features.0.7.block.0.1.weight", "backbone.features.0.7.block.0.1.bias", "backbone.features.0.7.block.0.1.running_mean", "backbone.features.0.7.block.0.1.running_var", "backbone.features.0.7.block.1.0.weight", "backbone.features.0.7.block.1.1.weight", "backbone.features.0.7.block.1.1.bias", "backbone.features.0.7.block.1.1.running_mean", "backbone.features.0.7.block.1.1.running_var", "backbone.features.0.7.block.2.0.weight", "backbone.features.0.7.block.2.1.weight", "backbone.features.0.7.block.2.1.bias", "backbone.features.0.7.block.2.1.running_mean", "backbone.features.0.7.block.2.1.running_var", "backbone.features.0.8.block.0.0.weight", "backbone.features.0.8.block.0.1.weight", "backbone.features.0.8.block.0.1.bias", "backbone.features.0.8.block.0.1.running_mean", "backbone.features.0.8.block.0.1.running_var", "backbone.features.0.8.block.1.0.weight", "backbone.features.0.8.block.1.1.weight", "backbone.features.0.8.block.1.1.bias", "backbone.features.0.8.block.1.1.running_mean", "backbone.features.0.8.block.1.1.running_var", "backbone.features.0.8.block.2.0.weight", "backbone.features.0.8.block.2.1.weight", "backbone.features.0.8.block.2.1.bias", "backbone.features.0.8.block.2.1.running_mean", "backbone.features.0.8.block.2.1.running_var", "backbone.features.0.9.block.0.0.weight", "backbone.features.0.9.block.0.1.weight", "backbone.features.0.9.block.0.1.bias", 
"backbone.features.0.9.block.0.1.running_mean", "backbone.features.0.9.block.0.1.running_var", "backbone.features.0.9.block.1.0.weight", "backbone.features.0.9.block.1.1.weight", "backbone.features.0.9.block.1.1.bias", "backbone.features.0.9.block.1.1.running_mean", "backbone.features.0.9.block.1.1.running_var", "backbone.features.0.9.block.2.0.weight", "backbone.features.0.9.block.2.1.weight", "backbone.features.0.9.block.2.1.bias", "backbone.features.0.9.block.2.1.running_mean", "backbone.features.0.9.block.2.1.running_var", "backbone.features.0.10.block.0.0.weight", "backbone.features.0.10.block.0.1.weight", "backbone.features.0.10.block.0.1.bias", "backbone.features.0.10.block.0.1.running_mean", "backbone.features.0.10.block.0.1.running_var", "backbone.features.0.10.block.1.0.weight", "backbone.features.0.10.block.1.1.weight", "backbone.features.0.10.block.1.1.bias", "backbone.features.0.10.block.1.1.running_mean", "backbone.features.0.10.block.1.1.running_var", "backbone.features.0.10.block.2.0.weight", "backbone.features.0.10.block.2.1.weight", "backbone.features.0.10.block.2.1.bias", "backbone.features.0.10.block.2.1.running_mean", "backbone.features.0.10.block.2.1.running_var", "backbone.features.0.11.block.0.0.weight", "backbone.features.0.11.block.0.1.weight", "backbone.features.0.11.block.0.1.bias", "backbone.features.0.11.block.0.1.running_mean", "backbone.features.0.11.block.0.1.running_var", "backbone.features.0.11.block.1.0.weight", "backbone.features.0.11.block.1.1.weight", "backbone.features.0.11.block.1.1.bias", "backbone.features.0.11.block.1.1.running_mean", "backbone.features.0.11.block.1.1.running_var", "backbone.features.0.11.block.2.fc1.weight", "backbone.features.0.11.block.2.fc1.bias", "backbone.features.0.11.block.2.fc2.weight", "backbone.features.0.11.block.2.fc2.bias", "backbone.features.0.11.block.3.0.weight", "backbone.features.0.11.block.3.1.weight", "backbone.features.0.11.block.3.1.bias", 
"backbone.features.0.11.block.3.1.running_mean", "backbone.features.0.11.block.3.1.running_var", "backbone.features.0.12.block.0.0.weight", "backbone.features.0.12.block.0.1.weight", "backbone.features.0.12.block.0.1.bias", "backbone.features.0.12.block.0.1.running_mean", "backbone.features.0.12.block.0.1.running_var", "backbone.features.0.12.block.1.0.weight", "backbone.features.0.12.block.1.1.weight", "backbone.features.0.12.block.1.1.bias", "backbone.features.0.12.block.1.1.running_mean", "backbone.features.0.12.block.1.1.running_var", "backbone.features.0.12.block.2.fc1.weight", "backbone.features.0.12.block.2.fc1.bias", "backbone.features.0.12.block.2.fc2.weight", "backbone.features.0.12.block.2.fc2.bias", "backbone.features.0.12.block.3.0.weight", "backbone.features.0.12.block.3.1.weight", "backbone.features.0.12.block.3.1.bias", "backbone.features.0.12.block.3.1.running_mean", "backbone.features.0.12.block.3.1.running_var", "backbone.features.0.13.0.weight", "backbone.features.0.13.1.weight", "backbone.features.0.13.1.bias", "backbone.features.0.13.1.running_mean", "backbone.features.0.13.1.running_var", "backbone.features.1.0.1.0.weight", "backbone.features.1.0.1.1.weight", "backbone.features.1.0.1.1.bias", "backbone.features.1.0.1.1.running_mean", "backbone.features.1.0.1.1.running_var", "backbone.features.1.0.2.fc1.weight", "backbone.features.1.0.2.fc1.bias", "backbone.features.1.0.2.fc2.weight", "backbone.features.1.0.2.fc2.bias", "backbone.features.1.0.3.0.weight", "backbone.features.1.0.3.1.weight", "backbone.features.1.0.3.1.bias", "backbone.features.1.0.3.1.running_mean", "backbone.features.1.0.3.1.running_var", "backbone.features.1.1.block.0.0.weight", "backbone.features.1.1.block.0.1.weight", "backbone.features.1.1.block.0.1.bias", "backbone.features.1.1.block.0.1.running_mean", "backbone.features.1.1.block.0.1.running_var", "backbone.features.1.1.block.1.0.weight", "backbone.features.1.1.block.1.1.weight", "backbone.features.1.1.block.1.1.bias", 
"backbone.features.1.1.block.1.1.running_mean", "backbone.features.1.1.block.1.1.running_var", "backbone.features.1.1.block.2.fc1.weight", "backbone.features.1.1.block.2.fc1.bias", "backbone.features.1.1.block.2.fc2.weight", "backbone.features.1.1.block.2.fc2.bias", "backbone.features.1.1.block.3.0.weight", "backbone.features.1.1.block.3.1.weight", "backbone.features.1.1.block.3.1.bias", "backbone.features.1.1.block.3.1.running_mean", "backbone.features.1.1.block.3.1.running_var", "backbone.features.1.2.block.0.0.weight", "backbone.features.1.2.block.0.1.weight", "backbone.features.1.2.block.0.1.bias", "backbone.features.1.2.block.0.1.running_mean", "backbone.features.1.2.block.0.1.running_var", "backbone.features.1.2.block.1.0.weight", "backbone.features.1.2.block.1.1.weight", "backbone.features.1.2.block.1.1.bias", "backbone.features.1.2.block.1.1.running_mean", "backbone.features.1.2.block.1.1.running_var", "backbone.features.1.2.block.2.fc1.weight", "backbone.features.1.2.block.2.fc1.bias", "backbone.features.1.2.block.2.fc2.weight", "backbone.features.1.2.block.2.fc2.bias", "backbone.features.1.2.block.3.0.weight", "backbone.features.1.2.block.3.1.weight", "backbone.features.1.2.block.3.1.bias", "backbone.features.1.2.block.3.1.running_mean", "backbone.features.1.2.block.3.1.running_var", "backbone.features.1.3.0.weight", "backbone.features.1.3.1.weight", "backbone.features.1.3.1.bias", "backbone.features.1.3.1.running_mean", "backbone.features.1.3.1.running_var", "backbone.extra.0.0.0.weight", "backbone.extra.0.0.1.weight", "backbone.extra.0.0.1.bias", "backbone.extra.0.0.1.running_mean", "backbone.extra.0.0.1.running_var", "backbone.extra.0.1.0.weight", "backbone.extra.0.1.1.weight", "backbone.extra.0.1.1.bias", "backbone.extra.0.1.1.running_mean", "backbone.extra.0.1.1.running_var", "backbone.extra.0.2.0.weight", "backbone.extra.0.2.1.weight", "backbone.extra.0.2.1.bias", "backbone.extra.0.2.1.running_mean", "backbone.extra.0.2.1.running_var", 
"backbone.extra.1.0.0.weight", "backbone.extra.1.0.1.weight", "backbone.extra.1.0.1.bias", "backbone.extra.1.0.1.running_mean", "backbone.extra.1.0.1.running_var", "backbone.extra.1.1.0.weight", "backbone.extra.1.1.1.weight", "backbone.extra.1.1.1.bias", "backbone.extra.1.1.1.running_mean", "backbone.extra.1.1.1.running_var", "backbone.extra.1.2.0.weight", "backbone.extra.1.2.1.weight", "backbone.extra.1.2.1.bias", "backbone.extra.1.2.1.running_mean", "backbone.extra.1.2.1.running_var", "backbone.extra.2.0.0.weight", "backbone.extra.2.0.1.weight", "backbone.extra.2.0.1.bias", "backbone.extra.2.0.1.running_mean", "backbone.extra.2.0.1.running_var", "backbone.extra.2.1.0.weight", "backbone.extra.2.1.1.weight", "backbone.extra.2.1.1.bias", "backbone.extra.2.1.1.running_mean", "backbone.extra.2.1.1.running_var", "backbone.extra.2.2.0.weight", "backbone.extra.2.2.1.weight", "backbone.extra.2.2.1.bias", "backbone.extra.2.2.1.running_mean", "backbone.extra.2.2.1.running_var", "backbone.extra.3.0.0.weight", "backbone.extra.3.0.1.weight", "backbone.extra.3.0.1.bias", "backbone.extra.3.0.1.running_mean", "backbone.extra.3.0.1.running_var", "backbone.extra.3.1.0.weight", "backbone.extra.3.1.1.weight", "backbone.extra.3.1.1.bias", "backbone.extra.3.1.1.running_mean", "backbone.extra.3.1.1.running_var", "backbone.extra.3.2.0.weight", "backbone.extra.3.2.1.weight", "backbone.extra.3.2.1.bias", "backbone.extra.3.2.1.running_mean", "backbone.extra.3.2.1.running_var", "head.classification_head.module_list.0.0.0.weight", "head.classification_head.module_list.0.0.1.weight", "head.classification_head.module_list.0.0.1.bias", "head.classification_head.module_list.0.0.1.running_mean", "head.classification_head.module_list.0.0.1.running_var", "head.classification_head.module_list.0.1.weight", "head.classification_head.module_list.0.1.bias", "head.classification_head.module_list.1.0.0.weight", "head.classification_head.module_list.1.0.1.weight", 
"head.classification_head.module_list.1.0.1.bias", "head.classification_head.module_list.1.0.1.running_mean", "head.classification_head.module_list.1.0.1.running_var", "head.classification_head.module_list.1.1.weight", "head.classification_head.module_list.1.1.bias", "head.classification_head.module_list.2.0.0.weight", "head.classification_head.module_list.2.0.1.weight", "head.classification_head.module_list.2.0.1.bias", "head.classification_head.module_list.2.0.1.running_mean", "head.classification_head.module_list.2.0.1.running_var", "head.classification_head.module_list.2.1.weight", "head.classification_head.module_list.2.1.bias", "head.classification_head.module_list.3.0.0.weight", "head.classification_head.module_list.3.0.1.weight", "head.classification_head.module_list.3.0.1.bias", "head.classification_head.module_list.3.0.1.running_mean", "head.classification_head.module_list.3.0.1.running_var", "head.classification_head.module_list.3.1.weight", "head.classification_head.module_list.3.1.bias", "head.classification_head.module_list.4.0.0.weight", "head.classification_head.module_list.4.0.1.weight", "head.classification_head.module_list.4.0.1.bias", "head.classification_head.module_list.4.0.1.running_mean", "head.classification_head.module_list.4.0.1.running_var", "head.classification_head.module_list.4.1.weight", "head.classification_head.module_list.4.1.bias", "head.classification_head.module_list.5.0.0.weight", "head.classification_head.module_list.5.0.1.weight", "head.classification_head.module_list.5.0.1.bias", "head.classification_head.module_list.5.0.1.running_mean", "head.classification_head.module_list.5.0.1.running_var", "head.classification_head.module_list.5.1.weight", "head.classification_head.module_list.5.1.bias", "head.regression_head.module_list.0.0.0.weight", "head.regression_head.module_list.0.0.1.weight", "head.regression_head.module_list.0.0.1.bias", "head.regression_head.module_list.0.0.1.running_mean", 
"head.regression_head.module_list.0.0.1.running_var", "head.regression_head.module_list.0.1.weight", "head.regression_head.module_list.0.1.bias", "head.regression_head.module_list.1.0.0.weight", "head.regression_head.module_list.1.0.1.weight", "head.regression_head.module_list.1.0.1.bias", "head.regression_head.module_list.1.0.1.running_mean", "head.regression_head.module_list.1.0.1.running_var", "head.regression_head.module_list.1.1.weight", "head.regression_head.module_list.1.1.bias", "head.regression_head.module_list.2.0.0.weight", "head.regression_head.module_list.2.0.1.weight", "head.regression_head.module_list.2.0.1.bias", "head.regression_head.module_list.2.0.1.running_mean", "head.regression_head.module_list.2.0.1.running_var", "head.regression_head.module_list.2.1.weight", "head.regression_head.module_list.2.1.bias", "head.regression_head.module_list.3.0.0.weight", "head.regression_head.module_list.3.0.1.weight", "head.regression_head.module_list.3.0.1.bias", "head.regression_head.module_list.3.0.1.running_mean", "head.regression_head.module_list.3.0.1.running_var", "head.regression_head.module_list.3.1.weight", "head.regression_head.module_list.3.1.bias", "head.regression_head.module_list.4.0.0.weight", "head.regression_head.module_list.4.0.1.weight", "head.regression_head.module_list.4.0.1.bias", "head.regression_head.module_list.4.0.1.running_mean", "head.regression_head.module_list.4.0.1.running_var", "head.regression_head.module_list.4.1.weight", "head.regression_head.module_list.4.1.bias", "head.regression_head.module_list.5.0.0.weight", "head.regression_head.module_list.5.0.1.weight", "head.regression_head.module_list.5.0.1.bias", "head.regression_head.module_list.5.0.1.running_mean", "head.regression_head.module_list.5.0.1.running_var", "head.regression_head.module_list.5.1.weight", "head.regression_head.module_list.5.1.bias".
Unexpected key(s) in state_dict: "state_dict", "optimizer_state_dict", "epoch", "config".
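The "Unexpected key(s)" message above suggests the .pth file is a full training checkpoint (weights, optimizer state, epoch, config) rather than bare model weights. A hedged sketch of a workaround, assuming that layout, is to unwrap the nested state dict before loading:

```python
import torch

# Sketch: training checkpoints often nest the model weights under
# "state_dict" next to optimizer state and metadata; unwrap if so.
def load_model_weights(path, map_location="cpu"):
    ckpt = torch.load(path, map_location=map_location)
    if isinstance(ckpt, dict) and "state_dict" in ckpt:
        return ckpt["state_dict"]
    return ckpt
```

The returned dict can then be passed to model.load_state_dict().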
When I test the pretrained ResNet18 with gesture images not in the test dataset, it seems the model can't recognize even the simplest gestures like one, ok, and six.
Here are the outputs of the model when I feed a new gesture picture into it.
{'gesture': tensor([[ 0.0327, 0.0073, 0.0662, -0.0611, 0.0772, 0.1809, 0.0938, 0.1073, -0.0719, -0.1703, 0.2205, -0.1140, -0.1982, 0.1579, -0.3652, -0.2734, 0.1343, -0.2151, 0.5438]], grad_fn=<AddmmBackward0>), 'leading_hand': tensor([[ 0.6901, -0.6721]], grad_fn=<AddmmBackward0>)}
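The tensors above are raw logits, not probabilities, so they need a softmax and argmax before they mean anything. A minimal sketch (the class-name ordering is an assumption and must match the training label order):

```python
import math

# Turn raw logits into (predicted class, confidence) via a stable softmax.
def predict_label(logits, class_names):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return class_names[best], probs[best]
```

Note that a low top probability on out-of-distribution photos is expected; it does not necessarily mean the weights failed to load.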
The image sets are very big. Do you have a resized version, or images of just the cropped hands?
(hands_ai_proj) PS C:\Users\kauti\Desktop\hands_ai_projv2\hands_ai_proj\hagrid-master> python .\demo.py -p .\SSDLite_MobilenetV3_small.pth --landmarks
Traceback (most recent call last):
File "C:\Users\kauti\Desktop\hands_ai_projv2\hands_ai_proj\hagrid-master\demo.py", line 148, in <module>
conf = OmegaConf.load(args.path_to_config)
File "C:\Users\kauti\.virtualenvs\hands_ai_proj-Ab44cTRZ\lib\site-packages\omegaconf\omegaconf.py", line 184, in load
obj = yaml.load(f, Loader=get_yaml_loader())
File "C:\Users\kauti\.virtualenvs\hands_ai_proj-Ab44cTRZ\lib\site-packages\yaml\__init__.py", line 79, in load
loader = Loader(stream)
File "C:\Users\kauti\.virtualenvs\hands_ai_proj-Ab44cTRZ\lib\site-packages\yaml\loader.py", line 34, in __init__
Reader.__init__(self, stream)
File "C:\Users\kauti\.virtualenvs\hands_ai_proj-Ab44cTRZ\lib\site-packages\yaml\reader.py", line 85, in __init__
self.determine_encoding()
File "C:\Users\kauti\.virtualenvs\hands_ai_proj-Ab44cTRZ\lib\site-packages\yaml\reader.py", line 124, in determine_encoding
self.update_raw()
File "C:\Users\kauti\.virtualenvs\hands_ai_proj-Ab44cTRZ\lib\site-packages\yaml\reader.py", line 178, in update_raw
data = self.stream.read(size)
File "C:\Users\kauti\miniconda3\envs\threenine\lib\codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
I'm running this on Windows 11, on CPU, in a conda environment.
I'm new here. I want to use YOLOv7 to train my model, but I don't know how to convert those JSON annotations to YOLO (.txt) format.
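A conversion sketch, assuming HaGRID-style normalized [x_min, y_min, width, height] boxes (top-left corner, values in 0..1); YOLO .txt lines are "class_id x_center y_center width height", also normalized:

```python
# Sketch: convert one normalized top-left [x, y, w, h] box to a YOLO line.
# The top-left-corner box convention is an assumption about the JSON.
def hagrid_box_to_yolo(class_id, box):
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2  # YOLO wants the box center
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"
```

For each image, write one such line per box into a <image_stem>.txt file next to the image, with a consistent class-id mapping for the gesture names.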
Hello,
first of all I wanted to thank you for the amazing work you did so far!
I'm opening this issue to report a problem I hit while trying to use the pretrained ResNet-based models whose weights you made available for download. In particular, the predictions made by the Linear layer applied after the model "backbone" seem random: while the components of the backbone have their pretrained weights, the final Linear layer is seemingly instantiated at random (I'm referring to lines 68-70 of "classifier/models/resnet.py").
Am I doing something wrong, am I missing or misunderstanding something, or is it an oversight on your side?
Thanks for your attention.
Hi, I am trying to access the pre-trained models but it seems that all links are broken.
The intersection of v1 and v2 is very large, and the whole dataset, at 723 GB, takes too long to download.
I already have the v1 data. Where can I download only the newly added images?
@hukenovs
The link to download the subsample of images is broken (https://sc.link/AO5l): the download maxes out at 1 GB. Is there any way to fix this?
Is it necessary to have landmarks when training new data?
Getting this error while downloading dataset
Need help: training always stops at step 22 and shows the error below:
File "/usr/local/lib/python3.7/dist-packages/matplotlib/cm.py", line 477, in set_array
raise TypeError(f"Image data of dtype {A.dtype} cannot be "
TypeError: Image data of dtype object cannot be converted to float
Also, I'm still new to machine learning and was wondering where I should put the .pth file if I choose to train the datasets using a pretrained model.
The issue in detail is as follows:
[LINE:84] INFO [2023-07-23 02:56:07,016] Current device: cuda
[LINE:223] INFO [2023-07-23 02:56:07,486] Epoch: 0
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\Pytorch\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\ProgramData\Anaconda3\envs\Pytorch\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "D:\HaGRID\hagrid\classifier\run.py", line 105, in
_run_train(args.path_to_config)
File "D:\HaGRID\hagrid\classifier\run.py", line 85, in _run_train
TrainClassifier.train(model, conf, train_dataset, test_dataset)
File "D:\HaGRID\hagrid\classifier\train.py", line 224, in train
TrainClassifier.run_epoch(
File "D:\HaGRID\hagrid\classifier\train.py", line 124, in run_epoch
add_params_to_tensorboard(writer, optimizer_params, epoch, "optimizer", {"params"})
File "D:\HaGRID\hagrid\classifier\utils.py", line 59, in add_params_to_tensorboard
writer.add_scalar(f"{obj}/{param}", value, epoch)
File "C:\ProgramData\Anaconda3\envs\Pytorch\lib\site-packages\torch\utils\tensorboard\writer.py", line 387, in add_scalar
summary = scalar(
File "C:\ProgramData\Anaconda3\envs\Pytorch\lib\site-packages\torch\utils\tensorboard\summary.py", line 279, in scalar
scalar = make_np(scalar)
File "C:\ProgramData\Anaconda3\envs\Pytorch\lib\site-packages\torch\utils\tensorboard_convert_np.py", line 24, in make_np
raise NotImplementedError(
NotImplementedError: Got <class 'NoneType'>, but numpy array, torch tensor, or caffe2 blob name are expected.
How should I deal with it? I am looking forward to your reply. Thanks for your contributions.
Hi,
Do you happen to have the training script for retraining the SSDLite classification head? I'd like to add another class.
For training, what should the targets consist of? Thanks!
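For torchvision-style detection models (SSDLite included), training targets are one dict per image with absolute-pixel xyxy boxes and integer class labels (0 reserved for background). A small sketch of building one target:

```python
import torch

# Sketch: one torchvision detection target per image.
# boxes: float32 [N, 4] in absolute (x1, y1, x2, y2) pixels
# labels: int64 [N], with 0 conventionally reserved for background
def make_target(boxes_xyxy, labels):
    return {
        "boxes": torch.as_tensor(boxes_xyxy, dtype=torch.float32).reshape(-1, 4),
        "labels": torch.as_tensor(labels, dtype=torch.int64),
    }
```

The model's forward in training mode then takes (list_of_images, list_of_targets) and returns the loss dict.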
Hi. I tried to train the model with your command "python -m detector.run --command 'train' --path_to_config ", but I hit a bug when training on GPU (device: 'cuda'). I finally fixed it by changing some code in "hagrid-master/detector/train.py":
line 128: model.to(conf.device)
I have my own dataset on which I want to retrain the provided pretrained model. I set the path to the pretrained model downloaded from the README.md, but I am getting this error:
[LINE:118] INFO [2023-09-26 11:31:35,803] Database for no_gesture not found
[LINE:118] INFO [2023-09-26 11:31:35,958] Database for no_gesture not found
[LINE:83] INFO [2023-09-26 11:31:35,966] Current device: cpu
Traceback (most recent call last):
File "/home/hitech/anaconda3/envs/gestures/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/hitech/anaconda3/envs/gestures/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/hitech/Programming/hagrid/detector/run.py", line 104, in <module>
_run_train(args.path_to_config)
File "/home/hitech/Programming/hagrid/detector/run.py", line 84, in _run_train
TrainDetector.train(model, conf, train_dataset, test_dataset)
File "/home/hitech/Programming/hagrid/detector/train.py", line 125, in train
params = [p for p in model.parameters() if p.requires_grad]
AttributeError: 'NoneType' object has no attribute 'parameters'
Hello,
Thanks for the great and extensive dataset. You talk about combining the 18+1 pre-existing classes of gestures to create dynamic gestures like swipe. What I am wondering is: is there a feasible way to extend the 18 existing classes by using the trained models?
Assume I introduce a 19th gesture and add training samples for it. Would it be possible to create a new classifier that takes the last layers of your model as input? I believe in that case I would have to discard the final layer, since it maps to one of the 19 classes and thus creates a bottleneck. But I still believe the last N-1 to N-x layers hold some intrinsic information about the positioning of the hand that my new classifier could take as input.
Looking forward to any ideas for this task.
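The idea described above, reusing the trained layers as a frozen feature extractor and training only a new head for the extra class, can be sketched with a toy network (layer sizes and structure here are illustrative assumptions, not the repo's actual architecture):

```python
import torch
import torch.nn as nn

# Hedged transfer-learning sketch: drop the old 19-way output layer, freeze
# the remaining feature layers, and train only a new head with one extra class.
pretrained = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),  # stands in for the trained backbone layers
    nn.Linear(64, 19),              # original 18+1-class output layer
)

# Keep everything except the final classifier, and freeze it.
feature_extractor = pretrained[:-1]
for p in feature_extractor.parameters():
    p.requires_grad = False

new_head = nn.Linear(64, 20)  # 19 old classes + 1 new gesture
model = nn.Sequential(feature_extractor, new_head)

x = torch.randn(2, 128)
logits = model(x)
```

Only the new head's parameters remain trainable, so the optimizer updates just the new classifier while the frozen layers keep their learned hand-pose features.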
usage: Convert Hagrid annotations to Yolo annotations format [--bbox_format BBOX_FORMAT] [--cfg CFG]
Convert Hagrid annotations to Yolo annotations format: error: unrecognized arguments: --path_to_config D:\deep\hagrid-master\ann_test
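For context, the box arithmetic such a conversion script performs is simple. A hedged sketch, assuming HAGRID stores normalized [top-left x, top-left y, width, height] boxes while YOLO expects normalized [center x, center y, width, height]:

```python
# Hedged sketch of the HAGRID-to-YOLO bbox conversion; the input format
# ([top-left x, top-left y, width, height], normalized) is an assumption
# based on the dataset description, not the script's actual code.
def hagrid_bbox_to_yolo(bbox):
    x, y, w, h = bbox
    return [x + w / 2, y + h / 2, w, h]

# One YOLO label line: "<class_id> cx cy w h", all values normalized to [0, 1].
label_line = "{} {:.6f} {:.6f} {:.6f} {:.6f}".format(
    0, *hagrid_bbox_to_yolo([0.1, 0.2, 0.4, 0.3])
)
```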
Hello, HAGRID is great work.
I noticed that the 21 keypoints in HAGRID v1 were annotated with MediaPipe. Was there any manual verification? I found that some keypoints are labeled incorrectly or in the wrong order, as shown in the figure.
Were the landmarks removed from the v2 annotations?
Hi, according to the paper, the step decay in the detection experiment happens every 3 epochs. Is this supposed to be 30 epochs? Otherwise, wouldn't the learning rate become almost negligible?
Thanks.
Hello, I would like to know the mAP value on the subsample dataset so I can verify that I am running the code correctly. Thanks.
Hi,
In your README, you mention:
We provide some pre-trained models as the baseline with the classic backbone architectures and two output heads - for gesture classification and leading hand classification.
I am trying to run ResNet18 for leading-hand classification; how should I configure the YAML file?
Is the ResNet18 .pth model file you provide trained for leading-hand classification?
I tried to modify the YAML file to use ResNet18 for leading-hand classification, but the code reports an error.
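For context, the "two output heads" mentioned in the README quote above can be sketched as a shared backbone feeding two classifiers. Layer names, sizes, and class counts here are illustrative assumptions, not the repo's actual model code:

```python
import torch
import torch.nn as nn

# Hedged sketch of a two-head classifier: one shared feature extractor
# (standing in for a ResNet18 backbone) with separate heads for gesture
# classification and leading-hand classification.
class TwoHeadClassifier(nn.Module):
    def __init__(self, backbone_out=512, num_gestures=19, num_hands=2):
        super().__init__()
        # stand-in for a ResNet18 backbone truncated before its fc layer
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, backbone_out), nn.ReLU()
        )
        self.gesture_head = nn.Linear(backbone_out, num_gestures)
        self.leading_hand_head = nn.Linear(backbone_out, num_hands)

    def forward(self, x):
        feats = self.backbone(x)
        return self.gesture_head(feats), self.leading_hand_head(feats)

model = TwoHeadClassifier()
gestures, hands = model(torch.randn(2, 3, 32, 32))
```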
Hi. I wonder if you could release your YOLOv7 checkpoint so we can fine-tune the detection model, since I can only get the ONNX file from your link. I would also appreciate it if you could tell us how to run training on YOLOv7. Thanks a lot!
Hello,
Thank you for building a large-scale dataset for hand gesture recognition; I appreciate the effort. Could you please take a look at the training error I am receiving? I am using the ResNet18 model and have modified the paths in the config file accordingly.
Training starts well but runs into a dataloader issue: it cannot get the annotations for the training dataloader. The error stack is in the picture. Any help would be appreciated.
Thank you.
Hello,
Could you please provide the PyTorch .pth model for YOLOv7-tiny? We need to add batch processing.
Many thanks!
-Scott
Requesting the ONNX file for the SSDLite.pth detector provided in the description. Thanks.
I can't download any of the dataset files; I keep getting an "Operation timed out" error.
HAGRID's license is the Creative Commons Attribution-ShareAlike 4.0 International License, right?
What about the license of each individual image? Are all images under CC BY-SA 4.0?
OS: Windows 10
Shell: cmd
Python: 3.11
Description:
I ran "python demo.py -p .\configs\SSDLiteMobileNetV3Small.yaml --landmarks", but only the hand skeleton is drawn, without any gesture being detected. How can I fix this?
English isn't my native language; pardon my grammar and spelling mistakes. Thanks a lot.
Hi! Which preprocessing should I use for non-fullframe classifiers? Do you have any examples of their usage in your demo?
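A typical preprocessing step for crop-based (non-full-frame) classifiers is to expand the detector's hand box, clamp it to the image, and crop that region before resizing to the classifier's input size. A minimal sketch; the square-crop strategy and the padding factor are assumptions, not documented values from this repo:

```python
# Hedged sketch: turn a detector box into a padded, square, image-clamped
# crop region suitable for a hand-crop classifier.
def padded_crop_box(bbox, img_w, img_h, pad=0.3):
    x, y, w, h = bbox                      # absolute pixels, top-left + size
    cx, cy = x + w / 2, y + h / 2          # box center
    side = max(w, h) * (1 + pad)           # square side with padding
    x0 = max(0, int(cx - side / 2))        # clamp to image bounds
    y0 = max(0, int(cy - side / 2))
    x1 = min(img_w, int(cx + side / 2))
    y1 = min(img_h, int(cy + side / 2))
    return x0, y0, x1, y1

box = padded_crop_box((100, 120, 80, 60), img_w=640, img_h=480)
```

The returned corners can then be used to slice the frame (e.g. frame[y0:y1, x0:x1]) before resizing and normalizing for the classifier.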