
keypoint_communities's People

Contributors

ak391, duncanzauss


keypoint_communities's Issues

Questions regarding the application of this paper/code.

Hello, thank you for the open-source code and great work!
I have the following questions:

1) Can we apply this code to a use case as follows: a static camera observes 2-3 moving robots in its field of view. If I retrain the network on images of the robot with its corresponding ground-truth keypoints, the network can still predict the 2D keypoints of the moving robot, right? (See the annotation sketch at the end of this issue.)

2) For this, does the robot have to be at a particular distance from the camera so that the keypoint estimation is accurate enough? In other words, does the network's accuracy depend on how far the object is from the camera?

3) Also, can the network be used in a case where both the camera observing the scene and the object whose pose is to be estimated are moving? Will the network's accuracy be affected in this case?

Any suggestions/replies are greatly appreciated!
Thank you
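
Since openpifpaf models are usually retrained from COCO-style keypoint annotations, a practical first step for question 1 is to express the robot ground truth in that format. The following is a minimal, hypothetical sketch; the file name, keypoint names, coordinates, and skeleton are invented for illustration and are not part of this repository.

# Hypothetical COCO-style keypoint annotations for a custom "robot"
# category; every name and number here is made up for illustration.
robot_dataset = {
    "images": [
        {"id": 1, "file_name": "robot_0001.jpg", "width": 1280, "height": 720},
    ],
    "categories": [
        {
            "id": 1,
            "name": "robot",
            "keypoints": ["base", "shoulder", "elbow", "gripper"],  # example joint names
            "skeleton": [[1, 2], [2, 3], [3, 4]],  # 1-indexed keypoint pairs
        },
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # flattened [x, y, visibility] triplets, one per keypoint
            "keypoints": [640, 360, 2, 700, 300, 2, 760, 320, 2, 800, 340, 1],
            "num_keypoints": 4,
            "bbox": [600, 280, 220, 120],  # [x, y, width, height]
            "area": 220 * 120,
            "iscrowd": 0,
        },
    ],
}

With ground truth in this shape, the existing COCO-style training pipeline can in principle be pointed at the robot data; how well accuracy holds up at different distances or with a moving camera (questions 2 and 3) depends largely on whether the training images cover those scales and motions.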

can't run the demo

Hi there, I can't run the webcam demo or the demo with an image source. Not sure what's happening.

Working Environment 
OS: macOS 12.1 21C52 x86_64 
Host: iMac19,1
Kernel: 21.2.0
CPU: Intel i5-8500 (6) @ 3.00GHz
GPU: Radeon Pro 570X

I used miniconda to install this repo. Not sure whether it needs CUDA support.
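
For what it's worth, the image demo should not require CUDA; a quick way to check whether inference works at all on a CPU-only Mac is the Python Predictor API. A minimal sketch, assuming the repository's docs/soccer.jpeg example image is available locally and the checkpoint can be downloaded:

# Minimal CPU smoke test for openpifpaf; no CUDA is needed for inference.
# Assumes openpifpaf and Pillow are installed and docs/soccer.jpeg exists.
import PIL.Image
import openpifpaf

predictor = openpifpaf.Predictor(checkpoint='shufflenetv2k30-wholebody')
image = PIL.Image.open('docs/soccer.jpeg').convert('RGB')  # any local test image works
predictions, gt_anns, image_meta = predictor.pil_image(image)
print(f'{len(predictions)} pose(s) detected')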

Prediction result for soccer.jpeg is not as good as the shown image

(Screenshot from 2021-11-09 23-33-13 attached.)

I used the same command, python -m openpifpaf.predict docs/soccer.jpeg --checkpoint=shufflenetv2k30-wholebody --line-width=2 --show, to run the prediction, but the hands of the front person are not aligned as in your shown image.
Did you use a different weight? Or are there any other problems?
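
One way to narrow this down is to look at the per-keypoint confidences of the detections rather than only at the rendered image. A hedged sketch, assuming each annotation's .data array holds one (x, y, confidence) row per keypoint, as in recent openpifpaf releases:

# Print how many keypoints per detected pose have non-zero confidence,
# to compare a local run against the reference soccer.jpeg result.
# Assumption: ann.data is an (n_keypoints, 3) array of x, y, confidence.
import PIL.Image
import openpifpaf

predictor = openpifpaf.Predictor(checkpoint='shufflenetv2k30-wholebody')
image = PIL.Image.open('docs/soccer.jpeg').convert('RGB')
predictions, _, _ = predictor.pil_image(image)

for i, ann in enumerate(predictions):
    confidences = ann.data[:, 2]  # third column: per-keypoint confidence
    print(f'pose {i}: {int((confidences > 0).sum())} keypoints with confidence > 0')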

Common model for human and car keypoints

Hi, thanks for sharing!
Can you please elaborate more: in your second demo video we can see that you detected human and vehicle keypoints in each frame. Was it two different models or a single model? If it is a single model, please let me know where to get it.
Thank you in advance.
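
For context, openpifpaf also ships vehicle-keypoint checkpoints, so one plausible way to get both kinds of keypoints in each frame is to run two predictors side by side. A sketch; the car checkpoint name 'shufflenetv2k16-apollo-24' is my assumption and may differ from whatever was used in the demo video:

# Run a human wholebody predictor and a car-keypoint predictor on the
# same frame. 'shufflenetv2k16-apollo-24' is an assumed checkpoint name.
import PIL.Image
import openpifpaf

human_predictor = openpifpaf.Predictor(checkpoint='shufflenetv2k30-wholebody')
car_predictor = openpifpaf.Predictor(checkpoint='shufflenetv2k16-apollo-24')

frame = PIL.Image.open('frame_0001.jpg').convert('RGB')  # placeholder frame file
human_poses, _, _ = human_predictor.pil_image(frame)
car_poses, _, _ = car_predictor.pil_image(frame)
print(f'{len(human_poses)} human pose(s), {len(car_poses)} car pose(s)')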

Problems during operation

Hello author, I'm trying to re-implement your code, but I'm encountering problems in the process.
First, I need to state that I successfully installed the environment and ran the first step: python Compute_edge_weights.py
Secondly, after running this command (python Compute_training_weights.py), the following error is reported:

Traceback (most recent call last):
  File "/home/hcb/Keypoint_Communities/src/Compute_training_weights.py", line 314, in <module>
    create_weights_wholebody()
  File "/home/hcb/Keypoint_Communities/src/Compute_training_weights.py", line 192, in create_weights_wholebody
    draw_skeletons_wb(WHOLEBODY_STANDING_POSE, inverse_normalize(w_harm_cl_euclid, kps=kps),
  File "/home/hcb/Keypoint_Communities/src/Compute_training_weights.py", line 60, in draw_skeletons_wb
    from openpifpaf.annotation import Annotation  # pylint: disable=import-outside-toplevel
  File "/home/hcb/anaconda3/envs/keypoint/lib/python3.8/site-packages/openpifpaf/__init__.py", line 11, in <module>
    cpp_extension.register_ops()
  File "/home/hcb/anaconda3/envs/keypoint/lib/python3.8/site-packages/openpifpaf/cpp_extension.py", line 26, in register_ops
    torch.ops.load_library(ext_specs.origin)
  File "/home/hcb/anaconda3/envs/keypoint/lib/python3.8/site-packages/torch/_ops.py", line 104, in load_library
    ctypes.CDLL(path)
  File "/home/hcb/anaconda3/envs/keypoint/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /home/hcb/anaconda3/envs/keypoint/lib/python3.8/site-packages/openpifpaf/_cpp.so: undefined symbol: _ZN5torch6detail10class_baseC2ERKSsS3_SsRKSt9type_infoS6_

(I only changed the path; no other code was touched. The code can generate a folder named docs_wb, but it is empty. My directory is shown in the attached figure.)

cheers
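
An undefined-symbol error from openpifpaf's _cpp.so usually indicates that the compiled C++ extension was built against a different PyTorch version than the one currently installed in the environment. A small diagnostic sketch to print both versions so they can be checked against the repository's requirements (the fix suggested in the comment is a common remedy, not something confirmed by the authors):

# Print the installed torch and openpifpaf versions; an undefined-symbol
# error in openpifpaf/_cpp.so typically means these two were built or
# installed against mismatched versions.
import importlib.metadata

import torch

print('torch:', torch.__version__)
print('openpifpaf:', importlib.metadata.version('openpifpaf'))
# If the pairing does not match the repository's requirements, a common
# remedy is to reinstall openpifpaf after the matching torch is in place:
#   pip install --force-reinstall --no-cache-dir openpifpaf==0.13.0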

Error while installing openpifpaf

While installing from requirements.txt I got stuck on an issue regarding openpifpaf==0.13.0. It seems like the version cannot be found. Do you have a way to bypass that?

Issues running webcam/videos

I get the following errors when trying to run on a webcam or video source:

(kc) pablo@pablo-Alienware-Aurora-R7:~/0Dev/Keypoint_Communities$ python -m openpifpaf.video --source ../immersed-ganerated/data/iterim/test_vid3.webm --checkpoint=shufflenetv2k30-wholebody --line-width=2 --show
INFO:__main__:neural network device: cuda (CUDA available: True, count: 1)
INFO:openpifpaf.decoder.factory:No specific decoder requested. Using the first one from:
  --decoder=cifcaf:0
  --decoder=posesimilarity:0
Use any of the above arguments to select one or multiple decoders and to suppress this message.
INFO:openpifpaf.predictor:neural network device: cuda (CUDA available: True, count: 1)
INFO:openpifpaf.show.animation_frame:video output = None
Traceback (most recent call last):
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/openpifpaf/video.py", line 158, in <module>
    main()
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/openpifpaf/video.py", line 129, in main
    for (ax, ax_second), (preds, _, meta) in \
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/openpifpaf/predictor.py", line 112, in dataset
    yield from self.dataloader(dataloader)
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/openpifpaf/predictor.py", line 149, in dataloader
    yield from self.enumerated_dataloader(enumerate(dataloader))
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/openpifpaf/predictor.py", line 115, in enumerated_dataloader
    for batch_i, item in enumerated_dataloader:
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch
    data.append(next(self.dataset_iter))
  File "/home/pablo/miniconda3/envs/kc/lib/python3.8/site-packages/openpifpaf/stream.py", line 119, in __iter__
    capture = cv2.VideoCapture(self.source)
AttributeError: 'NoneType' object has no attribute 'VideoCapture'

I have no problem running the demo on images as shown in the readme; it's only with videos.
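
The last line of the traceback ('NoneType' object has no attribute 'VideoCapture') suggests that OpenCV could not be imported, so cv2 ended up as None inside openpifpaf's stream module. A quick check before rerunning the video command, under the assumption that a missing or broken opencv-python install is the cause:

# Verify that OpenCV is importable and that the video source can be opened;
# this assumes the failure comes from a missing/broken opencv-python install.
import cv2  # raises ImportError if OpenCV is not installed in this env

print('OpenCV version:', cv2.__version__)

capture = cv2.VideoCapture('../immersed-ganerated/data/iterim/test_vid3.webm')
print('video opened:', capture.isOpened())
capture.release()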

PytorchStreamReader failed reading zip archive: failed finding central directory

When I run the demo, I get an error like this:

INFO:__main__:neural network device: cuda (CUDA available: True, count: 2)
Traceback (most recent call last):
  File "/root/miniconda3/envs/key_com/lib/python3.8/runpy.py", line 192, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/envs/key_com/lib/python3.8/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/root/miniconda3/envs/key_com/lib/python3.8/site-packages/openpifpaf/predict.py", line 128, in <module>
    main()
  File "/root/miniconda3/envs/key_com/lib/python3.8/site-packages/openpifpaf/predict.py", line 103, in main
    predictor = Predictor(
  File "/root/miniconda3/envs/key_com/lib/python3.8/site-packages/openpifpaf/predictor.py", line 29, in __init__
    self.model_cpu, _ = network.Factory().factory(head_metas=head_metas)
  File "/root/miniconda3/envs/key_com/lib/python3.8/site-packages/openpifpaf/network/factory.py", line 302, in factory
    net_cpu, epoch = self.from_checkpoint()
  File "/root/miniconda3/envs/key_com/lib/python3.8/site-packages/openpifpaf/network/factory.py", line 366, in from_checkpoint
    checkpoint = torch.hub.load_state_dict_from_url(
  File "/root/miniconda3/envs/key_com/lib/python3.8/site-packages/torch/hub.py", line 590, in load_state_dict_from_url
    return torch.load(cached_file, map_location=map_location)
  File "/root/miniconda3/envs/key_com/lib/python3.8/site-packages/torch/serialization.py", line 600, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "/root/miniconda3/envs/key_com/lib/python3.8/site-packages/torch/serialization.py", line 242, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

My torch version is 1.10.0.
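
This error usually means the checkpoint file cached by torch hub was only partially downloaded, so torch.load cannot read it as a zip archive. A sketch for locating the cached checkpoints so a truncated file can be deleted and re-downloaded (the exact cache path can vary between setups):

# List the files in the torch hub checkpoint cache; a suspiciously small
# shufflenetv2k30-wholebody file is the usual culprit and can be deleted
# so the next run triggers a fresh download.
import os

import torch

cache_dir = os.path.join(torch.hub.get_dir(), 'checkpoints')
for name in sorted(os.listdir(cache_dir)):
    path = os.path.join(cache_dir, name)
    print(f'{os.path.getsize(path):>12} bytes  {path}')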

I can't run the demo

Hey, when I run python3 -m openpifpaf.predict docs/11790.jpg --checkpoint=shufflenetv2k30-wholebody --line-width=2 --show, it shows:
Traceback (most recent call last):
  File "/home/jiawen/anaconda3/envs/KPTCOM/lib/python3.9/runpy.py", line 188, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/home/jiawen/anaconda3/envs/KPTCOM/lib/python3.9/runpy.py", line 111, in _get_module_details
    __import__(pkg_name)
  File "/home/jiawen/anaconda3/envs/KPTCOM/lib/python3.9/site-packages/openpifpaf/__init__.py", line 11, in <module>
    cpp_extension.register_ops()
  File "/home/jiawen/anaconda3/envs/KPTCOM/lib/python3.9/site-packages/openpifpaf/cpp_extension.py", line 26, in register_ops
    torch.ops.load_library(ext_specs.origin)
  File "/home/jiawen/anaconda3/envs/KPTCOM/lib/python3.9/site-packages/torch/_ops.py", line 104, in load_library
    ctypes.CDLL(path)
  File "/home/jiawen/anaconda3/envs/KPTCOM/lib/python3.9/ctypes/__init__.py", line 382, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /home/jiawen/anaconda3/envs/KPTCOM/lib/python3.9/site-packages/openpifpaf/_cpp.so: undefined symbol: _ZN5torch6detail10class_baseC2ERKSsS3_SsRKSt9type_infoS6_
Could you please help me?

Weird prediction results on custom 24 kps car dataset

I have a dataset with 91 training & 9 validation images where 24 car keypoints are annotated, and the annotations are transformed to COCO format as in Apollo, where the keypoints array is of size 24*3.

I trained the shufflenetv2k16 24-kps model. Training seems correct, as shown below:

INFO:openpifpaf.network.trainer:{'type': 'train', 'epoch': 1, 'batch': 0, 'n_batches': 18, 'time': 0.471, 'data_time': 4.657, 'lr': 2e-05, 'loss': 3540.623, 'head_losses': [3.702, 310.73, 0.799, 205.145, 3015.488, 4.761]}
INFO:openpifpaf.network.trainer:{'type': 'train', 'epoch': 1, 'batch': 11, 'n_batches': 18, 'time': 0.434, 'data_time': 0.0, 'lr': 2e-05, 'loss': 3779.676, 'head_losses': [8.102, 372.573, 0.679, 149.955, 3243.949, 4.419]}
INFO:openpifpaf.network.trainer:applying ema
INFO:openpifpaf.network.trainer:{'type': 'train-epoch', 'epoch': 2, 'loss': 3874.88336, 'head_losses': [2.99886, 334.91886, 0.84728, 231.23997, 3298.75681, 6.1216], 'time': 14.2, 'n_clipped_grad': 0, 'max_norm': 0.0}
INFO:openpifpaf.network.trainer:restoring params from before ema

...

INFO:openpifpaf.network.trainer:{'type': 'train', 'epoch': 100, 'batch': 0, 'n_batches': 18, 'time': 0.625, 'data_time': 2.206, 'lr': 2e-05, 'loss': 2648.88, 'head_losses': [-40.191, 469.212, 0.059, 123.052, 2095.671, 1.078]}
INFO:openpifpaf.network.trainer:{'type': 'train', 'epoch': 100, 'batch': 11, 'n_batches': 18, 'time': 0.427, 'data_time': 0.0, 'lr': 2e-05, 'loss': 2616.817, 'head_losses': [-36.641, 429.025, 0.159, 120.99, 2102.774, 0.509]}
INFO:openpifpaf.network.trainer:applying ema
INFO:openpifpaf.network.trainer:{'type': 'train-epoch', 'epoch': 101, 'loss': 3276.90862, 'head_losses': [-45.2643, 351.73443, 0.60137, 196.80784, 2769.34305, 3.68621], 'time': 14.2, 'n_clipped_grad': 0, 'max_norm': 0.0}
INFO:openpifpaf.network.trainer:restoring params from before ema

...

INFO:openpifpaf.network.trainer:{'type': 'train', 'epoch': 199, 'batch': 0, 'n_batches': 18, 'time': 0.46, 'data_time': 4.563, 'lr': 2e-06, 'loss': 3563.018, 'head_losses': [-61.656, 281.741, 0.614, 203.755, 3135.36, 3.203]}
INFO:openpifpaf.network.trainer:{'type': 'train', 'epoch': 199, 'batch': 11, 'n_batches': 18, 'time': 0.435, 'data_time': 0.0, 'lr': 2e-06, 'loss': 3455.956, 'head_losses': [-54.041, 327.122, 0.471, 192.066, 2987.241, 3.097]}
INFO:openpifpaf.network.trainer:applying ema
INFO:openpifpaf.network.trainer:{'type': 'train-epoch', 'epoch': 200, 'loss': 3159.92377, 'head_losses': [-52.87015, 321.86455, 0.44325, 169.9907, 2718.06543, 2.42999], 'time': 20.8, 'n_clipped_grad': 0, 'max_norm': 0.0}
INFO:openpifpaf.network.trainer:model written: outputs/shufflenetv2k16-211021-024942-apollo.pkl.epoch200

...

INFO:openpifpaf.network.trainer:{'type': 'train', 'epoch': 299, 'batch': 0, 'n_batches': 18, 'time': 0.614, 'data_time': 6.553, 'lr': 2e-07, 'loss': 3374.045, 'head_losses': [-54.347, 392.751, 0.522, 170.645, 2860.723, 3.751]}
INFO:openpifpaf.network.trainer:{'type': 'train', 'epoch': 299, 'batch': 11, 'n_batches': 18, 'time': 0.443, 'data_time': 0.0, 'lr': 2e-07, 'loss': 2769.724, 'head_losses': [-43.393, 481.309, 0.127, 131.402, 2198.701, 1.578]}
INFO:openpifpaf.network.trainer:applying ema
INFO:openpifpaf.network.trainer:{'type': 'train-epoch', 'epoch': 300, 'loss': 3054.62056, 'head_losses': [-52.68392, 345.90691, 0.38009, 155.51899, 2603.45154, 2.04698], 'time': 17.1, 'n_clipped_grad': 0, 'max_norm': 0.0}
INFO:openpifpaf.network.trainer:model written: outputs/shufflenetv2k16-211021-024942-apollo.pkl.epoch300

However, all predictions are empty arrays.


The prediction code is shown below:

import os
import numpy as np
from PIL import Image
import cv2
import openpifpaf

if __name__ == "__main__":

    src_img_folder = './images/train'
    dst_img_folder = './res/train'
    weights = './outputs/shufflenetv2k16-211021-024942-apollo.pkl.epoch200'

    predictor = openpifpaf.Predictor(checkpoint=weights)

    img_names = os.listdir(src_img_folder)
    img_names = sorted(img_names)

    for name_idx, name_ in enumerate(img_names):

        I = Image.open(os.path.join(src_img_folder, name_)).convert('RGB')

        predictions, gt_anns, image_meta = predictor.pil_image(I)

        print(f"========== predictions: {predictions} ===============")
        print(f"========== image_meta: {image_meta} ==============")
