
alphapose_yolovx's Introduction

A Google Colab notebook for quick testing: https://colab.research.google.com/drive/1o9RhThxyxHr4P3n6a19UGmc5LTRn5zfp?usp=sharing

How to run:

For a step-by-step walkthrough, see my CSDN post: https://blog.csdn.net/qq_35975447/article/details/114940943

Run with the YOLOv5 detector:

python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_2x-dcn.yaml --checkpoint pretrained_models/fast_dcn_res50_256x192.pth --indir examples/demo/ --vis --showbox --save_img --pose_track --sp --vis_fast --detector yolov5

Run with the YOLOv4 detector:

python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_2x-dcn.yaml --checkpoint pretrained_models/fast_dcn_res50_256x192.pth --indir examples/demo/ --vis --showbox --save_img --pose_track --sp --vis_fast --detector yolov4

Run with the YOLOv3 detector:

python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_2x-dcn.yaml --checkpoint pretrained_models/fast_dcn_res50_256x192.pth --indir examples/demo/ --vis --showbox --save_img --pose_track --sp --vis_fast --detector yolov3

or run with the default yolo detector:

python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_2x-dcn.yaml --checkpoint pretrained_models/fast_dcn_res50_256x192.pth --indir examples/demo/ --vis --showbox --save_img --pose_track --sp --vis_fast --detector yolo
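Upstream AlphaPose's demo_inference.py also accepts video input via --video and --save_video; assuming this fork keeps those flags (unverified), a video run would look like the following, where examples/demo/sample.mp4 and examples/res are placeholder paths:

python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_2x-dcn.yaml --checkpoint pretrained_models/fast_dcn_res50_256x192.pth --video examples/demo/sample.mp4 --outdir examples/res --save_video --pose_track --sp --detector yolov5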

News!

  • Aug 2020: v0.4.0 of AlphaPose is released! Stronger tracking! Includes whole-body (face, hand, foot) keypoints!
  • Dec 2019: v0.3.0 of AlphaPose is released! Smaller model, higher accuracy!
  • Apr 2019: MXNet version of AlphaPose is released! It runs at 23 fps on the COCO validation set.
  • Feb 2019: CrowdPose is integrated into AlphaPose now!
  • Dec 2018: General version of PoseFlow is released! 3X faster, with support for visualizing pose-tracking results!
  • Sep 2018: v0.2.0 of AlphaPose is released! It runs at 20 fps on the COCO validation set (4.6 people per image on average) and achieves 71 mAP!

AlphaPose

AlphaPose is an accurate multi-person pose estimator; it is the first open-source system to achieve 70+ mAP (75 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset. To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the first open-source online pose tracker to achieve both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.

AlphaPose supports both Linux and Windows!


[Demo animations: COCO 17 keypoints; Halpe 26 keypoints + tracking; Halpe 136 keypoints + tracking]

Results

Pose Estimation

Results on COCO test-dev 2015:

Method                  | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AP medium | AP large
OpenPose (CMU-Pose)     | 61.8         | 84.9    | 67.5     | 57.1      | 68.2
Detectron (Mask R-CNN)  | 67.0         | 88.0    | 73.1     | 62.2      | 75.6
AlphaPose               | 73.3         | 89.2    | 79.1     | 69.0      | 78.6

Results on MPII full test set:

Method               | Head | Shoulder | Elbow | Wrist | Hip  | Knee | Ankle | Ave
OpenPose (CMU-Pose)  | 91.2 | 87.6     | 77.7  | 66.8  | 75.4 | 68.9 | 61.7  | 75.6
Newell & Deng        | 92.1 | 89.3     | 78.9  | 69.8  | 76.2 | 71.6 | 64.7  | 77.5
AlphaPose            | 91.3 | 90.5     | 84.0  | 76.4  | 80.3 | 79.9 | 72.4  | 82.1

More results and models are available in docs/MODEL_ZOO.md.

Pose Tracking

Please read trackers/README.md for details.

CrowdPose

Please read docs/CrowdPose.md for details.

Installation

Please check out docs/INSTALL.md
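For orientation, a minimal from-source build looks like the sketch below. This is a sketch only; it assumes a working CUDA-enabled PyTorch environment, and docs/INSTALL.md remains the authoritative guide:

cd AlphaPose_yolovx                     # repository root
python setup.py build develop --user    # compiles the custom C++/CUDA extensions (e.g. NMS, DCN)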

Model Zoo

Please check out docs/MODEL_ZOO.md

Quick Start

  • Inference: Inference demo
    ./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME}  # ${OUTPUT_DIR}, optional
  • Training: Train from scratch
    ./scripts/train.sh ${CONFIG} ${EXP_ID}
  • Validation: Validate your model on MSCOCO val2017
    ./scripts/validate.sh ${CONFIG} ${CHECKPOINT}

Examples:

Demo using the FastPose model:

./scripts/inference.sh configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml pretrained_models/fast_res50_256x192.pth ${VIDEO_NAME}
# or
python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/

Train FastPose on the MSCOCO dataset:

./scripts/train.sh ./configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml exp_fastpose

For more detailed inference options and examples, please refer to GETTING_STARTED.md.

Common issues & FAQ

Check out faq.md for the FAQ. If it cannot solve your problem, or if you find any bugs, don't hesitate to comment on GitHub or make a pull request!

Contributors

AlphaPose is based on RMPE (ICCV'17), authored by Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai and Cewu Lu; Cewu Lu is the corresponding author. Currently, it is maintained by Jiefeng Li*, Hao-Shu Fang*, Yuliang Xiu and Chao Xu.

The main contributors are listed in doc/contributors.md.

TODO

  • Multi-GPU/CPU inference
  • 3D pose
  • Add tracking flag
  • PyTorch C++ version
  • Add MPII and AIC data
  • Dense support
  • Small-box easy filter
  • CrowdPose support
  • Speed up PoseFlow
  • Add stronger/lighter detectors and mobile pose
  • High-level API

We would really appreciate it if you can offer any help and become a contributor to AlphaPose.

Citation

Please cite these papers in your publications if they help your research:

@inproceedings{fang2017rmpe,
  title={{RMPE}: Regional Multi-person Pose Estimation},
  author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle={ICCV},
  year={2017}
}

@article{li2018crowdpose,
  title={CrowdPose: Efficient Crowded Scenes Pose Estimation and A New Benchmark},
  author={Li, Jiefeng and Wang, Can and Zhu, Hao and Mao, Yihuan and Fang, Hao-Shu and Lu, Cewu},
  journal={arXiv preprint arXiv:1812.00324},
  year={2018}
}

@inproceedings{xiu2018poseflow,
  author = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  title = {{Pose Flow}: Efficient Online Pose Tracking},
  booktitle={BMVC},
  year = {2018}
}

License

AlphaPose is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please drop an e-mail to mvig.alphapose[at]gmail[dot]com and cc lucewu[at]sjtu[dot]edu[dot]cn. We will send the detailed agreement to you.

alphapose_yolovx's People

Contributors

gmt710

alphapose_yolovx's Issues

error: ‘AT_CHECK’ was not declared in this scope

Hi, when I run python setup.py build develop --user, I get the following error:

detector/nms/src/nms_cuda.cpp:4:23: error: ‘AT_CHECK’ was not declared in this scope
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")

I don't know what to do. Can you help me?
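A likely cause (an assumption, not verified against this fork): PyTorch 1.5 removed the AT_CHECK macro in favor of TORCH_CHECK, so C++/CUDA extension sources that still use it no longer compile. A common workaround is to rename the macro throughout the extension sources and rebuild:

# hedged fix for PyTorch >= 1.5, where AT_CHECK was replaced by TORCH_CHECK
grep -rl --include='*.cpp' --include='*.cu' --include='*.h' AT_CHECK . | xargs sed -i 's/\bAT_CHECK\b/TORCH_CHECK/g'
python setup.py build develop --user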

error in deformable_im2col: invalid device function; Segmentation fault (core dumped)

Hello, thanks for your nice work!
My command is: python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_2x-dcn.yaml --checkpoint pretrained_models/fast_dcn_res50_256x192.pth --indir test_img --outdir examples/demo/ --showbox --save_img --pose_track --vis_fast --detector yolov5
But it fails with: error in deformable_im2col: invalid device function, followed by Segmentation fault (core dumped).
Could someone do me a favor?
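An invalid device function error from a custom CUDA op usually means the compiled extension targets a different GPU architecture than the one it runs on. A hedged remedy, assuming the extensions were built locally, is to rebuild them for your card's compute capability:

# discard objects compiled for the wrong architecture, then rebuild
rm -rf build
# "8.6" is an example (RTX 30-series); substitute your GPU's compute capability
TORCH_CUDA_ARCH_LIST="8.6" python setup.py build develop --user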

No module named 'detector'

Thanks for your work! When I run sh ./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME}, an error occurs as follows:
Traceback (most recent call last):
  File "scripts/demo_inference.py", line 13, in <module>
    from detector.apis import get_detector
ModuleNotFoundError: No module named 'detector'
Do you know how to solve this?
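This usually means the script was launched from a directory where the top-level detector package is not importable. A minimal sketch of a fix, assuming a standard checkout, is to run from the repository root and put it on PYTHONPATH:

cd /path/to/AlphaPose_yolovx          # placeholder path to the repository root
export PYTHONPATH="$PWD:$PYTHONPATH"  # makes detector/ and alphapose/ importable
sh ./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME}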

Running yolov4 and yolov5 fails

(1) Detection with yolov3 works fine, but detection with yolov4 raises the following error:

(torch) D:\Code\AlphaPose_yolovx-master>python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_2x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/ --detector yolov4  --vis --showbox
Traceback (most recent call last):
  File "scripts/demo_inference.py", line 175, in <module>
    det_loader = DetectionLoader(input_source, get_detector(args), cfg, args, batchSize=args.detbatch, mode=mode, queueSize=args.qsize)
  File "D:\Code\AlphaPose_yolovx-master\detector\apis.py", line 17, in get_detector
    from detector.yolov4_api import YOLOV4Detector as det
  File "D:\Code\AlphaPose_yolovx-master\detector\yolov4_api.py", line 15, in <module>
    from yolo_v4.detect import Detector
  File "D:\Code\AlphaPose_yolovx-master\detector\yolo_v4\detect.py", line 4, in <module>
    from yolov4.tool.class_names import COCO_NAMES
ModuleNotFoundError: No module named 'yolov4'
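Judging from the traceback alone, detector/yolo_v4/detect.py imports a top-level yolov4 package (from yolov4.tool.class_names import COCO_NAMES) that is not on sys.path. Two hedged guesses, untested against this fork: either add the detector directory to PYTHONPATH so the package can resolve, or, if the package directory is actually named yolo_v4, correct the import in detect.py to match. On Windows the first option would look like:

rem hypothetical path; adjust to your checkout
set PYTHONPATH=D:\Code\AlphaPose_yolovx-master\detector;%PYTHONPATH%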

(2) Detection with yolov5 raises the following error:

(torch) D:\Code\AlphaPose_yolovx-master>python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_2x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/ --detector yolov5  --vis --showbox
Loading YOLOV5 model..
Loading pose model from pretrained_models/fast_res50_256x192.pth...
  0%|                                                                                           | 0/16 [00:00<?, ?it/s]Fusing layers...
Exception in thread Thread-2:
Traceback (most recent call last):
  File "D:\learn\Anaconda3\envs\torch\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "D:\learn\Anaconda3\envs\torch\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Code\AlphaPose_yolovx-master\alphapose\utils\detector.py", line 223, in image_detection
    dets = self.detector.images_detection(imgs, im_dim_list)
  File "D:\Code\AlphaPose_yolovx-master\detector\yolov5_api.py", line 102, in images_detection
    dets = self.dynamic_write_results(prediction, self.confidence,
  File "D:\Code\AlphaPose_yolovx-master\detector\yolov5_api.py", line 122, in dynamic_write_results
    dets = self.write_results(prediction.clone(), confidence, num_classes, nms, nms_conf)
  File "D:\Code\AlphaPose_yolovx-master\detector\yolov5_api.py", line 218, in write_results
    ious = bbox_iou(max_detections[-1], image_pred_class[1:], x1y1x2y2=False, CIoU=True)
  File "D:\Code\AlphaPose_yolovx-master\detector\yolov5\utils\general.py", line 210, in bbox_iou
    b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
IndexError: index 2 is out of bounds for dimension 0 with size 1
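A hedged reading of this traceback: write_results (ported from the yolov3-style API) passes max_detections[-1] with shape (1, 7) into yolov5's bbox_iou, which indexes box1 as a flat xywh 4-vector (box1[0], box1[2], ...), hence the out-of-bounds index on a dimension of size 1. A possible workaround, untested and assuming this version of bbox_iou takes box1 as a flat 4-vector and box2 as an n x 4 tensor, is to flatten the box at the call site in detector/yolov5_api.py:

# hypothetical patch around line 218 of detector/yolov5_api.py
ious = bbox_iou(max_detections[-1].view(-1)[:4], image_pred_class[1:, :4], x1y1x2y2=False, CIoU=True)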
