
lighttrack's Introduction

LightTrack: A Generic Framework for Online Top-Down Human Pose Tracking

Update 4/19/2020:

Paper will appear in CVPR 2020 Workshop on Towards Human-Centric Image/Video Synthesis and the 4th Look Into Person (LIP) Challenge.

Update 5/16/2019: Add Camera Demo

[Project Page] [Paper] [Github]

With the provided code, you can easily:

  • Perform online pose tracking on live webcam.
  • Perform online pose tracking on arbitrary videos.
  • Replicate ablation study experiments on PoseTrack'18 Validation Set.
  • Train models on your own.
  • Replace pose estimators or improve data association modules for future research.

Real-life Application Scenarios:


Overview

LightTrack is an effective, light-weight framework for human pose tracking that is truly online and generic for top-down pose tracking. The code for the paper includes the LightTrack framework as well as its replaceable component modules (detector, pose estimator, and matcher), and largely borrows from or adapts Cascaded Pyramid Networks [1], PyTorch-YOLOv3, st-gcn, and OpenSVAI [3].


In contrast to Visual Object Tracking (VOT) methods, in which the visual features are implicitly represented by kernels or CNN feature maps, we track each human pose by recursively updating the bounding box and its corresponding pose in an explicit manner. The bounding box region of a target is inferred from the explicit features, i.e., the human keypoints. Human keypoints can be considered a special kind of visual feature. The advantages of using pose as explicit features include:

  • (1) The explicit features are human-related and interpretable, and have very strong and stable relationship with the bounding box position. Human pose enforces direct constraint on the bounding box region.

  • (2) The task of pose estimation and tracking requires human keypoints to be predicted in the first place. Re-using the predicted keypoints to infer the tracked region is therefore almost free, which makes online tracking possible.

  • (3) It naturally keeps the identity of the candidates, which greatly alleviates the burden of data association in the system. Even when data association is necessary, we can re-use the pose features for skeleton-based pose matching. (Here we adopt Siamese Graph Convolutional Networks (SGCN) for efficient identity association.)

Single Pose Tracking (SPT) and Single Visual Object Tracking (VOT) are thus incorporated into one unified functioning entity, easily implemented by a replaceable single-person human pose estimation module. Below is a simple step-by-step explanation of how the LightTrack framework works.
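
A minimal sketch of the keypoint-to-bounding-box step is given below: it encloses the confident keypoints and enlarges the box by a relative margin. The function name, the 0.05 visibility cutoff, and the 20% margin are illustrative assumptions, not the exact values used in this repository.

import numpy as np

def bbox_from_keypoints(keypoints, margin=0.2, img_w=None, img_h=None):
    # keypoints: array-like of (x, y, score) joints; returns [x1, y1, x2, y2]
    kps = np.asarray(keypoints, dtype=np.float32)
    visible = kps[kps[:, 2] > 0.05] if kps.shape[1] == 3 else kps  # drop low-confidence joints
    if len(visible) == 0:
        visible = kps                                              # fall back to all joints
    x1, y1 = visible[:, 0].min(), visible[:, 1].min()
    x2, y2 = visible[:, 0].max(), visible[:, 1].max()
    dw, dh = margin * (x2 - x1), margin * (y2 - y1)                # enlarge so the whole person fits
    x1, y1, x2, y2 = x1 - dw, y1 - dh, x2 + dw, y2 + dh
    if img_w is not None:                                          # optionally clip to image borders
        x1, x2 = max(0.0, x1), min(img_w - 1.0, x2)
    if img_h is not None:
        y1, y2 = max(0.0, y1), min(img_h - 1.0, y2)
    return [float(x1), float(y1), float(x2), float(y2)]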

Example 1

(1). Detection only at the 1st Frame. Blue bboxes indicate tracklets inferred from keypoints.

Example 0

(2). Detection every 10 frames. The red bbox indicates keyframe detection.

Example 2

(3). Detection every 10 frames for multi-person tracking:

  • At non-keyframes, IDs are naturally kept for each person;
  • At keyframes, IDs are associated via spatial consistency.

For more technical details, please refer to our arXiv paper.
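
The per-frame logic can be summarized by the schematic below, which alternates between keyframe detection with spatial-consistency (IoU) association and non-keyframe tracking that re-uses each target's previous pose to define the next region of interest. Here detect_people, estimate_pose, and iou stand in for the replaceable detector, single-person pose estimator, and a standard IoU helper; the keyframe interval of 10 mirrors the examples above, and this is a sketch of the idea rather than the repository's exact code (it re-uses the bbox_from_keypoints helper sketched earlier).

def track_video(frames, keyframe_interval=10, iou_thresh=0.3):
    tracks = []        # each track: {"id": int, "bbox": [x1, y1, x2, y2], "pose": keypoints}
    next_id = 0
    for t, frame in enumerate(frames):
        if t % keyframe_interval == 0:
            # Keyframe: run the detector and associate detections with existing
            # tracks by spatial consistency (greedy IoU matching).
            detections = detect_people(frame)                   # placeholder detector (e.g. YOLOv3)
            new_tracks = []
            for det in detections:
                best = max(tracks, key=lambda tr: iou(tr["bbox"], det), default=None)
                if best is not None and iou(best["bbox"], det) > iou_thresh:
                    tid = best["id"]                            # keep the matched identity
                    tracks.remove(best)
                else:
                    tid = next_id                               # unmatched detection -> new identity
                    next_id += 1
                pose = estimate_pose(frame, det)                # single-person pose estimator
                new_tracks.append({"id": tid, "bbox": bbox_from_keypoints(pose), "pose": pose})
            tracks = new_tracks
        else:
            # Non-keyframe: no detector is run. Identities are kept by construction;
            # the previous pose defines the region for the next pose estimate.
            for tr in tracks:
                tr["pose"] = estimate_pose(frame, tr["bbox"])
                tr["bbox"] = bbox_from_keypoints(tr["pose"])
        yield t, tracks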

Prerequisites

  • Set up a Python3 environment with the provided anaconda environment file.
    # This anaconda environment should contain everything needed, including tensorflow, pytorch, etc.
    conda env create -f environment.yml

(Optional: set up the environment on your own)

  • Install PyTorch 1.0.0 (or higher) and torchvision (required by the Siamese Graph Convolutional Network).
  • Install TensorFlow 1.12 (required by the human pose estimator); TensorFlow 2.0 has not been tested yet.
  • Install some other packages:
    pip install cython opencv-python pillow matplotlib

Getting Started

  • Clone this repository and enter the lighttrack folder:
    git clone https://github.com/Guanghan/lighttrack.git;
    
    # build some necessities
    cd lighttrack/lib;
    make;
    
    cd ../graph/torchlight;
    python setup.py install
    
    # enter lighttrack
    cd ../../
    
  • If you'd like to train LightTrack, download the COCO and PoseTrack datasets first. Note that the COCO script will take a while and dump 21 GB of files into ./data/coco. With the PoseTrack dataset, you can replicate our ablation experiment results on the validation set. You will need to register at the official website and create entries in order to submit your test results to the evaluation server.
    sh data/download_coco.sh
    sh data/download_posetrack17.sh
    sh data/download_posetrack18.sh

Demo on Live Camera

PoseTracking Framework | Keyframe Detector | Keyframe ReID Module | Pose Estimator     | FPS
LightTrack             | YOLOv3            | Siamese GCN          | MobileNetv1-Deconv | 220* / 15
  • Download weights.

    cd weights;
    bash ./download_weights.sh  # download weights for backbones (only for training), detectors, pose estimators, pose matcher, etc.
    cd -;
  • Perform pose tracking demo on your Webcam.

    # access virtual environment
    source activate py36;
    
    # Perform LightTrack demo (on camera) with light-weight detector and pose estimator
    python demo_camera_mobile.py

Demo on Arbitrary Videos

PoseTracking Framework | Keyframe Detector | Keyframe ReID Module | Pose Estimator     | FPS
LightTrack             | YOLOv3            | Siamese GCN          | MobileNetv1-Deconv | 220* / 15
  • Download demo video.

    cd data/demo;
    bash ./download_demo_video.sh  # download the video for demo; you could later replace it with your own video for fun
    cd -;
  • Perform online tracking demo.

    # access virtual environment
    source activate py36;
    
    # Perform LightTrack demo (on arbitrary video) with light-weight detector and pose estimator
    python demo_video_mobile.py
  • After processing, pose tracking results are stored in standardized OpenSVAI format JSON files, located at [data/demo/jsons/].

  • Visualized images and videos are output to [data/demo/visualize/] and [data/demo/videos/]. Note that the video is written with the measured average framerate by default; you can hardcode a faster or slower framerate for different purposes (see the re-encoding sketch at the end of this section).

  • Some statistics are also reported, including FPS, the number of persons encountered, etc. Below are the statistics for the provided video, using YOLOv3 as the detector and MobileNetv1-Deconv as the pose estimator.

total_time_ALL: 19.99s
total_time_DET: 1.32s
total_time_POSE: 18.63s
total_time_LIGHTTRACK: 0.04s
total_num_FRAMES: 300
total_num_PERSONS: 600

Average FPS: 15.01fps
Average FPS excluding Pose Estimation: 220.08fps
Average FPS excluding Detection: 16.07fps
Average FPS for framework only: 7261.90fps
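
The derived framerates above follow directly from the timing totals: each "excluding" figure appears to divide the frame count by the total time minus that stage's time. The snippet below reproduces the numbers; small differences come from the rounded component times in the printout.

total_time_all   = 19.99   # seconds
total_time_det   = 1.32
total_time_pose  = 18.63
total_time_track = 0.04
num_frames       = 300

print(num_frames / total_time_all)                      # ~15.0   overall FPS
print(num_frames / (total_time_all - total_time_pose))  # ~220    FPS excluding pose estimation
print(num_frames / (total_time_all - total_time_det))   # ~16.1   FPS excluding detection
print(num_frames / total_time_track)                    # ~7500   framework-only FPS (report uses the unrounded time)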

You can replace the demo video with your own for fun. You can also try different detectors or pose estimators.
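If you want the output video to play back at a different speed, one simple option, sketched below with OpenCV, is to re-encode the already visualized frames at the framerate you want. The ".jpg" extension, the output filename, and the 30 fps value are assumptions for illustration; the folders are the demo defaults mentioned above.

import glob
import cv2

def reencode_visualized_frames(img_dir="data/demo/visualize",
                               out_path="data/demo/videos/demo_refps.mp4",
                               fps=30):
    # Collect the visualized frames in order and write them at the chosen framerate.
    paths = sorted(glob.glob(img_dir + "/*.jpg"))
    height, width = cv2.imread(paths[0]).shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for p in paths:
        writer.write(cv2.imread(p))
    writer.release()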

Validate on PoseTrack 2018

Pose estimation models are provided; they should have been downloaded to the ./weights folder by the ./download_weights.sh script. We provide two alternatives, CPN101 and MSRA152, built on ResNet-101 and ResNet-152 backbones, respectively.

Image Size | Pose Estimator | Weights
384x288    | CPN101 [1]     | CPN_snapshot_293.ckpt
384x288    | MSRA152 [2]    | MSRA_snapshot_285.ckpt

Detections for the PoseTrack'18 validation set have been pre-computed. We use the same detections as [3] in our experiments. Two options are available, deformable versions of FPN and RFCN, as illustrated in the paper. Here we provide the detections from FPN, which gives higher performance.

Detector                          | Jsons
ResNet101_Deformable_FPN_RCNN [6] | DeformConv_FPN_RCNN_detect.zip
ResNet101_Deformable_RFCN [6]     | DeformConv_RFCN_detect.zip
Ground Truth Locations            | GT_detect.zip
  • Download pre-computed detections and unzip them in ./data directory.

    cd data;
    bash ./download_dets.sh   
    cd -;
  • Perform LightTrack on PoseTrack 2018 validation with our detection results using deformable FPN.

    python process_posetrack18_with_lighttrack_MSRA152.py
    # or
    python process_posetrack18_with_lighttrack_CPN101.py
  • Or perform LightTrack on PoseTrack 2018 validation with ground truth locations.

    python process_posetrack18_with_lighttrack_MSRA152_gt.py
    # or
    python process_posetrack18_with_lighttrack_CPN101_gt.py
  • After processing, pose tracking results are stored in standardized OpenSVAI format JSON files, located at [data/Data_2018/posetrack_results/lighttrack/results_openSVAI/].

  • Visualized images and videos have been output at [data/Data_2018/videos/].

Evaluation on PoseTrack 2018

  • If you'd like to evaluate LightTrack predictions generated with detection results:
    # Convert tracking results into PoseTrack format before evaluation
    source activate py36;
    python jsonformat_std_to_posetrack18.py -e 0.4 -d lighttrack -m track -f 17 -r 0.80;  # validation set. For DET locations
    
    # Evaluate Task 1/2 + 3: using official poseval tool
    source deactivate;
    cd data/Data_2018/poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH;
    python evaluate.py \
     --groundTruth=/export/LightTrack/data/Data_2018/posetrack_data/annotations/val/ \
     --predictions=/export/LightTrack/data/Data_2018/predictions_lighttrack/ \
     --evalPoseTracking \
     --evalPoseEstimation;

For mAP, two values are given: the mean average precision before and after keypoint dropping. For FPS, * means excluding pose inference time. Our LightTrack in true online mode runs at an average of 0.8 fps on PoseTrack'18 validation set.
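
Keypoint dropping here means discarding low-confidence joints before evaluation, which trades recall for precision and explains the two mAP values. The snippet below is a minimal illustration of that filtering step; interpreting the -e 0.4 argument above as this confidence cutoff is an assumption, not a documented fact.

def drop_low_confidence_keypoints(keypoints, thresh=0.4):
    # keypoints: list of (x, y, score); dropped joints are zeroed out instead of removed
    filtered = []
    for x, y, score in keypoints:
        if score >= thresh:
            filtered.append((x, y, score))
        else:
            filtered.append((0.0, 0.0, 0.0))  # treated as not predicted during evaluation
    return filtered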

[LightTrack_CPN101] and [LightTrack_MSRA152] are both trained with [COCO + PoseTrack'17] dataset; [LightTrack_MSRA152 + auxiliary] is trained with [COCO + PoseTrack'18 + ChallengerAI] dataset.

Methods                        | Det Mode      | FPS       | mAP         | MOTA | MOTP
LightTrack_CPN101              | online-DET-2F | 47* / 0.8 | 76.0 / 70.3 | 61.3 | 85.2
LightTrack_MSRA152             | online-DET-2F | 48* / 0.7 | 77.2 / 72.4 | 64.6 | 85.3
LightTrack_MSRA152 + auxiliary | online-DET-2F | 48* / 0.7 | 77.7 / 72.7 | 65.4 | 85.1
  • If you'd like to evaluate LightTrack predictions generated with ground truth locations. Note that for ground truth locations, not every frame is annotated; if a keyframe is not annotated, the estimation is missing. In order to evaluate the performance correctly, we generate ground-truth JSONs (gt_locations) specifically for predictions that use ground truth locations.
    # Convert tracking results into PoseTrack format before evaluation
    source activate py36;
    python jsonformat_std_to_posetrack18.py -e 0.4 -d lighttrack -m track -f 17 -r 0.70;  # validation set. For GT locations
    
    # Evaluate Task 1/2 + 3: using official poseval tool
    source deactivate;
    cd data/Data_2018/poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH;
    python evaluate.py \
     --groundTruth=/export/LightTrack/data/Data_2018/gt_lighttrack/ \
     --predictions=/export/LightTrack/data/Data_2018/predictions_lighttrack/ \
     --evalPoseTracking \
     --evalPoseEstimation;
Methods            | Det Mode     | FPS       | mAP      | MOTA | MOTP
LightTrack_CPN101  | online-GT-2F | 47* / 0.8 | - / 70.1 | 73.5 | 94.7
LightTrack_MSRA152 | online-GT-2F | 48* / 0.7 | - / 73.1 | 78.0 | 94.8

Qualitative Results

Some gifs exhibiting qualitative results:

  • (1) PoseTrack test sequence
PoseTracking Framework | Keyframe Detector      | Keyframe ReID Module | Pose Estimator
LightTrack             | Deformable FPN (heavy) | Siamese GCN          | MSRA152 (heavy)

Demo 1

  • (2) Potential Applications (Surveillance, Sport Analytics, etc.)
PoseTracking Framework | Keyframe Detector | Keyframe ReID Module | Pose Estimator
LightTrack             | YOLOv3 (light)    | Siamese GCN          | MobileNetv1-Deconv (light)

Demo 2 Demo 3

Quantitative Results on PoseTrack

Challenge 3: Multi-Person Pose Tracking

Methods                       | Mode   | FPS       | mAP   | MOTA
LightTrack (offline-ensemble) | batch  | -         | 66.65 | 58.01
HRNet [4], CVPR'19            | batch  | -         | 74.95 | 57.93
FlowTrack [2], ECCV'18        | batch  | -         | 74.57 | 57.81
LightTrack (online-3F)        | online | 47* / 0.8 | 66.55 | 55.15
PoseFlow [5], BMVC'18         | online | 10* / -   | 62.95 | 50.98

For FPS, * means excluding pose inference time and - means not applicable. Our LightTrack in true online mode runs at an average of 0.8 fps on PoseTrack'18 validation set. (In total, 57,928 persons are encountered. An average of 6.54 people are tracked for each frame.)

Models are trained with [COCO + PoseTrack'17] dataset.

Training

1) Pose Estimation Module

  • To train, grab an ImageNet-pretrained model and put it in ./weights.
    • For Resnet101, download resnet101.ckpt from here.
    • For Resnet152, download resnet152.ckpt from here.
  • Run the training commands below.
# Train with COCO+PoseTrack'17
python train_PoseTrack_COCO_17_CPN101.py -d 0-3 -c   # Train CPN-101
# or
python train_PoseTrack_COCO_17_MSRA152.py -d 0-3 -c  # Train MSRA-152
# or
python train_PoseTrack_COCO_17_mobile_deconv.py -d 0-3 -c  # Train MobileNetv1-Deconv

2) Pose Matching Module

  • Run the training commands below.
# Download training and validation data
cd graph/unit_test;
bash download_data.sh;
cd -;

# Train the siamese graph convolutional network
cd graph;
python main.py processor_siamese_gcn -c config/train.yaml

In order to perform ablation studies on the pose matching module, the simplest way without modifying existing code is to set the pose matching threshold to a value smaller than zero, which will nullify the pose matching module. The performance on PoseTrack'18 validation will then deteriorate.

Methods            | Det Mode   | Pose Match (thresh) | mAP         | MOTA | MOTP
LightTrack_MSRA152 | online DET | No (0)              | 77.2 / 72.4 | 63.3 | 85.3
LightTrack_MSRA152 | online DET | Yes (1.0)           | 77.2 / 72.4 | 64.6 | 85.3
LightTrack_CPN101  | online DET | No (0)              | 76.0 / 70.3 | 60.0 | 85.2
LightTrack_CPN101  | online DET | Yes (1.0)           | 76.0 / 70.3 | 61.3 | 85.2
  • Since the siamese graph convolution module impacts the identity association process alone, only the MOTA score is influenced.

  • Specifically, SGCN helps reduce identity mismatches and identity losses in the face of swift camera zooming or sudden camera shifts, where people appear to drift and spatial consistency is no longer reliable.

  • Without SGCN, when an identity is lost, a new id is assigned, which causes an identity mismatch with the ground truth. A sketch of how the matching threshold gates this decision is given below.
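
The ablation boils down to how the skeleton-matching score is thresholded when spatial consistency fails. The following is a minimal sketch of that gating logic; pose_matching_distance and new_identity are placeholders for the SGCN-based matcher and the id counter, and the distance-below-threshold convention is inferred from the description above (a threshold below zero can never be satisfied, which disables the module).

def associate_identity(det_pose, lost_tracks, pose_match_thresh=1.0):
    # Assign an identity to a detection that spatial consistency could not match.
    best_track, best_dist = None, float("inf")
    for track in lost_tracks:
        dist = pose_matching_distance(det_pose, track["pose"])  # SGCN-based score, placeholder
        if dist < best_dist:
            best_track, best_dist = track, dist
    if best_track is not None and best_dist <= pose_match_thresh:
        return best_track["id"]       # re-identify via skeleton-based pose matching
    return new_identity()             # otherwise a brand-new id is assigned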

Limitations

Currently, the LightTrack framework does not handle identity switches and losses well in occlusion scenarios, for several reasons: (1) only one frame of history is considered during data association; (2) only skeleton-based features are used. However, these problems are not inherent drawbacks of the LightTrack framework. In future research, spatiotemporal pose matching can be further explored to mitigate the occlusion problem. A longer history of poses might improve performance; a combination of visual features and skeleton-based features may further contribute to the robustness of the data association module.

Citation

If you find LightTrack helpful or use this framework in your work, please consider citing:

@article{ning2019lighttrack,
  author    = {Ning, Guanghan and Huang, Heng},
  title     = {LightTrack: A Generic Framework for Online Top-Down Human Pose Tracking},
  journal   = {Proceedings of CVPRW 2020 on Towards Human-Centric Image/Video Synthesis and the 4th Look Into Person (LIP) Challenge},
  year      = {2020},
}

Also consider citing the following works if you use CPN101/MSRA152 models:

@inproceedings{xiao2018simple,
    author={Xiao, Bin and Wu, Haiping and Wei, Yichen},
    title={Simple Baselines for Human Pose Estimation and Tracking},
    booktitle = {ECCV},
    year = {2018}
}
@article{Chen2018CPN,
    Author = {Chen, Yilun and Wang, Zhicheng and Peng, Yuxiang and Zhang, Zhiqiang and Yu, Gang and Sun, Jian},
    Title = {{Cascaded Pyramid Network for Multi-Person Pose Estimation}},
    Conference = {CVPR},
    Year = {2018}
}

Reference

[1] Chen, Yilun, et al. "Cascaded pyramid network for multi-person pose estimation." CVPR (2018).

[2] Xiao, Bin, Haiping Wu, and Yichen Wei. "Simple baselines for human pose estimation and tracking." ECCV (2018).

[3] Ning, Guanghan, et al. "A top-down approach to articulated human pose estimation and tracking." ECCVW (2018).

[4] Sun, Ke, et al. "Deep High-Resolution Representation Learning for Human Pose Estimation." CVPR (2019).

[5] Xiu, Yuliang, et al. "Pose flow: efficient online pose tracking." BMVC (2018).

[6] Dai, Jifeng, et al. "Deformable convolutional networks." ICCV (2017).

Contact

For questions about our paper or code, please contact Guanghan Ning.

Credits

LOGO by: Hogen

lighttrack's People

Contributors

guanghan, tobias-fischer


lighttrack's Issues

While running demo_video_mobile.py, I came across this problem.

Detector YOLOv3 options: Namespace(batch_size=1, checkpoint_model=None, conf_thres=0.8, config_path='detector/config/yolov3.cfg', img_size=416, n_cpu=8, nms_thres=0.4, weights_path='weights/YOLOv3/yolov3.weights')
PoseTrack order.
/home/vdai/peng/code/lighttrack-master/graph/gcn_utils/io.py:43: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
default_arg = yaml.load(f)
[08.14.19|11:47:23] Load weights from ./weights/GCN/epoch210_model.pt.
[08.14.19|11:47:23] Load weights [A].
[08.14.19|11:47:23] Load weights [data_bn.weight].
[08.14.19|11:47:23] Load weights [data_bn.bias].
[08.14.19|11:47:23] Load weights [data_bn.running_mean].
[08.14.19|11:47:23] Load weights [data_bn.running_var].
[08.14.19|11:47:23] Load weights [data_bn.num_batches_tracked].
[08.14.19|11:47:23] Load weights [st_gcn_networks.0.gcn.conv.weight].
[08.14.19|11:47:23] Load weights [st_gcn_networks.0.gcn.conv.bias].
[08.14.19|11:47:23] Load weights [st_gcn_networks.1.gcn.conv.weight].
[08.14.19|11:47:23] Load weights [st_gcn_networks.1.gcn.conv.bias].
[08.14.19|11:47:23] Load weights [edge_importance.0].
[08.14.19|11:47:23] Load weights [edge_importance.1].
[08.14.19|11:47:23] Load weights [fcn.weight].
[08.14.19|11:47:23] Load weights [fcn.bias].
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/torchlight-1.0-py3.5.egg/torchlight/io.py", line 82, in load_weights
doc = _io._TextIOBase.doc
File "/home/vdai/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 721, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Model:
Unexpected key(s) in state_dict: "data_bn.num_batches_tracked".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/vdai/peng/code/lighttrack-master/demo_video_mobile.py", line 32, in
from graph import visualize_pose_matching
File "/home/vdai/peng/code/lighttrack-master/graph/visualize_pose_matching.py", line 297, in
pose_matcher = Pose_Matcher()
File "/home/vdai/peng/code/lighttrack-master/graph/visualize_pose_matching.py", line 162, in init
self.load_weights()
File "/home/vdai/peng/code/lighttrack-master/graph/gcn_utils/io.py", line 79, in load_weights
self.arg.ignore_weights)
File "/usr/local/lib/python3.5/dist-packages/torchlight-1.0-py3.5.egg/torchlight/io.py", line 89, in load_weights

File "/home/vdai/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 721, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Model:
Unexpected key(s) in state_dict: "data_bn.num_batches_tracked".

Evaluation on PoseTrack 2018?

Thanks for your great work.
I tried to evaluate on PoseTrack 2018 with jsonformat_std_to_posetrack18.py -e 0.4 -d lighttrack -m track -f 17 -r 0.80, but why is the option "-f" set to 17?

Replace pose estimator with another CPU-based only

Hi,

I would like to replace the pose estimator with a lightweight implementation of OpenPose, based only on CPU inference since my computer does not support CUDA.

However, when I try to compile the 'lib' folder with 'make', I get an error due to the missing CUDA support. Is it possible to do the replacement explained above, or is CUDA a prerequisite for other parts of LightTrack too?

Thank you,
Cataldo

Validation set size inconsistent with PoseTrack2018

Hi,

Thank you for releasing this quality work!

I noticed that in your paper you mention the validation set of PoseTrack18 has 74 sequences, while the official PoseTrack dataset has 170 validation sequences. I assume all the results regarding the validation set here and in the paper correspond to 74 sequences, not the full 170? Can you provide a way to evaluate your models (detection, pose estimation, matching, etc.) on the full validation set?

thanks!

Best performance config?

How should I configure the "MSRA" pose estimator and the "FPN-101" detector in "offline" mode to achieve the best performance?

Error when running demo_video_mobile.py

(py36) liuzhuxian@liuzhuxian-Inspiron-5559:~/lighttrack$ python demo_video_mobile.py
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/liuzhuxian/anaconda3/envs/py36/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From /home/liuzhuxian/lighttrack/HPE/../lib/nets/mobilenet_v1.py:440: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

OMP: Info #212: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #210: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0-3
OMP: Info #156: KMP_AFFINITY: 4 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #179: KMP_AFFINITY: 1 packages x 2 cores/pkg x 2 threads/core (2 total cores)
OMP: Info #214: KMP_AFFINITY: OS proc to physical thread map:
OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0 core 0 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 2 maps to package 0 core 0 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 1 maps to package 0 core 1 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 3 maps to package 0 core 1 thread 1
OMP: Info #250: KMP_AFFINITY: pid 2591 tid 2591 thread 0 bound to OS proc set 0
Detector YOLOv3 options: Namespace(batch_size=1, checkpoint_model=None, conf_thres=0.8, config_path='detector/config/yolov3.cfg', img_size=416, n_cpu=8, nms_thres=0.4, weights_path='weights/YOLOv3/yolov3.weights')
OMP: Info #250: KMP_AFFINITY: pid 2591 tid 2604 thread 1 bound to OS proc set 1
Traceback (most recent call last):
File "demo_video_mobile.py", line 28, in
from nms.gpu_nms import gpu_nms
ModuleNotFoundError: No module named 'nms.gpu_nms'

ImportError: gpu_nms.cpython-36m-x86_64-linux-gnu.so: undefined symbol: __cudaPopCallConfiguration

Hi,
I am facing the following error when I run demo_video_mobile.py .

Traceback (most recent call last): File "demo_video_mobile.py", line 28, in <module> from nms.gpu_nms import gpu_nms ImportError: /home/peeterson/Desktop/lighttrack/HPE/../lib/nms/gpu_nms.cpython-36m-x86_64-linux-gnu.so: undefined symbol: __cudaPopCallConfiguration

module versions inside py36 environment:

_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 1_llvm conda-forge
_tflow_180_select 1.0 gpu anaconda
absl-py 0.2.2 py36_0 anaconda
astor 0.6.2 py36_0 anaconda
blas 1.0 mkl conda-forge
bleach 1.5.0 py36_0 conda-forge
bzip2 1.0.6 1 conda-forge
ca-certificates 2020.12.5 ha878542_0 conda-forge
cairo 1.14.12 h77bcde2_0 anaconda
certifi 2020.12.5 py36h5fab9bb_1 conda-forge
cloudpickle 0.5.3 pypi_0 pypi
cmake 3.12.0 h307fef2_0 anaconda
cudatoolkit 9.0 h13b8566_0 anaconda
cudnn 7.1.2 cuda9.0_0 anaconda
cupti 9.0.176 0 anaconda
cycler 0.10.0 py36_0 conda-forge
cython 0.28.3 pypi_0 pypi
dask 0.17.5 pypi_0 pypi
dataclasses 0.8 pypi_0 pypi
datetime 4.2 pypi_0 pypi
dbus 1.13.0 h3a4f0e9_0 conda-forge
decorator 4.3.0 pypi_0 pypi
easydict 1.7 pypi_0 pypi
expat 2.2.5 0 conda-forge
ffmpeg 3.2.4 3 conda-forge
fontconfig 2.13.0 0 conda-forge
freetype 2.8.1 0 conda-forge
gast 0.2.0 py36_0 anaconda
giflib 5.1.4 0 conda-forge
glib 2.53.6 h5d9569c_2 anaconda
graphite2 1.3.11 0 conda-forge
grpcio 1.12.0 py36hdbcaa40_0 anaconda
gst-plugins-base 1.12.4 h33fb286_0 anaconda
gstreamer 1.12.4 hb53b477_0 anaconda
h5py 2.9.0 pypi_0 pypi
harfbuzz 1.7.6 hc5b324e_0 anaconda
hdf5 1.10.2 0 conda-forge
html5lib 0.9999999 py36_0 conda-forge
icu 58.2 0 conda-forge
imageio 2.3.0 py36_0 conda-forge
jasper 1.900.1 4 conda-forge
jpeg 9b h024ee3a_2
kiwisolver 1.0.1 py36_1 conda-forge
libcurl 7.61.0 h1ad7b7a_0 anaconda
libedit 3.1.20170329 h6b74fdf_2 anaconda
libffi 3.2.1 hd88cf55_4 anaconda
libgcc 7.2.0 h69d50b8_2 conda-forge
libgcc-ng 7.2.0 hdf63c60_3 anaconda
libgfortran 3.0.0 1 conda-forge
libgfortran-ng 7.2.0 hdf63c60_3 anaconda
libiconv 1.15 h63c8f33_5 anaconda
libopenblas 0.2.20 h9ac9557_7 anaconda
libpng 1.6.34 0 conda-forge
libprotobuf 3.5.2 h6f1eeef_0 anaconda
libssh2 1.8.0 h9cfc8f7_4 anaconda
libstdcxx-ng 7.2.0 hdf63c60_3 anaconda
libtiff 4.0.9 he6b73bb_1 conda-forge
libuuid 1.0.3 h1bed415_2 anaconda
libwebp 0.5.2 7 conda-forge
libxcb 1.13 0 conda-forge
libxml2 2.9.8 h26e45fe_1 anaconda
llvm-openmp 11.0.1 h4bd325d_0 conda-forge
markdown 2.6.11 py36_0 anaconda
matplotlib 2.2.2 py36h0e671d2_0 anaconda
mkl 2020.4 h726a3e6_304 conda-forge
msgpack 0.5.6 pypi_0 pypi
msgpack-numpy 0.4.3 pypi_0 pypi
munkres 1.0.12 pypi_0 pypi
ncurses 6.1 hf484d3e_0 anaconda
networkx 2.1 pypi_0 pypi
numpy 1.14.3 py36h28100ab_1 anaconda
numpy-base 1.14.3 py36h0ea5e3f_1 anaconda
olefile 0.45.1 py36_0 conda-forge
openblas 0.2.20 8 conda-forge
opencv 3.4.1 py36h6fd60c2_1 anaconda
opencv-python 3.4.1.15 pypi_0 pypi
openssl 1.0.2o 0 conda-forge
pcre 8.41 1 conda-forge
pillow 5.1.0 py36_0 conda-forge
pip 21.0.1 pypi_0 pypi
pixman 0.34.0 2 conda-forge
protobuf 3.5.2 py36hf484d3e_0 anaconda
pyarrow 0.9.0 pypi_0 pypi
pycocotools 2.0.0 pypi_0 pypi
pyparsing 2.2.0 py36_0 conda-forge
pyqt 5.6.0 py36_5 conda-forge
pyqt5 5.12.1 pypi_0 pypi
pyqt5-sip 4.19.15 pypi_0 pypi
python 3.6.5 hc3d631a_2 anaconda
python-dateutil 2.7.3 py_0 conda-forge
python_abi 3.6 1_cp36m conda-forge
pytz 2018.4 py_0 conda-forge
pywavelets 0.5.2 pypi_0 pypi
pyyaml 3.12 pypi_0 pypi
pyzmq 17.0.0 pypi_0 pypi
qt 5.6.2 hd25b39d_14 anaconda
readline 7.0 ha6073c6_4 anaconda
rhash 1.3.6 hb7f436b_0 anaconda
scikit-image 0.14.0 pypi_0 pypi
scipy 1.1.0 pypi_0 pypi
setproctitle 1.1.10 pypi_0 pypi
setuptools 39.2.0 py36_0 conda-forge
shapely 1.6.4.post1 pypi_0 pypi
sip 4.18 py36_1 conda-forge
six 1.11.0 py36h372c433_1 anaconda
sqlite 3.23.1 he433501_0 anaconda
tabulate 0.8.2 pypi_0 pypi
tensorboard 1.8.0 py36hf484d3e_0 anaconda
tensorflow 1.8.0 hb11d968_0 anaconda
tensorflow-base 1.8.0 py36hc1a7637_0 anaconda
tensorflow-gpu 1.8.0 h7b35bdc_0 anaconda
termcolor 1.1.0 py36_1 conda-forge
tk 8.6.7 hc745277_3 anaconda
toolz 0.9.0 pypi_0 pypi
torch 1.0.1 pypi_0 pypi
torchvision 0.2.2 pypi_0 pypi
tornado 5.0.2 py36_0 conda-forge
tqdm 4.19.9 pypi_0 pypi
typing-extensions 3.7.4.3 pypi_0 pypi
webencodings 0.5.1 py_1 conda-forge
werkzeug 0.14.1 py36_0 anaconda
wheel 0.31.1 py36_0 conda-forge
x264 20131218 0 conda-forge
xorg-libxau 1.0.8 3 conda-forge
xorg-libxdmcp 1.1.2 3 conda-forge
xz 5.2.4 h14c3975_4 anaconda
zlib 1.2.11 ha838bed_2 anaconda
zope-interface 4.5.0 pypi_0 pypi

I have installed the same cuda and cudnn versions above in my system.

I have exported the path as well.
export PATH=/usr/local/cuda-9.0/bin export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:/usr/local/cuda-9.0/extras/CUPTI/lib64:/lib/nccl/cuda-9

I also tried to use an earlier version of gcc as per this:
[https://stackoverflow.com/questions/6622454/cuda-incompatible-with-my-gcc-version]

I still get the same error. could someone suggest how to fix this?

thanks.

How can I get the key points in the inference phase?

Thanks for your great work! In my task I need to extract the key points from the video, so which files should I read? Or does this program offer any utils for extracting key points? Thank you very much.

Help: pip install torchlight fails

Running pip install torchlight==1.0 gives:
Could not find a version that satisfies the requirement torchlight==1.0 (from versions: 0.0.1)
No matching distribution found for torchlight==1.0
Running python demo_video_mobile.py gives:
cannot import name 'str2bool'

FileNotFoundError: [Errno 2] No such file or directory: './graph/GCN/epoch210_model.pt'

Hello Guanghan,

First of all, thanks for this amazing repo. I followed the instructions to run the demo and run into this error:
FileNotFoundError: [Errno 2] No such file or directory: './graph/GCN/epoch210_model.pt'
The solution is as simple as changing './graph/GCN/epoch210_model.pt' to './weights/GCN/epoch210_model.pt' in lighttrack/graph/config/inference.yaml.

Hope it helps. Cheers,
Dunai

how to prepare pickle data for train Pose Matching Module

To train the Pose Matching Module, we need two pickle files:

 data_path: ./unit_test/posetrack_train_data.pickle
 data_neg_path: ./unit_test/posetrack_train_data_negative.pickle

I think test_keypoints_to_graph.py can create these pickle files, but it needs to read JSON files from 'data/Data_2018/posetrack_data/gcn_openSVAI/train/*.json'. My question is: where do these JSON files come from? Am I right? Looking forward to your reply.

How to run the camera inference in online mode?

Hi Guanghan,

Thanks for your work. Currently, demo_video_mobile.py reads a video and processes it in online mode (frame by frame).

Do you have a script that runs the model with a camera? In that case you get one frame at a time without knowing the total number of frames.

Thank you.

dockerfile

Thanks for this project.

Can you provide a Dockerfile to easily test lighttrack?

undefined symbol: __cudaRegisterFatBinaryEnd

Hi @Guanghan

I got an error when running python demo_video_mobile.py.

Traceback (most recent call last):
File "demo_video_mobile.py", line 28, in
from nms.gpu_nms import gpu_nms
ImportError: /home/lighttrack/HPE/../lib/nms/gpu_nms.cpython-36m-x86_64-linux-gnu.so: undefined symbol: __cudaRegisterFatBinaryEnd

how can this issue be solved?

Ran out of input

I met the problem below when training the pose matcher. It seems that there is something wrong with the file "posetrack_train_data_negative.pickle".

 In [3]: with open("posetrack_train_data_negative.pickle", "rb") as f: 
   ...:     a = pickle.load(f) 
   ...:      
   ...:                                                                           
---------------------------------------------------------------------------
EOFError                                  Traceback (most recent call last)
<ipython-input-3-853959a245df> in <module>
      1 with open("posetrack_train_data_negative.pickle", "rb") as f:
----> 2     a = pickle.load(f)
      3 
      4 

EOFError: Ran out of input

Merged dataset training

The PoseTrack dataset has 15 keypoints while COCO has 17; how do you merge the datasets and train jointly?

ModuleNotFoundError: No module named 'tfflat'

I have the following problem: "ModuleNotFoundError: No module named 'tfflat'". I searched a lot and couldn't find a solution.

(torchg) D:\desktop\myFramework\lighttrack>python demo_video_mobile.py
Traceback (most recent call last):
File "demo_video_mobile.py", line 17, in
from network_mobile_deconv import Network
File "D:\desktop\myFramework\lighttrack\network_mobile_deconv.py", line 14, in
from HPE.config import cfg
File "D:\desktop\myFramework\lighttrack\HPE\config.py", line 109, in
from tfflat.utils import add_pypath, make_link, make_dir
ModuleNotFoundError: No module named 'tfflat'

Download link expired

The two links have expired; would you please update them?

Training

  1. Pose Estimation Module
    To train, grab an imagenet-pretrained model and put it in ./weights.
    For Resnet101, download resnet101.ckpt from here.
    For Resnet152, download resnet152.ckpt from here.

ImportError: /lib/nms/gpu_nms.so: undefined symbol: _Py_ZeroStruct

Error when running the demo_video_mobile.py

Detector YOLOv3 options: Namespace(batch_size=1, checkpoint_model=None, conf_thres=0.8, config_path='detector/config/yolov3.cfg', img_size=416, n_cpu=8, nms_thres=0.4, weights_path='weights/YOLOv3/yolov3.weights')
Traceback (most recent call last):
  File "demo_video_mobile.py", line 28, in <module>
    from nms.gpu_nms import gpu_nms
ImportError: /home/icarus/oliver/lighttrack/HPE/../lib/nms/gpu_nms.so: undefined symbol: _Py_ZeroStruct

I did run the make in lib, the make output is:

#python3.5 setup.py build_ext --inplace
python setup.py build_ext --inplace
running build_ext
skipping 'utils/bbox.c' Cython extension (up-to-date)
building 'utils.cython_bbox' extension
creating build
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/utils
{'gcc': ['-Wno-cpp', '-Wno-unused-function']}
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/icarus/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c utils/bbox.c -o build/temp.linux-x86_64-2.7/utils/bbox.o -Wno-cpp -Wno-unused-function
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/utils/bbox.o -o /home/icarus/oliver/lighttrack/lib/utils/cython_bbox.so
skipping 'utils/nms.c' Cython extension (up-to-date)
building 'utils.cython_nms' extension
{'gcc': ['-Wno-cpp', '-Wno-unused-function']}
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/icarus/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c utils/nms.c -o build/temp.linux-x86_64-2.7/utils/nms.o -Wno-cpp -Wno-unused-function
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/utils/nms.o -o /home/icarus/oliver/lighttrack/lib/utils/cython_nms.so
skipping 'nms/cpu_nms.c' Cython extension (up-to-date)
building 'nms.cpu_nms' extension
creating build/temp.linux-x86_64-2.7/nms
{'gcc': ['-Wno-cpp', '-Wno-unused-function']}
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/icarus/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c nms/cpu_nms.c -o build/temp.linux-x86_64-2.7/nms/cpu_nms.o -Wno-cpp -Wno-unused-function
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/nms/cpu_nms.o -o /home/icarus/oliver/lighttrack/lib/nms/cpu_nms.so
skipping 'nms/gpu_nms.cpp' Cython extension (up-to-date)
building 'nms.gpu_nms' extension
{'gcc': ['-Wno-unused-function'], 'nvcc': ['-arch=sm_52', '--ptxas-options=-v', '-c', '--compiler-options', "'-fPIC'"]}
/usr/bin/nvcc -I/home/icarus/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/include -I/usr/include/python2.7 -c nms/nms_kernel.cu -o build/temp.linux-x86_64-2.7/nms/nms_kernel.o -arch=sm_52 --ptxas-options=-v -c --compiler-options '-fPIC'
ptxas info    : 0 bytes gmem
ptxas info    : Compiling entry function '_Z10nms_kernelifPKfPy' for 'sm_52'
ptxas info    : Function properties for _Z10nms_kernelifPKfPy
    0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info    : Used 23 registers, 1280 bytes smem, 344 bytes cmem[0], 4 bytes cmem[2]
{'gcc': ['-Wno-unused-function'], 'nvcc': ['-arch=sm_52', '--ptxas-options=-v', '-c', '--compiler-options', "'-fPIC'"]}
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/icarus/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/include -I/usr/include/python2.7 -c nms/gpu_nms.cpp -o build/temp.linux-x86_64-2.7/nms/gpu_nms.o -Wno-unused-function
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/icarus/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1821:0,
                 from /home/icarus/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
                 from /home/icarus/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                 from nms/gpu_nms.cpp:346:
/home/icarus/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
 #warning "Using deprecated NumPy API, disable it by " \
  ^
c++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/nms/nms_kernel.o build/temp.linux-x86_64-2.7/nms/gpu_nms.o -lcudart -o /home/icarus/oliver/lighttrack/lib/nms/gpu_nms.so
rm -rf build

PoseTrack download

Hi

I can't download the PoseTrack dataset, it seems that the PoseTrack official website is currently inaccessible.
Do you have any other accessible PoseTrack data download links to share?

Looking forward to your reply.
Thank you very much.

Error in running with flag_nms= True

When I run demo_video_mobile.py after setting flag_nms = True, I get the following error:
TypeError: list indices must be integers or slices, not tuple

For the following code inside the function apply_nms:
keep = np.where((cls_dets[:, 4] >= min_scores) &
((cls_dets[:, 3] - cls_dets[:, 1]) * (cls_dets[:, 2] - cls_dets[:, 0]) >= min_box_size))[0]

When I first convert cls_dets into a numpy array with cls_dets = np.array(cls_dets) and then execute, the following error is shown:
IndexError: too many indices for array

This is because the shape of cls_dets is (4,) while the code is written for a 2-dimensional array.

Please suggest a solution so that I can apply non-max suppression with this code.

About demo_video_mobile.py running

Hello, I am interested in your work, so I want to run demo_video_mobile.py. Following your steps, I downloaded all the files, but I hit a problem when running demo_video_mobile.py: "No such file or directory: './graph/GCN/epoch210_model.pt'". I don't know how to deal with it; please help me, I'll be very grateful. Also, I am running on Windows. Thanks!

How to merge annotation json file?

Hi,@Guanghan thanks for your great work!
I have a problem: the downloaded annotation files are separate JSON files. How do you merge them into one file? Could you share the code to merge these files?
Thanks very much!

AttributeError: 'MSVCCompiler' object has no attribute 'compiler_so'

Hello,
I am trying to execute the 'make' command inside the lib folder and I get the following error:
Traceback (most recent call last):
File "setup.py", line 151, in
cmdclass={'build_ext': custom_build_ext},

File "C:\Users\Chris\anaconda3\envs\studienarbeit_env\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Users\Chris\anaconda3\envs\studienarbeit_env\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "C:\Users\Chris\anaconda3\envs\studienarbeit_env\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Users\Chris\anaconda3\envs\studienarbeit_env\lib\site-packages\Cython\Distutils\old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "C:\Users\Chris\anaconda3\envs\studienarbeit_env\lib\distutils\command\build_ext.py", line 339, in run
self.build_extensions()
File "setup.py", line 106, in build_extensions
customize_compiler_for_nvcc(self.compiler)
File "setup.py", line 78, in customize_compiler_for_nvcc
default_compiler_so = self.compiler_so
AttributeError: 'MSVCCompiler' object has no attribute 'compiler_so'
make: *** [all] Fehler 1

Can anybody help me with this?

Missing imports: logger, pprint, etc.

flake8 testing of https://github.com/Guanghan/lighttrack on Python 3.7.1

$ flake8 . --count --select=E9,F63,F72,F82 --show-source --statistics

./lib/tfflat/data_provider.py:359:21: F821 undefined name 'logger'
                    logger.exception("Cannot batch data. Perhaps they are of inconsistent shape?")
                    ^
./lib/tfflat/data_provider.py:361:29: F821 undefined name 'pprint'
                        s = pprint.pformat([x[k].shape for x in data_holder])
                            ^
./lib/tfflat/data_provider.py:362:25: F821 undefined name 'logger'
                        logger.error("Shape of all arrays to be batched: " + s)
                        ^
./HPE/dataset.py:17:19: F821 undefined name 'mask'
            rle = mask.frPyObjects([seg_ann], label.shape[0], label.shape[1])
                  ^
./HPE/dataset.py:21:13: F821 undefined name 'mask'
        m = mask.decode(rle) * 1
            ^
./HPE/dataset.py:231:23: F821 undefined name 'ori_img'
        seg = get_seg(ori_img.shape[0], ori_img.shape[1], d['segmentation'])
                      ^
./HPE/dataset.py:231:41: F821 undefined name 'ori_img'
        seg = get_seg(ori_img.shape[0], ori_img.shape[1], d['segmentation'])
                                        ^
./utils/standard_classes.py:23:20: F821 undefined name 'python_to_json'
        json_str = python_to_json(self.python_data)
                   ^
./utils/standard_classes.py:27:9: F821 undefined name 'write_json_to_file'
        write_json_to_file(self.python_data, output_json_path)
        ^
./utils/standard_classes.py:65:40: F821 undefined name 'pose_order'
        self.candidate["pose_order"] = pose_order
                                       ^
./utils/utils_convert_heatmap.py:138:44: E999 SyntaxError: invalid syntax
        print "[congrid] dimensions error. " \
                                           ^
./utils/utils_pose.py:197:44: E999 TabError: inconsistent use of tabs and spaces in indentation
	draw_heatmap(heatmap, joint_names[ith_map])
                                           ^
./utils/utils_io_file.py:95:16: F821 undefined name 'as_str'
        return as_str.index("1"), 63 - as_str.rindex("1")
               ^
./utils/utils_io_file.py:95:40: F821 undefined name 'as_str'
        return as_str.index("1"), 63 - as_str.rindex("1")
                                       ^
2     E999 SyntaxError: invalid syntax
12    F821 undefined name 'mask'
14

E901,E999,F821,F822,F823 are the "showstopper" flake8 issues that can halt the runtime with a SyntaxError, NameError, etc. These 5 are different from most other flake8 issues, which are merely "style violations" -- useful for readability but they do not affect runtime safety.

  • F821: undefined name name
  • F822: undefined name name in __all__
  • F823: local variable name referenced before assignment
  • E901: SyntaxError or IndentationError
  • E999: SyntaxError -- failed to compile a file into an Abstract Syntax Tree

EnvironmentError: The CUDA lib64 path could not be located in /usr/lib64

Error occured during make.

#python3.5 setup.py build_ext --inplace
python setup.py build_ext --inplace
Traceback (most recent call last):
  File "setup.py", line 55, in <module>
    CUDA = locate_cuda()
  File "setup.py", line 52, in locate_cuda
    raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v))
EnvironmentError: The CUDA lib64 path could not be located in /usr/lib64
Makefile:2: recipe for target 'all' failed
make: *** [all] Error 1

download error 403

I get a 403 error when trying to download the pretrained model from your website. Is the link still valid?
