
3d-vehicle-tracking's Introduction

Joint Monocular 3D Vehicle Detection and Tracking


We present a novel framework that jointly detects and tracks 3D vehicle bounding boxes. Our approach leverages 3D pose estimation to learn 2D patch association over time and uses temporal information from tracking to obtain stable 3D estimation.

Joint Monocular 3D Vehicle Detection and Tracking
Hou-Ning Hu, Qi-Zhi Cai, Dequan Wang, Ji Lin, Min Sun, Philipp Krähenbühl, Trevor Darrell, Fisher Yu.
In ICCV, 2019.

Paper Website

Prerequisites

! NOTE: this repo targets PyTorch 1.0+ compatibility; the generated results may differ slightly from the original.
  • Linux (tested on Ubuntu 16.04.4 LTS)
  • Python 3.6.9
    • 3.6.4 tested
    • 3.6.9 tested
  • PyTorch 1.3.1
    • 1.0.0 (with CUDA 9.0, torchvision 0.2.1)
    • 1.1.0 (with CUDA 9.0, torchvision 0.3.0)
    • 1.3.1 (with CUDA 10.1, torchvision 0.4.2)
  • nvcc 10.1
    • 9.0.176, 10.1 compiling and execution tested
    • 9.2.88 execution only
  • gcc 5.4.0
  • Pyenv or Anaconda

Python dependencies are listed in 3d-tracking/requirements.txt.
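
If you prefer to install the dependencies by hand rather than through the init scripts below, a minimal sketch (assuming pip inside your pyenv or Anaconda environment) is:

pip install -r 3d-tracking/requirements.txt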

Quick Start

In this section, you will train a model from scratch, test our pretrained models, and reproduce our evaluation results. For more detailed instructions, please refer to DOCUMENTATION.md.

Installation

  • Clone this repo:
git clone -b pytorch1.0 --single-branch https://github.com/ucbdrive/3d-vehicle-tracking.git
cd 3d-vehicle-tracking/
  • Install PyTorch 1.0.0+ and torchvision from http://pytorch.org, along with the other dependencies. You can create a virtual environment as follows:
# Add path to bashrc 
echo -e '\nexport PYENV_ROOT="$HOME/.pyenv"\nexport PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n  eval "$(pyenv init -)"\nfi' >> ~/.bashrc

# Install pyenv
curl -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/pyenv-installer | bash

# Restart a new terminal if "exec $SHELL" doesn't work
exec $SHELL

# Install and activate Python in pyenv
pyenv install 3.6.9
pyenv local 3.6.9
  • Install requirements, create folders, and compile binaries for detection:
cd 3d-tracking
bash scripts/init.sh
cd ..

cd faster-rcnn.pytorch
bash init.sh

NOTE: For faster-rcnn.pytorch compiling problems [1], please compile the COCO API and replace the bundled pycocotools.
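
One possible way to do this is sketched below; the clone location and the destination path inside faster-rcnn.pytorch/lib are assumptions and may need to be adapted to your checkout.

# Sketch: build the official COCO API and replace the bundled pycocotools
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
make
cp -r pycocotools /path/to/3d-vehicle-tracking/faster-rcnn.pytorch/lib/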

NOTE: For object-ap-eval compiling problems: it only supports Python 3.6+ and needs numpy, skimage, numba, and fire. If you have Anaconda, just install cudatoolkit through conda. Otherwise, please refer to this page to set up llvm and cuda for numba.
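
For example, a minimal sketch of installing those dependencies (skimage is published on PyPI as scikit-image; the cudatoolkit step assumes an Anaconda environment):

pip install numpy scikit-image numba fire
conda install cudatoolkit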

Data Preparation

For a quick start, we suggest using the GTA val set as a starting point. You can get all the needed data via the following script.

# We recommend using the GTA `val` set (via the `mini` flag) to get familiar with the data pipeline first, then using the `all` flag to obtain all the data
python loader/download.py mini

More details can be found in 3d-tracking.
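
Once the mini pipeline works end to end, the full dataset can be fetched with the same script, as noted in the comment above:

python loader/download.py all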

Execution

To run the whole pipeline (2D proposals, 3D estimation, and tracking):

# Generate predicted bounding boxes for object proposals
cd faster-rcnn.pytorch/

# Step 00 (Optional) - Training on GTA dataset
./run_train.sh

# Step 01 - Generate bounding boxes
./run_test.sh
# With the object proposal bounding boxes and 3D centers from the faster-rcnn.pytorch step
cd 3d-tracking/

# Step 00 - Data Preprocessing
# Collect features into json files (check variables in the code)
python loader/gen_pred.py gta val

# Step 01 - 3D Estimation
# Run the single-task scripts mentioned below to train by yourself,
# or alternatively use multiple GPUs and processes to run through all 100 sequences
python run_estimation.py gta val --session 616 --epoch 030

# Step 02 - 3D Tracking and Evaluation
# 3D estimation helps the tracking part. For tracking evaluation,
# use multiple GPUs and processes to run through all 100 sequences
python run_tracking.py gta val --session 616 --epoch 030

# Step 03 - 3D AP Evaluation
# Convert tracking output to evaluation format
python tools/convert_estimation_bdd.py gta val --session 616 --epoch 030
python tools/convert_tracking_bdd.py gta val --session 616 --epoch 030

# Evaluation of 3D Estimation
python tools/eval_dep_bdd.py gta val --session 616 --epoch 030

# 3D helps Tracking part
python tools/eval_mot_bdd.py --gt_path output/616_030_gta_val_set --pd_path output/616_030_gta_val_set/kf3doccdeep_age20_aff0.1_hit0_100m_803

# Tracking helps 3D part
cd tools/object-ap-eval/
python test_det_ap.py gta val --session 616 --epoch 030

Note: If you hit a ModuleNotFoundError: No module named 'utils' error, please prepend PYTHONPATH=. to python {script} {arguments}.
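
For example, to run the 3D estimation evaluation above with the path fix applied:

PYTHONPATH=. python tools/eval_dep_bdd.py gta val --session 616 --epoch 030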

Citation

If you find our code/models useful in your research, please cite our paper:

@inproceedings{Hu3DT19,
  author = {Hu, Hou-Ning and Cai, Qi-Zhi and Wang, Dequan and Lin, Ji and Sun, Min and Krähenbühl, Philipp and Darrell, Trevor and Yu, Fisher},
  title = {Joint Monocular 3D Vehicle Detection and Tracking},
  booktitle = {ICCV},
  year = {2019}
}

License

This work is licensed under BSD 3-Clause License. See LICENSE for details. Third-party datasets and tools are subject to their respective licenses.

Acknowledgements

We thank faster-rcnn.pytorch for the detection codebase, pymot for their MOT evaluation tool, and kitti-object-eval-python for the 3D AP calculation tool.


3d-vehicle-tracking's Issues

Where to unzip the dataset

I used the following command to download the dataset.

python loader/download.py mini

That downloads the following zip files:

[Screenshot: the downloaded zip files]

When I run the command

./faster-rcnn.pytorch/run_train.sh

and get the following error:

Loaded dataset `gta_det_train` for training
Set proposal method: gt
Preparing training data...
done
before filtering, there are 0 images...
after filtering, there are 0 images...
0 roidb entries
Loading pretrained weights from data/pretrained_model/resnet101_caffe.pth
Traceback (most recent call last):
  File "trainval_net.py", line 445, in <module>
    epoch, step))
NameError: name 'step' is not defined
root : 201_14_18895 Finish!

This also happened when I tried to run gen_dataset.py:

PYTHONPATH=. python loader/gen_dataset.py gta val

which gives the error:

AssertionError: Not label files found in /workspace/3d-vehicle-tracking/3d-tracking/data/gta5_tracking/val/label

I suppose this happened because the dataset has not been unzipped, but I don't know where to do this.
I unzipped the files gta_3d_tracking_val_image_0001.zip and gta_3d_tracking_val_label_0001.zip into the directories 3d-tracking/data/gta5_tracking/val/{image,label}, but the problem continues.

Thanks.

Tracker model training issue

Hi, thanks for your great work, but how can I train the tracker models, such as LSTM and LSTMKF? In the code, I only found the 803_kitti_300_linear.pth checkpoint for LSTM, but no 723_linear.pth for LSTMKF. So I wonder how I can train them myself.

Besides, many tracking methods are mentioned below, but it seems only the LSTM method is available, and only for evaluation.

Many thanks for your help.

Error in Step 01 - Generate bounding boxes ./run_test.sh

Traceback (most recent call last):
File "test_net.py", line 30, in <module>
from model.utils.net_utils import save_net, load_net, vis_detections
File "/content/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/utils/net_utils.py", line 7, in <module>
from model.roi_crop.functions.roi_crop import RoICropFunction
File "/content/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/functions/roi_crop.py", line 3, in <module>
from .._ext import roi_crop
File "/content/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/_ext/roi_crop/__init__.py", line 3, in <module>
from ._roi_crop import lib as _lib, ffi as _ffi
ImportError: /content/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/_ext/roi_crop/_roi_crop.so: undefined symbol: __cudaPopCallConfiguration

root : 200_14_18895 Finish!
Copy output pkl file to dataset folder...
cp: cannot stat 'vis/faster_rcnn_200_14_18895/detections_val.pkl': No such file or directory
Done!!

video output result

Hi, I followed all of these steps, such as

python loader/download.py mini
./run_test.sh
python loader/gen_pred.py gta val

and got a new .pkl file and a lot of printed output.

How can I get a video result that looks like your YouTube video?

checkpoint issue

Hi, thanks for your great work, but I ran into a problem when using your provided checkpoint 616_gta_checkpoint_030.pth.tar. The detailed error is below; thanks for your help.

RuntimeError: Error(s) in loading state_dict for DataParallel:
Unexpected key(s) in state_dict: "module.base.base.base_layer.1.num_batches_tracked", "module.base.base.level0.1.num_batches_tracked", "module.base.base.level1.1.num_batches_tracked", "module.base.base.level2.tree1.bn1.num_batches_tracked", "module.base.base.level2.tree1.bn2.num_batches_tracked", "module.base.base.level2.tree2.bn1.num_batches_tracked", "module.base.base.level2.tree2.bn2.num_batches_tracked", "module.base.base.level2.root.bn.num_batches_tracked", "module.base.base.level2.project.1.num_batches_tracked", "module.base.base.level3.tree1.tree1.bn1.num_batches_tracked", "module.base.base.level3.tree1.tree1.bn2.num_batches_tracked", "module.base.base.level3.tree1.tree2.bn1.num_batches_tracked", "module.base.base.level3.tree1.tree2.bn2.num_batches_tracked", "module.base.base.level3.tree1.root.bn.num_batches_tracked", "module.base.base.level3.tree1.project.1.num_batches_tracked", "module.base.base.level3.tree2.tree1.bn1.num_batches_tracked", "module.base.base.level3.tree2.tree1.bn2.num_batches_tracked", "module.base.base.level3.tree2.tree2.bn1.num_batches_tracked", "module.base.base.level3.tree2.tree2.bn2.num_batches_tracked", "module.base.base.level3.tree2.root.bn.num_batches_tracked", "module.base.base.level3.project.1.num_batches_tracked", "module.base.base.level4.tree1.tree1.bn1.num_batches_tracked", "module.base.base.level4.tree1.tree1.bn2.num_batches_tracked", "module.base.base.level4.tree1.tree2.bn1.num_batches_tracked", "module.base.base.level4.tree1.tree2.bn2.num_batches_tracked", "module.base.base.level4.tree1.root.bn.num_batches_tracked", "module.base.base.level4.tree1.project.1.num_batches_tracked", "module.base.base.level4.tree2.tree1.bn1.num_batches_tracked", "module.base.base.level4.tree2.tree1.bn2.num_batches_tracked", "module.base.base.level4.tree2.tree2.bn1.num_batches_tracked", "module.base.base.level4.tree2.tree2.bn2.num_batches_tracked", "module.base.base.level4.tree2.root.bn.num_batches_tracked", "module.base.base.level4.project.1.num_batches_tracked", "module.base.base.level5.tree1.bn1.num_batches_tracked", "module.base.base.level5.tree1.bn2.num_batches_tracked", "module.base.base.level5.tree2.bn1.num_batches_tracked", "module.base.base.level5.tree2.bn2.num_batches_tracked", "module.base.base.level5.root.bn.num_batches_tracked", "module.base.base.level5.project.1.num_batches_tracked", "module.base.dla_up.ida_0.proj_1.1.num_batches_tracked", "module.base.dla_up.ida_0.node_1.1.num_batches_tracked", "module.base.dla_up.ida_1.proj_1.1.num_batches_tracked", "module.base.dla_up.ida_1.proj_2.1.num_batches_tracked", "module.base.dla_up.ida_1.node_1.1.num_batches_tracked", "module.base.dla_up.ida_1.node_2.1.num_batches_tracked", "module.dim.1.num_batches_tracked", "module.dim.4.num_batches_tracked", "module.dim.7.num_batches_tracked", "module.rot.1.num_batches_tracked", "module.rot.4.num_batches_tracked", "module.rot.7.num_batches_tracked", "module.dep.1.num_batches_tracked", "module.dep.4.num_batches_tracked", "module.dep.7.num_batches_tracked".

GTA 3d data download

Hi! I am unable to download the GTA 3D images. The server is taking too long to respond. Is there any other link to download them?

Hi, now I am facing a problem when I run ./run_test.sh

File "test_net.py", line 304, in <module>
RCNN_loss_center, rois_label = fasterRCNN(im_data, im_info, gt_boxes,num_boxes, fixed_center)
File "/home/ld/.conda/envs/dave/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/ld/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/faster_rcnn/faster_rcnn.py", line 46, in forward
base_feat = self.RCNN_base(im_data)
File "/home/ld/.conda/envs/dave/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/ld/.conda/envs/dave/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/ld/.conda/envs/dave/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/ld/.conda/envs/dave/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/ld/.conda/envs/dave/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/ld/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/faster_rcnn/resnet.py", line 86, in forward
out = self.conv1(x)
File "/home/ld/.conda/envs/dave/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/ld/.conda/envs/dave/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA error: out of memory

Run KITTI test error: FileNotFound

Hi,

Thanks for sharing your impressive work.

I'm trying to reproduce your results on the KITTI test dataset.
But when I run :
PYTHONPATH=. python loader/gen_dataset.py kitti test --kitti_task track

I got this error:
FileNotFoundError: [Errno 2] No such file or directory: '/media/huxi/DATA/inf_master/Semester-4/lecture/absolute/code/JM3DDT/3d-vehicle-tracking/3d-tracking/data/kitti_tracking/testing/label_02/0000/000000.json'
But as far as I know, KITTI doesn't provide labels for the test set; label_02 usually means labels for the training data.

I also checked the detection file (kitti_test_trk_detections_RRC.pkl); it seems to be a result from faster-rcnn, so how can we convert it into the JSON format?

Thank you!

Custom detector

How can I use results from some other detector in KITTI format?

Dataset Details

Hi,
Can you please specify what ground truth data is available in the dataset you've collected? I'm looking for a synthetic or graphically rendered dataset with ground truth depth, camera pose, and intrinsics. Are these available in your dataset?

Can't get GTA val set

When I'm at the Data Preparation step, after I run the following command:
python loader/download.py mini
I get the following result:

Connecting to dl.yf.io (dl.yf.io)|128.32.162.150|:80... --2022-10-27 17:46:01--  (try: 3)  http://dl.yf.io/bdd-data/3d-vehicle-tracking/label/gta_3d_tracking_val_label_0001.zip
Connecting to dl.yf.io (dl.yf.io)|128.32.162.150|:80... failed: Unknown error.
Retrying.

failed: Unknown error.
Retrying.

failed: Unknown error.
Retrying.

Is there something wrong with my Internet or VPN? How can I get this GTA dataset?

cudaCheckError() failed : no kernel image is available for execution on the device

This is the configuration of my system:
python3.5.2
cuda 9.0
pytorch 0.4.1
Tesla V100

Following the README.md, I finished everything up to # Step 01 - 3D Estimation, i.e. running the following command:

python run_estimation.py gta val --session 616 --epoch 030

It reported the following error message:

cudaCheckError() failed : no kernel image is available for execution on the device

I learned that it was possibly caused by the configuration of the makefile, since the same error happened when I ran the test case in the faster-rcnn.pytorch folder, and I solved it there by adding -gencode arch=compute_70,code=sm_70 to lib/make.sh.

However, in 3d-tracking, I did the same thing but the problem was not solved.
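
For reference, the change I applied was roughly the following sketch; the exact CUDA_ARCH flags already present in lib/make.sh may differ, and the corresponding compile script in 3d-tracking may use a different variable name.

# Append the Volta (V100) architecture to the existing arch list in lib/make.sh
CUDA_ARCH="$CUDA_ARCH -gencode arch=compute_70,code=sm_70"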

How shall I deal with this case?

Thanks!

The full printed message for this error is as follows:

0 2 python mono_3d_estimation.py gta test --data_split val --resume ./checkpoint/616_gta_checkpoint_030.pth.tar --json_path ./data/gta5_tracking/val/label/rec_10090911_clouds_21h53m_x-968y-1487tox2523y214_bdd.json --track_name 616_030_gta_val_set/616_030_rec_10090911_clouds_21h53m_x-968y-1487tox2523y214_bdd_roipool_output.pkl --session 616 -j 4 -b 1 --n_box_limit 300 --start_epoch 030
0.4.1
mono_3d_estimation.py gta test --data_split val --resume ./checkpoint/616_gta_checkpoint_030.pth.tar --json_path ./data/gta5_tracking/val/label/rec_10090911_clouds_21h53m_x-968y-1487tox2523y214_bdd.json --track_name 616_030_gta_val_set/616_030_rec_10090911_clouds_21h53m_x-968y-1487tox2523y214_bdd_roipool_output.pkl --session 616 -j 4 -b 1 --n_box_limit 300 --start_epoch 030
Using RoIAlign
=> Loading checkpoint './checkpoint/616_gta_checkpoint_030.pth.tar'
=> Successfully loaded checkpoint './checkpoint/616_gta_checkpoint_030.pth.tar' (epoch 30)
Reading ./data/gta5_tracking/val/label/rec_10090911_clouds_21h53m_x-968y-1487tox2523y214_bdd.json ...
Load single json file
Sequences [463] with total 463 frames
Input images are not normalized
Number of image to test: 463
cudaCheckError() failed : no kernel image is available for execution on the device

The output of run_test.sh

Hello, I have followed the Quick Start up to Execution - # Step 01 - Generate bounding boxes.
I skipped the # Step 00 (Optional) - Training on GTA dataset part, then ran ./run_test.sh. It gave me output like:

Called with args:
Namespace(anno='val', cfg_file='cfgs/res101.yml', checkepoch=14, checkpoint=18895, checksession=200, class_agnostic=False, cuda=True, dataset='gta_det', imdb_name='gta_det_train', imdbval_name='gta_det_val', large_scale=False, load_dir='models', mGPUs=False, net='res101', parallel_type=0, set_cfgs=['ANCHOR_SCALES', '[2, 4, 8, 16]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '100'], vis=True)
Loaded dataset `gta_det_val` for training
Set proposal method: gt
Preparing training data...
done
0 roidb entries
load checkpoint models/res101/gta_det/faster_rcnn_200_14_18895.pth
load model successfully!
Evaluating detections
test time: 0.0008s

my_pc : 200_14_18895 Finish!
Copy output pkl file to dataset folder...
Done!!

I have no idea how to show the detection output. I think the problem might be that there is no input; could someone help me?

If this problem is fixed, what's the next step I should take to see the output?
Thanks!

3d tracking benchmarking

I am able to achieve the published results for 2D tracking. But when I try to use the 3D coordinates for 3D benchmarking with 3D IoU in the camera frame, I get too many false positives and MOTA becomes negative. Can you point out any issue with using the 3D tracks?

I used the 3D benchmark provided by https://github.com/xinshuoweng/AB3DMOT.

numba.errors.TypingError: Failed at nopython (nopython frontend)

Hi, I was running this step for testing and ran into a problem. I don't know how to solve it; can anyone help? Thanks!
python run_tracking.py gta val --session 616 --epoch 030

The bug:
0 --gpu 0 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/none_age20_aff0.1_hit0_100m_803 --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 0
SKIP running. Generated file output/616_030_gta_val_set/none_age20_aff0.1_hit0_100m_803 Found
0 --gpu 1 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/kf2d_age20_aff0.1_hit0_100m_803 --kf2d --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 1
SKIP running. Generated file output/616_030_gta_val_set/kf2d_age20_aff0.1_hit0_100m_803 Found
0 --gpu 2 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/kf2ddeep_age20_aff0.1_hit0_100m_803 --kf2d --deep --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 2
SKIP running. Generated file output/616_030_gta_val_set/kf2ddeep_age20_aff0.1_hit0_100m_803 Found
1 --gpu 0 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/kf3d_age20_aff0.1_hit0_100m_803 --kf3d --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 0
1 --gpu 1 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/kf3ddeep_age20_aff0.1_hit0_100m_803 --kf3d --deep --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 1
Load lists of pkl files
  • Number of sequence: 1
    => Building gt & hypo...
1 --gpu 2 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/kf3docc_age20_aff0.1_hit0_100m_803 --kf3d --occ --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 2
Load lists of pkl files
  • Number of sequence: 1
    => Building gt & hypo...
Traceback (most recent call last):
File "mono_3d_tracking.py", line 268, in <module>
main()
File "mono_3d_tracking.py", line 257, in main
tracker.run_app()
File "mono_3d_tracking.py", line 127, in run_app
disable=not self.args.verbose))
File "/home/hy/.local/lib/python3.6/site-packages/joblib/parallel.py", line 917, in __call__
if self.dispatch_one_batch(iterator):
File "/home/hy/.local/lib/python3.6/site-packages/joblib/parallel.py", line 759, in dispatch_one_batch
self._dispatch(tasks)
File "/home/hy/.local/lib/python3.6/site-packages/joblib/parallel.py", line 716, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "/home/hy/.local/lib/python3.6/site-packages/joblib/_parallel_backends.py", line 182, in apply_async
result = ImmediateResult(func)
File "/home/hy/.local/lib/python3.6/site-packages/joblib/_parallel_backends.py", line 549, in __init__
self.results = batch()
File "/home/hy/.local/lib/python3.6/site-packages/joblib/parallel.py", line 225, in __call__
for func, args, kwargs in self.items]
File "/home/hy/.local/lib/python3.6/site-packages/joblib/parallel.py", line 225, in <listcomp>
for func, args, kwargs in self.items]
File "mono_3d_tracking.py", line 198, in run_parallel
frames_anno, frames_hypo = self.run_seq(mot_tracker, seq)
File "mono_3d_tracking.py", line 219, in run_seq
trackers = mot_tracker.update(data)
File "/media/hy/DATA3/.dave/3d-vehicle-tracking/3d-tracking/model/tracker_3d.py", line 148, in update
self.cam_pose)
File "/media/hy/DATA3/.dave/3d-vehicle-tracking/3d-tracking/utils/tracking_utils.py", line 349, in construct2dlayout
projpoints = draw_box(cam_calib, pose, points3d, cam_near_clip)
File "/media/hy/DATA3/.dave/3d-vehicle-tracking/3d-tracking/utils/tracking_utils.py", line 475, in draw_box
center_pt, cam_dir, p1, p2)
File "/home/hy/.local/lib/python3.6/site-packages/numba/dispatcher.py", line 349, in _compile_for_args
error_rewrite(e, 'typing')
File "/home/hy/.local/lib/python3.6/site-packages/numba/dispatcher.py", line 316, in error_rewrite
reraise(type(e), e, None)
File "/home/hy/.local/lib/python3.6/site-packages/numba/six.py", line 658, in reraise
raise value.with_traceback(tb)
numba.errors.TypingError: Failed at nopython (nopython frontend)
Internal error at <numba.typeinfer.ArgConstraint object at 0x7f973a4d9710>:
--%<----------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/hy/.local/lib/python3.6/site-packages/numba/errors.py", line 577, in new_error_context
yield
File "/home/hy/.local/lib/python3.6/site-packages/numba/typeinfer.py", line 199, in __call__
assert ty.is_precise()
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/hy/.local/lib/python3.6/site-packages/numba/typeinfer.py", line 142, in propagate
constraint(typeinfer)
File "/home/hy/.local/lib/python3.6/site-packages/numba/typeinfer.py", line 200, in __call__
typeinfer.add_type(self.dst, ty, loc=self.loc)
File "/home/hy/anaconda3/envs/3d-vehicle-tracking/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/home/hy/.local/lib/python3.6/site-packages/numba/errors.py", line 585, in new_error_context
six.reraise(type(newerr), newerr, tb)
File "/home/hy/.local/lib/python3.6/site-packages/numba/six.py", line 659, in reraise
raise value
numba.errors.InternalError:
[1] During: typing of argument at /media/hy/DATA3/.dave/3d-vehicle-tracking/3d-tracking/utils/tracking_utils.py (951)
--%<----------------------------------------------------------------------------

File "utils/tracking_utils.py", line 951:
def get_intersect_point(center_pt, cam_dir, vertex1, vertex2):

# get the intersection point of two 3D points and a plane
c1 = center_pt[0]
^

This error may have been caused by the following argument(s):

  • argument 2: Unsupported array dtype: object
  • argument 3: Unsupported array dtype: object

This is not usually a problem with Numba itself but instead often caused by
the use of unsupported features or an issue in resolving types.

To see Python/NumPy features supported by the latest release of Numba visit:
http://numba.pydata.org/numba-doc/dev/reference/pysupported.html
and
http://numba.pydata.org/numba-doc/dev/reference/numpysupported.html

For more information about typing errors and how to debug them visit:
http://numba.pydata.org/numba-doc/latest/user/troubleshoot.html#my-code-doesn-t-compile

If you think your code should work with Numba, please report the error message
and traceback, along with a minimal reproducer at:
https://github.com/numba/numba/issues/new

run_tracking.py then launches the remaining tracker variants:

2 --gpu 0 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/kf3doccdeep_age20_aff0.1_hit0_100m_803 --kf3d --occ --deep --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 0
2 --gpu 1 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/lstmoccdeep_age20_aff0.1_hit0_100m_803 --lstm --occ --deep --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 1
2 --gpu 2 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/lstmdeep_age20_aff0.1_hit0_100m_803 --lstm --deep --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 2
3 --gpu 0 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/lstm_age20_aff0.1_hit0_100m_803 --lstm --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 0
3 --gpu 1 python mono_3d_tracking.py gta --path output/616_030_gta_val_set/ --out_path output/616_030_gta_val_set/lstmocc_age20_aff0.1_hit0_100m_803 --lstm --occ --max_age 20 --affinity_thres 0.1 --min_hits 0 --max_depth 100 -j 1 --gpu 1

Each of these runs prints "Load lists of pkl files / Number of sequence: 1 / => Building gt & hypo..." and then fails with the exact same numba.errors.TypingError traceback as above (unsupported array dtype: object for arguments 2 and 3 of get_intersect_point in utils/tracking_utils.py, line 951).

# get the intersection point of two 3D points and a plane
c1 = center_pt[0]
^

This error may have been caused by the following argument(s):

  • argument 2: Unsupported array dtype: object
  • argument 3: Unsupported array dtype: object

This is not usually a problem with Numba itself but instead often caused by
the use of unsupported features or an issue in resolving types.

To see Python/NumPy features supported by the latest release of Numba visit:
http://numba.pydata.org/numba-doc/dev/reference/pysupported.html
and
http://numba.pydata.org/numba-doc/dev/reference/numpysupported.html

For more information about typing errors and how to debug them visit:
http://numba.pydata.org/numba-doc/latest/user/troubleshoot.html#my-code-doesn-t-compile

If you think your code should work with Numba, please report the error message
and traceback, along with a minimal reproducer at:
https://github.com/numba/numba/issues/new
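
Editor's note: a minimal sketch of one possible workaround, assuming the object-dtype arrays come from box corners assembled out of mixed Python scalars; only the function names from the traceback are real, the as_f64 helper and the commented call site are hypothetical.

import numpy as np

def as_f64(x):
    # Force a contiguous float64 array so numba's nopython frontend never
    # sees an object-dtype argument for vertex1/vertex2.
    return np.ascontiguousarray(np.asarray(x, dtype=np.float64))

# Hypothetical call site mirroring draw_box in utils/tracking_utils.py:
# intersect = get_intersect_point(as_f64(center_pt), as_f64(cam_dir),
#                                 as_f64(p1), as_f64(p2))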

Files not found

Hi:
When I run:

cd faster-rcnn.pytorch
bash init.sh

it echoes:

+ echo 'Please move faster_rcnn_200_14_18895.pth to models/res101/gta_det/'
Please move faster_rcnn_200_14_18895.pth to models/res101/gta_det/
+ echo 'Please move faster_rcnn_300_100_175.pth to models/res101/kitti/'
Please move faster_rcnn_300_100_175.pth to models/res101/kitti/

but I don't have these files. Where can I find them?

Thank you!

Hello, import error: undefined symbol: PyCObject_Type

(base) lzw@resplendent-star:/resplendent_code/3d-vehicle-tracking-master/faster-rcnn.pytorch$ ./run_train.sh
Traceback (most recent call last):
File "trainval_net.py", line 24, in
from roi_data_layer.roidb import combined_roidb
File "/home/lzw/resplendent_code/3d-vehicle-tracking-master/faster-rcnn.pytorch/lib/roi_data_layer/roidb.py", line 9, in
from datasets.factory import get_imdb
File "/home/lzw/resplendent_code/3d-vehicle-tracking-master/faster-rcnn.pytorch/lib/datasets/factory.py", line 19, in
from datasets.kitti import kitti
File "/home/lzw/resplendent_code/3d-vehicle-tracking-master/faster-rcnn.pytorch/lib/datasets/kitti.py", line 11, in
import cv2
ImportError: /home/lzw/resplendent_code/voxblox_pp_ws/devel/lib/python2.7/dist-packages/cv2.so: undefined symbol: PyCObject_Type
lzw : 201_14_18895 Finish!
(base) lzw@resplendent-star:/resplendent_code/3d-vehicle-tracking-master/faster-rcnn.pytorch$


One more thing: voxblox_pp_ws is another ROS code package, but I don't know why it appears here.
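
Editor's note: PyCObject_Type only exists in Python 2, so this usually means the Python 2 cv2.so from the sourced ROS workspace (voxblox_pp_ws lands on PYTHONPATH via its setup script) is shadowing the Python 3 OpenCV build. A minimal sketch of a check, assuming the offending entry contains python2.7/dist-packages:

import sys

# Drop Python 2 dist-packages entries (e.g. the sourced ROS/catkin
# workspace) so `import cv2` resolves to the Python 3 OpenCV build.
sys.path = [p for p in sys.path if 'python2.7/dist-packages' not in p]

import cv2
print(cv2.__file__)  # should now point to a Python 3 cv2

Opening a shell that does not source the ROS setup script, or removing the workspace from PYTHONPATH, achieves the same thing without touching the code.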

Error in Step 01 - Generate bounding boxes (./run_test.sh)

Traceback (most recent call last):
File "test_net.py", line 30, in
from model.utils.net_utils import save_net, load_net, vis_detections
File "/content/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/utils/net_utils.py", line 7, in
from model.roi_crop.functions.roi_crop import RoICropFunction
File "/content/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/functions/roi_crop.py", line 3, in
from .._ext import roi_crop
File "/content/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/_ext/roi_crop/init.py", line 3, in
from ._roi_crop import lib as _lib, ffi as _ffi
ImportError: /content/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/_ext/roi_crop/_roi_crop.so: undefined symbol: __cudaPopCallConfiguration

root : 200_14_18895 Finish!
Copy output pkl file to dataset folder...
cp: cannot stat 'vis/faster_rcnn_200_14_18895/detections_val.pkl': No such file or directory
Done!!
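
Editor's note: an undefined __cudaPopCallConfiguration symbol in _roi_crop.so typically means the compiled extension and the installed PyTorch/CUDA pair do not match, so rebuilding the extensions with the same toolkit usually clears it. A small check (only standard torch attributes are used; the interpretation is an assumption on my part):

import subprocess
import torch

# Compare the CUDA toolkit nvcc would use for compiling the roi_crop
# extension against the one PyTorch itself was built with.
print('torch', torch.__version__, 'built with CUDA', torch.version.cuda)
print(subprocess.check_output(['nvcc', '--version']).decode())

If the two differ, re-run the extension build under faster-rcnn.pytorch/lib (the make step performed by init.sh) with the matching nvcc on the PATH.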

Running mono_3d_tracking.py: TypeError: __init__() missing 2 required positional arguments: 'doc' and 'pos'

Hi, I ran the following command:
/home/work/3d-vehicle-tracking-master/3d-tracking/mono_3d_estimation.py gta test --data_split val --json_path /home/gejun/work/3d-vehicle-tracking-master/3d-tracking/data/gta5_tracking/val/label/rec_10090911_clouds_21h53m_x-968y-1487tox2523y214_bdd.json -j 2 -b 1 --n_box_limit 300 --resume ./checkpoint/616_gta_checkpoint_030.pth.tar
Using RoIAlign
=> Loading checkpoint './checkpoint/616_gta_checkpoint_030.pth.tar'
=> Successfully loaded checkpoint './checkpoint/616_gta_checkpoint_030.pth.tar' (epoch 30)
Reading /home/gejun/work/3d-vehicle-tracking-master/3d-tracking/data/gta5_tracking/val/label/rec_10090911_clouds_21h53m_x-968y-1487tox2523y214_bdd.json ...
Load single json file
Sequences [463] with total 463 frames
Input images are not normalized
Number of image to test: 463
0
1
Traceback (most recent call last):
File "/home/gejun/work/3d-vehicle-tracking-master/3d-tracking/mono_3d_estimation.py", line 539, in
main()
File "/home/gejun/work/3d-vehicle-tracking-master/3d-tracking/mono_3d_estimation.py", line 535, in main
test_model(model, args)
File "/home/gejun/work/3d-vehicle-tracking-master/3d-tracking/mono_3d_estimation.py", line 426, in test_model
for i, (image, box_info) in enumerate(iter(test_loader)):
File "/home/gejun/anaconda3/envs/pytorch041/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 336, in next
return self._process_next_batch(batch)
File "/home/gejun/anaconda3/envs/pytorch041/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 357, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
TypeError: __init__() missing 2 required positional arguments: 'doc' and 'pos'

Process finished with exit code 1

I find the code can iterate twice normally, then it breaks. Can you give me some advice on how to solve it? The loop in question is below.
for i, (image, box_info) in enumerate(iter(test_loader)):
    # measure data loading time
    data_time.update(time.time() - end)
    end = time.time()

    with torch.no_grad():
        box_output, targets = model(image, box_info, args.device, 'test')

    batch_time.update(time.time() - end)
=======================================================

How to apply the code to the GTA train dataset?

Hi, I'm a total newbie here.
I want to run your model on the GTA train set.
However, it is too big: 300 GB+!
So, is there any way to train the model without more than 300 GB of RAM?
Thanks a lot for your work!

Error in Step 00 - Training on GTA dataset (./run_train.sh)

File "test_net.py", line 30, in
from model.utils.net_utils import save_net, load_net, vis_detections
File "/home/ld/code/Mono-3D/3d-vehicle-tracking-master.2/faster-rcnn.pytorch/lib/model/utils/net_utils.py", line 7, in
from model.roi_crop.functions.roi_crop import RoICropFunction
File "/home/ld/code/Mono-3D/3d-vehicle-tracking-master.2/faster-rcnn.pytorch/lib/model/roi_crop/functions/roi_crop.py", line 3, in
from .._ext import roi_crop
File "/home/ld/code/Mono-3D/3d-vehicle-tracking-master.2/faster-rcnn.pytorch/lib/model/roi_crop/_ext/roi_crop/init.py", line 3, in
from ._roi_crop import lib as _lib, ffi as _ffi
ImportError: /home/ld/code/Mono-3D/3d-vehicle-tracking-master.2/faster-rcnn.pytorch/lib/model/roi_crop/_ext/roi_crop/_roi_crop.so: undefined symbol: __cudaRegisterFatBinaryEnd
How can I solve it? Does anybody know?

Error in Step 01 - 3D Estimation

In Step 01 - 3D Estimation, when I run:
"python run_estimation.py gta val --session 616 --epoch 030"
I get the following problem:
[Errno 2] No such file or directory: 'output/616_030_gta_val_set/616_030_rec_10090911_clouds_21h53m_x-968y-1487tox2523y214_bdd_roipool_output.pkl'

I cannot find this file on my computer. What should I do?
Thank you!

Some questions about the input data

Hi! I have some questions about the input data for 3D estimation.

gen_pred.py extracts features from the pkl file into JSON files in the pred folder:
DATASET.PRED_PATH = DATASET.PRED_PATH.replace('train', args.split)
where PRED_PATH is defined in config.py
cfg.GTA.TRACKING.PRED_PATH = join(cfg.OUTPUT_PATH, 'gta5_tracking', 'train', 'pred')
cfg.OUTPUT_PATH = join(cfg.ROOT, 'output')
But those JSON files are never used. In run_estimation.py, the input data come from the label folder:
if args.set == 'gta': JSON_ROOT = './data/gta5_tracking/{}/label/'.format(args.split)
So the data for 3D estimation come from the labels rather than from the detection results, and I don't see the detection results in the pred folder being used anywhere. That does not make sense to me.

I am confused about that. Thanks for your patience!

What is the 3D detection AP?

This is good work, accurate and efficient!

What is the 3D detection AP?

I want to know the relationship between the 3D tracking metric and the 3D detection metric.
As we can see, the 3D tracking IoU threshold is 0.25.

I also want to know how much 3D detection can be improved by leveraging tracking.
As we can see, camera-based methods have completely surpassed LiDAR on the 3D tracking task.

gen_dataset.py fails when processing GTA data

I need to regenerate the dataset because mine is stored at a different path. gen_dataset.gta_label expects the key 'dset_name' in the label JSON, but the JSON doesn't have such a key. Looks like a bug? (See also the note after the sample JSON below.)

Sample GTA label JSON:
{"name": "rec_10090911_clouds_21h53m_x-968y-1487tox2523y214/1539101548424_final.jpg", "videoName": "rec_10090911_clouds_21h53m_x-968y-1487tox2523y214", "reso lution": {"width": 1920, "height": 1080}, "attributes": {"weather": "clouds", "scene": "undefined", "timeofday": "night"}, "intrinsics": {"focal": [935.30743 60871937, 935.3074360871937], "center": [960, 540], "fov": 60.0, "nearClip": 0.15, "cali": [[935.3074360871937, 0, 960, 0], [0, 935.3074360871937, 540, 0], [ 0, 0, 1, 0]]}, "extrinsics": {"location": [542.5374139999999, 638.261108, 24.595841999999998], "rotation": [0.07860163590185047, 0.04926174360461475, -1.5720 14182640353]}, "timestamp": 1539101548424, "frameIndex": 267, "labels": [{"id": 312067, "category": "Car", "manualShape": false, "manualAttributes": false, " attributes": {"occluded": 1, "truncated": 0, "ignore": false}, "box2d": {"x1": 1344, "y1": 582, "x2": 1507, "y2": 625, "confidence": 1.0}, "box3d": {"alpha": 2.6717959948834453, "orientation": 3.1285669162153136, "location": [13.92370725222956, 1.9204142545872491, 28.332843371904413], "dimension": [1.414723087054 686, 1.986859393486255, 5.1919703417856296]}}, {"id": 346115, "category": "Car", "manualShape": false, "manualAttributes": false, "attributes": {"occluded": 1, "truncated": 2, "ignore": false}, "box2d": {"x1": 0, "y1": 672, "x2": 216, "y2": 822, "confidence": 1.0}, "box3d": {"alpha": -0.7139769182849092, "orienta tion": -1.5710088270105829, "location": [-6.546893716188748, 1.1920684201530107, 5.670235020892589], "dimension": [1.4147235458536747, 1.9867095159721255, 5. 191999838206263]}}]}

Error with numba when running gen_dataset.py

Hi,
Thanks for your generous sharing!
When I try to run this project on the KITTI tracking dataset, at Step 00:
python loader/gen_dataset.py kitti train
I run into some problems with numba; I am sure that oxt_file is loaded correctly.

Traceback (most recent call last):
File "loader/gen_dataset.py", line 607, in
main()
File "loader/gen_dataset.py", line 602, in main
ds = Dataset() # load images/labels === filter valid data
File "loader/gen_dataset.py", line 146, in init
self.data_label = self.kitti_tracking_label() # write pkl file
File "loader/gen_dataset.py", line 313, in kitti_tracking_label
poses = [tu.KittiPoseParser(fields[i]) for i in range(len(fields))]
File "loader/gen_dataset.py", line 313, in
poses = [tu.KittiPoseParser(fields[i]) for i in range(len(fields))]
File "/home/ubuntu/3d_tracking/3d-tracking/utils/tracking_utils.py", line 96, in init
self.set_oxt(fields)
File "/home/ubuntu/3d_tracking/3d-tracking/utils/tracking_utils.py", line 107, in set_oxt
rotation = angle2rot(np.array([self.roll, self.pitch, self.yaw]))
File "/home/ubuntu/3d_tracking/3d-tracking/utils/tracking_utils.py", line 113, in angle2rot
return rotate(np.eye(3), rotation, inverse=inverse)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/dispatcher.py", line 349, in _compile_for_args
error_rewrite(e, 'typing')
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/dispatcher.py", line 316, in error_rewrite
reraise(type(e), e, None)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/six.py", line 658, in reraise
raise value.with_traceback(tb)
numba.errors.TypingError: Failed at nopython (nopython frontend)
Internal error at <numba.typeinfer.CallConstraint object at 0x7f887fe3b240>:
--%<----------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/errors.py", line 577, in new_error_context
yield
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/lowering.py", line 254, in lower_block
self.lower_inst(inst)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/lowering.py", line 303, in lower_inst
val = self.lower_assign(ty, inst)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/lowering.py", line 449, in lower_assign
return self.lower_expr(ty, value)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/lowering.py", line 921, in lower_expr
return self.context.build_list(self.builder, resty, castvals)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/targets/cpu.py", line 110, in build_list
return listobj.build_list(self, builder, list_type, items)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/targets/listobj.py", line 449, in build_list
inst = ListInstance.allocate(context, builder, list_type, nitems)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/targets/listobj.py", line 317, in allocate
ok, self = cls.allocate_ex(context, builder, list_type, nitems)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/targets/listobj.py", line 266, in allocate_ex
self.zfill(self.size.type(0), nitems)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/targets/listobj.py", line 220, in zfill
cgutils.memset(builder, base, size, ir.IntType(8)(0))
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/cgutils.py", line 847, in memset
builder.call(fn, [ptr, value, size, int32_t(0), bool_t(0)])
File "/home/ubuntu/.local/lib/python3.6/site-packages/llvmlite/ir/builder.py", line 851, in call
cconv=cconv, tail=tail, fastmath=fastmath)
File "/home/ubuntu/.local/lib/python3.6/site-packages/llvmlite/ir/instructions.py", line 84, in init
raise TypeError(msg)
TypeError: Type of #4 arg mismatch: i1 != i32

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/typeinfer.py", line 142, in propagate
constraint(typeinfer)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/typeinfer.py", line 423, in call
self.resolve(typeinfer, typevars, fnty)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/typeinfer.py", line 450, in resolve
literals=literals)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/typeinfer.py", line 1173, in resolve_call
literals=literals)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/typing/context.py", line 205, in resolve_function_type
return func.get_call_type_with_literals(self, args, kws, literals)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/types/abstract.py", line 268, in get_call_type_with_literals
return self.get_call_type(context, args, kws)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/types/functions.py", line 255, in get_call_type
template, pysig, args, kws = self.dispatcher.get_call_template(args, kws)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/dispatcher.py", line 273, in get_call_template
self.compile(tuple(args))
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/dispatcher.py", line 653, in compile
cres = self._compiler.compile(args, return_type)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/dispatcher.py", line 83, in compile
pipeline_class=self.pipeline_class)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/compiler.py", line 873, in compile_extra
return pipeline.compile_extra(func)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/compiler.py", line 367, in compile_extra
return self._compile_bytecode()
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/compiler.py", line 804, in _compile_bytecode
return self._compile_core()
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/compiler.py", line 791, in _compile_core
res = pm.run(self.status)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/compiler.py", line 253, in run
raise patched_exception
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/compiler.py", line 245, in run
stage()
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/compiler.py", line 678, in stage_nopython_backend
self._backend(lowerfn, objectmode=False)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/compiler.py", line 628, in _backend
lowered = lowerfn()
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/compiler.py", line 615, in backend_nopython_mode
self.flags)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/compiler.py", line 992, in native_lowering_stage
lower.lower()
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/lowering.py", line 173, in lower
self.lower_normal_function(self.fndesc)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/lowering.py", line 214, in lower_normal_function
entry_block_tail = self.lower_function_body()
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/lowering.py", line 239, in lower_function_body
self.lower_block(block)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/lowering.py", line 254, in lower_block
self.lower_inst(inst)
File "/home/ubuntu/anaconda3/envs/3dtracking/lib/python3.6/contextlib.py", line 99, in exit
self.gen.throw(type, value, traceback)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/errors.py", line 585, in new_error_context
six.reraise(type(newerr), newerr, tb)
File "/home/ubuntu/.local/lib/python3.6/site-packages/numba/six.py", line 659, in reraise
raise value
numba.errors.LoweringError: Failed at nopython (nopython mode backend)
Type of #4 arg mismatch: i1 != i32

File "utils/tracking_utils.py", line 833:
def rot_axis(angle, axis):

if axis == 0: # X
v = [0, 4, 5, 7, 8]
^
[1] During: lowering "$28.6 = build_list(items=[Var($const28.1, /home/ubuntu/3d_tracking/3d-tracking/utils/tracking_utils.py (833)), Var($const28.2, /home/ubuntu/3d_tracking/3d-tracking/utils/tracking_utils.py (833)), Var($const28.3, /home/ubuntu/3d_tracking/3d-tracking/utils/tracking_utils.py (833)), Var($const28.4, /home/ubuntu/3d_tracking/3d-tracking/utils/tracking_utils.py (833)), Var($const28.5, /home/ubuntu/3d_tracking/3d-tracking/utils/tracking_utils.py (833))])" at /home/ubuntu/3d_tracking/3d-tracking/utils/tracking_utils.py (833)
[2] During: resolving callee type: type(CPUDispatcher(<function rot_axis at 0x7f88828ec378>))
[3] During: typing of call at /home/ubuntu/3d_tracking/3d-tracking/utils/tracking_utils.py (862)

--%<----------------------------------------------------------------------------

File "utils/tracking_utils.py", line 862:
def rotate(vector, angle, inverse=False):

# Rotation matrices around the X (gamma), Y (beta), and Z (alpha) axis
RX = rot_axis(gamma, 0)
^

This is not usually a problem with Numba itself but instead often caused by
the use of unsupported features or an issue in resolving types.

To see Python/NumPy features supported by the latest release of Numba visit:
http://numba.pydata.org/numba-doc/dev/reference/pysupported.html
and
http://numba.pydata.org/numba-doc/dev/reference/numpysupported.html

For more information about typing errors and how to debug them visit:
http://numba.pydata.org/numba-doc/latest/user/troubleshoot.html#my-code-doesn-t-compile

If you think your code should work with Numba, please report the error message
and traceback, along with a minimal reproducer at:
https://github.com/numba/numba/issues/new


How could I solve this problem?
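
Editor's note: this i1 != i32 lowering failure is most often reported with a mismatched numba/llvmlite pair rather than a bug in rot_axis itself, so checking the installed versions against the dependencies in 3d-tracking/requirements.txt is a reasonable first step (whether requirements.txt pins these exact packages is my assumption):

import llvmlite
import numba

# A numba release paired with an incompatible llvmlite can fail while
# lowering the list literal built inside the jitted rot_axis().
print('numba', numba.__version__, 'llvmlite', llvmlite.__version__)

Reinstalling a matching pair in a clean environment is usually enough; rewriting the jitted helpers should not be necessary.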

What are the prerequisites and steps for running faster-rcnn.pytorch?

Initially, I planned to run faster-rcnn.pytorch before testing your whole project. I downloaded all required datasets (KITTI_Tracking), put them into the corresponding directories, and generated soft links.

  1. Run ./init.sh; some new directories and links were built.
  2. Modify some options in ./run_train.sh (for example, DATASET="kitti").
  3. sh ./run_train.sh fails with an error (screenshot of the error omitted).

As I guess, it means no inputs have been loaded. However, I didn't make any modifications to faster-rcnn. Any cues would be highly appreciated.

AssertionError

Hi,
I am getting the following error:
Traceback (most recent call last): File "mono_3d_tracking.py", line 274, in <module> main() File "mono_3d_tracking.py", line 271, in main te.eval_app() File "/home/husam/Carla_3D_Tracking/3d-tracking/tools/eval_mot_bdd.py", line 96, in eval_app result[i_s] = self.eval_parallel(seq_gt, seq_pd) File "/home/husam/Carla_3D_Tracking/3d-tracking/tools/eval_mot_bdd.py", line 171, in eval_parallel evaluator.evaluate() File "/home/husam/Carla_3D_Tracking/3d-tracking/tools/pymot/pymot.py", line 99, in evaluate self.evaluateFrame(frame) File "/home/husam/Carla_3D_Tracking/3d-tracking/tools/pymot/pymot.py", line 269, in evaluateFrame assert self.mappings_[gt_id] != hypo_id AssertionError

when running the following command:
python run_tracking.py gta val --session 616 --epoch 030

I wonder what could be the problem here?
I read the comment that you have for this AssertionError, which is the following (in pymot.py):

      # Assert no known mappings have been added to hungarian, 
     # since keep correspondence should have 
     # this case.

but I couldn't understand it

Hello, running init.sh times out (from China) :)

Hi, I am in China running your code, and I unavoidably hit very slow connections to Amazon S3. What can I do? Thank you!

Console output:

cd data/pretrained_model

--2020-04-07 22:20:53-- (attempt 2) https://s3.amazonaws.com/pytorch/models/resnet101-5d3b4d8f.pth
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.19.67|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 178728960 (170M) [application/octet-stream]
Saving to: 'resnet101-5d3b4d8f.pth'

resnet101-5d3b4d8f.p 0%[ ] 84.63K 16.3KB/s in 5.2s

2020-04-07 22:26:23 (16.3 KB/s) - Read error at byte 86661/178728960. Retrying.

--2020-04-07 22:26:25-- (attempt 3) https://s3.amazonaws.com/pytorch/models/resnet101-5d3b4d8f.pth
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.19.67|:443... failed: Connection timed out.

AttributeError in gen_pred.py

Hi, sorry, I'm new to this. Some errors occur when I run this line:
"python loader/gen_pred.py gta val"
How can I fix it?

Load label file from path: /home/user/3d-vehicle-tracking/3d-tracking/loader/data/gta5_tracking/val/label
Traceback (most recent call last):
File "gen_pred.py", line 191, in
main()
File "gen_pred.py", line 186, in main
ds = Dataset()
File "gen_pred.py", line 110, in init
assert len(self.data_path) == self.det_result.shape[0],
AttributeError: 'Dataset' object has no attribute 'det_result'

Looking forward to your reply!
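
Editor's note: the Dataset in gen_pred.py apparently only gains a det_result attribute after loading the Faster R-CNN detection pickle, so this AttributeError normally just means that file is missing; run_test.sh in another issue above copies a detections_val.pkl into the dataset folder. A minimal sanity check, with the destination path below being an assumption:

import os
import pickle

det_pkl = 'data/gta5_tracking/val/detections_val.pkl'  # assumed location
print('exists:', os.path.isfile(det_pkl))
if os.path.isfile(det_pkl):
    with open(det_pkl, 'rb') as f:
        det = pickle.load(f)
    print(type(det))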

Using the KITTI dataset

Hello,

I'm trying to run the Faster R-CNN part on the KITTI dataset, but I'm not able to do so. I'm doing this so I can reproduce the full pipeline. I did it for the GTA one and it worked well.

To run this on KITTI, I downloaded KITTI's left color images from the tracking dataset and placed them in the correct folder. I can see it running, but when using --vis I get no car bounding box. I analyzed and thresholded the output of the net, and saw that there is no bounding box for the 'car' class above a 0.5 threshold. I tried changing configs in config.py, limiting the size of the images, and other things, but it didn't work. I'm also using the faster_rcnn_300_100_175.pth checkpoint file.

Do you have any clue where the mistake is?

I was able to run the whole pipeline (from images to tracking) using the GTA dataset, even when rescaling the input images.

Thanks

GPS and IMU

I downloaded the code and ran it according to the README. I have one concern:

In the paper, you assume ego-motion is given by GPS/IMU. However, I didn't find any information about GPS/IMU in the code or the data sample. So how do you get the ego-motion of the ego-vehicle and build true 3D coordinates for the objects and the ego-vehicle? Without true 3D coordinates, how do you build the association matrix? And there are further consequences.

Thanks in advance.

How to apply the code to the KITTI tracking test set?

Hi, thanks for sharing your great work.

I'm trying to apply the code to the KITTI tracking test set, but it doesn't seem to work. Did I miss something?

Here are the steps I did:

  1. I put the calib, image, and oxts files into the folder 3d-vehicle-tracking/3d-tracking/data/kitti_tracking/testing, and I created empty folders 0000, 0001, ..., 0028 in label_02, so the structure looks like:
calib/
calib/0000.txt
...
image_02/
image_02/0000/000000.png
...
label_02/
label_02/0000/            <== empty
...
oxts/0000.txt
...
  2. I downloaded kitti_test_trk_detections_RCC.pkl, put it into the folder 3d-vehicle-tracking/3d-tracking/data/kitti_tracking/, and renamed it kitti_test_trk_detections.pkl.

  3. Run the script:
    PYTHONPATH=. python loader/gen_dataset.py kitti test --kitti_task track --mode test
    Then I got empty labels in the label_02 folder.

  4. Run the script:
    PYTHONPATH=. python loader/gen_pred.py kitti test
    Then I got this result:

 1550 images.
 Frame 1550, GT: 0 Boxes, PD: 3 Boxes
 Frame 1551, GT: 0 Boxes, PD: 2 Boxes
 Frame 1552, GT: 0 Boxes, PD: 2 Boxes
 Frame 1553, GT: 0 Boxes, PD: 3 Boxes
 Frame 1554, GT: 0 Boxes, PD: 2 Boxes
 Frame 1555, GT: 0 Boxes, PD: 3 Boxes
 Frame 1556, GT: 0 Boxes, PD: 3 Boxes
 Frame 1557, GT: 0 Boxes, PD: 2 Boxes
 Frame 1558, GT: 0 Boxes, PD: 2 Boxes
 Frame 1559, GT: 0 Boxes, PD: 2 Boxes

Here I assume that GT is the label; because we use empty labels, it always returns 0 boxes. And PD means the prediction result. So far I think everything works fine.

  5. Run the script:
    PYTHONPATH=. python run_estimation.py kitti test --session 623 --epoch 100
    Then I got this message:
GT is empty
GT is empty
GT is empty
GT is empty
GT is empty
GT is empty
GT is empty
GT is empty
GT is empty
Prediction is empty
GT is empty
Prediction is empty
GT is empty
Prediction is empty

Here I don't understand why the prediction is empty sometimes (not every frame; mostly it only shows "GT is empty").
It generates a new folder, 3d-tracking/output/623_100_kitti_test_set, and some ***output.pkl files.

  6. Run the script:
    PYTHONPATH=. python run_tracking.py kitti test --session 623 --epoch 100

Writing to output/623_100_kitti_test_set/kf3d_age20_aff0.1_hit0_100m_803_pd.json
=> Begin evaluation...
Empty results
Empty results
Empty results
Empty results
Empty results
Empty results
Empty results
Empty results
Empty results
Empty results
Empty results

I'm not sure whether something went wrong here, because the generated file looks fine:
kf3d_age20_aff0.1_hit0_100m_803_pd.json

{"timestamp": 174, "num": 174, "im_path": ["/media/huxi/DATA/inf_master/Semester-4/lecture/absolute/code/JM3DDT/3d-vehicle-tracking/3d-tracking/data/kitti_tracking/testing/image_02/0028/000174.png"], "class": "frame", "hypotheses": [{"height": 26.0, "width": 28.0, "trk_box": [622.1823653168534, 178.12736144976256, 671.7941276141136, 205.80495628396002], "det_box": [632.0, 178.0, 660.0, 204.0, 0.18501399457454681], "id": 1272, "x": 646, "y": 191, "dim": [1.4942878484725952, 1.6439176797866821, 3.850013494491577], "alpha": -1.2112747430801392, "roty": -1.1458846171921906, "depth": 41.02299230376193}, {"height": 60.0, "width": 63.0, "trk_box": [638.676846346, 175.13908976573774, 729.9815307848789, 230.30443618814388], "det_box": [647.0, 171.0, 710.0, 231.0, 0.999983012676239], "id": 1271, "x": 678, "y": 201, "dim": [1.532240390777588, 1.670041561126709, 4.048956394195557], "alpha": 1.440821886062622, "roty": 1.5270784997306373, "depth": 20.613992359586092}]}
  7. PYTHONPATH=. python tools/convert_estimation_bdd.py kitti test --session 623 --epoch 100
    Here the result looks very weird (0000_bdd_3d.json):
{"id": -1, "category": "", "manualShape": false, "manualAttributes": false, "attributes": {"occluded": false, "truncated": false, "ignore": false}, "box2d": {"x1": 6, "y1": 223, "x2": 157, "y2": 369, "confidence": 0.177522}, "box3d": {"alpha": 0.0, "orientation": 0.0, "location": [0, 0, 0], "dimension": [0, 0, 0], "xc": -258, "yc": 360}}

The alpha is always 0.0, and the category is always empty.

  8. PYTHONPATH=. python tools/convert_tracking_bdd.py kitti test --session 623 --epoch 100
    The same: the alpha is always 0.0 and the category is always empty (623_100_kitti_test_set/kf2ddeep_age20_aff0.1_hit0_100m_803/data/0000_bdd_3d.json):
{"id": 28, "category": "", "manualShape": false, "manualAttributes": false, "attributes": {"occluded": false, "truncated": false, "ignore": false}, "box2d": {"x1": 972, "y1": 183, "x2": 1240, "y2": 375, "confidence": 0.999973}, "box3d": {"alpha": 0.0, "orientation": 0.0, "location": [[4.390286208707697, 1.1490172121191522, 3.698746681213379]], "dimension": [0, 0, 0], "xc": 1466, "yc": 397}}

So are there bugs in convert_estimation_bdd.py and convert_tracking_bdd.py, or did I miss something?

Thank you!

Prediction is empty

Hi, when I was testing on the GTA validation set, I didn't get any prediction results.

Is there anything wrong with my setup?

Here are the details of my pipeline:

  • I downloaded the dataset and checkpoints using python loader/download.py mini and python loader/download.py checkpoint
  • I unzipped 3d_tracking_checkpoint.zip and the directory looks like:
checkpoint
|-- 3d_tracking_checkpoint.zip
|-- 616_gta_checkpoint_030.pth.tar
|-- 623_kitti_checkpoint_100.pth.tar
|-- 803_kitti_300_linear.pth
`-- faster_rcnn_checkpoint.zip
  • I unzipped gta_3d_tracking_val_image_0001.zip and gta_3d_tracking_val_label_0001.zip and the directory tree looks like:
data
|-- gta5_tracking
|   `-- val
|       |-- image
|       |   |-- rec_10090911_clouds_21h53m_x-968y-1487tox2523y214
|       |   |-- rec_10090913_thunder_15h7m_x-293y-199tox-2713y2301
|       |   |-- rec_10090915_thunder_9h11m_x383y-544tox2291y2832
|       |   |-- rec_10090917_thunder_20h0m_x-533y-1709tox-2718y1505
|       |   |-- rec_10090920_clearing_19h19m_x-647y-319tox1349y2684
|       |   |-- rec_10090922_overcast_9h52m_x34y-1009tox-2815y48
|       |   |-- rec_10090924_rain_10h52m_x-6y-1235tox2150y1328
|       |   |-- rec_10090926_overcast_6h56m_x-178y-1803tox-2358y1027
|       |   |-- rec_10090929_foggy_7h57m_x103y-183tox-2986y1982
|       |   `-- rec_10090934_rain_14h17m_x-324y-664tox1977y2572
|       `-- label
|           |-- rec_10090911_clouds_21h53m_x-968y-1487tox2523y214
|           |-- rec_10090911_clouds_21h53m_x-968y-1487tox2523y214_bdd.json
|           |-- rec_10090913_thunder_15h7m_x-293y-199tox-2713y2301
|           |-- rec_10090913_thunder_15h7m_x-293y-199tox-2713y2301_bdd.json
|           |-- rec_10090915_thunder_9h11m_x383y-544tox2291y2832
|           |-- rec_10090915_thunder_9h11m_x383y-544tox2291y2832_bdd.json
|           |-- rec_10090917_thunder_20h0m_x-533y-1709tox-2718y1505
|           |-- rec_10090917_thunder_20h0m_x-533y-1709tox-2718y1505_bdd.json
|           |-- rec_10090920_clearing_19h19m_x-647y-319tox1349y2684
|           |-- rec_10090920_clearing_19h19m_x-647y-319tox1349y2684_bdd.json
|           |-- rec_10090922_overcast_9h52m_x34y-1009tox-2815y48
|           |-- rec_10090922_overcast_9h52m_x34y-1009tox-2815y48_bdd.json
|           |-- rec_10090924_rain_10h52m_x-6y-1235tox2150y1328
|           |-- rec_10090924_rain_10h52m_x-6y-1235tox2150y1328_bdd.json
|           |-- rec_10090926_overcast_6h56m_x-178y-1803tox-2358y1027
|           |-- rec_10090926_overcast_6h56m_x-178y-1803tox-2358y1027_bdd.json
|           |-- rec_10090929_foggy_7h57m_x103y-183tox-2986y1982
|           |-- rec_10090929_foggy_7h57m_x103y-183tox-2986y1982_bdd.json
|           |-- rec_10090934_rain_14h17m_x-324y-664tox1977y2572
|           `-- rec_10090934_rain_14h17m_x-324y-664tox1977y2572_bdd.json
  • Then I ran sh scripts/test_gta.sh 0 in 3d-vehicle-tracking/3d-tracking/

During the inference process, the log says "Prediction is empty":

Load single json file
Sequences [461] with total 461 frames
Input images are not normalized
Number of image to test: 461
Prediction is empty
Prediction is empty
Prediction is empty
Prediction is empty
Prediction is empty
Prediction is empty
Prediction is empty
Prediction is empty
Prediction is empty
Prediction is empty
...

I also checked the output pkl file and found that the predicted result is empty in every frame.

I tried changing line 16 in test_gta.sh to test on another sequence, such as rec_10090934_rain_14h17m_x-324y-664tox1977y2572, but still got no prediction results.
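
Editor's note: a minimal check, assuming the per-sequence prediction JSONs follow the PRED_PATH convention quoted in an earlier issue (output/gta5_tracking/<split>/pred) and use the same BDD-style per-frame 'labels' list as the label files; if they are missing or contain no boxes, "Prediction is empty" is the expected log for every frame.

import glob
import json
import os

pred_dir = os.path.join('output', 'gta5_tracking', 'val', 'pred')  # assumed
for path in sorted(glob.glob(os.path.join(pred_dir, '*_bdd.json'))):
    with open(path) as f:
        frames = json.load(f)
    # Count predicted boxes across all frames of the sequence.
    n_boxes = sum(len(frame.get('labels', [])) for frame in frames)
    print(os.path.basename(path), n_boxes, 'predicted boxes')

If that folder is empty, the detection and gen_pred steps (run_test.sh and loader/gen_pred.py gta val) probably still need to be run before test_gta.sh.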

tensor (8) must match the existing size (0)

Hello,
when I run "./run_train.sh", an error comes up:
(base) pf@pf-System-Product-Name:~/3d-vehicle-tracking/faster-rcnn.pytorch$ ./run_train.sh
Called with args:
Namespace(batch_size=8, checkepoch=1, checkpoint=0, checkpoint_interval=10000, checksession=1, class_agnostic=False, cuda=True, dataset='gta_det', disp_interval=100, large_scale=False, lr=0.001, lr_decay_gamma=0.1, lr_decay_step=10, mGPUs=True, max_epochs=20, net='res101', num_workers=4, optimizer='adam', resume=False, save_dir='models', session=201, start_epoch=1, use_tfboard=False)
/home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/utils/config.py:384: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
yaml_cfg = edict(yaml.load(f))
Using config:
{'ANCHOR_RATIOS': [0.5, 1, 2],
'ANCHOR_SCALES': [2, 4, 8, 16],
'ANNO_PATH': 'train',
'BINARY_CLASS': True,
'CROP_RESIZE_WITH_MAX_POOL': False,
'CUDA': False,
'DATASET': 'gta',
'DATA_DIR': '/home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/data',
'DEDUP_BOXES': 0.0625,
'EPS': 1e-14,
'EXP_DIR': 'res101',
'FEAT_STRIDE': [16],
'FOCAL': 935.3074360871937,
'GPU_ID': 0,
'MATLAB': 'matlab',
'MAX_NUM_GT_BOXES': 100,
'MOBILENET': {'DEPTH_MULTIPLIER': 1.0,
'FIXED_LAYERS': 5,
'REGU_DEPTH': False,
'WEIGHT_DECAY': 4e-05},
'PIXEL_MEANS': array([[[102.9801, 115.9465, 122.7717]]]),
'POOLING_MODE': 'align',
'POOLING_SIZE': 7,
'RESNET': {'FIXED_BLOCKS': 1, 'MAX_POOL': False},
'RNG_SEED': 3,
'ROOT_DIR': '/home/pf/3d-vehicle-tracking/faster-rcnn.pytorch',
'RUN_POSE': True,
'TEST': {'BBOX_REG': True,
'HAS_RPN': True,
'MAX_SIZE': 1920,
'MODE': 'nms',
'NMS': 0.3,
'PROPOSAL_METHOD': 'gt',
'RPN_MIN_SIZE': 16,
'RPN_NMS_THRESH': 0.7,
'RPN_POST_NMS_TOP_N': 300,
'RPN_PRE_NMS_TOP_N': 6000,
'RPN_TOP_N': 5000,
'SCALES': [1080],
'SVM': False},
'TRAIN': {'ASPECT_GROUPING': False,
'BATCH_SIZE': 128,
'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
'BBOX_NORMALIZE_TARGETS': True,
'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,
'BBOX_REG': True,
'BBOX_THRESH': 0.5,
'BG_THRESH_HI': 0.5,
'BG_THRESH_LO': 0.0,
'BIAS_DECAY': False,
'BN_TRAIN': False,
'DISPLAY': 20,
'DOUBLE_BIAS': False,
'FG_FRACTION': 0.25,
'FG_THRESH': 0.5,
'GAMMA': 0.1,
'HAS_RPN': True,
'IMS_PER_BATCH': 1,
'LEARNING_RATE': 0.001,
'MAX_SIZE': 1920,
'MOMENTUM': 0.9,
'PROPOSAL_METHOD': 'gt',
'RPN_BATCHSIZE': 256,
'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'RPN_CLOBBER_POSITIVES': False,
'RPN_FG_FRACTION': 0.5,
'RPN_MIN_SIZE': 8,
'RPN_NEGATIVE_OVERLAP': 0.3,
'RPN_NMS_THRESH': 0.7,
'RPN_POSITIVE_OVERLAP': 0.7,
'RPN_POSITIVE_WEIGHT': -1.0,
'RPN_POST_NMS_TOP_N': 2000,
'RPN_PRE_NMS_TOP_N': 12000,
'SCALES': [1080],
'SNAPSHOT_ITERS': 5000,
'SNAPSHOT_KEPT': 3,
'SNAPSHOT_PREFIX': 'res101_faster_rcnn',
'STEPSIZE': [30000],
'SUMMARY_INTERVAL': 180,
'TRIM_HEIGHT': 1080,
'TRIM_WIDTH': 1920,
'TRUNCATED': False,
'USE_ALL_GT': True,
'USE_FLIPPED': False,
'USE_GT': False,
'WEIGHT_DECAY': 0.0001},
'USE_DEBUG_SET': False,
'USE_GPU_NMS': True}
Loaded dataset gta_det_train for training
Set proposal method: gt
Preparing training data...
done
before filtering, there are 0 images...
after filtering, there are 0 images...
0 roidb entries
Loading pretrained weights from data/pretrained_model/resnet101_caffe.pth
Traceback (most recent call last):
File "trainval_net.py", line 363, in
data_iter = iter(dataloader)
File "/home/pf/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 501, in iter
return _DataLoaderIter(self)
File "/home/pf/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 297, in init
self._put_indices()
File "/home/pf/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in _put_indices
indices = next(self.sample_iter, None)
File "/home/pf/.local/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 138, in iter
for idx in self.sampler:
File "trainval_net.py", line 137, in iter
self.batch_size) + self.range
RuntimeError: The expanded size of the tensor (8) must match the existing size (0) at non-singleton dimension 1


So I searched in the issues and checked my dataset; the dataset is complete.
Is this a wrong CUDA configuration?
By the way, the following is my configuration information:

(base) pf@pf-System-Product-Name:~/3d-vehicle-tracking/faster-rcnn.pytorch$ bash init.sh
++ pwd

  • _PWD=/home/pf/3d-vehicle-tracking/faster-rcnn.pytorch
  • _DATA_GTA=/home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/../3d-tracking/data/gta5_tracking/
  • _DATA_KITTI_T=/home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/../3d-tracking/data/kitti_tracking/
  • _DATA_KITTI_D=/home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/../3d-tracking/data/kitti_object/
  • cd lib/
  • ./make.sh
    running build_ext
    skipping 'model/utils/bbox.c' Cython extension (up-to-date)
    skipping 'pycocotools/_mask.c' Cython extension (up-to-date)
    Compiling nms kernels by nvcc...
    Including CUDA code.
    /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/nms
    ['/home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/nms/src/nms_cuda_kernel.cu.o']
    generating /tmp/tmp7dsxa0xo/_nms.c
    setting the current directory to '/tmp/tmp7dsxa0xo'
    running build_ext
    building '_nms' extension
    creating home
    creating home/pf
    creating home/pf/3d-vehicle-tracking
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_pooling
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_pooling/src
    gcc -pthread -B /home/pf/anaconda3/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -std=c99 -fPIC -DWITH_CUDA -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/pf/anaconda3/include/python3.6m -c _roi_pooling.c -o ./_roi_pooling.o -std=c99
    gcc -pthread -B /home/pf/anaconda3/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -std=c99 -fPIC -DWITH_CUDA -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/pf/anaconda3/include/python3.6m -c /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_pooling/src/roi_pooling.c -o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_pooling/src/roi_pooling.o -std=c99
    gcc -pthread -B /home/pf/anaconda3/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -std=c99 -fPIC -DWITH_CUDA -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/pf/anaconda3/include/python3.6m -c /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_pooling/src/roi_pooling_cuda.c -o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_pooling/src/roi_pooling_cuda.o -std=c99
    gcc -pthread -shared -B /home/pf/anaconda3/compiler_compat -L/home/pf/anaconda3/lib -Wl,-rpath=/home/pf/anaconda3/lib -Wl,--no-as-needed -Wl,--sysroot=/ -std=c99 ./_roi_pooling.o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_pooling/src/roi_pooling.o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_pooling/src/roi_pooling_cuda.o /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_pooling/src/roi_pooling.cu.o -o ./_roi_pooling.so
    Compiling roi align kernels by nvcc...
    /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_align
    Including CUDA code.
    generating /tmp/tmpkageu3dg/_roi_align.c
    setting the current directory to '/tmp/tmpkageu3dg'
    running build_ext
    building '_roi_align' extension
    creating home
    creating home/pf
    creating home/pf/3d-vehicle-tracking
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_align
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_align/src
    gcc -pthread -B /home/pf/anaconda3/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -std=c99 -fPIC -DWITH_CUDA -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/pf/anaconda3/include/python3.6m -c _roi_align.c -o ./_roi_align.o -std=c99
    gcc -pthread -B /home/pf/anaconda3/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -std=c99 -fPIC -DWITH_CUDA -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/pf/anaconda3/include/python3.6m -c /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_align/src/roi_align.c -o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_align/src/roi_align.o -std=c99
    gcc -pthread -B /home/pf/anaconda3/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -std=c99 -fPIC -DWITH_CUDA -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/pf/anaconda3/include/python3.6m -c /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_align/src/roi_align_cuda.c -o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_align/src/roi_align_cuda.o -std=c99
    gcc -pthread -shared -B /home/pf/anaconda3/compiler_compat -L/home/pf/anaconda3/lib -Wl,-rpath=/home/pf/anaconda3/lib -Wl,--no-as-needed -Wl,--sysroot=/ -std=c99 ./_roi_align.o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_align/src/roi_align.o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_align/src/roi_align_cuda.o /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_align/src/roi_align_kernel.cu.o -o ./_roi_align.so
    Compiling roi crop kernels by nvcc...
    Including CUDA code.
    /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop
    generating /tmp/tmppw9uiuk4/_roi_crop.c
    setting the current directory to '/tmp/tmppw9uiuk4'
    running build_ext
    building '_roi_crop' extension
    creating home
    creating home/pf
    creating home/pf/3d-vehicle-tracking
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop
    creating home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/src
    gcc -pthread -B /home/pf/anaconda3/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -std=c99 -fPIC -DWITH_CUDA -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/pf/anaconda3/include/python3.6m -c _roi_crop.c -o ./_roi_crop.o -std=c99
    gcc -pthread -B /home/pf/anaconda3/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -std=c99 -fPIC -DWITH_CUDA -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/pf/anaconda3/include/python3.6m -c /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/src/roi_crop.c -o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/src/roi_crop.o -std=c99
    gcc -pthread -B /home/pf/anaconda3/compiler_compat -Wl,--sysroot=/ -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -std=c99 -fPIC -DWITH_CUDA -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/pf/.local/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/pf/anaconda3/include/python3.6m -c /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/src/roi_crop_cuda.c -o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/src/roi_crop_cuda.o -std=c99
    gcc -pthread -shared -B /home/pf/anaconda3/compiler_compat -L/home/pf/anaconda3/lib -Wl,-rpath=/home/pf/anaconda3/lib -Wl,--no-as-needed -Wl,--sysroot=/ -std=c99 ./_roi_crop.o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/src/roi_crop.o ./home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/src/roi_crop_cuda.o /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/lib/model/roi_crop/src/roi_crop_cuda_kernel.cu.o -o ./_roi_crop.so
  • cd /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch
  • mkdir -p data/pretrained_model
  • cd data/pretrained_model
  • wget https://s3.amazonaws.com/pytorch/models/resnet101-5d3b4d8f.pth
    --2019-11-28 06:46:12-- https://s3.amazonaws.com/pytorch/models/resnet101-5d3b4d8f.pth
    Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.26.70
    Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.26.70|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 178728960 (170M) [application/octet-stream]
    Saving to: ‘resnet101-5d3b4d8f.pth’

resnet101-5d3b4d8f.pth 100%[========================================>] 170.45M 1.57MB/s in 1m 54s

2019-11-28 06:48:08 (1.50 MB/s) - ‘resnet101-5d3b4d8f.pth’ saved [178728960/178728960]

  • mv resnet101-5d3b4d8f.pth resnet101_caffe.pth
  • cd /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch
  • cd data/
  • ln -s /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/../3d-tracking/data/gta5_tracking/ gta5_tracking
  • echo 'mv GTA dataset (train, val, test) to data/gta5_tracking'
    mv GTA dataset (train, val, test) to data/gta5_tracking
  • cd /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/data/
  • ln -s /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch/../3d-tracking/data/kitti_tracking/ kitti_tracking
  • echo 'mv KITTI tracking dataset (train, test) to data/kitti_tracking'
    mv KITTI tracking dataset (train, test) to data/kitti_tracking
  • cd /home/pf/3d-vehicle-tracking/faster-rcnn.pytorch
  • mkdir vis/
  • mkdir -p models/res101/gta_det/
  • mkdir models/res101/kitti/
  • echo 'Please move faster_rcnn_200_14_18895.pth to models/res101/gta_det/'
    Please move faster_rcnn_200_14_18895.pth to models/res101/gta_det/
  • echo 'Please move faster_rcnn_300_100_175.pth to models/res101/kitti/'
    Please move faster_rcnn_300_100_175.pth to models/res101/kitti/
----------------- I do not understand: when I run the above command, it only prints these messages. Should I execute these commands manually? -----------------

System: Ubuntu 16.04
GPU: 1080 Ti
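
Editor's note: the key line in the log above is "before filtering, there are 0 images... 0 roidb entries"; with an empty roidb the sampler cannot expand 8 batch indices over a size-0 dataset, which is exactly the RuntimeError shown. A minimal check that the data symlink created by init.sh actually resolves to the GTA labels (the train/label/*_bdd.json layout is assumed from the val split shown in another issue):

import glob
import os

root = 'data/gta5_tracking'  # symlink created by init.sh
print(os.path.realpath(root), 'is a directory:', os.path.isdir(root))
labels = glob.glob(os.path.join(root, 'train', 'label', '*_bdd.json'))
print(len(labels), 'training label files found')

So this is most likely a dataset-path issue rather than a CUDA configuration problem.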
