
globaltrack's People

Contributors

huanglianghua

globaltrack's Issues

COCO, LaSOT, GOT-10k preprocessing

Thank you for the code. When reproducing the results ourselves, do we need to preprocess the COCO, LaSOT, and GOT-10k data, or just place them in the corresponding folders?

hi

Do you have plans to release your code?

ModuleNotFoundError: No module named '_init_paths'

Traceback (most recent call last):
File "tools/test_global_track.py", line 1, in <module>
import _init_paths
ModuleNotFoundError: No module named '_init_paths'

I followed the setup steps. How can I fix this?
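For anyone hitting the same error, here is a minimal workaround sketch, assuming `_init_paths.py` sits at the repository root as the traceback suggests: put the repository root on `sys.path` before the import (running the script from the repository root has the same effect). This is not the author's fix, just an illustration.

```python
# Hypothetical workaround (assumption: this snippet sits at the top of a
# script under <repo>/tools/ and _init_paths.py lives at the repo root).
import os
import sys

REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if REPO_ROOT not in sys.path:
    sys.path.insert(0, REPO_ROOT)

import _init_paths  # noqa: E402  (resolves once the root is on sys.path)
```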

Hi, would you please release your 'neuron' repo?

I saw the neuron library in this GlobalTrack repo; it seems to be an advanced version of the got10k repo for visual-tracking dataset processing. So, as the title says, do you have plans to release the neuron repo separately? Thank you for all your work.

mmcv not installed

Following the tutorial in README.md, I ran python setup.py develop, but there is no mmcv in the conda list.
What mmcv version is required, and where should I install it from?
Thank you!
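A quick check sketch for anyone in the same situation; the exact mmcv version GlobalTrack expects is not stated in this thread, so any version pin should be taken from the bundled mmdetection submodule rather than from this example.

```python
# Sanity check (assumes torch is installed; mmcv may not be): report whether
# mmcv is importable and which torch/CUDA build is present.
import importlib.util

import torch

if importlib.util.find_spec("mmcv") is None:
    print("mmcv is not installed in this environment")
else:
    import mmcv
    print("mmcv:", mmcv.__version__)

print("torch:", torch.__version__, "built with CUDA:", torch.version.cuda)
```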

ImportError: _submodules/mmdetection/mmdet/ops/dcn/deform_conv_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: __cudaPopCallConfiguration

Hi,
I got the following error:
ImportError: _submodules/mmdetection/mmdet/ops/dcn/deform_conv_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: __cudaPopCallConfiguration
I tried this on two machines (a 1060 and a 2080 Ti) and got the same error on both. Can anyone give me some help?
Thanks.
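This undefined-symbol error usually points to a mismatch between the CUDA toolkit that compiled the extension and the CUDA build of PyTorch (an assumption about this particular setup, not a confirmed diagnosis). A small diagnostic sketch:

```python
# Compare PyTorch's CUDA build against the nvcc on PATH. If they differ,
# the .so was likely compiled against the wrong toolkit and the extensions
# need to be rebuilt in a matching environment.
import shutil
import subprocess

import torch

print("PyTorch:", torch.__version__, "built with CUDA:", torch.version.cuda)
nvcc = shutil.which("nvcc")
if nvcc is None:
    print("nvcc not found on PATH")
else:
    out = subprocess.run([nvcc, "--version"], capture_output=True, text=True)
    print(out.stdout.strip())
```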

ImportError: libcudart.so.9.0

Traceback (most recent call last):

File "tools/test_global_track.py", line 6, in <module>
import _init_paths
File "/home/huchenjie/CODE/GlobalTrack-master/_init_paths.py", line 6, in <module>
from modules import *
File "/home/huchenjie/CODE/GlobalTrack-master/modules/__init__.py", line 1, in <module>
from .modulators import *
File "/home/huchenjie/CODE/GlobalTrack-master/modules/modulators.py", line 3, in <module>
from mmdet.models.roi_extractors import SingleRoIExtractor
File "_submodules/mmdetection/mmdet/models/__init__.py", line 1, in <module>
from .anchor_heads import * # noqa: F401,F403
File "_submodules/mmdetection/mmdet/models/anchor_heads/__init__.py", line 1, in <module>
from .anchor_head import AnchorHead
File "_submodules/mmdetection/mmdet/models/anchor_heads/anchor_head.py", line 8, in <module>
from mmdet.core import (AnchorGenerator, anchor_target, delta2bbox, force_fp32,
File "_submodules/mmdetection/mmdet/core/__init__.py", line 6, in <module>
from .post_processing import * # noqa: F401, F403
File "_submodules/mmdetection/mmdet/core/post_processing/__init__.py", line 1, in <module>
from .bbox_nms import multiclass_nms
File "_submodules/mmdetection/mmdet/core/post_processing/bbox_nms.py", line 3, in <module>
from mmdet.ops.nms import nms_wrapper
File "_submodules/mmdetection/mmdet/ops/__init__.py", line 2, in <module>
from .dcn import (DeformConv, DeformConvPack, DeformRoIPooling,
File "_submodules/mmdetection/mmdet/ops/dcn/__init__.py", line 1, in <module>
from .deform_conv import (DeformConv, DeformConvPack, ModulatedDeformConv,
File "_submodules/mmdetection/mmdet/ops/dcn/deform_conv.py", line 9, in <module>
from . import deform_conv_cuda
ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory
Thank you for the code. I installed the environment as required and successfully compiled the C++/CUDA extensions. The only difference is that I replaced PyTorch 1.1.0 with PyTorch 1.5.1. The above error occurred while running test_global_track.py. I tried many ways but could not solve the problem, so I hope to get help from the author and other friends.
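One plausible cause (an assumption, based only on the traceback): the compiled extensions are stale artifacts from the earlier PyTorch 1.1.0 / CUDA 9.0 environment, so they still link against libcudart.so.9.0. A clean-rebuild sketch, to be run from the GlobalTrack root; the paths below mirror the traceback and may need adjusting.

```python
# Hypothetical clean rebuild: remove previously compiled extensions and
# build again against the current PyTorch/CUDA. Paths are assumptions.
import glob
import os
import shutil
import subprocess
import sys

MMDET = "_submodules/mmdetection"

# Drop stale build output and compiled .so files.
if os.path.isdir(os.path.join(MMDET, "build")):
    shutil.rmtree(os.path.join(MMDET, "build"))
for so in glob.glob(os.path.join(MMDET, "mmdet", "ops", "**", "*.so"), recursive=True):
    os.remove(so)

# Re-run the README's `python setup.py develop` step (directory is an assumption).
subprocess.run([sys.executable, "setup.py", "develop"], cwd=MMDET, check=True)
```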

list index out of range

I followed the non-distributed training instructions to train on all datasets, but after some time the program interrupts with IndexError: list index out of range. Can you tell me how to solve this? Is something wrong with the dataset? @huanglianghua Thank you very much.

multi_init boxes

Hello, thanks for your nice work!
However, I am wondering whether I could set more than one init box on the first frame.
In other words, could this method work for MOT? Can you give some advice on that?
Thanks!
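Not an official answer, but a sketch of how one might approximate multiple init boxes with the single-target design: run one query per object, i.e. one tracker instance per initial box. The `build_tracker` factory and the got10k-style `init`/`update` interface below are assumptions for illustration, not the repo's actual API.

```python
# Hypothetical multi-object wrapper: GlobalTrack is query-guided and
# single-target, so one tracker instance is kept per initial box here.
# `build_tracker()` is a placeholder for however a tracker is constructed.

def track_multiple(frames, init_boxes, build_tracker):
    """frames: sequence of images; init_boxes: list of (x, y, w, h)."""
    trackers = []
    first = frames[0]
    for box in init_boxes:
        t = build_tracker()
        t.init(first, box)          # assumed got10k-style interface
        trackers.append(t)

    results = [[box] for box in init_boxes]
    for frame in frames[1:]:
        for k, t in enumerate(trackers):
            results[k].append(t.update(frame))  # one box per object per frame
    return results
```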

setup.py

When I run setup.py to compile, I get this error: error: unknown file type '' (from 'src/soft_nms_cpu.pyx/mmdet/ops/nms').
What could be causing this? Please let me know, thank you.
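A guess, not a confirmed diagnosis: setuptools reports an unknown file type for a .pyx source when Cython is not available to cythonize it, so checking the build environment first may help. A minimal check sketch:

```python
# Quick environment check (assumption: the '.pyx' error comes from a missing
# Cython in the build environment).
import importlib.util

for pkg in ("Cython", "numpy", "torch"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'MISSING'}")
```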

OTB-2015 acc is 60.6 using the pretrained weights

Thanks for your excellent work and for sharing it. When I download your pretrained weights (qg_rcnn_r50_fpn_coco_got10k_lasot.pth) and test them directly on OTB100, I find the accuracy is only 60.6. Is something wrong? Have you ever tested it on OTB100? Waiting for your reply. Best wishes!

OTB-2015 demo not performing well and LaSOTBenchmark not found

Hi,
I have two issues when I run python test_global_track.py:

  1. The test sequences (basketball, bird, ...) in OTB-2015 do not perform well.
    I used the original settings in test_global_track.py:
    cfg_file = 'configs/qg_rcnn_r50_fpn.py'
    ckp_file = 'checkpoints/qg_rcnn_r50_fpn_coco_got10k_lasot.pth'
    The tracker frequently locks onto the wrong object (e.g., the wrong player, the wrong bird, ...).

  2. I get the error OSError: /home/myAccount/data/LaSOTBenchmark/airplane/airplane-1/groundtruth.txt not found

Where do I download the LaSOT benchmark, or will it be downloaded automatically when I run test_global_track.py?

Files already downloaded.
Processing sequence [1/280]: airplane-1...
Traceback (most recent call last):
File "test_global_track.py", line 15, in
data.EvaluatorLaSOT(frame_stride=10),
File "_submodules/neuron/neuron/data/evaluators/otb_eval.py", line 430, in init
dataset = datasets.LaSOT(root_dir, subset='test')
File "_submodules/neuron/neuron/data/datasets/lasot.py", line 42, in init
subset=self.subset)
File "_submodules/neuron/neuron/data/datasets/dataset.py", line 30, in init
seq_dict = self._construct_seq_dict(**kwargs)
File "_submodules/neuron/neuron/data/datasets/lasot.py", line 69, in _construct_seq_dict
anno_files[s], delimiter=',', dtype=np.float32)
File "/home/myAccount/miniconda3/envs/globalTrack/lib/python3.7/site-packages/numpy/lib/npyio.py", line 962, in loadtxt
fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)
File "/home/myAccount/miniconda3/envs/globalTrack/lib/python3.7/site-packages/numpy/lib/_datasource.py", line 266, in open
return ds.open(path, mode, encoding=encoding, newline=newline)
File "/home/myAccount/miniconda3/envs/globalTrack/lib/python3.7/site-packages/numpy/lib/_datasource.py", line 624, in open
raise IOError("%s not found." % path)
OSError: /home/myAccount/data/LaSOTBenchmark/airplane/airplane-1/groundtruth.txt not found.
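For reference, the layout the traceback implies is roughly <root>/<class>/<class>-<id>/groundtruth.txt under ~/data/LaSOTBenchmark; LaSOT is not downloaded automatically and has to be fetched from the official LaSOT site. A small check sketch (the img/ folder name is an assumption):

```python
# Verify the LaSOT layout expected by the loader, based on the path in the
# traceback above. Adjust `root` if the data lives elsewhere.
import os

root = os.path.expanduser("~/data/LaSOTBenchmark")
seq_dir = os.path.join(root, "airplane", "airplane-1")
for name in ("groundtruth.txt", "img"):
    path = os.path.join(seq_dir, name)
    print(path, "->", "ok" if os.path.exists(path) else "MISSING")
```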

Results are very poor on tracker_benchmark

Result of attributes -- 'GlobalTrack'

| Attribute | Overlap | Failures |
|-----------|---------|----------|
| ALL       | 10.5%   | 10.0     |
| BC        | 10.0%   | 10.0     |
| DEF       | 9.9%    | 10.0     |
| FM        | 11.2%   | 10.0     |
| IPR       | 11.0%   | 9.9      |
| IV        | 10.8%   | 10.0     |
| LR        | 9.0%    | 10.0     |
| MB        | 10.9%   | 10.0     |
| OCC       | 9.5%    | 10.0     |
| OPR       | 10.2%   | 10.0     |
| OV        | 10.9%   | 10.0     |
| SV        | 10.8%   | 10.0     |

compile the Cpp/CUDA

Thank you for sharing!
Are additional conditions needed to compile successfully, such as specific versions of CUDA and Visual Studio?
(I chose the second option.)

dataset

Assuming the datasets are stored in ./data:
What do these datasets consist of? Training set, test set?
Can you explain the directory structure?
Thank you!
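Not an authoritative answer, but a sketch of the kind of layout the default configs appear to expect under ./data; the exact folder names are assumptions and should be checked against the dataset classes in the repo.

```python
# Illustrative layout check; every path here is an assumption, not the
# documented structure:
#   data/coco/             train2017/, val2017/, annotations/
#   data/GOT-10k/          train/, val/, test/
#   data/LaSOTBenchmark/   <class>/<class>-<id>/{img/, groundtruth.txt}
import os

for d in ("data/coco", "data/GOT-10k", "data/LaSOTBenchmark"):
    print(d, "->", "found" if os.path.isdir(d) else "missing")
```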

cuda out of memory

I ran into CUDA out of memory when using a 2080 with 11 GB of memory. Is there any way to solve this problem?

hope you release tracking results on common datasets

As the title says, could you release your tracking results on common datasets?
Though the paper reports overall results on most long-term datasets, I want to analyze the per-video results on common datasets, especially short-term ones such as OTB2015, VOT2018, etc.
Thank you!
Best regards!

About RPN Modulator

Hi, thanks for your cool work!

For the RPN_Modulator class in the modulators.py file, it seems to differ from Equation (1) in the paper in two respects:

  • f_x(x) is missing; no 3x3 convolution is applied to the search-image feature.
  • The convolution between f_x(x) and f_z(z) is replaced by an elementwise product.

The related code in modulators.py is
out_ij = [self.proj_modulator[k](query) * gallary[k] for k in range(len(gallary))]

Could you check? Thanks!
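To make the two operations being compared concrete, here is a small standalone sketch; the tensor shapes and the pooling used as a stand-in for proj_modulator are illustrative assumptions, not the repo's actual implementation.

```python
# Contrast of (a) the channel-wise modulation the released code performs and
# (b) a literal depth-wise cross-correlation reading of Eq. (1).
# All shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

C, H, W, k = 256, 64, 64, 7
x_feat = torch.randn(1, C, H, W)   # search-image feature map
z_feat = torch.randn(1, C, k, k)   # RoI-pooled query feature

# (a) Project the query feature to a (1, C, 1, 1) vector and broadcast-multiply.
z_vec = F.adaptive_avg_pool2d(z_feat, 1)   # stand-in for proj_modulator[k]
modulated = x_feat * z_vec                 # elementwise, per channel

# (b) Depth-wise cross-correlation of the query feature over the search feature.
xcorr = F.conv2d(x_feat, z_feat.view(C, 1, k, k), groups=C, padding=k // 2)

print(modulated.shape, xcorr.shape)        # both (1, 256, 64, 64)
```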

RuntimeError: cuda runtime error (30) : unknown error at mmdet/ops/roi_align/src/roi_align_kernel.cu:145

My GPU is a 2080 Ti, CUDA version 9.0, PyTorch 1.1.0, torchvision 0.3.0, but I get the following error. I tried many ways but could not solve it, so I hope to get help from the author and other friends.

Args:
-- Namespace(autoscale_lr=True, base_dataset='got10k_train', base_transforms='extra_partial', config='configs/qg_rcnn_r50_fpn.py', fp16=False, gpus=2, launcher='none', load_from=None, local_rank=0, resume_from=None, sampling_prob='0.4,0.4,0.2', seed=None, validate=False, work_dir='work_dirs/qg_rcnn_r50_fpn', workers=None)
Configs:
-- Config (path: /media/hdc/data4/wxl/GlobalTrack/configs/qg_rcnn_r50_fpn.py): {'model': {'type': 'QG_RCNN', 'pretrained': 'torchvision://resnet50', 'backbone': {'type': 'ResNet', 'depth': 50, 'num_stages': 4, 'out_indices': (0, 1, 2, 3), 'frozen_stages': 1, 'style': 'pytorch'}, 'neck': {'type': 'FPN', 'in_channels': [256, 512, 1024, 2048], 'out_channels': 256, 'num_outs': 5}, 'rpn_head': {'type': 'RPNHead', 'in_channels': 256, 'feat_channels': 256, 'anchor_scales': [8], 'anchor_ratios': [0.5, 1.0, 2.0], 'anchor_strides': [4, 8, 16, 32, 64], 'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'loss_cls': {'type': 'CrossEntropyLoss', 'use_sigmoid': True, 'loss_weight': 1.0}, 'loss_bbox': {'type': 'SmoothL1Loss', 'beta': 0.1111111111111111, 'loss_weight': 1.0}}, 'bbox_roi_extractor': {'type': 'SingleRoIExtractor', 'roi_layer': {'type': 'RoIAlign', 'out_size': 7, 'sample_num': 2}, 'out_channels': 256, 'featmap_strides': [4, 8, 16, 32]}, 'bbox_head': {'type': 'SharedFCBBoxHead', 'num_fcs': 2, 'in_channels': 256, 'fc_out_channels': 1024, 'roi_feat_size': 7, 'num_classes': 2, 'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [0.1, 0.1, 0.2, 0.2], 'reg_class_agnostic': False, 'loss_cls': {'type': 'CrossEntropyLoss', 'use_sigmoid': False, 'loss_weight': 1.0}, 'loss_bbox': {'type': 'SmoothL1Loss', 'beta': 1.0, 'loss_weight': 1.0}}}, 'train_cfg': {'rpn': {'assigner': {'type': 'MaxIoUAssigner', 'pos_iou_thr': 0.7, 'neg_iou_thr': 0.3, 'min_pos_iou': 0.3, 'ignore_iof_thr': -1}, 'sampler': {'type': 'RandomSampler', 'num': 256, 'pos_fraction': 0.5, 'neg_pos_ub': -1, 'add_gt_as_proposals': False}, 'allowed_border': 0, 'pos_weight': -1, 'debug': False}, 'rpn_proposal': {'nms_across_levels': False, 'nms_pre': 2000, 'nms_post': 2000, 'max_num': 2000, 'nms_thr': 0.7, 'min_bbox_size': 0}, 'rcnn': {'assigner': {'type': 'MaxIoUAssigner', 'pos_iou_thr': 0.5, 'neg_iou_thr': 0.5, 'min_pos_iou': 0.5, 'ignore_iof_thr': -1}, 'sampler': {'type': 'RandomSampler', 'num': 512, 'pos_fraction': 0.25, 'neg_pos_ub': -1, 'add_gt_as_proposals': True}, 'pos_weight': -1, 'debug': False}}, 'test_cfg': {'rpn': {'nms_across_levels': False, 'nms_pre': 1000, 'nms_post': 1000, 'max_num': 1000, 'nms_thr': 0.7, 'min_bbox_size': 0}, 'rcnn': {'score_thr': 0.0, 'nms': {'type': 'nms', 'iou_thr': 0.5}, 'max_per_img': 1000}}, 'data': {'imgs_per_gpu': 1, 'workers_per_gpu': 4, 'train': {'type': 'PairWrapper', 'ann_file': None, 'base_dataset': 'got10k_train', 'base_transforms': 'extra_partial', 'sampling_prob': [0.4, 0.4, 0.2], 'max_size': 30000, 'max_instances': 8, 'with_label': True}}, 'optimizer': {'type': 'SGD', 'lr': 0.0025, 'momentum': 0.9, 'weight_decay': 0.0001}, 'optimizer_config': {'grad_clip': {'max_norm': 35, 'norm_type': 2}}, 'lr_config': {'policy': 'step', 'warmup': 'linear', 'warmup_iters': 500, 'warmup_ratio': 0.3333333333333333, 'step': [8, 11]}, 'checkpoint_config': {'interval': 1}, 'log_config': {'interval': 50, 'hooks': [{'type': 'TextLoggerHook'}]}, 'total_epochs': 12, 'cudnn_benchmark': True, 'dist_params': {'backend': 'nccl'}, 'log_level': 'INFO', 'work_dir': 'work_dirs/qg_rcnn_r50_fpn', 'load_from': 'checkpoints/qg_rcnn_r50_fpn_2x_20181010-443129e1.pth', 'resume_from': None, 'workflow': [('train', 1)], 'gpus': 2}
2021-02-10 12:04:44,722 - INFO - Distributed training: False
2021-02-10 12:04:45,171 - INFO - load model from: torchvision://resnet50
2021-02-10 12:04:45,366 - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

2021-02-10 12:04:50,101 - INFO - load checkpoint from checkpoints/qg_rcnn_r50_fpn_2x_20181010-443129e1.pth
2021-02-10 12:04:50,431 - INFO - Start running, host: root@hdc-IBM, work_dir: /media/hdc/data4/wxl/GlobalTrack/work_dirs/qg_rcnn_r50_fpn
2021-02-10 12:04:50,432 - INFO - workflow: [('train', 1)], max: 12 epochs
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=383 error=11 : invalid argument
Traceback (most recent call last):

File "/media/hdc/data4/wxl/GlobalTrack/tools/train_qg_rcnn.py", line 143, in
main()
File "/media/hdc/data4/wxl/GlobalTrack/tools/train_qg_rcnn.py", line 138, in main
logger=logger)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/apis/train.py", line 62, in train_detector
_non_dist_train(model, dataset, cfg, validate=validate)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/apis/train.py", line 229, in _non_dist_train
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/mmcv/runner/runner.py", line 358, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/mmcv/runner/runner.py", line 264, in train
self.model, data_batch, train_mode=True, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/apis/train.py", line 38, in batch_processor
losses = model(**data)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
raise output
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
output = module(*input, **kwargs)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/core/fp16/decorators.py", line 49, in new_func
return old_func(*args, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/modules/qg_rcnn.py", line 58, in forward
img_z, img_x, img_meta_z, img_meta_x, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/modules/qg_rcnn.py", line 91, in forward_train
for x_ij, i, j in self.rpn_modulator(z, x, gt_bboxes_z):
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/modules/modulators.py", line 37, in forward
modulator=self.learn(feats_z, gt_bboxes_z))
File "/media/hdc/data4/wxl/GlobalTrack/modules/modulators.py", line 54, in learn
feats_z[:self.roi_extractor.num_inputs], rois)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/core/fp16/decorators.py", line 127, in new_func
return old_func(*args, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/submodules/mmdetection/mmdet/models/roi_extractors/single_level.py", line 105, in forward
roi_feats_t = self.roi_layers[i](feats[i], rois
)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/ops/roi_align/roi_align.py", line 80, in forward
self.sample_num)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/ops/roi_align/roi_align.py", line 26, in forward
sample_num, output)
RuntimeError: cuda runtime error (30) : unknown error at mmdet/ops/roi_align/src/roi_align_kernel.cu:145

How to get dataset used in the paper?

I'm trying to reproduce the code. I'm just getting into PyTorch, and I'd like to ask whether I need to download the datasets used in the paper. If so, how can I get them? Thanks!
