AlphaRotate: A Rotation Detection Benchmark using TensorFlow


🚀🚀🚀 News: MMRotate has been released at https://github.com/open-mmlab/mmrotate 🚀🚀🚀

Abstract

AlphaRotate is mainly maintained by Xue Yang of Shanghai Jiao Tong University, supervised by Prof. Junchi Yan.

Papers and code related to remote sensing/aerial image detection: DOTA-DOAI.

Techniques:

The rotation detectors listed above are all built on top of the following horizontal detectors:


Latest Performance

DOTA (Task1)

Baseline

| Backbone | Neck | Training/test dataset | Data Augmentation | Epoch | NMS |
|---|---|---|---|---|---|
| ResNet50_v1d 600->800 | FPN | trainval/test | × | 13 (AP50) or 17 (AP50:95) is enough for baseline (default is 13) | gpu nms (slightly worse <1% than cpu nms but faster) |
| Method | Baseline | DOTA1.0 | DOTA1.5 | DOTA2.0 | Model | Anchor | Angle Pred. | Reg. Loss | Angle Range | Configs |
|---|---|---|---|---|---|---|---|---|---|---|
| - | RetinaNet-R | 67.25 | 56.50 | 42.04 | Baidu Drive (bi8b) | R | Reg. (∆⍬) | smooth L1 | [-90,0) | dota1.0, dota1.5, dota2.0 |
| - | RetinaNet-H | 64.17 | 56.10 | 43.06 | Baidu Drive (bi8b) | H | Reg. (∆⍬) | smooth L1 | [-90,90) | dota1.0, dota1.5, dota2.0 |
| - | RetinaNet-H | 65.33 | 57.21 | 44.58 | Baidu Drive (bi8b) | H | Reg. (sin⍬, cos⍬) | smooth L1 | [-90,90) | dota1.0, dota1.5, dota2.0 |
| - | RetinaNet-H | 65.73 | 58.87 | 44.16 | Baidu Drive (bi8b) | H | Reg. (∆⍬) | smooth L1 | [-90,0) | dota1.0, dota1.5, dota2.0 |
| IoU-Smooth L1 | RetinaNet-H | 66.99 | 59.17 | 46.31 | Baidu Drive (qcvc) | H | Reg. (∆⍬) | iou-smooth L1 | [-90,0) | dota1.0, dota1.5, dota2.0 |
| RIDet | RetinaNet-H | 66.06 | 58.91 | 45.35 | Baidu Drive (njjv) | H | Quad. | hungarian loss | - | dota1.0, dota1.5, dota2.0 |
| RSDet | RetinaNet-H | 67.27 | 61.42 | 46.71 | Baidu Drive (2a1f) | H | Quad. | modulated loss | - | dota1.0, dota1.5, dota2.0 |
| CSL | RetinaNet-H | 67.38 | 58.55 | 43.34 | Baidu Drive (sdbb) | H | Cls.: Gaussian (r=1, w=10) | smooth L1 | [-90,90) | dota1.0, dota1.5, dota2.0 |
| DCL | RetinaNet-H | 67.39 | 59.38 | 45.46 | Baidu Drive (m7pq) | H | Cls.: BCL (w=180/256) | smooth L1 | [-90,90) | dota1.0, dota1.5, dota2.0 |
| - | FCOS | 67.69 | 61.05 | 48.10 | Baidu Drive (pic4) | - | Quad | smooth L1 | - | dota1.0, dota1.5, dota2.0 |
| RSDet++ | FCOS | 67.91 | 62.18 | 48.81 | Baidu Drive (8ww5) | - | Quad | modulated loss | - | dota1.0, dota1.5, dota2.0 |
| GWD | RetinaNet-H | 68.93 | 60.03 | 46.65 | Baidu Drive (7g5a) | H | Reg. (∆⍬) | gwd | [-90,0) | dota1.0, dota1.5, dota2.0 |
| GWD + SWA | RetinaNet-H | 69.92 | 60.60 | 47.63 | Baidu Drive (qcn0) | H | Reg. (∆⍬) | gwd | [-90,0) | dota1.0, dota1.5, dota2.0 |
| BCD | RetinaNet-H | 71.23 | 60.78 | 47.48 | Baidu Drive (0puk) | H | Reg. (∆⍬) | bcd | [-90,0) | dota1.0, dota1.5, dota2.0 |
| KLD | RetinaNet-H | 71.28 | 62.50 | 47.69 | Baidu Drive (o6rv) | H | Reg. (∆⍬) | kld | [-90,0) | dota1.0, dota1.5, dota2.0 |
| KFIoU | RetinaNet-H | 70.64 | 62.71 | 48.04 | Baidu Drive (o72o) | H | Reg. (∆⍬) | kfiou | [-90,0) | dota1.0, dota1.5, dota2.0 |
| KFIoU* | RetinaNet-H | 71.60 | - | 48.94 | Baidu Drive (o72o) | H | Reg. (∆⍬) | kfiou | [-90,0) | dota1.0, dota2.0 |
| R3Det | RetinaNet-H | 70.66 | 62.91 | 48.43 | Baidu Drive (n9mv) | H->R | Reg. (∆⍬) | smooth L1 | [-90,0) | dota1.0, dota1.5, dota2.0 |
| DCL | R3Det | 71.21 | 61.98 | 48.71 | Baidu Drive (eg2s) | H->R | Cls.: BCL (w=180/256) | iou-smooth L1 | [-90,0)->[-90,90) | dota1.0, dota1.5, dota2.0 |
| GWD | R3Det | 71.56 | 63.22 | 49.25 | Baidu Drive (jb6e) | H->R | Reg. (∆⍬) | smooth L1->gwd | [-90,0) | dota1.0, dota1.5, dota2.0 |
| BCD | R3Det | 72.22 | 63.53 | 49.71 | Baidu Drive (v60g) | H->R | Reg. (∆⍬) | bcd | [-90,0) | dota1.0, dota1.5, dota2.0 |
| KLD | R3Det | 71.73 | 65.18 | 50.90 | Baidu Drive (tq7f) | H->R | Reg. (∆⍬) | kld | [-90,0) | dota1.0, dota1.5, dota2.0 |
| KFIoU | R3Det | 72.28 | 64.69 | 50.41 | Baidu Drive (u77v) | H->R | Reg. (∆⍬) | kfiou | [-90,0) | dota1.0, dota1.5, dota2.0 |
| - | R2CNN (Faster-RCNN) | 72.27 | 66.45 | 52.35 | Baidu Drive (02s5) | H->R | Reg. (∆⍬) | smooth L1 | [-90,0) | dota1.0, dota1.5, dota2.0 |

SOTA

| Method | Backbone | DOTA1.0 | Model | MS | Data Augmentation | Epoch | Configs |
|---|---|---|---|---|---|---|---|
| R2CNN-BCD | ResNet152_v1d-FPN | 79.54 | Baidu Drive (h2u1) | | | 34 | dota1.0 |
| RetinaNet-BCD | ResNet152_v1d-FPN | 78.52 | Baidu Drive (0puk) | | | 51 | dota1.0 |
| R3Det-BCD | ResNet50_v1d-FPN | 79.08 | Baidu Drive (v60g) | | | 51 | dota1.0 |
| R3Det-BCD | ResNet152_v1d-FPN | 79.95 | Baidu Drive (v60g) | | | 51 | dota1.0 |

Note:

  • Single GPU training: SAVE_WEIGHTS_INTE = iter_epoch * 1 (DOTA1.0: iter_epoch=27000, DOTA1.5: iter_epoch=32000, DOTA2.0: iter_epoch=40000)
  • Multi-GPU training (better): SAVE_WEIGHTS_INTE = iter_epoch * 2
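A hedged sketch of how this might look in a cfgs.py (only SAVE_WEIGHTS_INTE is named in the note above; the other variable names and values are illustrative):

    # illustrative config snippet; names other than SAVE_WEIGHTS_INTE are assumptions
    ITER_EPOCH = 27000                    # DOTA1.0 (32000 for DOTA1.5, 40000 for DOTA2.0)
    NUM_GPUS = 1                          # number of GPUs used for training
    SAVE_WEIGHTS_INTE = ITER_EPOCH * (1 if NUM_GPUS == 1 else 2)   # per the note above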

My Development Environment

  • python3.5 (anaconda recommended)
  • cuda 10.0
  • opencv-python 4.1.1.26 (important)
  • tfplot 0.2.0 (optional)
  • tensorflow-gpu 1.13
  • tqdm 4.54.0
  • Shapely 1.7.1

Installation

Manual configuration (cuda version < 11)

pip install -r requirements.txt
pip install -v -e .  # or "python setup.py develop"

Or, you can simply install AlphaRotate with the following command:

pip install alpharotate  # Not suitable for dev.

Docker (cuda version < 11)

docker images: yangxue2docker/yx-tf-det:tensorflow1.13.1-cuda10-gpu-py3

Note: For 30xx-series graphics cards (cuda version >= 11), I recommend this blog for installing tf1.xx, or download an image from the TensorFlow release notes according to your development environment, e.g. nvcr.io/nvidia/tensorflow:20.11-tf1-py3

cd alpharotate/libs/utils/cython_utils
rm *.so
rm *.c
rm *.cpp
python setup.py build_ext --inplace (or make)

cd alpharotate/libs/utils/
rm *.so
rm *.c
rm *.cpp
python setup.py build_ext --inplace

Download Model

Pretrained weights

Download the pretrained weights you need from one of the following three options, and then put them in $PATH_ROOT/dataloader/pretrained_weights.

  1. MxNet pretrained weights (recommended in this repo, default in NET_NAME): resnet_v1d, resnet_v1b; refer to gluon2TF.
  2. TensorFlow pretrained weights: resnet50_v1, resnet101_v1, resnet152_v1, efficientnet, mobilenet_v2, darknet53 (Baidu Drive (1jg2), Google Drive).
  3. PyTorch pretrained weights: refer to pretrain_zoo.py and Others.

Trained weights

  1. Please download the trained models provided by this project, then put them in $PATH_ROOT/output/pretained_weights.

Train

  1. If you want to train your own dataset, please note the following (a hedged config sketch is given after this list):

    (1) Select the detector and dataset you want to use, and refer to them below as #DETECTOR and #DATASET (e.g. #DETECTOR=retinanet and #DATASET=DOTA)
    (2) Modify parameters (such as CLASS_NUM, DATASET_NAME, VERSION, etc.) in $PATH_ROOT/configs/#DATASET/#DETECTOR/cfgs_xxx.py
    (3) Copy $PATH_ROOT/configs/#DATASET/#DETECTOR/cfgs_xxx.py to $PATH_ROOT/configs/cfgs.py
    (4) Add category information in $PATH_ROOT/libs/label_name_dict/label_dict.py
    (5) Add data_name to $PATH_ROOT/dataloader/dataset/read_tfrecord.py
    
  2. Make tfrecord
    If the images are very large (as in the DOTA dataset), they need to be cropped first. Take the DOTA dataset as an example:

    cd $PATH_ROOT/dataloader/dataset/DOTA
    python data_crop.py
    

    If the images do not need to be cropped, just convert the annotation files into XML format (refer to example.xml).

    cd $PATH_ROOT/dataloader/dataset/  
    python convert_data_to_tfrecord.py --root_dir='/PATH/TO/DOTA/' 
                                       --xml_dir='labeltxt'
                                       --image_dir='images'
                                       --save_name='train' 
                                       --img_format='.png' 
                                       --dataset='DOTA'
    
  3. Start training

    cd $PATH_ROOT/tools/#DETECTOR
    python train.py
    
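    The config sketch referenced in step 1 above. The parameter names are the ones mentioned in step (2); the values and the MYDATA name are purely illustrative, not taken from the repo:

    # illustrative excerpt of configs/cfgs.py after step (3)
    VERSION = 'RetinaNet_MYDATA_1x_20210101'   # experiment name; test/summary output dirs use it
    DATASET_NAME = 'MYDATA'                    # must also be handled in read_tfrecord.py (step (5))
    CLASS_NUM = 4                              # number of object classes in your own dataset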

Test

  1. For large-scale images, take the DOTA dataset as an example (the output files and visualizations are saved in $PATH_ROOT/tools/#DETECTOR/test_dota/VERSION):

    cd $PATH_ROOT/tools/#DETECTOR
    python test_dota.py --test_dir='/PATH/TO/IMAGES/'  
                        --gpus=0,1,2,3,4,5,6,7  
                        -ms (multi-scale testing, optional)
                        -s (visualization, optional)
    
    or (recommended in this repo; better than multi-scale testing)
    
    python test_dota_sota.py --test_dir='/PATH/TO/IMAGES/'  
                             --gpus=0,1,2,3,4,5,6,7  
                             -s (visualization, optional)
    

    Notice: to make it easy to resume from a breakpoint, the result file is opened in 'a+' (append) mode. If a model with the same #VERSION needs to be tested again, the previous test results must be deleted first.

  2. For small-scale images, take the HRSC2016 dataset as an example:

    cd $PATH_ROOT/tools/#DETECTOR
    python test_hrsc2016.py --test_dir='/PATH/TO/IMAGES/'  
                            --gpu=0
                            --image_ext='bmp'
                            --test_annotation_path='/PATH/TO/ANNOTATIONS'
                            -s (visualization, optional)
    

Tensorboard

cd $PATH_ROOT/output/summary
tensorboard --logdir=.

Citation

If you find our code useful for your research, please consider citing:

@article{yang2021alpharotate,
    author  = {Yang, Xue and Zhou, Yue and Yan, Junchi},
    title   = {AlphaRotate: A Rotation Detection Benchmark using TensorFlow},
    journal = {arXiv preprint arXiv:2111.06677},
    year    = {2021},
}

Reference

1. https://github.com/endernewton/tf-faster-rcnn
2. https://github.com/zengarden/light_head_rcnn
3. https://github.com/tensorflow/models/tree/master/research/object_detection
4. https://github.com/fizyr/keras-retinanet


rotationdetection's Issues

HRSC2016 trained model

Hello, could you provide the HRSC2016_179999model.ckpt file? I want to test a few pictures from this dataset.

When I try to export a .pb from the checkpoints using exportPb.py, I get an error. Please guide me on this...

WARNING:tensorflow:From exportPb.py:61: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.

WARNING:tensorflow:From exportPb.py:41: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From ../../libs/models/backbones/mobilenet/mobilenet.py:324: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

WARNING:tensorflow:From /home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/contrib/layers/python/layers/layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use layer.__call__ method instead.
WARNING:tensorflow:From ../../libs/models/necks/fpn_p3top7.py:24: The name tf.image.resize_bilinear is deprecated. Please use tf.compat.v1.image.resize_bilinear instead.

WARNING:tensorflow:From exportPb.py:65: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

WARNING:tensorflow:From exportPb.py:67: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2021-10-19 12:17:08.433221: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2021-10-19 12:17:08.456539: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-19 12:17:08.456701: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce GTX 1050 major: 6 minor: 1 memoryClockRate(GHz): 1.493
pciBusID: 0000:01:00.0
2021-10-19 12:17:08.456731: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-10-19 12:17:08.457562: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2021-10-19 12:17:08.458291: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2021-10-19 12:17:08.458464: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2021-10-19 12:17:08.459427: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2021-10-19 12:17:08.460157: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2021-10-19 12:17:08.462534: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-10-19 12:17:08.462645: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-19 12:17:08.462864: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-19 12:17:08.462996: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2021-10-19 12:17:08.463220: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2021-10-19 12:17:08.484313: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2199995000 Hz
2021-10-19 12:17:08.484748: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x563a907797f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-10-19 12:17:08.484766: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-10-19 12:17:08.484948: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-19 12:17:08.485137: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce GTX 1050 major: 6 minor: 1 memoryClockRate(GHz): 1.493
pciBusID: 0000:01:00.0
2021-10-19 12:17:08.485178: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-10-19 12:17:08.485194: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2021-10-19 12:17:08.485207: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2021-10-19 12:17:08.485219: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2021-10-19 12:17:08.485232: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2021-10-19 12:17:08.485245: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2021-10-19 12:17:08.485257: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-10-19 12:17:08.485314: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-19 12:17:08.485486: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-19 12:17:08.485618: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2021-10-19 12:17:08.485645: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-10-19 12:17:08.586685: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-10-19 12:17:08.586707: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2021-10-19 12:17:08.586716: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2021-10-19 12:17:08.586882: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-19 12:17:08.587085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-19 12:17:08.587255: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-19 12:17:08.587400: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 84 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-10-19 12:17:08.588667: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x563a960139c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-10-19 12:17:08.588680: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce GTX 1050, Compute Capability 6.1
we have restred the weights from =====>>
../../output/trained_weights/RetinaNet_DOTA1.5_2x_20210314/DOTA1.5_1000model.ckpt
Traceback (most recent call last):
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
return fn(*args)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
target_list, run_metadata)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [105] rhs shape= [84]
[[{{node save/Assign_294}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 1290, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 956, in run
run_metadata_ptr)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
run_metadata)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [105] rhs shape= [84]
[[node save/Assign_294 (defined at /home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]

Original stack trace for 'save/Assign_294':
File "exportPb.py", line 313, in
exporter.export_frozenPB()
File "exportPb.py", line 65, in export_frozenPB
saver = tf.train.Saver()
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 828, in init
self.build()
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 840, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 878, in _build
build_restore=build_restore)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 508, in _build_internal
restore_sequentially, reshape)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 350, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saving/saveable_object_util.py", line 73, in restore
self.op.get_shape().is_fully_defined())
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/ops/state_ops.py", line 227, in assign
validate_shape=validate_shape)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_state_ops.py", line 66, in assign
use_locking=use_locking, name=name)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in init
self._traceback = tf_stack.extract_stack()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "exportPb.py", line 313, in
exporter.export_frozenPB()
File "exportPb.py", line 69, in export_frozenPB
saver.restore(sess, CKPT_PATH)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 1326, in restore
err, "a mismatch between the current graph and the graph")
tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Assign requires shapes of both tensors to match. lhs shape= [105] rhs shape= [84]
[[node save/Assign_294 (defined at /home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]

Original stack trace for 'save/Assign_294':
File "exportPb.py", line 313, in
exporter.export_frozenPB()
File "exportPb.py", line 65, in export_frozenPB
saver = tf.train.Saver()
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 828, in init
self.build()
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 840, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 878, in _build
build_restore=build_restore)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 508, in _build_internal
restore_sequentially, reshape)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 350, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/training/saving/saveable_object_util.py", line 73, in restore
self.op.get_shape().is_fully_defined())
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/ops/state_ops.py", line 227, in assign
validate_shape=validate_shape)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_state_ops.py", line 66, in assign
use_locking=use_locking, name=name)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/home/sandhya/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in init
self._traceback = tf_stack.extract_stack()

Tensor name "resnet50_v1d/C1/conv0/BatchNorm/beta" not found in checkpoint files /tank/newhome/liuyi/Documents/RotationDetection-main/dataloader/pretrained_weights/resnet50_v1d.ckpt

tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Tensor name "resnet50_v1d/C1/conv0/BatchNorm/beta" not found in checkpoint files /tank/newhome/liuyi/Documents/RotationDetection-main/dataloader/pretrained_weights/resnet50_v1d.ckpt
[[node save/RestoreV2 (defined at /tank/newhome/liuyi/enter/envs/rod/lib/python3.5/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]

Pretrained model weight links have expired

As the title says: the Baidu Drive links for the CSL and DCL R3Det models and for the KLD model have all expired. Could you please restore the links? Thank you.

Loss explodes when using IoU loss

I see that the configs here do not use an IoU loss. Would using it improve performance? After I ported this loss to another project, the loss immediately became NaN (it may be a porting mistake on my side). Have you run into a similar problem, and what might cause it? In the SCRDet++ configs I could not find where the IoU loss is called. Do you think an IoU loss gives a stable gain for rotated box detection? I would like to port it to RoI Transformer to see whether it helps, but at the moment the loss explodes.

train normally and generate output/trained_weights, but test_dota.py cannot detect any targets

Hi, I can train my own data normally and generate output/trained_weights using the provided Docker image, but test_dota.py cannot detect any targets.
What could be the reason? Is my training too short, is my dataset too small, or are the weights in output/trained_weights not being used for detection?
I can't figure out the reason; can you give me some advice on how to debug this?

My train command is
RotationDetection-main/tools/r3det# python train.py

My detection command is
/RotationDetection-main/tools/r3det# python test_dota.py --test_dir='/Downloads/RotationDetection-main/DOTA/test/' --gpus=0 -s --show_box

About the compilation step

Hello,
if I do not want to compile the following:
cd $PATH_ROOT/libs/utils/cython_utils
rm *.so
rm *.c
rm *.cpp
python setup.py build_ext --inplace (or make)

cd $PATH_ROOT/libs/utils/
rm *.so
rm *.c
rm *.cpp
python setup.py build_ext --inplace

is there a ready-made pure-Python (CPU-only) implementation of these ops?
If so, how should I import it?
Thanks.

problem with dataset setting……

Sorry, still me... I may be missing something obvious, sorry for the trouble.
Yesterday's problem is still not solved. The result images now look like this (I changed self.cfgs.VIS_SCORE so some boxes are shown, but they are obviously not correct). The dataset and command I used are shown below. Could you help me check whether anything is wrong?
(screenshots attached)

About label tool

Great work!
A small question: do you know of any tool for labeling rotated bboxes with eight parameters (i.e., the four corner points of the box)?
Thanks~

Gradient about IOU-Smooth L1 loss in SCRDet

Here is the relevant link.

In the link I argue that the backward gradient will always be 0.

From another point of view, if |u| is made non-differentiable the gradient is no longer 0, but then the gradient of u/|u| is no longer 1.

@yangxue0827 could you please help me out? Many thanks!

how to start training

I did the preparation work as described in README.md, but when I start training I get an error like this:

OutOfRangeError (see above for traceback): PaddingFIFOQueue '_1_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[node get_batch/batch (defined at /home/dl/Desktop/RotationDetection/dataloader/dataset/read_tfrecord.py:132) ]]

So, how do I start training on my own dataset?

Cause error when testing SCRDET

Hello, I succeeded in training and testing several detectors such as RRPN, RSDet and R3Det.

However, when I tried SCRDet, it raised an error:

Traceback (most recent call last):
File "test_hrsc2016.py", line 52, in
tester.eval()
File "test_hrsc2016.py", line 31, in eval
image_ext=self.args.image_ext)
File "../../tools/test_hrsc2016_base.py", line 81, in eval_with_plac
input_img_batch=img_batch)
ValueError: too many values to unpack (expected 3)

I don't know why; I can run test_hrsc2016.py successfully for RRPN, RSDet and R3Det.

I can also run train.py successfully for SCRDet.

Here is the code of test_hrsc2016.py. Please help me, thank you!

(screenshot attached)

ImportError: No module named 'libs'

Hello, thank you for such wonderful work!

I have a problem: when I run a test such as test_hrsc2016.py, it raises 'ImportError: No module named 'libs''.

I am sure I compiled according to the README, and I trained my datasets successfully.

I have recompiled several times but it did not work.

(screenshot attached)

DETECTOR and DATASET Parameters

Hello yangxue0827,

thank you for making this awesome repository. I am currently trying to train on my own dataset and I am stuck at one point:

Select the detector and dataset you want to use, and mark them as #DETECTOR and #DATASET (such as #DETECTOR=retinanet and #DATASET=DOTA)

I do not understand where I have to set the #DETECTOR and #DATASET parameters. Or does this simply refer to the folder structure?

Why does NMS not seem to work?

Hello, I would like to ask why NMS does not seem to take effect. The system is Ubuntu 18.04 and the NMS extension has been compiled. I am using the GWD algorithm; both training and testing run normally, but some results look as if they went through NMS while others do not, even though I enabled NMS for both training and testing.

post-processing

NMS = True
NMS_IOU_THRESHOLD = 0.45
MAXIMUM_DETECTIONS = 200
FILTERED_SCORE = 0.5
VIS_SCORE = 0.4

test and eval

TEST_SAVE_PATH = os.path.join(ROOT_PATH, 'tools/test_result')
EVALUATE_R_DIR = os.path.join(ROOT_PATH, 'output/evaluate_result_pickle/')
USE_07_METRIC = True
EVAL_THRESHOLD = 0.45

Training: (screenshot attached)

Testing: (screenshot DJI_0005_000090 attached)

Still unclear about the labeling tool

Hello, sorry to bother you. I read your Zhihu article, but I am still unclear about the labeling tool. Before converting with retanglelabel2mylabel.py, which tool did you use to produce the retanglelabel annotations? Could you explain in a bit more detail? Thanks!

Download Trained Model

Hello

I would like to know where I can download the trained models (not the pretrained ones):

"Please download trained models by this project, then put them to trained_weights."

When I go to Baidu and enter the code from #29 (comment), I still cannot download them because a pop-up appears that I cannot read. Is there any other place (e.g. Google Drive) where I can find them?

Best regards and thank you

Performance issue in /libs/models/backbones/efficientnet (by P3)

Hello! I've found a performance issue in /utils.py: dataset.batch(self.batch_size, drop_remainder=batch_drop_remainder) (here) should be called before dataset.map(_parse_function) (here), which could make your program more efficient.

Here is the tensorflow document to support it.

Besides, you need to check whether the function _parse_function called in dataset.map(_parse_function) is affected, so that the changed code still works properly. For example, if _parse_function expects data with shape (x, y, z) as input before the fix, it will need data with shape (batch_size, x, y, z) after the fix.
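A minimal sketch of the proposed reordering (illustrative only; the file name is a placeholder, and _parse_function stands for the repo's per-example parser, which would have to be rewritten to handle batched tensors before this change is valid):

import tensorflow as tf

batch_size = 8  # illustrative value

# current order: parse each record individually, then batch
dataset = tf.data.TFRecordDataset(['train.tfrecord'])   # placeholder file name
dataset = dataset.map(_parse_function)                   # per-example parsing
dataset = dataset.batch(batch_size, drop_remainder=True)

# proposed order: batch first, then parse a whole batch at once
dataset = tf.data.TFRecordDataset(['train.tfrecord'])
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.map(_parse_function)                   # now receives (batch_size, ...) tensors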

Looking forward to your reply. Btw, I am very glad to create a PR to fix it if you are too busy.

problem with a try with one image

Excuse me, I think I have followed the steps in the README and set show_box to true. The running output and the test images are shown below. The result image and the original image look the same, and I want to know which part I got wrong that could cause this.
Did I load the wrong model? Did I use the wrong original image? Did I use the wrong cfgs?...
I hope you can help me. Thank you very much.
(screenshots attached)

Where are the trained models?

Thank you for your contribution to the detection community.
I noticed "Latest: More results and trained models are available in the MODEL_ZOO.md." in README.md, but I can't find MODEL_ZOO.md. If you could provide the trained models, words cannot express how thankful I would be.

Dimension mismatch

Nice job!

I'm using the r2cnn detector. When I adjust BASE_ANCHOR_SIZE_LIST and ANCHOR_STRIDE for small object detection, the following error occurs:

when BASE_ANCHOR_SIZE_LIST = [32,64,128] and ANCHOR_STRIDE = [4, 8, 16]:
(0) Invalid argument: Incompatible shapes: [367500] vs. [372883]
[[node tower_0/postprocess_FPN/mul_3 (defined at ../../libs/utils/bbox_transform.py:29) ]]

when BASE_ANCHOR_SIZE_LIST = [32,64,128, 256] and ANCHOR_STRIDE = [4, 8, 16, 32]:
(0) Invalid argument: Incompatible shapes: [371875] vs. [372883]
[[node tower_0/postprocess_FPN/mul_3 (defined at ../../libs/utils/bbox_transform.py:29) ]]

Am I missing some other necessary modification?
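Not an answer from the repo itself, but a hedged way to sanity-check such shape mismatches: the total anchor count must equal the sum, over the pyramid levels the network actually builds, of feature-map height x width x anchors per location. Shortening BASE_ANCHOR_SIZE_LIST / ANCHOR_STRIDE without changing which FPN levels are produced typically triggers exactly this kind of error. A rough check (the stride lists and anchors-per-location below are assumptions, not the repo's defaults):

import math

def total_anchors(img_h, img_w, strides, anchors_per_loc=9):
    # rough count of anchors an FPN-style head would produce for one image
    return sum(math.ceil(img_h / s) * math.ceil(img_w / s) * anchors_per_loc
               for s in strides)

# compare the count for the shortened setting against the one the network was built with
print(total_anchors(800, 800, [4, 8, 16]),
      total_anchors(800, 800, [4, 8, 16, 32]))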

Question

Hello, how is the IoU computed for the eight-parameter (four corner points) box representation?
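Not necessarily how this repo computes it internally, but a common way to get the IoU of two eight-parameter (four-corner) boxes is plain polygon intersection-over-union, e.g. with Shapely (which is already in the requirements):

from shapely.geometry import Polygon

def quad_iou(quad_a, quad_b):
    # quads are [x1, y1, x2, y2, x3, y3, x4, y4], corners given in order
    pa = Polygon(list(zip(quad_a[0::2], quad_a[1::2])))
    pb = Polygon(list(zip(quad_b[0::2], quad_b[1::2])))
    if not (pa.is_valid and pb.is_valid):
        return 0.0
    inter = pa.intersection(pb).area
    union = pa.area + pb.area - inter
    return inter / union if union > 0 else 0.0

# two unit squares overlapping by half -> IoU = 1/3
print(quad_iou([0, 0, 1, 0, 1, 1, 0, 1], [0.5, 0, 1.5, 0, 1.5, 1, 0.5, 1]))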

Cannot generate test_dota file

An excellent project; thanks to the author for sharing. When I use tools/resdet/test_dota_5p.py to test val/images, the program runs well, but the test_dota result is not generated. Can you help me solve this?

SCRDet

Is there a trained model available for SCRDet?

Question about iou_smooth_l1_loss_log

When I read the code in iou_smooth_l1_loss_log (lines 118-135): the loss factor on line 125 is -log(iou)/regression_loss, and the returned factor * regression_loss / normalizer is therefore equal to -log(iou)/normalizer. But the IoU is calculated with cv2, so how is the gradient computed? Thank you!

overlaps = tf.py_func(iou_rotate_calculate2,
                      inp=[tf.reshape(boxes_pred, [-1, 5]), tf.reshape(target_boxes[:, :-1], [-1, 5])],
                      Tout=[tf.float32])
overlaps = tf.reshape(overlaps, [-1, 1])
regression_loss = tf.reshape(tf.reduce_sum(regression_loss, axis=1), [-1, 1])
# -ln(x)
iou_factor = tf.stop_gradient(-1 * tf.log(overlaps)) / (tf.stop_gradient(regression_loss) + self.cfgs.EPSILON)
# iou_factor = tf.Print(iou_factor, [iou_factor], 'iou_factor', summarize=50)
normalizer = tf.stop_gradient(tf.where(tf.equal(anchor_state, 1)))
normalizer = tf.cast(tf.shape(normalizer)[0], tf.float32)
normalizer = tf.maximum(1.0, normalizer)
# normalizer = tf.stop_gradient(tf.cast(tf.equal(anchor_state, 1), tf.float32))
# normalizer = tf.maximum(tf.reduce_sum(normalizer), 1)
return tf.reduce_sum(regression_loss * iou_factor) / normalizer
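One reading of the quoted snippet (not an official answer): both the -log(IoU) numerator and the regression-loss denominator are wrapped in tf.stop_gradient, so iou_factor is treated as a constant scaling of the smooth-L1 regression loss and no gradient ever has to flow through the cv2-based IoU. Per foreground anchor i, with sg(·) denoting stop_gradient, the returned loss is effectively

\mathcal{L} = \frac{1}{N}\sum_{i} \operatorname{sg}\!\left(\frac{-\log(\mathrm{IoU}_i)}{L^{\mathrm{reg}}_i + \epsilon}\right) L^{\mathrm{reg}}_i,
\qquad
\frac{\partial \mathcal{L}}{\partial \theta} = \frac{1}{N}\sum_{i} \operatorname{sg}\!\left(\frac{-\log(\mathrm{IoU}_i)}{L^{\mathrm{reg}}_i + \epsilon}\right) \frac{\partial L^{\mathrm{reg}}_i}{\partial \theta}.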

ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.

A problem occurs when running SCRDet; after printing g I see:
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Tensor("tower_0/clip_by_norm_88:0", shape=(3, 3, 256, 256), dtype=float32, device=/device:GPU:0)
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Tensor("tower_1/clip_by_norm_88:0", shape=(3, 3, 256, 256), dtype=float32, device=/device:GPU:1)
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Tensor("tower_0/clip_by_norm_89:0", shape=(1, 1, 256, 1024), dtype=float32, device=/device:GPU:0)
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Tensor("tower_1/clip_by_norm_89:0", shape=(1, 1, 256, 1024), dtype=float32, device=/device:GPU:1)
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
None

restoring trained or pretrained models blocked...

cd $PATH_ROOT/tools/r3det/ && python test_dota.py --test_dir='/PATH/TO/IMAGES/' --gpus=0

The process blocks when restoring the trained model (python3.6, tensorflow-gpu==1.13.0, CUDA 10.0).

The necessary trained model has been downloaded and put in $PATH_ROOT/output/trained_weights/, and $PATH_ROOT/libs/configs/cfgs.py has been replaced and updated.

help!

baidu files

How can I download the Baidu Drive files? They require an extraction code.

How to train and test retinanet-gwd on the HRSC2016 dataset?

  1. I have downloaded the trained models from this project and put them in $PATH_ROOT/output/pretained_weights.
  The pretrained weight is resnet_v1d.
  2. I have compiled:

cd $PATH_ROOT/libs/utils/cython_utils
rm *.so
rm *.c
rm *.cpp
python setup.py build_ext --inplace (or make)

cd $PATH_ROOT/libs/utils/
rm *.so
rm *.c
rm *.cpp
python setup.py build_ext --inplace
  3. I have copied $PATH_ROOT/libs/configs/HRSC2016/gwd/cfgs_res50_hrsc2016_gwd_v6.py to $PATH_ROOT/libs/configs/cfgs.py
  4. The directory structure of the HRSC2016 dataset: (screenshot attached)

5. When I run python tools/gwd/train.py, I get these errors:

2021-09-05 07:49:28.459600: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at matching_files_op.cc:49 : Not found: ../../dataloader/tfrecord; No such file or directory
2021-09-05 07:49:28.534617: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at matching_files_op.cc:49 : Not found: ../../dataloader/tfrecord; No such file or directory
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
return fn(*args)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
target_list, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: ../../dataloader/tfrecord; No such file or directory
[[{{node get_batch/matching_filenames/MatchingFiles}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 160, in
trainer.main()
File "train.py", line 155, in main
self.log_printer(gwd, optimizer, global_step, tower_grads, total_loss_dict, num_gpu, graph)
File "../../tools/train_base.py", line 196, in log_printer
sess.run(init_op)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 956, in run
run_metadata_ptr)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: ../../dataloader/tfrecord; No such file or directory
[[node get_batch/matching_filenames/MatchingFiles (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]

Original stack trace for 'get_batch/matching_filenames/MatchingFiles':
File "train.py", line 160, in
trainer.main()
File "train.py", line 53, in main
is_training=True)
File "../../dataloader/dataset/read_tfrecord.py", line 115, in next_batch
filename_tensorlist = tf.train.match_filenames_once(pattern)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/input.py", line 76, in match_filenames_once
name=name, initial_value=io_ops.matching_files(pattern),
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_io_ops.py", line 464, in matching_files
"MatchingFiles", pattern=pattern, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 513, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 1748, in init
self._traceback = tf_stack.extract_stack()

Can you help me solve this problem?
I look forward to your reply.

Measuring accuracy

Hi, thank you for creating this resource. I have set it up to use my own dataset, and I want to compare the performance of the different detectors on my data. How can I do this? TensorBoard only outputs the loss values, so I have no way to compare accuracies.

Path too short / Docker Tensorflow

Hello,

when running make_test_xml from the HRSC2016 dataset, I get the error below. I am using:

  • docker images: yangxue2docker/yx-tf-det:tensorflow1.13.1-cuda10-gpu-py3
  • RTX2060 GPU

I also get the error when I run e.g. convert_data_to_tfrecord.py or data_crop.py, so I think it may be something with the environment.

Looking forward to your reply

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/usr/lib/python3.5/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/usr/lib/python3.5/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: /usr/lib/x86_64-linux-gnu/libcuda.so.1: file too short

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "make_test_xml.py", line 11, in <module>
    from libs.configs import cfgs
  File "../../../libs/configs/cfgs.py", line 6, in <module>
    from libs.configs._base_.models.retinanet_r50_fpn import *
  File "../../../libs/configs/_base_/models/retinanet_r50_fpn.py", line 5, in <module>
    import tensorflow as tf
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/usr/lib/python3.5/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/usr/lib/python3.5/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: /usr/lib/x86_64-linux-gnu/libcuda.so.1: file too short


Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions.  Include the entire stack trace
above this error message when asking for help.

Error when training with multiple GPUs

(screenshot attached)
When I train the SCRDet network with two GPUs I get this error. Is there a problem with my parameter settings?

Has anyone set up the environment on Windows?

The command python setup.py build_ext --inplace reports an error:
Traceback (most recent call last):
File "setup.py", line 55, in
CUDA = locate_cuda()
File "setup.py", line 43, in locate_cuda
raise EnvironmentError('The nvcc binary could not be '
OSError: The nvcc binary could not be located in your $PATH. Either add it to your path, or set $CUDAHOME
