
D3Feat repository

TensorFlow implementation of D3Feat for CVPR'2020 Oral paper "D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features", by Xuyang Bai, Zixin Luo, Lei Zhou, Hongbo Fu, Long Quan and Chiew-Lan Tai.

This paper focuses on dense feature detection and description for 3D point clouds in a joint manner. If you find this project useful, please cite:

@article{bai2020d3feat,
  title={D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features},
  author={Bai, Xuyang and Luo, Zixin and Zhou, Lei and Fu, Hongbo and Quan, Long and Tai, Chiew-Lan},
  journal={arXiv:2003.03164 [cs.CV]},
  year={2020}
}

The PyTorch implementation can be found here.

Check our new paper on outlier rejection for more robust registration here!

Introduction

A successful point cloud registration often relies on the robust establishment of sparse matches through discriminative 3D local features. Despite the fast evolution of learning-based 3D feature descriptors, little attention has been drawn to the learning of 3D feature detectors, even less to a joint learning of the two tasks. In this paper, we leverage a 3D fully convolutional network for 3D point clouds, and propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point. In particular, we propose a keypoint selection strategy that overcomes the inherent density variations of 3D point clouds, and further propose a self-supervised detector loss guided by the on-the-fly feature matching results during training. Finally, our method achieves state-of-the-art results in both indoor and outdoor scenarios, evaluated on the 3DMatch and KITTI datasets, and shows strong generalization ability on the ETH dataset. Towards practical use, we show that by adopting a reliable feature detector, sampling a smaller number of features is sufficient to achieve accurate and fast point cloud alignment.
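The density-normalized detection score can be illustrated with a toy NumPy sketch (a hedged approximation only, not the paper's exact formulation; `features` and `neighbor_idx` are hypothetical inputs):

```python
import numpy as np

def saliency_scores(features, neighbor_idx):
    """Toy density-normalized detection score: a point scores high when
    some feature channel stands out both against that channel's local
    neighborhood mean and against the point's own channel mean.

    features: (N, C) dense descriptors; neighbor_idx: (N, k) neighbor indices.
    """
    neighbor_mean = features[neighbor_idx].mean(axis=1)              # (N, C)
    alpha = np.maximum(features - neighbor_mean, 0.0)                # local contrast
    beta = features / (features.mean(axis=1, keepdims=True) + 1e-8)  # channel contrast
    return (alpha * beta).max(axis=1)                                # (N,) scores
```

Scoring relative to the local neighborhood mean, rather than using an absolute response, is what makes such a score robust to the density variations mentioned above.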


Installation

  • Create the environment and install the required libraries:

         conda env create -f environment.yml
    
  • Compile the customized TensorFlow operators located in tf_custom_ops. Open a terminal in this folder, and run:

        sh compile_op.sh
    
  • Compile the C++ extension module for python located in cpp_wrappers. Open a terminal in this folder, and run:

        sh compile_wrappers.sh
    

The code is heavily borrowed from KPConv. You can find guidance for compiling the TensorFlow operators and C++ wrappers in INSTALL.md.

Demo

We provide a small demo that extracts dense features and detection scores for two point clouds and registers them using RANSAC. The ply files are saved in the demo_data folder and can be replaced by your own data. Currently we use two point cloud fragments from the 3DMatch dataset. To try the demo, please run

    python demo_registration.py

It will compute the descriptors and detection scores using the released weights for the 3DMatch dataset, and save them as .npz files in demo_data. These descriptors are then used to estimate the rigid-body transformation parameters using RANSAC. Visualizations of the initial state and the registered state will show up.
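At the core of each RANSAC hypothesis is a least-squares rigid transform estimated from sampled correspondences; a minimal Kabsch-style sketch of that inner step (illustrative only, not the demo's actual registration routine):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst.
    src, dst: (N, 3) corresponding points; returns a 4x4 homogeneous matrix."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = c_dst - R @ c_src
    return T
```

RANSAC repeatedly fits such a transform to a few sampled descriptor matches and keeps the hypothesis with the most inliers.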


We also visualize the detected keypoints on the two point clouds.


Dataset Download

3DMatch

The training set of 3DMatch[1] can be downloaded from here. It is generated by datasets/cal_overlap.py, which selects all point cloud fragment pairs having more than 30% overlap.
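For illustration, the overlap check can be sketched as the fraction of points in one fragment that land near the other after applying the ground-truth pose (a brute-force NumPy approximation, not the actual cal_overlap.py code; the distance threshold here is an assumption):

```python
import numpy as np

def overlap_ratio(pcd_a, pcd_b, trans, tau=0.03):
    """Fraction of points in pcd_a that have a neighbor in pcd_b closer
    than tau after aligning pcd_a with the 4x4 ground-truth pose trans.
    Brute-force O(N*M); fine for small fragments only."""
    aligned = pcd_a @ trans[:3, :3].T + trans[:3, 3]
    d = np.linalg.norm(aligned[:, None, :] - pcd_b[None, :, :], axis=2)
    return float((d.min(axis=1) < tau).mean())
```

A pair would then be kept for training when the ratio exceeds 0.3.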

The test set point clouds and the evaluation files (for registration recall) can be downloaded from the 3DMatch Geometric Registration website.

Please put the training set under the data/3DMatch folder and the test set under data/3DMatch/fragments. The ground-truth poses are already provided in the geometric_registration/gt_result folder.

KITTI

The training and test sets can be downloaded from the KITTI Odometry website. We follow FCGF[3] for pre-processing.

ETH

The test set (we only use the ETH dataset to evaluate the generalization ability of our method) can be downloaded from here. Detailed instructions can be found in PerfectMatch[2].

Instructions for Training and Testing

3DMatch

The training on 3DMatch dataset can be done by running

python training_3dmatch.py

This file contains a configuration subclass ThreeDMatchConfig, inherited from the general configuration class Config defined in utils/config.py. The value of every parameter can be modified in the subclass. The default path to the 3DMatch training set is data/3DMatch, which can be changed in dataset/ThreeDMatch.py.
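Overriding a parameter is plain attribute reassignment in the subclass; a minimal mock of the pattern (field names other than first_subsampling_dl are hypothetical, not the real Config fields):

```python
# Hypothetical mock of the pattern used in utils/config.py; only
# first_subsampling_dl mirrors a documented D3Feat default.
class Config:
    first_subsampling_dl = 0.03   # voxel size of the first grid subsampling
    batch_num = 1
    max_epoch = 200

class ThreeDMatchConfig(Config):
    # any inherited parameter can simply be reassigned here
    first_subsampling_dl = 0.025
    max_epoch = 100

cfg = ThreeDMatchConfig()
```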

Testing with the pretrained model on 3DMatch can be done easily by changing the log path in the test_3dmatch.py file

chosen_log = 'path_to_pretrained_log'

and running

python test_3dmatch.py

The descriptors and detection scores for each point will be generated and saved in the geometric_registration/D3Feat_{timestr}/ folder. Then the Feature Matching Recall and inlier ratio can be calculated by running

cd geometric_registration/
python evaluate.py D3Feat [timestr of the model]

The Registration Recall can be calculated by running evaluate.m in geometric_registration/3dmatch, which is provided by 3DMatch. You need to modify descriptorName to D3Feat_{timestr} in the geometric_registration/3dmatch/evaluate.m file. You can change the number of keypoints in evaluate.py.

KITTI

Similarly, training and testing on the KITTI dataset can be done by running

python training_KITTI.py

And

python test_KITTI.py

The detected keypoints and scores for each fragment, as well as the estimated transformation matrix between each ground-truth pair, will be saved in the geometric_registration_kitti/D3Feat_{timestr}/ folder. Then the Relative Rotation Error and Relative Translation Error are calculated by comparing the ground-truth pose and the estimated pose. The code for this part is heavily borrowed from FCGF[3]. You can change the number of keypoints in utils/test.py.
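Given a ground-truth and an estimated 4x4 pose, the two metrics reduce to the angle of the residual rotation and the distance between the translations; a sketch (not the FCGF-derived code itself):

```python
import numpy as np

def rre_rte(T_est, T_gt):
    """Relative Rotation Error (degrees) and Relative Translation Error
    between an estimated and a ground-truth 4x4 pose."""
    R_res = T_est[:3, :3].T @ T_gt[:3, :3]                   # residual rotation
    cos = np.clip((np.trace(R_res) - 1.0) / 2.0, -1.0, 1.0)
    rre = np.degrees(np.arccos(cos))                         # rotation angle
    rte = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    return rre, rte
```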

Keypoint Repeatability

After generating the descriptors and detection scores (which are saved in geometric_registration or geometric_registration_kitti), the keypoint repeatability can be calculated by running

cd repeatability/
python evaluate_3dmatch_our.py D3Feat [timestr of the model]

or

cd repeatability/
python evaluate_kitti_our.py D3Feat [timestr of the model]

Pretrained Model

We provide the pre-trained model for 3DMatch in results/ and for KITTI in results_kitti/.

Post-Conference Update

  • Training Loss: We have found that circle loss provides an insightful idea for the metric learning area and shows better and faster convergence for training D3Feat. To enable it, please change loss_type to 'circle_loss' in KPFCNN_model.py; the hyper-parameters for circle loss can be changed in loss.py.
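For reference, circle loss (Sun et al., CVPR 2020) on positive/negative similarities can be sketched as follows; m and gamma are the original paper's defaults, not necessarily the values used in loss.py:

```python
import numpy as np

def circle_loss(sp, sn, m=0.25, gamma=256.0):
    """Circle loss on cosine similarities: sp are positive-pair
    similarities, sn are negative-pair similarities. Each pair gets an
    adaptive weight depending on how far it is from its optimum."""
    ap = np.clip(1.0 + m - sp, 0.0, None)    # weight for hard positives
    an = np.clip(sn + m, 0.0, None)          # weight for hard negatives
    logit_p = -gamma * ap * (sp - (1.0 - m))
    logit_n = gamma * an * (sn - m)
    return np.log1p(np.exp(logit_p).sum() * np.exp(logit_n).sum())
```

The adaptive weights down-weight pairs that are already well separated, which is the property credited here with faster convergence.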

References

[1] 3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions, Andy Zeng, Shuran Song, Matthias Nießner, Matthew Fisher, Jianxiong Xiao, and Thomas Funkhouser, CVPR 2017.

[2] The Perfect Match: 3D Point Cloud Matching with Smoothed Densities, Zan Gojcic, Caifa Zhou, Jan D. Wegner, and Andreas Wieser, CVPR 2019.

[3] Fully Convolutional Geometric Features, Christopher Choy, Jaesik Park, and Vladlen Koltun, ICCV 2019.


d3feat's Issues

Models from different sensors

Hi,

First, thanks for this amazing work and making it open source!

There is something I wanted to ask your advice on.

I would like to use D3Feat for registration of indoor models from different sensors. Specifically, I’m trying to register a dense and detailed TLS (Terrestrial Laser Scanner) point cloud with a mesh created from a depth camera (mesh is converted to point cloud by either taking the vertices or by randomly sampling the mesh, but in both cases these are not as detailed as TLS point clouds).
I’ve had some success by using the 3DMatch pretrained network and improved the generalisation by increasing the voxel size, the scale of the kernel points/receptive field, and the first subsampling to 10 cm, but I’m trying to figure out whether there is a better angle of approach.

I was thinking about (i) training the network with two different sets of data at once, by combining 3DMatch (depth) and TLS data, and (ii) training two different networks on two different datasets and dealing with the domain adaptation later (with a third network), but neither looks like a good solution to me (ofc might be wrong here).

I know this is a broad question, but how would you approach this problem?

It would be great to hear your opinion!

3DMatch model on KITTI

Hi,

I am trying to use the 3DMatch model on KITTI, but so far I only get 0% succ rate on KITTI.
In addition to setting first_subsampling_dl = 0.30, do you have any other recommendation to improve performance?

Thanks

The test results are not consistent with those in the paper

Sorry, I accidentally pressed 'Enter'. The following is the text.

Hi, Xuyang,
Thanks for your sharing. I have run your code according to the README.md to test on the 3DMatch dataset, but the results I got are not consistent with those in the paper. Here are the results I got on my machine.

Rand 250:
recall: 78.73045633845116%
average num inliers: 9.363852426181337
average num inliers ratio: 0.19733789310284588
registration recall: 10.01%

Rand 1000:
recall: 91.159575532389%
average num inliers: 39.14157202336968
average num inliers ratio: 0.2830029677777195
registration recall: 44.10%

Rand 5000:
recall: 94.89696519206765%
average num inliers: 194.81370930163627
average num inliers ratio: 0.3952895905324679
registration recall: 79.83%

pred 250:
recall: 90.66840429898846%
average num inliers: 22.28248399643017
average num inliers ratio: 0.5087867387749848
registration recall: 65.44%

pred 1000:
recall: 93.50181475115426%
average num inliers: 77.3823343172171
average num inliers ratio: 0.4946857046798971
registration recall: 83.82%

pred 5000:
recall: 95.41775054399773%
average num inliers: 259.560668054325
average num inliers ratio: 0.4549467720974129
registration recall: 88.58%

As you can see, some results are better than those in the paper and some are worse, and the fluctuation is large. Also, the inlier ratio decreases as the number of keypoints increases. So I am confused about this result. I swear I did not change any code.

Best
Gilgamesh

How to determine the inlier threshold for calculating overlap ratio?

Hi xuyang,

I have a question about the overlap ratio. As we know, the inlier threshold has much influence on the overlap ratio under the common definition. Furthermore, the inlier threshold is closely related to the point density.

So I calculated the average point density of all subsampled (voxel_size = 0.03) point clouds in the 3DMatch dataset, and the result is 0.0189. I notice that you just set the inlier threshold to the same value as the voxel_size (that is, 0.03). Is there any consideration behind such a setting?
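For context, a density figure like the one quoted here can be computed as a mean nearest-neighbor distance; a brute-force sketch (illustrative only, impractical for full fragments):

```python
import numpy as np

def average_point_density(points):
    """Mean nearest-neighbor distance of an (N, 3) cloud, a simple
    proxy for point density after voxel subsampling."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    return float(d.min(axis=1).mean())
```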

Thanks,
Hongkai

Question about training my own dataset

Hi. I have a question about how to train on my own dataset.
I see the Python code datasets/cal_overlap.py you provided, and I am confused about the scene_list_{split}.txt and the {ind}.pose.npy files. It would be helpful for my understanding if you could provide these files.
Thanks very much. Have a nice day.

ERROR: pclkeypoint==0.0.1

Hi,

Installing the conda env I get this error:

ERROR: Could not find a version that satisfies the requirement pclkeypoint==0.0.1

Is this package necessary?

Cheers

Failed to compile cpp_wrappers

Hi there,

Firstly, thanks for the excellent work. I'm having trouble compiling the grid_subsampling extension (compile_wrappers.sh). The error log is here:

running build_ext
building 'grid_subsampling' extension
Warning: Can't read registry to find the necessary compiler setting
Make sure that Python modules winreg, win32api or win32con are installed.
C compiler: gcc -pthread -B /home/des/anaconda3/envs/D3Feat/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC
creating build
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/cpp_wrappers
creating build/temp.linux-x86_64-3.6/cpp_wrappers/cpp_utils
creating build/temp.linux-x86_64-3.6/cpp_wrappers/cpp_utils/cloud
creating build/temp.linux-x86_64-3.6/grid_subsampling
compile options: '-I/home/des/anaconda3/envs/D3Feat/lib/python3.6/site-packages/numpy/core/include -I/home/des/anaconda3/envs/D3Feat/include/python3.6m -c'
extra options: '-std=c++11'
gcc: ../cpp_utils/cloud/cloud.cpp
gcc: grid_subsampling/grid_subsampling.cpp
gcc: wrapper.cpp
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/des/anaconda3/envs/D3Feat/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h:1832,
from /home/des/anaconda3/envs/D3Feat/lib/python3.6/site-packages/numpy/core/include/numpy/ndarrayobject.h:12,
from /home/des/anaconda3/envs/D3Feat/lib/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h:4,
from wrapper.cpp:2:
/home/des/anaconda3/envs/D3Feat/lib/python3.6/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
17 | #warning "Using deprecated NumPy API, disable it with " \
grid_subsampling/grid_subsampling.cpp: In function ‘void grid_subsampling(std::vector&, std::vector&, std::vector&, std::vector&, std::vector&, std::vector&, float, int)’:
grid_subsampling/grid_subsampling.cpp:99:25: warning: comparison of integer expressions of different signedness: ‘int’ and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
99 | for (int i = 0; i < ldim; i++)
wrapper.cpp: In function ‘PyObject* grid_subsampling_compute(PyObject*, PyObject*, PyObject*)’:
wrapper.cpp:70:27: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]
70 | static char *kwlist[] = {"points", "features", "classes", "sampleDl", "method", "verbose", NULL };
(the same -Wwrite-strings warning repeats for columns 37, 49, 60, 72 and 82 of the same line)
g++ -pthread -shared -B /home/des/anaconda3/envs/D3Feat/compiler_compat -L/home/des/anaconda3/envs/D3Feat/lib -Wl,-rpath=/home/des/anaconda3/envs/D3Feat/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/cpp_wrappers/cpp_utils/cloud/cloud.o build/temp.linux-x86_64-3.6/grid_subsampling/grid_subsampling.o build/temp.linux-x86_64-3.6/wrapper.o -o /mnt/G/LiDAR datasets/d3feat/cpp_wrappers/cpp_subsampling/grid_subsampling.cpython-36m-x86_64-linux-gnu.so
/home/des/anaconda3/envs/D3Feat/compiler_compat/ld: build/temp.linux-x86_64-3.6/cpp_wrappers/cpp_utils/cloud/cloud.o: unable to initialize decompress status for section .debug_info
/home/des/anaconda3/envs/D3Feat/compiler_compat/ld: build/temp.linux-x86_64-3.6/cpp_wrappers/cpp_utils/cloud/cloud.o: unable to initialize decompress status for section .debug_info
/home/des/anaconda3/envs/D3Feat/compiler_compat/ld: build/temp.linux-x86_64-3.6/cpp_wrappers/cpp_utils/cloud/cloud.o: unable to initialize decompress status for section .debug_info
/home/des/anaconda3/envs/D3Feat/compiler_compat/ld: build/temp.linux-x86_64-3.6/cpp_wrappers/cpp_utils/cloud/cloud.o: unable to initialize decompress status for section .debug_info
build/temp.linux-x86_64-3.6/cpp_wrappers/cpp_utils/cloud/cloud.o: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
error: Command "g++ -pthread -shared -B /home/des/anaconda3/envs/D3Feat/compiler_compat -L/home/des/anaconda3/envs/D3Feat/lib -Wl,-rpath=/home/des/anaconda3/envs/D3Feat/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/cpp_wrappers/cpp_utils/cloud/cloud.o build/temp.linux-x86_64-3.6/grid_subsampling/grid_subsampling.o build/temp.linux-x86_64-3.6/wrapper.o -o /mnt/G/LiDAR datasets/d3feat/cpp_wrappers/cpp_subsampling/grid_subsampling.cpython-36m-x86_64-linux-gnu.so" failed with exit status 1

My environment:

  • Anaconda on ArchLinux
  • CUDA 9.0
  • tensorflow=1.12.0
  • cudatoolkit=9.0
  • cudnn=7.6.0
  • Default system GCC version: 10.2
  • Default CUDA9.0 GCC version: 6.5

I suspected the version of GCC might be the cause, so I tried changing both the versions of GCC in the system and in CUDA to 5.5 but the problem persists.

Could you please advise how to solve this issue? Thanks.

A question about 'voxel_size'

Hi, I noticed that the downsampling for training on 3DMatch is 0.03 (first_subsampling_dl = 0.03). Is the default downsampling of 0.03 also used when testing? In general, the training and testing settings should be the same, right? For example, if I train the model with a 0.025 downsampling setting, I should also test it with a 0.025 downsampling setting.
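For context, first_subsampling_dl is the voxel size of the input grid subsampling; the operation itself (implemented by the compiled grid_subsampling wrapper) can be sketched in NumPy as one barycenter per occupied voxel:

```python
import numpy as np

def grid_subsample(points, voxel=0.03):
    """Replace all points falling in the same voxel by their barycenter.
    NumPy sketch of what first_subsampling_dl controls; not the
    compiled C++ implementation."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)                    # flatten across numpy versions
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, points)             # accumulate points per voxel
    counts = np.bincount(inv).reshape(-1, 1)
    return sums / counts
```

A larger voxel makes the cloud coarser, so both training and testing should indeed use the same value for the receptive field to match.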

Performance of FCGF on ETH dataset

hi Xuyang,

Can you confirm the performance of FCGF on the ETH dataset as reported in your paper? It is significantly worse than PerfectMatch, which is a bit strange to me.

Best,
Shengyu

error when run test_kitti

Hi, when I run test_kitti.py using the pretrained model you provided, there is an error in
/utils/tester.py, line 326, in test_kitti
T_gth = inputs['trans']
KeyError: 'trans'

I have checked that inputs does not have a key 'trans'. How can I solve this? Thank you.

About the circle loss

Hi,
Thank you for your excellent work.
I wonder what the circle loss in the code is; the paper only mentions the triplet loss and the contrastive loss.

Any suggestions on how to train a good D3Feat model?

Hi Xuyang
Hello, thank you for your excellent work! I am currently trying to train D3Feat on KITTI, both with your original dataset and with my modified KITTI dataset (with different frame pairs). But with the default settings, I cannot get satisfying results with my modified dataset (test registration accuracy around 42%, far worse than your pretrained model on KITTI).

I must confess that this is confusing, and it is probably caused by my unintentional code tweaks rather than a flawed training strategy. I have been running reference experiments on a freshly downloaded master version of D3Feat as well to rule out that possibility, but that will take days and I am in a hurry. I wonder if there are any suggestions or code changes to train a good model like the pretrained one on KITTI?

best
Quan

Pretrained Model with voxel = 0.05

Thanks for your work! I changed the code as below, but I cannot get the same result as with voxel = 0.03. I want to know if something else should be changed. Thanks!

    # The released weights are pretrained on the 3DMatch dataset, where we use a voxel downsample of 0.03m.
    # If you want to test your own point cloud data which has a different scale, you should change the scale
    # of the kernel points so that the receptive field is also enlarged. For example, when testing the
    # generalization on ETH, we use the following code to rescale the kernel points.
    for v in my_vars:
        if 'kernel_points' in v.name:
            rescale_op = v.assign(tf.multiply(v, 0.05 / 0.03))
            # print('kernel_points', rescale_op)
            self.sess.run(rescale_op)

Training data generation

Hi Xuyang,

Thanks for sharing your excellent work. I just want to learn some details about the training data generation. For the training point cloud fragments, did you select one depth image every 50 frames, or fuse every 50 frames to generate the point clouds?

Best ,
Bing

Could you publish a pytorch implementation ?

Dear XuyangBai,

Thank you for your great work! I have noticed that you have an implementation of KPConv. Have you done your PyTorch implementation of D3Feat based on that?

Best regards,
Xianghua Qu

Some questions about *.kpl

Hello,
Thank you for your innovative work.
I see that the 3DMatch dataset seems to require "*.kpl" files.
Are the "*.kpl" files about keypoints?
Do I need the "*.kpl" files for training?
How do I get the "*.kpl" files for my own dataset?
I do not understand how to train on my own dataset with only "*.ply" files.
Thanks.

Aligned or unaligned pts in Training and Testing datasets

Hi Xuyang,

I just noticed that you saved the aligned point clouds as the training dataset, but the testing data from 3DMatch are unaligned during the testing phase. May I confirm this with you? Many thanks.

Best,
Bing

How to test ETH?

Hi, Xuyang

Sorry for the bother! I am trying to test the ETH dataset using the model trained on 3DMatch, but the run fails as shown below. I wonder if testing on ETH needs more memory, or whether there are some settings I have missed.

/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
(the same FutureWarning repeats for quint8, qint16, quint16, qint32 and np_resource)

Dataset Preparation


(50367, 3)
(54203, 3)
(53067, 3)
... (point cloud shapes of the remaining fragments omitted) ...
(51941, 3)
(52474, 3)
Initiating test input pipelines
WARNING:tensorflow:From /home/Gilgamesh/D3Feat/datasets/common.py:1308: calling reduce_min (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
2020-08-01 18:18:22.394755: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2020-08-01 18:18:22.663084: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:04:00.0
totalMemory: 11.91GiB freeMemory: 11.76GiB
2020-08-01 18:18:22.916220: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 1 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:08:00.0
totalMemory: 11.91GiB freeMemory: 11.76GiB
2020-08-01 18:18:22.920787: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2020-08-01 18:18:24.278918: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-08-01 18:18:24.278990: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1
2020-08-01 18:18:24.279008: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N Y
2020-08-01 18:18:24.279028: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: Y N
2020-08-01 18:18:24.285207: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 11376 MB memory) -> physical GPU (device: 0, name: TITAN Xp, pci bus id: 0000:04:00.0, compute capability: 6.1)
2020-08-01 18:18:24.285817: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 11376 MB memory) -> physical GPU (device: 1, name: TITAN Xp, pci bus id: 0000:08:00.0, compute capability: 6.1)
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107
108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125
126 127 128 129 130 131]
Calib Neighbors 00000000 : timings 3178.73 385.08
Calib Neighbors 00000001 : timings 2902.80 279.60
Calib Neighbors 00000002 : timings 2865.64 658.51
Calib Neighbors 00000003 : timings 2657.15 247.62
Calib Neighbors 00000004 : timings 2587.97 143.16

self.neighborhood: [ 66 117 144 152 169]
Creating Model


WARNING:tensorflow:From /home/Gilgamesh/D3Feat/models/D3Feat.py:111: Print (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2018-08-20.
Instructions for updating:
Use tf.print instead of tf.Print. Note that tf.print returns a no-output operator that directly prints the output. Outside of defuns or eager mode, this operator will not be executed unless it is directly specified in session.run or used as a control dependency for other operators. This is only a concern in graph mode. Below is an example of how to ensure tf.print executes in graph mode:

    sess = tf.Session()
    with sess.as_default():
        tensor = tf.range(10)
        print_op = tf.print(tensor)
        with tf.control_dependencies([print_op]):
          out = tf.add(tensor, tensor)
        sess.run(out)
Additionally, to use tf.print in python 2.7, users must make sure to import
the following:

  `from __future__ import print_function`

2020-08-01 18:18:49.254766: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2020-08-01 18:18:49.254975: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-08-01 18:18:49.255005: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 1 
2020-08-01 18:18:49.255047: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N Y 
2020-08-01 18:18:49.255068: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1:   Y N 
2020-08-01 18:18:49.256962: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 11376 MB memory) -> physical GPU (device: 0, name: TITAN Xp, pci bus id: 0000:04:00.0, compute capability: 6.1)
2020-08-01 18:18:49.257232: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 11376 MB memory) -> physical GPU (device: 1, name: TITAN Xp, pci bus id: 0000:08:00.0, compute capability: 6.1)
Model restored from results_kitti/Log_11011605/snapshots/snap-61

----------------
Done in 4.8 s
----------------

Start Test
**********

[  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17
  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35
  36  37  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  53
  54  55  56  57  58  59  60  61  62  63  64  65  66  67  68  69  70  71
  72  73  74  75  76  77  78  79  80  81  82  83  84  85  86  87  88  89
  90  91  92  93  94  95  96  97  98  99 100 101 102 103 104 105 106 107
 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125
 126 127 128 129 130 131]
num of local max[33906]
Generate cloud_bin_0 for gazebo_summer
****************************************
num of local max[35646]
Generate cloud_bin_1 for gazebo_summer
****************************************
... (same pattern for cloud_bin_2 through cloud_bin_23 omitted) ...
num of local max[36818]
Generate cloud_bin_24 for gazebo_summer
****************************************
num of local max[35306]
Generate cloud_bin_25 for gazebo_summer
****************************************
num of local max[36596]
Generate cloud_bin_26 for gazebo_summer
****************************************
num of local max[39436]
Generate cloud_bin_27 for gazebo_summer
****************************************
num of local max[39752]
Generate cloud_bin_28 for gazebo_summer
****************************************
num of local max[46806]
Generate cloud_bin_29 for gazebo_summer
****************************************
num of local max[49566]
Generate cloud_bin_30 for gazebo_summer
****************************************
num of local max[45802]
Generate cloud_bin_31 for gazebo_summer
****************************************
num of local max[57398]
Generate cloud_bin_0 for gazebo_winter
****************************************
num of local max[55134]
Generate cloud_bin_1 for gazebo_winter
****************************************
num of local max[50668]
Generate cloud_bin_2 for gazebo_winter
****************************************
num of local max[47366]
Generate cloud_bin_3 for gazebo_winter
****************************************
num of local max[45184]
Generate cloud_bin_4 for gazebo_winter
****************************************
num of local max[36462]
Generate cloud_bin_5 for gazebo_winter
****************************************
num of local max[39358]
Generate cloud_bin_6 for gazebo_winter
****************************************
num of local max[39224]
Generate cloud_bin_7 for gazebo_winter
****************************************
num of local max[37402]
Generate cloud_bin_8 for gazebo_winter
****************************************
num of local max[36522]
Generate cloud_bin_9 for gazebo_winter
****************************************
num of local max[35572]
Generate cloud_bin_10 for gazebo_winter
****************************************
num of local max[34852]
Generate cloud_bin_11 for gazebo_winter
****************************************
num of local max[36472]
Generate cloud_bin_12 for gazebo_winter
****************************************
num of local max[39158]
Generate cloud_bin_13 for gazebo_winter
****************************************
num of local max[38880]
Generate cloud_bin_14 for gazebo_winter
****************************************
num of local max[35684]
Generate cloud_bin_15 for gazebo_winter
****************************************
num of local max[38366]
Generate cloud_bin_16 for gazebo_winter
****************************************
num of local max[44426]
Generate cloud_bin_17 for gazebo_winter
****************************************
num of local max[50016]
Generate cloud_bin_18 for gazebo_winter
****************************************
num of local max[51948]
Generate cloud_bin_19 for gazebo_winter
****************************************
num of local max[52352]
Generate cloud_bin_20 for gazebo_winter
****************************************
num of local max[52042]
Generate cloud_bin_21 for gazebo_winter
****************************************
num of local max[51230]
Generate cloud_bin_22 for gazebo_winter
****************************************
num of local max[56012]
Generate cloud_bin_23 for gazebo_winter
****************************************
num of local max[55040]
Generate cloud_bin_24 for gazebo_winter
****************************************
num of local max[56960]
Generate cloud_bin_25 for gazebo_winter
****************************************
num of local max[59904]
Generate cloud_bin_26 for gazebo_winter
****************************************
num of local max[57582]
Generate cloud_bin_27 for gazebo_winter
****************************************
num of local max[60356]
Generate cloud_bin_28 for gazebo_winter
****************************************
num of local max[69758]
Generate cloud_bin_29 for gazebo_winter
****************************************
num of local max[67220]
Generate cloud_bin_30 for gazebo_winter
****************************************
num of local max[60774]
Generate cloud_bin_0 for wood_autmn
****************************************
num of local max[65700]
Generate cloud_bin_1 for wood_autmn
****************************************
num of local max[68522]
Generate cloud_bin_2 for wood_autmn
****************************************
num of local max[70236]
Generate cloud_bin_3 for wood_autmn
****************************************
num of local max[65856]
Generate cloud_bin_4 for wood_autmn
****************************************
num of local max[60350]
Generate cloud_bin_5 for wood_autmn
****************************************
num of local max[62804]
Generate cloud_bin_6 for wood_autmn
****************************************
num of local max[62538]
Generate cloud_bin_7 for wood_autmn
****************************************
num of local max[64912]
Generate cloud_bin_8 for wood_autmn
****************************************
num of local max[70100]
Generate cloud_bin_9 for wood_autmn
****************************************
num of local max[70348]
Generate cloud_bin_10 for wood_autmn
****************************************
num of local max[72372]
Generate cloud_bin_11 for wood_autmn
****************************************
num of local max[79158]
Generate cloud_bin_12 for wood_autmn
****************************************
num of local max[77498]
Generate cloud_bin_13 for wood_autmn
****************************************
num of local max[75524]
Generate cloud_bin_14 for wood_autmn
****************************************
num of local max[74736]
Generate cloud_bin_15 for wood_autmn
****************************************
num of local max[76278]
Generate cloud_bin_16 for wood_autmn
****************************************
num of local max[81292]
Generate cloud_bin_17 for wood_autmn
****************************************
num of local max[77378]
Generate cloud_bin_18 for wood_autmn
****************************************
num of local max[75340]
Generate cloud_bin_19 for wood_autmn
****************************************
num of local max[69720]
Generate cloud_bin_20 for wood_autmn
****************************************
num of local max[73258]
Generate cloud_bin_21 for wood_autmn
****************************************
num of local max[74398]
Generate cloud_bin_22 for wood_autmn
****************************************
num of local max[72218]
Generate cloud_bin_23 for wood_autmn
****************************************
num of local max[69036]
Generate cloud_bin_24 for wood_autmn
****************************************
num of local max[73088]
Generate cloud_bin_25 for wood_autmn
****************************************
num of local max[73986]
Generate cloud_bin_26 for wood_autmn
****************************************
num of local max[74708]
Generate cloud_bin_27 for wood_autmn
****************************************
num of local max[73818]
Generate cloud_bin_28 for wood_autmn
****************************************
num of local max[70550]
Generate cloud_bin_29 for wood_autmn
****************************************
num of local max[70328]
Generate cloud_bin_30 for wood_autmn
****************************************
num of local max[62090]
Generate cloud_bin_31 for wood_autmn
****************************************
num of local max[67538]
Generate cloud_bin_0 for wood_summer
****************************************
num of local max[69884]
Generate cloud_bin_1 for wood_summer
****************************************
num of local max[73172]
Generate cloud_bin_2 for wood_summer
****************************************
num of local max[69554]
Generate cloud_bin_3 for wood_summer
****************************************
num of local max[70372]
Generate cloud_bin_4 for wood_summer
****************************************
num of local max[71776]
Generate cloud_bin_5 for wood_summer
****************************************
num of local max[69822]
Generate cloud_bin_6 for wood_summer
****************************************
2020-08-01 18:23:38.838700: W tensorflow/core/common_runtime/bfc_allocator.cc:267] Allocator (GPU_0_bfc) ran out of memory trying to allocate 4.10GiB.  Current allocation summary follows.
2020-08-01 18:23:38.838854: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (256):   Total Chunks: 132, Chunks in use: 127. 33.0KiB allocated for chunks. 31.8KiB in use in bin. 20.9KiB client-requested in use in bin.
2020-08-01 18:23:38.838871: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (512):   Total Chunks: 81, Chunks in use: 75. 41.0KiB allocated for chunks. 37.5KiB in use in bin. 37.5KiB client-requested in use in bin.
2020-08-01 18:23:38.838885: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (1024):  Total Chunks: 82, Chunks in use: 78. 82.5KiB allocated for chunks. 78.2KiB in use in bin. 78.0KiB client-requested in use in bin.
2020-08-01 18:23:38.838898: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (2048):  Total Chunks: 62, Chunks in use: 62. 125.8KiB allocated for chunks. 125.8KiB in use in bin. 125.8KiB client-requested in use in bin.
2020-08-01 18:23:38.838910: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (4096):  Total Chunks: 20, Chunks in use: 20. 80.0KiB allocated for chunks. 80.0KiB in use in bin. 80.0KiB client-requested in use in bin.
2020-08-01 18:23:38.838923: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (8192):  Total Chunks: 17, Chunks in use: 17. 136.0KiB allocated for chunks. 136.0KiB in use in bin. 136.0KiB client-requested in use in bin.
2020-08-01 18:23:38.838935: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (16384):         Total Chunks: 3, Chunks in use: 3. 48.0KiB allocated for chunks. 48.0KiB in use in bin. 48.0KiB client-requested in use in bin.
2020-08-01 18:23:38.838947: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (32768):         Total Chunks: 5, Chunks in use: 5. 240.0KiB allocated for chunks. 240.0KiB in use in bin. 239.8KiB client-requested in use in bin.
2020-08-01 18:23:38.838959: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (65536):         Total Chunks: 4, Chunks in use: 4. 256.0KiB allocated for chunks. 256.0KiB in use in bin. 256.0KiB client-requested in use in bin.
2020-08-01 18:23:38.838971: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (131072):        Total Chunks: 9, Chunks in use: 8. 1.68MiB allocated for chunks. 1.47MiB in use in bin. 1.47MiB client-requested in use in bin.
2020-08-01 18:23:38.838982: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (262144):        Total Chunks: 7, Chunks in use: 7. 1.95MiB allocated for chunks. 1.95MiB in use in bin. 1.93MiB client-requested in use in bin.
2020-08-01 18:23:38.838993: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (524288):        Total Chunks: 3, Chunks in use: 3. 1.94MiB allocated for chunks. 1.94MiB in use in bin. 1.94MiB client-requested in use in bin.
2020-08-01 18:23:38.839005: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (1048576):       Total Chunks: 7, Chunks in use: 5. 8.37MiB allocated for chunks. 5.22MiB in use in bin. 4.94MiB client-requested in use in bin.
2020-08-01 18:23:38.839023: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (2097152):       Total Chunks: 5, Chunks in use: 5. 13.35MiB allocated for chunks. 13.35MiB in use in bin. 13.30MiB client-requested in use in bin.
2020-08-01 18:23:38.839043: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (4194304):       Total Chunks: 3, Chunks in use: 3. 16.09MiB allocated for chunks. 16.09MiB in use in bin. 15.60MiB client-requested in use in bin.
2020-08-01 18:23:38.839066: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (8388608):       Total Chunks: 4, Chunks in use: 4. 40.28MiB allocated for chunks. 40.28MiB in use in bin. 37.28MiB client-requested in use in bin.
2020-08-01 18:23:38.839082: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (16777216):      Total Chunks: 4, Chunks in use: 4. 90.00MiB allocated for chunks. 90.00MiB in use in bin. 71.48MiB client-requested in use in bin.
2020-08-01 18:23:38.839096: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (33554432):      Total Chunks: 5, Chunks in use: 4. 239.61MiB allocated for chunks. 207.19MiB in use in bin. 165.36MiB client-requested in use in bin.
2020-08-01 18:23:38.839109: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (67108864):      Total Chunks: 4, Chunks in use: 4. 411.71MiB allocated for chunks. 411.71MiB in use in bin. 381.42MiB client-requested in use in bin.
2020-08-01 18:23:38.839121: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (134217728):     Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-08-01 18:23:38.839134: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (268435456):     Total Chunks: 8, Chunks in use: 2. 10.30GiB allocated for chunks. 558.96MiB in use in bin. 549.11MiB client-requested in use in bin.
2020-08-01 18:23:38.839145: I tensorflow/core/common_runtime/bfc_allocator.cc:613] Bin for 4.10GiB was 256.00MiB, Chunk State: 
2020-08-01 18:23:38.839164: I tensorflow/core/common_runtime/bfc_allocator.cc:619]   Size: 256.74MiB | Requested Size: 69.74MiB | in_use: 0, prev:   Size: 69.74MiB | Requested Size: 69.74MiB | in_use: 1, next:   Size: 61.28MiB | Requested Size: 39.42MiB | in_use: 1
2020-08-01 18:23:38.839180: I tensorflow/core/common_runtime/bfc_allocator.cc:619]   Size: 353.09MiB | Requested Size: 261.54MiB | in_use: 0, next:   Size: 117.70MiB | Requested Size: 117.70MiB | in_use: 1
2020-08-01 18:23:38.839192: I tensorflow/core/common_runtime/bfc_allocator.cc:619]   Size: 1.00GiB | Requested Size: 203.17MiB | in_use: 0
2020-08-01 18:23:38.839205: I tensorflow/core/common_runtime/bfc_allocator.cc:619]   Size: 1.24GiB | Requested Size: 575.39MiB | in_use: 0, prev:   Size: 302.96MiB | Requested Size: 302.96MiB | in_use: 1
2020-08-01 18:23:38.839217: I tensorflow/core/common_runtime/bfc_allocator.cc:619]   Size: 2.92GiB | Requested Size: 1.99GiB | in_use: 0, next:   Size: 43.83MiB | Requested Size: 43.83MiB | in_use: 1
2020-08-01 18:23:38.839227: I tensorflow/core/common_runtime/bfc_allocator.cc:619]   Size: 4.00GiB | Requested Size: 3.67GiB | in_use: 0
2020-08-01 18:23:38.839241: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aac21600000 of size 1024
2020-08-01 18:23:38.839250: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aac21600400 of size 1024
2020-08-01 18:23:38.839257: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aac21600800 of size 1024
[... allocator chunk listing truncated: hundreds of further "Chunk at 0x... of size ..." lines follow ...]
2020-08-01 18:23:38.861884: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd48c5500 of size 1024
2020-08-01 18:23:38.861892: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd48c5900 of size 1024
2020-08-01 18:23:38.861900: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd48c5d00 of size 1048576
2020-08-01 18:23:38.861908: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd49c5d00 of size 1048576
2020-08-01 18:23:38.861916: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ac5d00 of size 2048
2020-08-01 18:23:38.861925: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ac6500 of size 2048
2020-08-01 18:23:38.861933: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ac6d00 of size 2048
2020-08-01 18:23:38.861941: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ac7500 of size 2048
2020-08-01 18:23:38.861949: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ac7d00 of size 256
2020-08-01 18:23:38.861957: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ac7e00 of size 256
2020-08-01 18:23:38.861965: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ac7f00 of size 8192
2020-08-01 18:23:38.861973: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ac9f00 of size 256
2020-08-01 18:23:38.861981: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4aca000 of size 2048
2020-08-01 18:23:38.861989: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4aca800 of size 256
2020-08-01 18:23:38.861997: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4aca900 of size 1024
2020-08-01 18:23:38.862005: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acad00 of size 256
2020-08-01 18:23:38.862014: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acae00 of size 512
2020-08-01 18:23:38.862036: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acb000 of size 256
2020-08-01 18:23:38.862044: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acb100 of size 256
2020-08-01 18:23:38.862061: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acb200 of size 256
2020-08-01 18:23:38.862069: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acb300 of size 256
2020-08-01 18:23:38.862077: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acb400 of size 256
2020-08-01 18:23:38.862085: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acb500 of size 256
2020-08-01 18:23:38.862093: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acb600 of size 256
2020-08-01 18:23:38.862101: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acb700 of size 256
2020-08-01 18:23:38.862109: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acb800 of size 256
2020-08-01 18:23:38.862117: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acb900 of size 512
2020-08-01 18:23:38.862125: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acbb00 of size 1024
2020-08-01 18:23:38.865767: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acbf00 of size 512
2020-08-01 18:23:38.865805: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acc100 of size 2048
2020-08-01 18:23:38.865822: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acc900 of size 4096
2020-08-01 18:23:38.865835: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acd900 of size 2048
2020-08-01 18:23:38.865848: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ace100 of size 2048
2020-08-01 18:23:38.865862: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ace900 of size 512
2020-08-01 18:23:38.865875: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4aceb00 of size 512
2020-08-01 18:23:38.865888: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4aced00 of size 512
2020-08-01 18:23:38.865901: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acef00 of size 1024
2020-08-01 18:23:38.865914: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acf300 of size 512
2020-08-01 18:23:38.865927: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4acf500 of size 512
2020-08-01 18:23:38.865940: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4acf700 of size 8192
2020-08-01 18:23:38.865953: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad1700 of size 8192
2020-08-01 18:23:38.865966: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad3700 of size 4096
2020-08-01 18:23:38.865979: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad4700 of size 4096
2020-08-01 18:23:38.865992: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad5700 of size 2048
2020-08-01 18:23:38.866004: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad5f00 of size 1024
2020-08-01 18:23:38.866025: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad6300 of size 1024
2020-08-01 18:23:38.866049: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad6700 of size 2048
2020-08-01 18:23:38.866062: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad6f00 of size 1024
2020-08-01 18:23:38.866075: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad7300 of size 2048
2020-08-01 18:23:38.866088: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad7b00 of size 256
2020-08-01 18:23:38.866101: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad7c00 of size 2048
2020-08-01 18:23:38.866114: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ad8400 of size 256
2020-08-01 18:23:38.866127: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad8500 of size 1024
2020-08-01 18:23:38.866140: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad8900 of size 256
2020-08-01 18:23:38.866153: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ad8a00 of size 256
2020-08-01 18:23:38.866166: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad8b00 of size 512
2020-08-01 18:23:38.866178: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ad8d00 of size 8192
2020-08-01 18:23:38.866192: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4adad00 of size 4096
2020-08-01 18:23:38.866204: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4adbd00 of size 2048
2020-08-01 18:23:38.866217: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4adc500 of size 1024
2020-08-01 18:23:38.866230: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4adc900 of size 2048
2020-08-01 18:23:38.866243: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4add100 of size 1024
2020-08-01 18:23:38.866256: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4add500 of size 512
2020-08-01 18:23:38.866268: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4add700 of size 1024
2020-08-01 18:23:38.866280: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4addb00 of size 8192
2020-08-01 18:23:38.866292: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4adfb00 of size 256
2020-08-01 18:23:38.866310: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4adfc00 of size 512
2020-08-01 18:23:38.866322: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4adfe00 of size 4096
2020-08-01 18:23:38.866334: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ae0e00 of size 1024
2020-08-01 18:23:38.866347: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae1200 of size 2048
2020-08-01 18:23:38.866363: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ae1a00 of size 256
2020-08-01 18:23:38.866376: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae1b00 of size 1024
2020-08-01 18:23:38.866392: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae1f00 of size 2048
2020-08-01 18:23:38.866404: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ae2700 of size 768
2020-08-01 18:23:38.866417: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae2a00 of size 4096
2020-08-01 18:23:38.866429: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae3a00 of size 1024
2020-08-01 18:23:38.866442: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ae3e00 of size 256
2020-08-01 18:23:38.866454: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae3f00 of size 2048
2020-08-01 18:23:38.866466: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ae4700 of size 768
2020-08-01 18:23:38.866478: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae4a00 of size 512
2020-08-01 18:23:38.866490: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae4c00 of size 512
2020-08-01 18:23:38.866502: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae4e00 of size 512
2020-08-01 18:23:38.866514: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ae5000 of size 1024
2020-08-01 18:23:38.866526: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae5400 of size 1024
2020-08-01 18:23:38.866539: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae5800 of size 256
2020-08-01 18:23:38.866551: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ae5900 of size 256
2020-08-01 18:23:38.866564: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae5a00 of size 512
2020-08-01 18:23:38.866576: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ae5c00 of size 1280
2020-08-01 18:23:38.866588: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ae6100 of size 1024
2020-08-01 18:23:38.866600: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4ae6500 of size 1526272
2020-08-01 18:23:38.866614: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4c5af00 of size 221952
2020-08-01 18:23:38.866627: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4c91200 of size 221952
2020-08-01 18:23:38.866639: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4cc7500 of size 221952
2020-08-01 18:23:38.866651: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4cfd800 of size 1775616
2020-08-01 18:23:38.866664: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4eaf000 of size 285696
2020-08-01 18:23:38.866677: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4ef4c00 of size 147200
2020-08-01 18:23:38.866690: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4f18b00 of size 443904
2020-08-01 18:23:38.866702: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4f85100 of size 57344
2020-08-01 18:23:38.866715: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd4f93100 of size 221952
2020-08-01 18:23:38.866727: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aacd4fc9400 of size 224256
2020-08-01 18:23:38.866741: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd6000000 of size 15728640
2020-08-01 18:23:38.866773: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd6f00000 of size 8388608
2020-08-01 18:23:38.866788: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacd7700000 of size 9437184
2020-08-01 18:23:38.866801: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacdc000000 of size 29293568
2020-08-01 18:23:38.866814: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacddbefc00 of size 8227584
2020-08-01 18:23:38.866826: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacde3c8700 of size 29587712
2020-08-01 18:23:38.866839: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aace8000000 of size 18283008
2020-08-01 18:23:38.866851: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aace916fa00 of size 33995264
2020-08-01 18:23:38.866864: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aaceb1db400 of size 8684800
2020-08-01 18:23:38.866877: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aaceba23900 of size 2755584
2020-08-01 18:23:38.866889: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacebcc4500 of size 3063808
2020-08-01 18:23:38.866902: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacebfb0500 of size 17207552
2020-08-01 18:23:38.866915: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aaced019600 of size 50227712
2020-08-01 18:23:38.866928: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aacf8000000 of size 268435456
2020-08-01 18:23:38.866941: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aad08000000 of size 130270208
2020-08-01 18:23:38.866953: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aad0fc3c400 of size 73132032
2020-08-01 18:23:38.866966: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 0x2aad141fac00 of size 269209600
2020-08-01 18:23:38.866978: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x2aad242b7c00 of size 64259072
2020-08-01 18:23:38.872938: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 3932160 totalling 3.75MiB
2020-08-01 18:23:38.872951: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 4194304 totalling 4.00MiB
2020-08-01 18:23:38.872964: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 4448000 totalling 4.24MiB
2020-08-01 18:23:38.872977: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 8227584 totalling 7.85MiB
2020-08-01 18:23:38.872990: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 8388608 totalling 8.00MiB
2020-08-01 18:23:38.873003: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 8684800 totalling 8.28MiB
2020-08-01 18:23:38.873034: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 9437184 totalling 9.00MiB
2020-08-01 18:23:38.873049: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 15728640 totalling 15.00MiB
2020-08-01 18:23:38.873063: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 17207552 totalling 16.41MiB
2020-08-01 18:23:38.873077: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 18283008 totalling 17.44MiB
2020-08-01 18:23:38.873090: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 29293568 totalling 27.94MiB
2020-08-01 18:23:38.873104: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 29587712 totalling 28.22MiB
2020-08-01 18:23:38.873117: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 45954560 totalling 43.83MiB
2020-08-01 18:23:38.873131: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 50227712 totalling 47.90MiB
2020-08-01 18:23:38.873145: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 56811520 totalling 54.18MiB
2020-08-01 18:23:38.873158: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 64259072 totalling 61.28MiB
2020-08-01 18:23:38.873172: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 73132032 totalling 69.74MiB
2020-08-01 18:23:38.873185: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 104898048 totalling 100.04MiB
2020-08-01 18:23:38.873199: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 123413760 totalling 117.70MiB
2020-08-01 18:23:38.873213: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 130270208 totalling 124.24MiB
2020-08-01 18:23:38.873227: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 268435456 totalling 256.00MiB
2020-08-01 18:23:38.873241: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 317675520 totalling 302.96MiB
2020-08-01 18:23:38.873254: I tensorflow/core/common_runtime/bfc_allocator.cc:645] Sum Total of in-use chunks: 1.32GiB
2020-08-01 18:23:38.873271: I tensorflow/core/common_runtime/bfc_allocator.cc:647] Stats: 
Limit:                 11928924980
InUse:                  1414706432
MaxInUse:               8091007488
NumAllocs:                   62502
MaxAllocSize:           4294967296

2020-08-01 18:23:38.873351: W tensorflow/core/common_runtime/bfc_allocator.cc:271] *******_*____________****_________________________________________________________________________**
2020-08-01 18:23:38.873422: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at gather_op.cc:103 : Resource exhausted: OOM when allocating tensor with shape[36768,117,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[36768,117,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node KernelPointNetwork/layer_1/resnetb_strided_1/shortcut/GatherV2}} = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](KernelPointNetwork/layer_1/resnetb_strided_1/shortcut/concat, IteratorGetNext/_435, KernelPointNetwork/GatherV2/axis)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[{{node KernelPointNetwork/l2_normalize/_467}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2765_KernelPointNetwork/l2_normalize", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test_eth.py", line 135, in <module>
    test_caller(chosen_log, chosen_snapshot, on_val)
  File "test_eth.py", line 101, in test_caller
    tester.generate_descriptor(model, dataset)
  File "/home/Gilgamesh/D3Feat/utils/tester.py", line 199, in generate_descriptor
    [inputs, features, scores, anc_id] = self.sess.run(ops, {model.dropout_prob: 1.0})
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[36768,117,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node KernelPointNetwork/layer_1/resnetb_strided_1/shortcut/GatherV2 (defined at /home/Gilgamesh/D3Feat/models/network_blocks.py:63)  = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](KernelPointNetwork/layer_1/resnetb_strided_1/shortcut/concat, IteratorGetNext/_435, KernelPointNetwork/GatherV2/axis)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[{{node KernelPointNetwork/l2_normalize/_467}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2765_KernelPointNetwork/l2_normalize", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Caused by op 'KernelPointNetwork/layer_1/resnetb_strided_1/shortcut/GatherV2', defined at:
  File "test_eth.py", line 135, in <module>
    test_caller(chosen_log, chosen_snapshot, on_val)
  File "test_eth.py", line 75, in test_caller
    model = KernelPointFCNN(dataset.flat_inputs, config)
  File "/home/Gilgamesh/D3Feat/models/KPFCNN_model.py", line 130, in __init__
    self.out_features, self.out_scores = assemble_FCNN_blocks(self.anchor_inputs, self.config, self.dropout_prob)
  File "/home/Gilgamesh/D3Feat/models/D3Feat.py", line 15, in assemble_FCNN_blocks
    F = assemble_CNN_blocks(inputs, config, dropout_prob)
  File "/home/Gilgamesh/D3Feat/models/network_blocks.py", line 1099, in assemble_CNN_blocks
    training)
  File "/home/Gilgamesh/D3Feat/models/network_blocks.py", line 600, in resnetb_strided_block
    shortcut = ind_max_pool(features, inputs['pools'][layer_ind])
  File "/home/Gilgamesh/D3Feat/models/network_blocks.py", line 63, in ind_max_pool
    pool_features = tf.gather(x, inds, axis=0)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 2675, in gather
    return gen_array_ops.gather_v2(params, indices, axis, name=name)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3332, in gather_v2
    "GatherV2", params=params, indices=indices, axis=axis, name=name)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
    op_def=op_def)
  File "/home/Gilgamesh/anaconda3/envs/D3Feat/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
    self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[36768,117,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node KernelPointNetwork/layer_1/resnetb_strided_1/shortcut/GatherV2 (defined at /home/Gilgamesh/D3Feat/models/network_blocks.py:63)  = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](KernelPointNetwork/layer_1/resnetb_strided_1/shortcut/concat, IteratorGetNext/_435, KernelPointNetwork/GatherV2/axis)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[{{node KernelPointNetwork/l2_normalize/_467}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2765_KernelPointNetwork/l2_normalize", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


2020-08-01 18:23:39.186317: W tensorflow/core/kernels/data/generator_dataset_op.cc:78] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
         [[{{node PyFunc}} = PyFunc[Tin=[DT_INT64], Tout=[DT_INT64], token="pyfunc_5"](arg0)]]
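A quick sanity check on the failing request above: the tensor in the OOM message has shape [36768, 117, 256] in float32, so a single contiguous allocation of roughly 4.1 GiB is being asked for, which no free region in the fragmented allocator map can serve. The arithmetic can be checked in plain Python (illustration only, no TensorFlow needed):

```python
# Back-of-the-envelope size of the tensor the OOM message reports.
# float32 = 4 bytes per element.
def tensor_bytes(shape, bytes_per_elem=4):
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_elem

oom_shape = (36768, 117, 256)  # shape taken from the OOM message above
size = tensor_bytes(oom_shape)
print(size)          # 4405100544 bytes
print(size / 2**30)  # ~4.10 GiB in one contiguous chunk
```

For a single allocation this large, the usual workarounds are downsampling the input cloud more aggressively (e.g. a larger voxel size for the first subsampling step, `first_subsampling_dl` in KPConv-style configs) or running on a GPU with more free memory.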

Error during training on 3DMatch dataset

Hello, and thank you very much for this nice work.
I'm trying to train a model on the 3DMatch dataset, but after training for a while the run crashes during validation with the following error:

[1059  530   38 ...  631  144  924]
Validation : 0.0% (timings : 58.95 0.00)
2022-02-07 16:05:30.380600: E tensorflow/stream_executor/dnn.cc:613] CUDNN_STATUS_NOT_SUPPORTED
in tensorflow/stream_executor/cuda/cuda_dnn.cc(3935): 'cudnnBatchNormalizationForwardInference( cudnn.handle(), mode, &one, &zero, x_descriptor.handle(), x.opaque(), x_descriptor.handle(), y->opaque(), scale_offset_descriptor.handle(), scale.opaque(), offset.opaque(), estimated_mean.opaque(), maybe_inv_var, epsilon)'
Traceback (most recent call last):
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.OutOfRangeError: 2 root error(s) found.
  (0) Out of range: End of sequence
         [[{{node IteratorGetNext}}]]
  (1) Out of range: End of sequence
         [[{{node IteratorGetNext}}]]
         [[optimizer/gradients/KernelPointNetwork/Sum_1_grad/Fill/value/_571]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'IteratorGetNext':
  File "training_3DMatch.py", line 175, in <module>
    dataset.init_input_pipeline(config)
  File "/home/rambo/ws_benji/D3Feat/datasets/common.py", line 770, in init_input_pipeline
    self.flat_inputs = iter.get_next()
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/data/ops/iterator_ops.py", line 429, in get_next
    name=name)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_dataset_ops.py", line 2518, in iterator_get_next
    output_shapes=output_shapes, name=name)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 513, in new_func
    return func(*args, **kwargs)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()
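For context, the `OutOfRangeError` above is not itself the bug: in TF 1.x a `tf.data` iterator signals the end of an epoch by raising `tf.errors.OutOfRangeError` out of `sess.run`, and the training/validation loop is expected to catch it. A plain-Python sketch of that loop pattern (the `OutOfRangeError` class and `make_runner` here are illustrative stand-ins, not D3Feat code):

```python
# Plain-Python stand-in for a TF1 validation loop: the iterator ends an
# epoch by raising OutOfRangeError from the run call, and the loop
# catches it rather than crashing.
class OutOfRangeError(Exception):
    """Stand-in for tf.errors.OutOfRangeError."""

def make_runner(batches):
    it = iter(batches)
    def run():  # plays the role of sess.run(ops, feed_dict)
        try:
            return next(it)
        except StopIteration:
            raise OutOfRangeError()
    return run

def validate(run):
    results = []
    while True:
        try:
            results.append(run())
        except OutOfRangeError:
            break  # dataset exhausted: validation epoch is over
    return results

print(validate(make_runner(["batch0", "batch1"])))  # ['batch0', 'batch1']
```

The actual crash is the separate cuDNN failure reported next ("During handling of the above exception, another exception occurred").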


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
  (0) Internal: cuDNN launch failure : input shape ([68418,64,1,1])
         [[{{node KernelPointNetwork/layer_0/simple_0/batch_normalization/cond/FusedBatchNormV3_1}}]]
         [[loss/cdist/Sqrt/_1141]]
  (1) Internal: cuDNN launch failure : input shape ([68418,64,1,1])
         [[{{node KernelPointNetwork/layer_0/simple_0/batch_normalization/cond/FusedBatchNormV3_1}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "training_3DMatch.py", line 207, in <module>
    trainer.train(model, dataset)
  File "/home/rambo/ws_benji/D3Feat/utils/trainer.py", line 387, in train
    self.validation(model, dataset)
  File "/home/rambo/ws_benji/D3Feat/utils/trainer.py", line 441, in validation
    desc_loss, det_loss, accuracy, ave_d_pos, ave_d_neg, dists, scores, anc_key, pos_key = self.sess.run(ops, {model.dropout_prob: 1.0})
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
  (0) Internal: cuDNN launch failure : input shape ([68418,64,1,1])
         [[node KernelPointNetwork/layer_0/simple_0/batch_normalization/cond/FusedBatchNormV3_1 (defined at /home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
         [[loss/cdist/Sqrt/_1141]]
  (1) Internal: cuDNN launch failure : input shape ([68418,64,1,1])
         [[node KernelPointNetwork/layer_0/simple_0/batch_normalization/cond/FusedBatchNormV3_1 (defined at /home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'KernelPointNetwork/layer_0/simple_0/batch_normalization/cond/FusedBatchNormV3_1':
  File "training_3DMatch.py", line 189, in <module>
    model = KernelPointFCNN(dataset.flat_inputs, config)
  File "/home/rambo/ws_benji/D3Feat/models/KPFCNN_model.py", line 130, in __init__
    self.out_features, self.out_scores = assemble_FCNN_blocks(self.anchor_inputs, self.config, self.dropout_prob)
  File "/home/rambo/ws_benji/D3Feat/models/D3Feat.py", line 15, in assemble_FCNN_blocks
    F = assemble_CNN_blocks(inputs, config, dropout_prob)
  File "/home/rambo/ws_benji/D3Feat/models/network_blocks.py", line 1099, in assemble_CNN_blocks
    training)
  File "/home/rambo/ws_benji/D3Feat/models/network_blocks.py", line 242, in simple_block
    training))
  File "/home/rambo/ws_benji/D3Feat/models/network_blocks.py", line 160, in batch_norm
    training=training)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 330, in new_func
    return func(*args, **kwargs)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/layers/normalization.py", line 327, in batch_normalization
    return layer.apply(inputs, training=training)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 330, in new_func
    return func(*args, **kwargs)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 1700, in apply
    return self.__call__(inputs, *args, **kwargs)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/layers/base.py", line 548, in __call__
    outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 854, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py", line 234, in wrapper
    return converted_call(f, options, args, kwargs)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py", line 439, in converted_call
    return _call_unconverted(f, args, kwargs, options)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py", line 330, in _call_unconverted
    return f(*args, **kwargs)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/layers/normalization.py", line 167, in call
    return super(BatchNormalization, self).call(inputs, training=training)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/normalization.py", line 710, in call
    outputs = self._fused_batch_norm(inputs, training=training)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/normalization.py", line 565, in _fused_batch_norm
    training, _fused_batch_norm_training, _fused_batch_norm_inference)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/tf_utils.py", line 59, in smart_cond
    pred, true_fn=true_fn, false_fn=false_fn, name=name)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/smart_cond.py", line 59, in smart_cond
    name=name)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 513, in new_func
    return func(*args, **kwargs)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 1235, in cond
    orig_res_f, res_f = context_f.BuildCondBranch(false_fn)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 1061, in BuildCondBranch
    original_result = fn()
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/normalization.py", line 562, in _fused_batch_norm_inference
    data_format=data_format)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/ops/nn_impl.py", line 1502, in fused_batch_norm
    name=name)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_nn_ops.py", line 4620, in fused_batch_norm_v3
    name=name)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 513, in new_func
    return func(*args, **kwargs)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "/home/rambo/anaconda3/envs/tf_n1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()

2022-02-07 16:05:32.189426: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.

The error comes from this section of code:
[screenshot of the code section]

Do you have any idea why is this happening?

Thanks a lot,
Benjamin
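For context, cuDNN launch failures in TF 1.x are often caused by the GPU running out of memory mid-run or by a CUDA/cuDNN version mismatch. One common mitigation (an assumption, not a confirmed fix for this particular trace) is to enable GPU memory growth where the session is created; in this repo that would presumably be where `utils/trainer.py` builds its `tf.Session`:

```python
# TF 1.x session-config fragment. Assumption: applied wherever the trainer
# constructs its tf.Session (e.g. in utils/trainer.py).
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
sess = tf.Session(config=config)
```

If the failure persists with memory growth enabled, checking that the installed cuDNN build matches the one TF 1.15 was compiled against is the other usual suspect.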

Evaluation results on ETH are different from the paper

Hi @XuyangBai, I ran the test code on the ETH dataset with the 3DMatch pre-trained model, but I get the following results, which differ from the paper:


[80.97826086956522, 70.93425605536332, 46.95652173913044, 28.000000000000004]
Avergae Matching Recall: 62.13183730715287%
All 8 scene, average recall: 56.71725966601474%
All 8 scene, average num inliers: 8.667818363226777
All 8 scene, average num inliers ratio: 0.1517589764200676

I have changed the following codes:

In tester.py:
# for i, var in enumerate(my_vars):
#     print(i, var.name)
for v in my_vars:
    if 'kernel_points' in v.name:
        rescale_op = v.assign(tf.multiply(v, 0.0625 / 0.03))
        self.sess.run(rescale_op)

In test_eth.py:
# Should change the parameters of the 3DMatch model to adapt to ETH
#import pdb
#pdb.set_trace()
config.first_subsampling_dl = 0.0625
config.dataset = 'ETH'
#config.KP_extent = 2

And I use the log: chosen_log = 'results/Log_circleloss'

The results above were run with 250 predicted keypoints. I also ran the tests with 5000 predicted keypoints, getting the following results:


[79.34782608695652, 65.39792387543253, 36.52173913043478, 31.2]
Avergae Matching Recall: 58.345021037868165%
All 8 scene, average recall: 53.11687227320595%
All 8 scene, average num inliers: 114.00879929304588
All 8 scene, average num inliers ratio: 0.13993148638504888

Could you please give me some hints about this? Thank you very much!

Architecture issues

Thanks for your great work! However, I am very curious about the sentence in the paper: "All layers except the last one are followed by batch normalization and ReLU". I wonder why BN and ReLU are not adopted as in 2D CNNs. Looking forward to your kind reply!

RANSAC

hi Xuyang,

For Table 2 in your paper, may I ask which RANSAC you use for FCGF features? Is it the same as this:

result = open3d.registration_ransac_based_on_feature_matching(

The result here is much better than that reported in the FCGF paper. I also tried this RANSAC but could only get 83% recall; with mutual check I get 86.3% recall, which is close to that reported in your work.

Best,
Shengyu

File size limit of the data

Hi, how do you use Open3D to visualize the KITTI dataset: do you convert it to .ply format or use the .bin files directly? In addition, are there any limits on the file size or the number of points in the data used? Thanks.

segmentation fault during training 3D match

hello! Thanks for your sharing.

The code seems to raise an error while running the following line when I attempt to train on 3DMatch:
train_data = train_data.map(map_func=map_func, num_parallel_calls=self.num_threads)

Looking forward to your kind reply!

Figure2 metadata

hi Xuyang,

Do you mind sharing the metadata for Figure 2 in your paper?

best,
Shengyu

A question about downsampling of ETH DataSet

Hi,
Thanks for your amazing work.
I have come across a problem and want to ask for your help. When you preprocessed the ETH dataset, you used a downsample size of 0.0625, but when we tested it, our GPU (22945 MiB) ran out of memory. After debugging, we found that the number of points after 0.0625 subsampling was too large, and that the number of points varies widely across frames. How did you solve this?

A question about how to get the result of "rotated" on 3DMatch.

I'd like to do some comparative experiments based on your work. I saw that your FMR on 3DMatch includes a "rotated" result, and I want to use the same setting as your paper when producing it. How did you get this result? Is a random 4x4 transformation applied to the original point cloud, with the ground truth transformed accordingly?

Looking forward to your reply, thank you.
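For what it's worth, here is a minimal sketch of one plausible "rotated" protocol matching that description (my assumption, not the authors' confirmed code): apply a random rotation R to one cloud and compose the ground truth with R⁻¹ (which equals Rᵀ for a pure rotation), so the composed transform still aligns the pair. A pure-Python illustration, with a z-axis rotation standing in for a fully random one:

```python
import math
import random

def rot_z(theta):
    """3x3 rotation about the z-axis (standing in for a fully random rotation)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, p):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

random.seed(0)
R = rot_z(random.uniform(0.0, 2.0 * math.pi))
src = [(1.0, 2.0, 3.0), (-0.5, 0.25, 1.5)]   # toy source cloud
src_rot = [apply(R, p) for p in src]          # the "rotated" input cloud

# For a pure rotation, R^-1 = R^T; composing the ground truth with R^-1
# keeps the pair aligned: (T_gt o R^-1)(R p) = T_gt(p).
R_inv = transpose(R)
undone = [apply(R_inv, p) for p in src_rot]   # recovers the original points
```

The same composition carries over to a full 4x4 transform by embedding R in its top-left 3x3 block.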

Increasing the receptive field on ETH dataset

Hi Xuyang,

in one of the issues you wrote that the receptive field can be increased without increasing the voxel size. Could you elaborate briefly on that: do you just increase conv_radius, or how exactly do you test generalizability to the ETH dataset?

Best
Zan

And for our method, we are also able to increase the receptive field of each point without changing the voxel size (by scaling up the grid size of each layer). Originally posted by @XuyangBai in #1 (comment)

Can this be used to enhance object detection?

Thanks for your open-source code. Can this work be used to merge frames of a time series to get higher detection accuracy? For example, I have a 16-beam lidar; can I use this work to combine two frames to simulate a 32-beam lidar?

ETH data

Hello,
Thank you for your innovative work.
We haven't changed any parameters; why is the recall on the ETH dataset only 1%?
Thanks

How to evaluate ETH dataset

Hi @XuyangBai
Sorry to bother you again! I am trying to evaluate the ETH dataset using results tested with the KITTI model. I ran the script D3Feat/geometric_registration_eth/evaluate_eth.py, and it reports the following errors:

multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/home/ubuntu/.conda/envs/tia36/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/home/ubuntu/.conda/envs/tia36/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/disk/tia/D3Feat/geometric_registration_eth/evaluate_eth.py", line 100, in deal_with_one_scene
    os.mkdir(f"/disk/tia/D3Feat/geometric_registration_eth/pred_result/{scene}/")
FileNotFoundError: [Errno 2] No such file or directory: '/disk/tia/D3Feat/geometric_registration_eth/pred_result/wood_summer/'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/disk/tia/D3Feat/geometric_registration_eth/evaluate_eth.py", line 136, in <module>
    pool.map(func, scene_list)
  File "/home/ubuntu/.conda/envs/tia36/lib/python3.6/multiprocessing/pool.py", line 266, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/ubuntu/.conda/envs/tia36/lib/python3.6/multiprocessing/pool.py", line 644, in get
    raise self._value
FileNotFoundError: [Errno 2] No such file or directory: '/disk/tia/D3Feat/geometric_registration_eth/pred_result/wood_summer/'

I inspected the function deal_with_one_scene and found that the pred_result folder is not generated:

    if not os.path.exists(f"pred_result/{scene}/"):
        os.mkdir(f"pred_result/{scene}/")

So, how to solve this problem?
Thank you very much!
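A likely cause, judging from the traceback (my assumption): os.mkdir fails because the parent pred_result directory does not exist, and os.mkdir does not create intermediate directories. os.makedirs does:

```python
import os
import tempfile

# Sketch: create the full pred_result/<scene> path in one call.
# tempfile stands in for the repository directory in this illustration.
base = tempfile.mkdtemp()
scene = "wood_summer"  # scene name taken from the traceback above
target = os.path.join(base, "pred_result", scene)
os.makedirs(target, exist_ok=True)  # also creates the missing parent; no error if it exists
```

With exist_ok=True, the existence check in evaluate_eth.py becomes unnecessary as well.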

9753 nan nan 0.00 nan nan 1698.001 16280.8

Hi @XuyangBai

I'm sorry to disturb you with the following question. During 3DMatch training, at step 9753 I see:

Steps desc_loss det_loss train_accuracy d_pos d_neg time memory
9753 nan nan 0.00 nan nan 1698.001 16280.8.

What does it mean when desc_loss and det_loss are nan and train_accuracy becomes 0.00? Is this some kind of error in the training process?

Thank you very much and waiting for your reply!

Issue evaluating 3DMatch -> neighbors = sess.run(ops)

Hello,
Thank you very much for your work.
I'm trying to evaluate the 3DMatch dataset but I'm getting the following error:

File: datasets/common.py - 641

[screenshots of the error]

Do you have any idea why this is occurring?

Thank you very much,
Benjamin

Where to download the ply file ?

datasets/cal_overlap.py uses .ply files instead of the RGB-D images downloaded from the 3DMatch dataset. How can I get the .ply files?

Why can't I get the first dimension of the features' shape?

Thanks for your work!
In D3Feat.py, I print the shapes:

def assemble_FCNN_blocks(inputs, config, dropout_prob):
    # First get features from CNN  
    F = assemble_CNN_blocks(inputs, config, dropout_prob)
    features = F[-1]
    for f in F:
        print("the feature shape ", f.shape)

but get the following:
[screenshot]
However, the feature dimensions given in the paper are 64, 128, 256, 512, 1024.

Matching point clouds with different scales

Hello,
Thanks for your group's excellent work! I want to use D3Feat to match 2 point clouds which are generated by 3D reconstruction, which means their scales will be different. I am wondering if D3Feat works for matching point clouds with different scales? Thanks!

Question about score for FCGF network

Hi,

Thank you very much for your work. I tried to add your detector to FCGF, because your paper mentions that the detector helps FCGF's performance. During training, after a few iterations the detector loss reaches 0 and the score drops to 0 as well; the same happens for the positive loss. Did you come across the same problem with FCGF? I am looking forward to your reply. Thank you very much!

IndexError:run demo with 'kitti' model

Hi, I want to run the demo with the KITTI model, so I changed the path to 'results_kitti/Log_11011605/'. However, an error occurred on this line:
IndexError: tuple index out of range
I would appreciate it if you could help me.

tensorflow.python.framework.errors_impl.NotFoundError: tf_custom_ops/tf_neighbors.so: cannot open shared object file: No such file or directory

Hi @XuyangBai
I have completed training on the KITTI dataset. After generating the descriptors and detection scores, I ran the script python evaluate_kitti_our.py D3Feat [timestr of the model] to calculate the keypoint repeatability, but an error occurs:

Traceback (most recent call last):
  File "evaluate_kitti_our.py", line 9, in <module>
    from datasets.KITTI import KITTIDataset
  File "../datasets/KITTI.py", line 13, in <module>
    from datasets.common import Dataset
  File "../datasets/common.py", line 34, in <module>
    tf_neighbors_module = tf.load_op_library('tf_custom_ops/tf_neighbors.so')
  File "/home/ubuntu/.conda/envs/tia36/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 60, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: tf_custom_ops/tf_neighbors.so: cannot open shared object file: No such file or directory

However, no errors were reported during the entire KITTI training process, which should prove that my compilation is correct and that the file exists at D3Feat/tf_custom_ops/tf_neighbors.so.

How can I make python evaluate_kitti_our.py D3Feat [timestr of the model] find the D3Feat/tf_custom_ops/tf_neighbors.so file?
Could you please help me solve this problem?
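One workaround, assuming the cause is that tf.load_op_library('tf_custom_ops/tf_neighbors.so') resolves the path relative to the current working directory (so it only works when scripts are launched from the repository root): build an absolute path from the importing file's own location instead. The directory layout below is my assumption about where the snippet would live:

```python
import os

# Sketch: resolve the custom-op library relative to this file instead of the CWD.
# Assumption: this snippet sits in datasets/common.py, one level below the repo root.
here = os.path.dirname(os.path.abspath(globals().get("__file__", ".")))
lib_path = os.path.join(os.path.dirname(here), "tf_custom_ops", "tf_neighbors.so")
# tf_neighbors_module = tf.load_op_library(lib_path)  # pass the absolute path here
```

Alternatively, simply launching the evaluation script from the repository root (where tf_custom_ops/ sits) should also let the existing relative path resolve.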

KITTI and 3DMatch, unalign or align?

Hey Xuyang,

I noticed that you use unaligned point clouds for KITTI training, and the aligned point clouds are only used as backup_points for computing the false-negative mask; but for 3DMatch, you use aligned point clouds.

In another issue you pointed out that you aligned the point clouds in 3DMatch and added rotation/translation augmentation to achieve rotation invariance. Could I ask why there is a difference between 3DMatch and KITTI?

Best,
Zhengdi

USIP results in Figure 3

Hi, can you explain the performance difference of USIP on KITTI in Fig. 3 of your paper (15%-20%) vs. Fig. 4 of the USIP paper (30%-60%)? The only hint I can find in your paper is:

since USIP and D3Feat use different processing and splitting strategies and USIP requires surface normal and curvature as input, the results are not directly comparable.

However, to me this doesn't fully answer the question. Also, as I remember (not 100% sure), USIP doesn't need normals and curvature.

Besides, probably as a drawback of their approach, the performance of USIP is sensitive to the two parameters M (number of samples) and K (number of nearest neighbors). I assume this is especially the case if you take their pre-trained model and apply it to another dataset like 3DMatch. Did you try to select proper M, K values for a fair comparison in the first plot of Fig. 3?

How can I obtain the original 3DMatch dataset

Hi, your datasets/cal_overlap.py handles the original 3DMatch dataset. Could you please share this original dataset or provide a download link? I cannot find the "scene_list_{split}.txt" file. Is the dataset the one provided officially as "http://vision.princeton.edu/projects/2016/3DMatch/downloads/rgbd-datasets/download.sh" and
"http://vision.princeton.edu/projects/2016/3DMatch/downloads/rgbd-datasets/split.txt" in
"http://3dmatch.cs.princeton.edu/#geometric-registration-benchmark"?
[screenshot]

Demo problem

Sorry to bother you. I have a question about demo_registration.py: when I run it, it always fails with "Segmentation fault". What is wrong with my setup?
The code was run with CUDA 10.0, TensorFlow 1.12, on a Tesla M40. The log is as follows:
Instructions for updating:
keep_dims is deprecated, use keepdims instead
2020-06-21 13:49:59.269588: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2020-06-21 13:50:03.450215: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: Tesla M40 major: 5 minor: 2 memoryClockRate(GHz): 1.112
pciBusID: 0000:02:00.0
totalMemory: 11.18GiB freeMemory: 11.07GiB
2020-06-21 13:50:03.513556: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 1 with properties:
name: Tesla M40 major: 5 minor: 2 memoryClockRate(GHz): 1.112
pciBusID: 0000:82:00.0
totalMemory: 11.18GiB freeMemory: 11.07GiB
2020-06-21 13:50:03.513649: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2020-06-21 13:50:04.343520: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-21 13:50:04.343572: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1
2020-06-21 13:50:04.343579: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N N
2020-06-21 13:50:04.343584: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: N N
2020-06-21 13:50:04.344708: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10709 MB memory) -> physical GPU (device: 0, name: Tesla M40, pci bus id: 0000:02:00.0, compute capability: 5.2)
2020-06-21 13:50:04.345655: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10709 MB memory) -> physical GPU (device: 1, name: Tesla M40, pci bus id: 0000:82:00.0, compute capability: 5.2)
End of train dataset

self.neighborhood: [37 30 34 36 37]
2020-06-21 13:50:08.567410: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2020-06-21 13:50:08.567557: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-21 13:50:08.567571: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1
2020-06-21 13:50:08.567578: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N N
2020-06-21 13:50:08.567584: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: N N
2020-06-21 13:50:08.568705: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10709 MB memory) -> physical GPU (device: 0, name: Tesla M40, pci bus id: 0000:02:00.0, compute capability: 5.2)
2020-06-21 13:50:08.568985: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10709 MB memory) -> physical GPU (device: 1, name: Tesla M40, pci bus id: 0000:82:00.0, compute capability: 5.2)
Model restored from results/Log_contraloss/snapshots/snap-54
Segmentation fault

Can you give me some advice? Thx.
