
LISRD - Local Invariance Selection at Runtime for Descriptors

This repository contains the implementation of the paper: Online Invariance Selection for Local Feature Descriptors, R. Pautrat, V. Larsson, M. Oswald and M. Pollefeys (Oral at ECCV 2020).

LISRD offers the possibility to leverage descriptors with different invariances (e.g. invariant or variant to rotation and/or illumination) and to perform an online selection of the most adapted invariance when matching two images.
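In spirit, the online selection can be illustrated with a simplified NumPy sketch (this is not the actual LISRD implementation: it assumes a single meta descriptor per image and per invariance, whereas the real model uses regional meta descriptors; all names below are illustrative):

```python
import numpy as np

def select_and_match(descs1, descs2, meta1, meta2):
    """Weight the similarity of each invariance-specific descriptor set by
    the similarity of the corresponding meta descriptors, then sum.
    descs1/descs2: dicts mapping invariance name -> (N, D) unit-norm descriptors.
    meta1/meta2: dicts mapping invariance name -> (K,) unit-norm meta descriptors.
    Returns the combined (N1, N2) similarity matrix."""
    total = None
    for inv in descs1:
        local_sim = descs1[inv] @ descs2[inv].T      # per-keypoint similarity
        weight = float(meta1[inv] @ meta2[inv])      # relevance of this invariance
        contrib = weight * local_sim
        total = contrib if total is None else total + contrib
    return total

def mutual_nearest_neighbors(sim):
    """Keep only matches that are mutual nearest neighbors in the similarity matrix."""
    nn12 = sim.argmax(axis=1)
    nn21 = sim.argmax(axis=0)
    ids1 = np.arange(sim.shape[0])
    mask = nn21[nn12] == ids1
    return np.stack([ids1[mask], nn12[mask]], axis=1)
```

When the two images share an invariance regime (e.g. both upright), the corresponding meta descriptors agree, so that descriptor variant dominates the combined similarity.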

The figure below shows example matches predicted with LISRD-SIFT, which selects between SIFT (rotation invariant) and Upright SIFT (rotation variant) descriptors: [Figure: demo_lisrd_sift]

Usage

Installation

Clone the repository with its submodule:

git clone --recurse-submodules https://github.com/rpautrat/LISRD.git

We recommend using this code in a Python environment (e.g. venv or conda). The following script installs the necessary requirements and installs the repository as a local Python package:

make install

Training your own model

All the training parameters should be defined in a configuration file located in the folder lisrd/configs. To train your own model, run the following generic command from the root folder:

python -m lisrd.experiment train <path to your config file> <path to your experiment>

For example, training LISRD with the 4 types of invariance used in the paper would be:

python -m lisrd.experiment train lisrd/configs/lisrd.yaml ~/Documents/experiments/My_experiment

Use the config file lisrd/configs/lisrd_sift.yaml to instead train LISRD-SIFT that chooses between SIFT and Upright SIFT descriptors.

Pretrained models

We provide two pretrained models, lisrd_vidit.pth and lisrd_aachen.pth.

How to use it

We provide a notebook showing how to use the trained models of LISRD. Additionally, lisrd/export_features.py is a script to export the LISRD descriptors with either SIFT or SuperPoint keypoints on a given set of images. It can be used as follows:

python -m lisrd.export_features <path to a txt file listing all your images> <name of the model (lisrd or lisrd_sift)> --checkpoint <path to checkpoint> --keypoints <type of keypoints (sift or superpoint)> --num_kp <number of keypoints (default: 2000)>
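The command above can also be assembled programmatically. The helpers below are a hypothetical convenience wrapper (not part of the repository) that builds the image-list file and the argument list for subprocess.run:

```python
import pathlib

def write_image_list(image_dir, list_path, pattern="*.jpg"):
    """Write one image path per line, the format expected by the txt file argument."""
    paths = sorted(pathlib.Path(image_dir).glob(pattern))
    pathlib.Path(list_path).write_text("\n".join(str(p) for p in paths))
    return paths

def build_export_command(image_list, model, checkpoint,
                         keypoints="superpoint", num_kp=2000):
    """Assemble the lisrd.export_features invocation as an argument list."""
    assert model in ("lisrd", "lisrd_sift")
    assert keypoints in ("sift", "superpoint")
    return ["python", "-m", "lisrd.export_features", str(image_list), model,
            "--checkpoint", str(checkpoint),
            "--keypoints", keypoints, "--num_kp", str(num_kp)]
```

The resulting list can be passed to `subprocess.run(cmd, check=True)` from the repository root.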

Results on the RDNIM dataset

The Rotated Day-Night Image Matching (RDNIM) dataset originates from the DNIM dataset and has been augmented with homographic warps with 50% of the images including rotations. The images used for evaluation in the paper are available here.
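As an illustration of such rotation-including warps (a hypothetical sketch, not the actual code used to build RDNIM), a pure-rotation homography about a given image point can be constructed as:

```python
import numpy as np

def rotation_homography(angle_deg, center):
    """3x3 homography rotating the image plane about `center` by `angle_deg`."""
    a = np.deg2rad(angle_deg)
    c, s = np.cos(a), np.sin(a)
    cx, cy = center
    # Translate the center to the origin, rotate, translate back.
    T = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]], float)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)
    return np.linalg.inv(T) @ R @ T

def warp_points(H, pts):
    """Apply homography H to an (N, 2) array of points."""
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]
```

Composing such rotations with random perspective distortions yields the kind of homographic augmentation described above.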

Comparison to the state of the art on the RDNIM dataset, using SuperPoint keypoints for all methods and a correctness threshold of 3 pixels:

| Method | Homography est. (day ref.) | Precision (day ref.) | Recall (day ref.) | Homography est. (night ref.) | Precision (night ref.) | Recall (night ref.) |
|---|---|---|---|---|---|---|
| Root SIFT | 0.134 | 0.184 | 0.125 | 0.186 | 0.239 | 0.182 |
| HardNet | 0.249 | 0.225 | 0.224 | 0.325 | 0.359 | 0.365 |
| SOSNet | 0.226 | 0.218 | 0.226 | 0.252 | 0.288 | 0.296 |
| SuperPoint | 0.178 | 0.191 | 0.214 | 0.235 | 0.259 | 0.296 |
| D2-Net | 0.124 | 0.196 | 0.145 | 0.195 | 0.265 | 0.218 |
| R2D2 | 0.190 | 0.173 | 0.180 | 0.229 | 0.237 | 0.237 |
| GIFT | 0.225 | 0.155 | 0.149 | 0.294 | 0.240 | 0.229 |
| LISRD (ours) | 0.318 | 0.406 | 0.439 | 0.391 | 0.488 | 0.520 |
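The homography estimation score counts the fraction of image pairs for which the estimated homography is "correct". Sketched under the assumption that the standard corner-error criterion is used (an illustration, not the exact evaluation code):

```python
import numpy as np

def homography_correct(H_est, H_gt, w, h, thresh=3.0):
    """An estimated homography counts as correct when the mean distance between
    the four image corners warped by H_est and by H_gt is below `thresh` pixels."""
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], float)

    def warp(H, pts):
        pts_h = np.concatenate([pts, np.ones((4, 1))], axis=1) @ H.T
        return pts_h[:, :2] / pts_h[:, 2:3]

    err = np.linalg.norm(warp(H_est, corners) - warp(H_gt, corners), axis=1).mean()
    return err < thresh
```

With the 3-pixel threshold used in the table, even small residual misalignments of the estimated warp count as failures.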

[Figure: mma_rdnim — mean matching accuracy on the RDNIM dataset]

Bibtex

If you use this code in your project, please consider citing the following paper:

@InProceedings{Pautrat_2020_ECCV,
    author = {Pautrat, Rémi and Larsson, Viktor and Oswald, Martin R. and Pollefeys, Marc},
    title = {Online Invariance Selection for Local Feature Descriptors},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    year = {2020},
}

lisrd's People

Contributors

rpautrat

lisrd's Issues

Hi, I have a problem converting LISRD to ONNX

LISRD2ONNX
Problem 1: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if (h % tile != 0) or (w % tile != 0): # should be divisible by tile
The first check is not the problem.

ONNX2NCNN
Problem 2: the ONNX-to-NCNN converter repeatedly reports the following unsupported operators (the same block of messages recurs many times in the full log):

Shape not supported yet!
Unsupported unsqueeze axes !
Unknown data type 0
Expand not supported yet!
Equal not supported yet!
Where not supported yet!

About model lisrd_sift

Thank you very much for the code and models. I encountered the following problem when using the lisrd_sift option:

File "g:/project/3d/LISRD-master/lisrd/export_features.py", line 88, in export
func.grid_sample(torch.Tensor(descs[k]), grid_points),
File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\nn\functional.py", line 4304, in grid_sample
return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
RuntimeError: grid_sampler(): expected grid to have size 1 in last dimension, but got grid with sizes [1, 2000, 1, 2]

How can I solve this issue?
Looking forward to your reply.

About model size

Thank you very much for the code and models. Could you explain why the size of the model lisrd_aachen.pth is smaller than that of the model lisrd_vidit.pth?

About get keypoints method

Comparing SuperPoint + LISRD with ALIKE + LISRD, I feel that ALIKE obtains keypoints in a better way than SuperPoint. I did not expect this result; you could try ALIKE to test the robustness of your network.

Some questions about meta descriptors

Hi, @rpautrat,
I am very interested in your excellent work, and I have run some additional tests with it. I tried to use the meta descriptors (with the tile size set to 1×1) for an image retrieval task, but the results were not good. Do you think it is reasonable to use this network for retrieval?
I would really appreciate your advice. Thanks in advance, and have a nice day.

Evaluation of SuperPoint on datasets

Sorry to bother you.
I have already evaluated the pretrained models on the HPatches and RDNIM datasets.
If I want to compare the performance of SuperPoint on these datasets, how can I do it? Could you give me some details?
Thanks!

How do i set the VIDIT dataset?

Hi, the following is my error message.

[09/03/2021 10:30:01 INFO] Running command TRAIN
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\LISRD-master\lisrd\experiment.py", line 130, in <module>
args.func(config, exper_dir, args)
File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\LISRD-master\lisrd\experiment.py", line 39, in _train
config['data']['name'])(config['data'], device)
File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\LISRD-master\lisrd\datasets\mixed_dataset.py", line 32, in __init__
device))
File "C:\Users\User\Desktop\python\STUDY_CNN_imagematching\LISRD-master\lisrd\datasets\vidit.py", line 27, in __init__
files = np.sort(files).reshape(300, 40)
ValueError: cannot reshape array of size 11999 into shape (300,40)

There are many datasets here; which one is suitable to use, and what should the folder structure be?

Hpatches

Hello, I am studying your code, but I could not find the code for Table 2 (comparison to the state of the art on HPatches). Could you share the code and dataset for this part? Thank you very much!

Matching problem

Hello, sorry to disturb you.
I tried evaluating the same two images with both superpoint-master and LISRD.
For superpoint-master: python match_features_demo.py sp_v6 $DATA_PATH/HPatches/i_pool/1.ppm $DATA_PATH/i_pool/6.ppm
For LISRD: demo_lisrd.ipynb
However, the matching result of SuperPoint is much better than that of demo_lisrd: LISRD produces only a few matches, and the accuracy is not high. Maybe I made a mistake somewhere, but I did not expect this result, since LISRD should do better under day-night or rotation changes.
Could you give me some suggestions? Thank you very much.


hpatches_evaluation

Hello, when I run your evaluation notebook notebooks/hpatches_evaluation.ipynb, I get the error: No such file or directory: '/home/victor/LISRD/Documents/datasets/Hpatches_sequences/i_nijmegen/1.ppm.kornia_sift'. Where can I get these files?

Pointcloud

Is it possible to find similarities in a point cloud?

Using pretrained models

Hi, I am new to this field, and I would like to use the pretrained models to see your work's results on my own images. Does that mean I do not need to train my own model? Could you give me some advice on how to use the command?
"python -m lisrd.export_features <name of the model (lisrd or lisrd_sift)> --checkpoint --keypoints <type of keypoints (sift or superpoint)> --num_kp <number of keypoints (default: 2000)>"
I have not found any file or folder named checkpoint; how can I get this file?

Converting the code to C++

Hello, thank you very much for your code. Could you convert the post-processing part of your code into C++? I have encountered many problems with it and have no way to solve them.
