This repository is forked from TFeat. Additionally, a C++ frontend / API example for PyTorch is provided here.
If you want to use this, do the following:

- Export a model using `export_model.ipynb`. You now have `tfeat_model.pt`, which is loaded by the C++ example.
- Download and unzip `libtorch`; this is required by the C++ frontend of PyTorch:
  ```
  cd cpp_example && bash setup_libtorch.sh
  ```
- Compile `tfeat_demo.cpp` using `CMakeLists.txt`:
  ```
  mkdir build && cd build
  cmake .. && make
  ```
- Execute `tfeat_demo` in `build`, e.g.:
  ```
  ./tfeat_demo ../../tfeat_model.pt ../../imgs/v_churchill/1.ppm ../../imgs/v_churchill/6.ppm
  ```
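The export step above can be sketched in Python. This is a minimal sketch, not the actual contents of `export_model.ipynb`: `TinyNet` below is a stand-in for `tfeat_model.TNet`, used so the snippet is self-contained.

```python
import torch

# Sketch of exporting a model as TorchScript so the C++ frontend can load it.
# TinyNet is a hypothetical stand-in; the repository traces tfeat_model.TNet.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 8, kernel_size=3)

    def forward(self, x):
        return self.conv(x)

model = TinyNet().eval()
example = torch.rand(1, 1, 32, 32)        # TFeat consumes 32x32 grayscale patches
traced = torch.jit.trace(model, example)  # record the forward pass
traced.save("tfeat_model.pt")             # loaded by tfeat_demo via torch::jit::load
```

The saved `.pt` file is what the C++ demo receives as its first command-line argument.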
The figure above shows the result of this command.

Note that the C++ API of PyTorch has "beta" stability. This implementation currently works in my environment (Ubuntu 18, PyTorch 1.0), but it may stop working in the future. Also, the output of the C++ API differs slightly from the Python output; I'm investigating this issue.
Code for the BMVC 2016 paper *Learning local feature descriptors with triplets and shallow convolutional neural networks*.
We provide the following pre-trained models:
network name | model link | training dataset |
---|---|---|
tfeat-liberty | tfeat-liberty.params | liberty (UBC) |
tfeat-yosemite | tfeat-yosemite.params | yosemite (UBC) |
tfeat-notredame | tfeat-notredame.params | notredame (UBC) |
tfeat-ubc | coming soon... | all UBC |
tfeat-hpatches | coming soon... | HPatches (split A) |
tfeat-all | coming soon... | all the above |
To run TFeat on a tensor of patches:

```python
import os
import torch
import tfeat_model

tfeat = tfeat_model.TNet()
net_name = 'tfeat-liberty'
models_path = 'pretrained-models'
tfeat.load_state_dict(torch.load(os.path.join(models_path, net_name + ".params")))
tfeat.cuda()
tfeat.eval()

x = torch.rand(10, 1, 32, 32).cuda()
descrs = tfeat(x)
print(descrs.size())
# torch.Size([10, 128])
```
Note that no normalisation of the input patches is needed; it is done internally by the network.
We provide an ipython notebook that shows how to load and use the pre-trained networks. We also provide the following examples:

- extracting descriptors from image patches
- matching two images using OpenCV
- matching two images using VLFeat

For the testing example code, check the `tfeat-test` notebook.
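To illustrate the matching step itself, independently of OpenCV or VLFeat, mutual nearest-neighbour matching between two descriptor sets can be sketched as below. The function name `mutual_nn_matches` is my own; the inputs are assumed to be `(N, 128)` descriptor tensors such as those produced by `tfeat` above.

```python
import torch

def mutual_nn_matches(descs1, descs2):
    """Sketch: mutual nearest-neighbour matching between two descriptor
    sets of shape (N, 128) and (M, 128), compared with L2 distance."""
    d = torch.cdist(descs1, descs2)  # pairwise L2 distances, shape (N, M)
    nn12 = d.argmin(dim=1)           # best match in set 2 for each descriptor in set 1
    nn21 = d.argmin(dim=0)           # best match in set 1 for each descriptor in set 2
    idx1 = torch.arange(descs1.size(0))
    mutual = nn21[nn12] == idx1      # keep only mutually-best pairs
    return torch.stack([idx1[mutual], nn12[mutual]], dim=1)

# toy usage with random descriptors standing in for real TFeat output;
# matching a set against itself yields the identity pairing
a = torch.rand(10, 128)
matches = mutual_nn_matches(a, a)
```

The mutual-nearest-neighbour check is a common way to reject ambiguous matches before geometric verification.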
We provide an ipython notebook with examples of how to train TFeat. Training can use the UBC datasets (Liberty, Notredame, Yosemite), the HPatches dataset, or combinations of all the datasets.

For the training code, check the `tfeat-train` notebook.
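Training is based on a triplet loss, per the BMVC 2016 paper. A minimal sketch of a single optimisation step is shown below; the tiny network and the random patch batches are placeholders, standing in for `tfeat_model.TNet` and the UBC/HPatches data loaders used by the notebook.

```python
import torch

# Hypothetical stand-in descriptor network; the real code trains tfeat_model.TNet.
net = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, kernel_size=3), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(8, 128),
)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = torch.nn.TripletMarginLoss(margin=1.0)

# One training step on a fake batch of (anchor, positive, negative) patches.
anchor = torch.rand(16, 1, 32, 32)
positive = torch.rand(16, 1, 32, 32)
negative = torch.rand(16, 1, 32, 32)

loss = loss_fn(net(anchor), net(positive), net(negative))
opt.zero_grad()
loss.backward()
opt.step()
```

The loss pulls anchor-positive descriptor pairs together while pushing anchor-negative pairs at least `margin` apart.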