
kapture-localization's Introduction

kapture-localization: toolbox

1. Overview

kapture-localization is a toolbox in which you will find implementations of various localization-related algorithms. It strongly relies on the kapture format for data representation and manipulation.

The localization algorithms include:

  1. mapping,

  2. localization, and

  3. benchmarking (image retrieval for visual localization).

It works on Ubuntu, Windows, and macOS.

2. Structure

The directories are organised as follows:

├── kapture_localization/  # package (library)
├── pipeline/              # main programs executing all steps of the localization pipelines
├── samples/               # some sample data
├── tests/                 # unit tests
└── tools/                 # sub programs involved in the pipeline

The kapture-localization toolbox is available as:

  • Python package (kapture_localization/),

  • Python executable scripts (pipeline/ & tools/).

There are 3 pipelines available:

  1. mapping,

  2. localization, and

  3. image retrieval benchmark (global SfM, local SfM, pose approximation).

3. Installation

It can be installed using docker, pip, or manually from source code. After installing Python (>=3.8) and COLMAP (>=3.6), the toolbox can be installed with:

pip install kapture-localization

See doc/installation.adoc for more details.
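
For a quick sanity check of the installation, a minimal sketch like the following (not part of the toolbox; the __version__ attributes are read defensively since not every release exposes them) verifies that the Python packages import and that a COLMAP binary is on the PATH:

import shutil
import sys

try:
    import kapture
    import kapture_localization
except ImportError as e:
    sys.exit(f'missing package: {e}')

print('kapture', getattr(kapture, '__version__', 'unknown'))
print('kapture_localization', getattr(kapture_localization, '__version__', 'unknown'))

# the pipelines call the COLMAP binary as a subprocess, so it must be reachable
colmap_binary = shutil.which('colmap')
if colmap_binary is None:
    sys.exit('COLMAP binary not found on PATH (the pipelines need it)')
print('COLMAP found at', colmap_binary)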

4. Tutorial

See doc/tutorial for a short introduction and examples of the provided processing pipelines.

5. Image Retrieval Benchmark

6. Contributing

There are many ways to contribute to the kapture-localization project:

  • provide feedback and suggestions,

  • submit bug reports in the project bug tracker,

  • implement a feature or bug-fix for an outstanding issue,

  • provide scripts to create data in kapture format (e.g. local/global feature extraction),

  • propose a new feature and implement it.

If you wish to contribute, please refer to the CONTRIBUTING page.

7. License

Software license is detailed in the LICENSE file.

8. References

If you use this work for your research, please cite the respective paper(s):

Structure-based localization or kapture format
@misc{kapture2020,
      title={Robust Image Retrieval-based Visual Localization using Kapture},
      author={Martin Humenberger and Yohann Cabon and Nicolas Guerin and Julien Morat and Jérôme Revaud and Philippe Rerole and Noé Pion and Cesar de Souza and Vincent Leroy and Gabriela Csurka},
      year={2020},
      eprint={2007.13867},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Image retrieval benchmark
@inproceedings{benchmarking_ir3DV2020,
      title={Benchmarking Image Retrieval for Visual Localization},
      author={Noé Pion and Martin Humenberger and Gabriela Csurka and Yohann Cabon and Torsten Sattler},
      year={2020},
      booktitle={International Conference on 3D Vision}
}

@article{humenberger2022investigating,
  title={Investigating the Role of Image Retrieval for Visual Localization},
  author={Humenberger, Martin and Cabon, Yohann and Pion, No{\'e} and Weinzaepfel, Philippe and Lee, Donghwan and Gu{\'e}rin, Nicolas and Sattler, Torsten and Csurka, Gabriela},
  journal={International Journal of Computer Vision},
  year={2022},
  publisher={Springer}
}

kapture-localization's People

Contributors

beanmilk, humenbergerm, jujumo, mhumenbe, mohammedshafeeqet, nguerin, sarlinpe, yocabon


kapture-localization's Issues

No trajectory.txt for the query images in RobotCar and Aachen Datasets

Hi, authors. I am trying to evaluate the performance of my image retrieval network, but I found no trajectory.txt for the query images in the RobotCar and Aachen datasets (at least it is not included in your dataset downloader). However, the ground-truth pose information is necessary for the final accuracy evaluation. I would appreciate it if you could provide the files.

Thanks in advance!
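
Where ground-truth query poses are available (e.g. for one's own dataset), the accuracy numbers reported by the pipeline come down to a per-image position/rotation error of the kind sketched below in plain numpy. This is an illustration, not the toolbox's evaluation code, and it assumes poses are given as world-to-camera rotation quaternions in wxyz order plus a translation:

import numpy as np

def pose_errors(q_est, t_est, q_gt, t_gt):
    """Return (position error in meters, rotation error in degrees)."""
    def center(q, t):
        # camera center c = -R^T t for a world-to-camera pose (R, t)
        w, x, y, z = q
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
        return -R.T @ np.asarray(t, dtype=float)

    pos_err = np.linalg.norm(center(q_est, t_est) - center(q_gt, t_gt))

    # rotation error: angle of the relative rotation via the quaternion dot product
    dot = abs(float(np.dot(np.asarray(q_est) / np.linalg.norm(q_est),
                           np.asarray(q_gt) / np.linalg.norm(q_gt))))
    rot_err = 2.0 * np.degrees(np.arccos(min(1.0, dot)))
    return pos_err, rot_err

# example: identical rotation, camera shifted by 10 cm -> (0.1 m, 0.0 deg)
print(pose_errors([1, 0, 0, 0], [0, 0, 0], [1, 0, 0, 0], [0.1, 0, 0]))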

Localization Pipeline

Hello authors,
I would like to know more details about the custom pipeline that is being explained. What ground-truth model is used for pose estimation of the query images in the kapture-localization pipeline?

Thanks
Mukula

Error occurs at kapture_localization/utils/symlink.py

When I follow the tutorial and run the mapping step, an error occurs in symlink.py:

$ kapture_pipeline_mapping.py -v info \
> -i ./mapping/ \
> -kpt ./local_features/r2d2_500/keypoints/ \
> -desc ./local_features/r2d2_500/descriptors/ \
> -gfeat ./global_features/AP-GeM-LM18/global_features/ \
> -matches ./local_features/r2d2_500/NN_no_gv/matches/ \
> -matches-gv ./local_features/r2d2_500/NN_colmap_gv/matches/ \
> --colmap-map ./colmap-sfm/r2d2_500/AP-GeM-LM18_top5/ \
> --topk 5
Traceback (most recent call last):
  File "/home/jaram/anaconda3/envs/bluedot/bin/kapture_pipeline_mapping.py", line 4, in <module>
    __import__('pkg_resources').run_script('kapture-localization==0.1.2', 'kapture_pipeline_mapping.py')
  File "/home/jaram/anaconda3/envs/bluedot/lib/python3.8/site-packages/pkg_resources/__init__.py", line 651, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/home/jaram/anaconda3/envs/bluedot/lib/python3.8/site-packages/pkg_resources/__init__.py", line 1448, in run_script
    exec(code, namespace, namespace)
  File "/home/jaram/anaconda3/envs/bluedot/lib/python3.8/site-packages/kapture_localization-0.1.2-py3.8.egg/EGG-INFO/scripts/kapture_pipeline_mapping.py", line 17, in <module>
    from kapture_localization.utils.symlink import can_use_symlinks, create_kapture_proxy
  File "/home/jaram/anaconda3/envs/bluedot/lib/python3.8/site-packages/kapture_localization-0.1.2-py3.8.egg/kapture_localization/utils/symlink.py", line 11, in <module>
    from kapture.io.features import guess_feature_name_from_path
ImportError: cannot import name 'guess_feature_name_from_path' from 'kapture.io.features' (/home/jaram/anaconda3/envs/bluedot/lib/python3.8/site-packages/kapture-1.1.1-py3.8.egg/kapture/io/features.py)

It seems there is no function named 'guess_feature_name_from_path' in features.py. The problem is 'guess_feature_name_from_path', and I found the related commit e54bad2.

I changed symlink.py back to the old version (before e54bad2) and reinstalled, and it works well. I do not fully understand this code, but I think it can serve as a temporary workaround for the problem.
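
A small compatibility guard along these lines (a sketch, not an official fix) fails with a clearer message when the installed kapture is too old to provide guess_feature_name_from_path; installing matching releases of kapture and kapture-localization, or reverting symlink.py as done above, remains the real fix:

import kapture

try:
    # present in newer kapture releases, missing in older ones such as the 1.1.1 reported here
    from kapture.io.features import guess_feature_name_from_path  # noqa: F401
except ImportError:
    raise SystemExit(
        'kapture {} does not provide guess_feature_name_from_path; '
        'upgrade kapture (pip install -U kapture) or install matching versions '
        'of kapture and kapture-localization'.format(
            getattr(kapture, '__version__', 'unknown')))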

sqlite3.IntegrityError: UNIQUE constraint failed: images.name when localizing on RobotCar_Seasons-v2

What I did:

For the RobotCar_Seasons-v2 dataset,
at first I used all mapping images to create the map;
this produced the "sqlite3.IntegrityError: UNIQUE constraint failed: images.name" error during the mapping step.
After that I tried using only the 01 folder to create the map;
this produced the same error during the localization step.

Question:

Why does this error happen?
There should be no overlap between the query and mapping images.name.

shell script:

# 7) localization pipeline
LOCAL=r2d2_WASF_N8_big
GLOBAL=Resnet101-AP-GeM-LM18
kapture_pipeline_localize.py -v info -f \
    --benchmark-style RobotCar_Seasons \
    -i ${WORKING_DIR}/${DATASET}/01/mapping \
    --query ${WORKING_DIR}/${DATASET}/query \
    -kpt ${WORKING_DIR}/${DATASET}/local_features/${LOCAL}/keypoints \
    -desc ${WORKING_DIR}/${DATASET}/local_features/${LOCAL}/descriptors \
    -gfeat ${WORKING_DIR}/${DATASET}/global_features/${GLOBAL}/global_features \
    -matches ${WORKING_DIR}/${DATASET}/local_features/${LOCAL}/NN_no_gv/matches/01 \
    -matches-gv ${WORKING_DIR}/${DATASET}/local_features/${LOCAL}/NN_colmap_gv/matches/01 \
    --colmap-map ${WORKING_DIR}/${DATASET}/colmap-sfm/${LOCAL}/${GLOBAL}/01 \
    -o ${WORKING_DIR}/${DATASET}/colmap-localize/${LOCAL}/${GLOBAL}/ \
    --topk ${TOPK} \
    --config 2

error info:

/mine-run_robotcar-v2.sh
INFO ::kapture: deleting already existing /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/proxy_mapping
sensors in path: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/01/mapping/sensors
keypoints_path: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/local_features/r2d2_WASF_N8_big/keypoints
INFO ::kapture: deleting already existing /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/proxy_query
sensors in path: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/query/sensors
keypoints_path: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/local_features/r2d2_WASF_N8_big/keypoints
INFO ::kapture: deleting already existing /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/pairs_localization_20.txt
INFO ::compute_image_pairs: compute_image_pairs. loading mapping: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/proxy_mapping
INFO ::compute_image_pairs: computing pairs with Resnet101-AP-GeM-LM18...
INFO ::compute_image_pairs: compute_image_pairs. loading query: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/proxy_query
INFO ::compute_image_pairs: saving to file ...
INFO ::compute_image_pairs: all done
INFO ::merge: Loading /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/proxy_mapping
INFO ::merge: Loading /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/proxy_query
INFO ::merge: Writing merged kapture data...
INFO ::kapture: deleting already existing /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/proxy_map_plus_query
sensors in path: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/map_plus_query/sensors
keypoints_path: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/local_features/r2d2_WASF_N8_big/keypoints
INFO ::compute_matches: compute_matches. loading input: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/proxy_map_plus_query
INFO ::compute_matches: compute_matches. entering main loop...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 124320/124320 [00:00<00:00, 321940.37it/s]
INFO ::compute_matches: all done
INFO ::kapture: deleting already existing /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/proxy_map_plus_query_gv
sensors in path: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/map_plus_query/sensors
keypoints_path: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/local_features/r2d2_WASF_N8_big/keypoints
INFO ::run_colmap_gv: run_colmap_gv...
INFO ::run_colmap_gv: remove rigs notation.
colmap_dbi_path: /sdb3/myfolder/00projects/sfm/kapture-localization/pipeline/examples/RobotCar_Seasons-v2/colmap-localize/r2d2_WASF_N8_big/Resnet101-AP-GeM-LM18/kapture_inputs/proxy_map_plus_query_gv/colmap.db
INFO ::colmap: registering 135 sensors (cameras) in database...
INFO ::colmap: registering 12996 images in database...
Traceback (most recent call last):
File "/sdb3/myfolder/softwares/anaconda3_gy/bin/kapture_run_colmap_gv.py", line 203, in <module>
run_colmap_gv_command_line()
File "/sdb3/myfolder/softwares/anaconda3_gy/bin/kapture_run_colmap_gv.py", line 196, in run_colmap_gv_command_line
run_colmap_gv(args.input, args.output, args.colmap_binary,
File "/sdb3/myfolder/softwares/anaconda3_gy/bin/kapture_run_colmap_gv.py", line 47, in run_colmap_gv
run_colmap_gv_from_loaded_data(kapture_none_matches,
File "/sdb3/myfolder/softwares/anaconda3_gy/bin/kapture_run_colmap_gv.py", line 111, in run_colmap_gv_from_loaded_data
database_extra.kapture_to_colmap(kapture_data_to_export, kapture_none_matches_dirpath,
File "/sdb3/myfolder/softwares/anaconda3_gy/lib/python3.8/site-packages/kapture/converter/colmap/database_extra.py", line 641, in kapture_to_colmap
colmap_image_ids = add_images_to_database(
File "/sdb3/myfolder/softwares/anaconda3_gy/lib/python3.8/site-packages/kapture/converter/colmap/database_extra.py", line 493, in add_images_to_database
colmap_image_ids = add_images_from_list_in_colmap_format(database, images_in_colmap_format)
File "/sdb3/myfolder/softwares/anaconda3_gy/lib/python3.8/site-packages/kapture/converter/colmap/database_extra.py", line 416, in add_images_from_list_in_colmap_format
colmap_image_ids[name] = database.add_image(
File "/sdb3/myfolder/softwares/anaconda3_gy/lib/python3.8/site-packages/kapture/converter/colmap/database.py", line 172, in add_image
cursor = self.execute(
sqlite3.IntegrityError: UNIQUE constraint failed: images.name
Traceback (most recent call last):
File "/sdb3/myfolder/softwares/anaconda3_gy/bin/kapture_pipeline_localize.py", line 394, in <module>
localize_pipeline_command_line()
File "/sdb3/myfolder/softwares/anaconda3_gy/bin/kapture_pipeline_localize.py", line 363, in localize_pipeline_command_line
localize_pipeline(args.kapture_map,
File "/sdb3/myfolder/softwares/anaconda3_gy/bin/kapture_pipeline_localize.py", line 187, in localize_pipeline
run_python_command(local_run_colmap_gv_path, run_colmap_gv_args, python_binary)
File "/sdb3/myfolder/softwares/anaconda3_gy/lib/python3.8/site-packages/kapture_localization/utils/subprocess.py", line 67, in run_python_command
raise ValueError('\nSubprocess Error (Return code:' f' {python_process.returncode} )')
ValueError:
Subprocess Error (Return code: 1 )
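
A small diagnostic sketch for this kind of failure (assuming the usual kapture Python API; paths are placeholders): the UNIQUE constraint on images.name fires when the merged map-plus-query data contains the same image path twice, so listing the image names shared by the two inputs usually points at the culprit.

import kapture
from kapture.io.csv import kapture_from_dir

def image_names(kapture_path):
    # skip feature data, only the sensors/records_camera part is needed here
    kdata = kapture_from_dir(kapture_path, skip_list=[kapture.Keypoints,
                                                      kapture.Descriptors,
                                                      kapture.GlobalFeatures,
                                                      kapture.Matches])
    return {name
            for _, cams in kdata.records_camera.items()
            for _, name in cams.items()}

mapping_names = image_names('./01/mapping')   # placeholder paths
query_names = image_names('./query')
print('overlapping image names:', sorted(mapping_names & query_names)[:20])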

Get much lower global_sfm results for OpenIBL

I directly downloaded the model_best.pth.tar and pca_params_model_best.h5 for SFRS from the Google Drive link offered by the author of OpenIBL, and produced the global features in the same way as extract_kapture.py, with PCA whitening. Then I ran kapture_pipeline_image_retrieval_benchmark.py and got the following sfrs_top1 (feather2d2 20k) results for global_sfm:

Model: global_sfm_config_2

Found 688 / 916 image positions (75.11 %).
Found 688 / 916 image rotations (75.11 %).
Localized images: mean=(38.5954m, 50.3685 deg) / median=(10.5967m, 2.3077 deg)
All: median=(39.9641m, 24.5639 deg)
Min: 0.0037m; 0.0890 deg
Max: 1678.3912m; 179.9506 deg

(0.25m, 2.0 deg): 30.79%
(0.5m, 5.0 deg): 33.41%
(5.0m, 10.0 deg): 34.61%

which is much lower than the results you report for openibl_vgg16_netvlad:
(0.25m_2.0deg):42.36%
(1.0m_5.0deg):48.03%
median distance (m):10.6814
median angle (deg):1.8746

What might be the problem with my reproduction of the results?

I also used the openibl_vgg16_netvlad global features provided by you and got results similar to the benchmark:

Model: global_sfm_config_2

Found 789 / 916 image positions (86.14 %).
Found 789 / 916 image rotations (86.14 %).
Localized images: mean=(31.5377m, 38.1892 deg) / median=(0.1040m, 1.4645 deg)
All: median=(10.0326m, 1.9461 deg)
Min: 0.0037m; 0.1103 deg
Max: 197.0356m; 179.9974 deg

(0.25m, 2.0 deg): 42.03%
(0.5m, 5.0 deg): 47.60%
(5.0m, 10.0 deg): 48.47%

error in new version

When I use kapture==1.1.1 and kapture-localization==0.1.2 to run this benchmark mapping command:

kapture_pipeline_image_retrieval_benchmark.py -v info \
    -i Aachen-Day-Night-v1.1/mapping \
    --query Aachen-Day-Night-v1.1/query_all \
    -kpt Aachen-Day-Night-v1.1/local_features/r2d2_WASF-N8_20k/keypoints \
    -desc Aachen-Day-Night-v1.1/local_features/r2d2_WASF-N8_20k/descriptors \
    -gfeat Aachen-Day-Night-v1.1/global_features/AP-GeM-LM18/global_features \
    -matches Aachen-Day-Night-v1.1/local_features/r2d2_WASF-N8_20k/NN_no_gv/matches \
    -matches-gv Aachen-Day-Night-v1.1/local_features/r2d2_WASF-N8_20k/NN_colmap_gv/matches \
    --colmap-map Aachen-Day-Night-v1.1/colmap-sfm/r2d2_WASF-N8_20k/frustum_thresh10_far50/colmap \
    -o Aachen-Day-Night-v1.1/image_retrieval_benchmark/r2d2_WASF-N8_20k/frustum_thresh10_far50/AP-GeM-LM18_top20 \
    --topk 20 \
    --config 2

I encounter this error:

ImportError: cannot import name 'guess_feature_name_from_path' from 'kapture.io.features' (/home/jty/anaconda3/envs/vslam/lib/python3.7/site-packages/kapture/io/features.py)

Is there something wrong with the new version?

Error in running kapture_download_dataset.py update

I want to download the datasets mentioned in benchmark.adoc but I get the error below. I thought it might be a certificate verification problem and set requests.get(index_remote_url, allow_redirects=True, verify=False), but it didn't work. What is the problem, and could you please give me some ideas about how to solve it?

kapture_download_dataset.py update
INFO ::downloader: updating dataset list from https://github.com/naver/kapture/raw/main/dataset ...
/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/urllib3/connectionpool.py:1052: InsecureRequestWarning: Unverified HTTPS request is being made to host 'github.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
CRITICAL::downloader: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /naver/kapture/raw/main/dataset/kapture_dataset_index.yaml (Caused by SSLError(SSLError(1, '[SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2570)')))
Traceback (most recent call last):
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/urllib3/connectionpool.py", line 710, in urlopen
chunked=chunked,
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/urllib3/connectionpool.py", line 449, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/urllib3/connectionpool.py", line 444, in _make_request
httplib_response = conn.getresponse()
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/http/client.py", line 1373, in getresponse
response.begin()
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/http/client.py", line 319, in begin
version, status, reason = self._read_status()
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/http/client.py", line 280, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/ssl.py", line 1071, in recv_into
return self.read(nbytes, buffer)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/ssl.py", line 929, in read
return self._sslobj.read(len, buffer)
ssl.SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2570)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/requests/adapters.py", line 499, in send
timeout=timeout,
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/urllib3/connectionpool.py", line 788, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /naver/kapture/raw/main/dataset/kapture_dataset_index.yaml (Caused by SSLError(SSLError(1, '[SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2570)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/bin/kapture_download_dataset.py", line 643, in <module>
sys.exit(kapture_download_dataset_cli())
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/bin/kapture_download_dataset.py", line 638, in kapture_download_dataset_cli
raise e
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/bin/kapture_download_dataset.py", line 633, in kapture_download_dataset_cli
return kapture_download_dataset(args, index_filepath)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/bin/kapture_download_dataset.py", line 510, in kapture_download_dataset
r = requests.get(index_remote_url, allow_redirects=True,verify=False)#edit:verify=False
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/requests/adapters.py", line 563, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /naver/kapture/raw/main/dataset/kapture_dataset_index.yaml (Caused by SSLError(SSLError(1, '[
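
If the machine sits behind a proxy or firewall that re-signs TLS traffic, a workaround sketch (not an endorsed fix) is to point requests at the proxy's CA bundle instead of disabling verification, either via the REQUESTS_CA_BUNDLE environment variable or explicitly as below; the bundle path is a placeholder assumption.

import requests

index_url = 'https://github.com/naver/kapture/raw/main/dataset/kapture_dataset_index.yaml'
# verify= accepts a path to a CA bundle; '/etc/ssl/certs/corporate-ca.pem' is hypothetical
response = requests.get(index_url, allow_redirects=True,
                        verify='/etc/ssl/certs/corporate-ca.pem')
response.raise_for_status()
with open('kapture_dataset_index.yaml', 'wb') as f:
    f.write(response.content)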

Get global_SFM and local_SFM results different from the listed benchmark results

I followed the Image Retrieval Benchmark tutorial but got global_SFM and local_SFM results different from the listed benchmark results, whereas the EWB results are the same as the given benchmark. I ran kapture_pipeline_image_retrieval_benchmark_from_pairsfile.py with COLMAP 3.8, pycolmap 0.1.0, and LFEAT=feather2d2_dim32_20k.

  1. run full benchmark (all 3 tasks) for AP-GeM-LM18_top20 on Gangnam Station B2

Model: global_sfm_config_2

Found 911 / 916 image positions (99.45 %).
Found 911 / 916 image rotations (99.45 %).
Localized images: mean=(25.8816m, 36.4095 deg) / median=(0.0750m, 1.3386 deg)
All: median=(0.0756m, 1.3410 deg) the listed results: median=(0.0756m, 1.2993 deg)
Min: 0.0033m; 0.0537 deg
Max: 994.0691m; 179.9981 deg

(0.25m, 2.0 deg): 56.00% the listed results: 35.59%
(0.5m, 5.0 deg): 61.46% the listed results: 56.66%
(5.0m, 10.0 deg): 63.21% the listed results: 62.23%

Model: local_sfm

Found 378 / 916 image positions (41.27 %).
Found 378 / 916 image rotations (41.27 %).
Localized images: mean=(23.2583m, 33.8776 deg) / median=(0.1011m, 1.5585 deg)
All: median=(infm, inf deg) the listed results: median=(infm, inf deg)
Min: 0.0063m; 0.1690 deg
Max: 245.3013m; 179.9680 deg

(0.25m, 2.0 deg): 22.38% the listed results: 12.88%
(0.5m, 5.0 deg): 25.55% the listed results: 21.18%
(5.0m, 10.0 deg): 27.18% the listed results: 25.33%

Model: EWB

Found 916 / 916 image positions (100.00 %).
Found 916 / 916 image rotations (100.00 %).
Localized images: mean=(53.1739m, 75.3842 deg) / median=(51.6639m, 63.4245 deg)
All: median=(51.6639m, 63.4245 deg) It is exactly the same with the listed results.
Min: 1.1050m; 0.8699 deg
Max: 161.1966m; 179.7832 deg

(0.25m, 2.0 deg): 0.00%
(0.5m, 5.0 deg): 0.00%
(5.0m, 10.0 deg): 0.00% It is exactly the same with the listed results.

  2. run full benchmark (all 3 tasks) for fire_top20 on Gangnam Station B2

Model: global_sfm_config_2

Found 914 / 916 image positions (99.78 %).
Found 914 / 916 image rotations (99.78 %).
Localized images: mean=(19.5645m, 19.1694 deg) / median=(0.0507m, 0.9815 deg)
All: median=(0.0508m, 0.9876 deg) the listed results: median=(0.0518m, 1.0021 deg)
Min: 0.0053m; 0.0648 deg
Max: 4749.6707m; 179.9998 deg

(0.25m, 2.0 deg): 69.98% the listed results: 47.27%
(0.5m, 5.0 deg): 75.44% the listed results: 69.10%
(5.0m, 10.0 deg): 77.07% the listed results: 75.22%

Model: local_sfm

Found 588 / 916 image positions (64.19 %).
Found 588 / 916 image rotations (64.19 %).
Localized images: mean=(14.1432m, 21.6597 deg) / median=(0.0693m, 1.2338 deg)
All: median=(4.8720m, 4.5286 deg) the listed results: median=(5.1373m, 4.4642 deg)
Min: 0.0041m; 0.1077 deg
Max: 197.2665m; 179.9642 deg

(0.25m, 2.0 deg): 40.17% the listed results: 25.44%
(0.5m, 5.0 deg): 46.18% the listed results: 40.61%
(5.0m, 10.0 deg): 48.80% the listed results: 46.51%

Model: EWB

Found 916 / 916 image positions (100.00 %).
Found 916 / 916 image rotations (100.00 %).
Localized images: mean=(35.9011m, 55.0691 deg) / median=(32.6906m, 40.2351 deg)
All: median=(32.6906m, 40.2351 deg) It is exactly the same with the listed results.
Min: 0.1836m; 0.4056 deg
Max: 163.5414m; 179.9981 deg

(0.25m, 2.0 deg): 0.00%
(0.5m, 5.0 deg): 0.00%
(5.0m, 10.0 deg): 1.20% It is exactly the same with the listed results.

So why does this difference occur?

Some images are not localized using localSfM

Hi!
I tried to make predictions with kapture_pipeline_image_retrieval_benchmark.py.
I gave it 1200 test images to localize but only 940 images were localized (all images are from my custom dataset).
All features are from R2D2 and AP-GeM.
Here is the failure log.

INFO    ::root: mapping and localization for 20220825-125110.603-11~12-Tr/stage_1/images/1170_1170_frame_000544.png
INFO    ::kapture: deleting already existing /home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/colmap.db
INFO    ::kapture: deleting already existing /home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/reconstruction
INFO    ::kapture: deleting already existing /home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/priors_for_reconstruction
INFO    ::colmap_build_map: Using precomputed keypoints and matches
INFO    ::colmap_build_map: Step 1: Export kapture format to colmap
INFO    ::colmap: registering 1 sensors (cameras) in database...
INFO    ::colmap: registering 20 images in database...
INFO    ::colmap: registering 20 keypoints in database...
INFO    ::colmap: registering 190 matches in database...
INFO    ::colmap_build_map: Step 2: Run geometric verification - skipped
INFO    ::colmap_build_map: Step 3: Exporting priors for reconstruction.
INFO    ::colmap: creating colmap cameras.txt
INFO    ::colmap: creating colmap images.txt
INFO    ::root: creating colmap points3D.txt
INFO    ::colmap_build_map: Step 4: Triangulation
INFO    ::colmap: ['colmap', 'point_triangulator', '--database_path', '/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/colmap.db', '--image_path', '/home/pjs4073/kapture_localsfm_h4tech/kapture_inputs/proxy_map_plus_query_gv/sensors/records_data', '--input_path', '/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/priors_for_reconstruction', '--output_path', '/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/reconstruction']

==============================================================================
Loading model
==============================================================================


==============================================================================
Loading database
==============================================================================

Loading cameras... 1 in 0.000s
Loading matches... 190 in 0.000s
Loading images... 20 in 0.003s (connected 20)
Building correspondence graph... in 0.003s (ignored 0)

Elapsed time: 0.000 [minutes]


==============================================================================
Triangulating image #1 (0)
==============================================================================

  => Image sees 0 / 1457 points
  => Triangulated 0 points

==============================================================================
Triangulating image #2 (1)
==============================================================================

  => Image sees 0 / 655 points
  => Triangulated 1 points

==============================================================================
Triangulating image #3 (2)
==============================================================================

  => Image sees 3 / 1433 points
  => Triangulated 5 points

==============================================================================
Triangulating image #4 (3)
==============================================================================

  => Image sees 2 / 844 points
  => Triangulated 9 points

==============================================================================
Triangulating image #5 (4)
==============================================================================

  => Image sees 9 / 1407 points
  => Triangulated 1 points

==============================================================================
Triangulating image #6 (5)
==============================================================================

  => Image sees 14 / 506 points
  => Triangulated 16 points

==============================================================================
Triangulating image #7 (6)
==============================================================================

  => Image sees 0 / 930 points
  => Triangulated 0 points

==============================================================================
Triangulating image #8 (7)
==============================================================================

  => Image sees 0 / 1464 points
  => Triangulated 0 points

==============================================================================
Triangulating image #9 (8)
==============================================================================

  => Image sees 20 / 1358 points
  => Triangulated 0 points

==============================================================================
Triangulating image #10 (9)
==============================================================================

  => Image sees 0 / 638 points
  => Triangulated 0 points

==============================================================================
Triangulating image #11 (10)
==============================================================================

  => Image sees 0 / 636 points
  => Triangulated 0 points

==============================================================================
Triangulating image #12 (11)
==============================================================================

  => Image sees 0 / 1488 points
  => Triangulated 0 points

==============================================================================
Triangulating image #13 (12)
==============================================================================

  => Image sees 6 / 648 points
  => Triangulated 24 points

==============================================================================
Triangulating image #14 (13)
==============================================================================

  => Image sees 2 / 461 points
  => Triangulated 5 points

==============================================================================
Triangulating image #15 (14)
==============================================================================

  => Image sees 9 / 688 points
  => Triangulated 0 points

==============================================================================
Triangulating image #16 (15)
==============================================================================

  => Image sees 0 / 961 points
  => Triangulated 0 points

==============================================================================
Triangulating image #17 (16)
==============================================================================

  => Image sees 0 / 557 points
  => Triangulated 0 points

==============================================================================
Triangulating image #18 (17)
==============================================================================

  => Image sees 12 / 970 points
  => Triangulated 0 points

==============================================================================
Triangulating image #19 (18)
==============================================================================

  => Image sees 44 / 500 points
  => Triangulated 0 points

==============================================================================
Triangulating image #20 (19)
==============================================================================

  => Image sees 0 / 1467 points
  => Triangulated 0 points

==============================================================================
Retriangulation
==============================================================================

  => Merged observations: 0
  => Completed observations: 0

==============================================================================
Bundle adjustment
==============================================================================

iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  3.040423e+05    0.00e+00    2.77e+02   0.00e+00   0.00e+00  1.00e+04        0    1.61e-02    1.64e-02
   1  2.355396e+05    6.85e+04    2.07e+01   7.38e+05   9.47e-01  3.00e+04        0    3.20e-02    4.84e-02
   2  2.343848e+05    1.15e+03    1.62e+00   4.71e+05   8.71e-01  5.08e+04        0    2.87e-04    4.89e-02
   3  2.343163e+05    6.85e+01    2.15e-01   2.76e+05   7.79e-01  6.16e+04        0    2.78e-04    4.92e-02


Bundle adjustment report
------------------------
    Residuals : 456
   Parameters : 333
   Iterations : 4
         Time : 0.0493292 [s]
 Initial cost : 25.8217 [px]
   Final cost : 22.6683 [px]
  Termination : Convergence

  => Merged observations: 0
  => Completed observations: 0
  => Filtered observations: 215
  => Changed observations: 0.942982

==============================================================================
Bundle adjustment
==============================================================================

iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  3.524065e+01    0.00e+00    3.16e-06   0.00e+00   0.00e+00  1.00e+04        0    2.86e-05    1.77e-04


Bundle adjustment report
------------------------
    Residuals : 26
   Parameters : 18
   Iterations : 1
         Time : 0.000349139 [s]
 Initial cost : 1.16422 [px]
   Final cost : 1.16422 [px]
  Termination : Convergence

  => Merged observations: 0
  => Completed observations: 0
  => Filtered observations: 0
  => Changed observations: 0.000000

==============================================================================
Extracting colors
==============================================================================

WARNING ::colmap_localize: Input data contains trajectories: they will be ignored
INFO    ::kapture: deleting already existing /home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/registered/colmap.db
INFO    ::kapture: deleting already existing /home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/registered/reconstruction
20it [00:00, 355449.49it/s]
INFO    ::colmap_localize: Step 1: Add precomputed keypoints and matches to colmap db
INFO    ::colmap_localize: Step 2: Run geometric verification - skipped
INFO    ::colmap_localize: Step 3: Run image_registrator
INFO    ::colmap: ['colmap', 'image_registrator', '--database_path', '/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/registered/colmap.db', '--input_path', '/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/reconstruction', '--output_path', '/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/registered/reconstruction', '--Mapper.ba_refine_focal_length', '0', '--Mapper.ba_refine_principal_point', '0', '--Mapper.ba_refine_extra_params', '0', '--Mapper.min_num_matches', '4', '--Mapper.init_min_num_inliers', '4', '--Mapper.abs_pose_min_num_inliers', '4', '--Mapper.abs_pose_min_inlier_ratio', '0.05', '--Mapper.ba_local_max_num_iterations', '50', '--Mapper.abs_pose_max_error', '20', '--Mapper.filter_max_reproj_error', '12']

==============================================================================
Loading database
==============================================================================

Loading cameras... 2 in 0.000s
Loading matches... 210 in 0.001s
Loading images... 21 in 0.003s (connected 21)
Building correspondence graph... in 0.004s (ignored 0)

Elapsed time: 0.000 [minutes]


==============================================================================
Registering image #21 (21)
==============================================================================

  => Image sees 3 / 1303 points
INFO    ::colmap_localize: Step 4: Export reconstruction results to txt
INFO    ::colmap: ['colmap', 'model_converter', '--input_path', '/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/registered/reconstruction', '--output_path', '/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/registered/reconstruction', '--output_type', 'TXT']
DEBUG   ::colmap: importing from database "/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/registered/colmap.db"
DEBUG   ::colmap: loading colmap database /home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/registered/colmap.db
DEBUG   ::colmap: parsing cameras in database.
INFO    ::colmap: parsing cameras  ...
DEBUG   ::colmap: parsing images and trajectories in database.
INFO    ::root: parsing images ...
21it [00:00, 11695.71it/s]
DEBUG   ::colmap: importing from reconstruction "/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/registered/reconstruction"
DEBUG   ::colmap: loading colmap reconstruction from:
        "/home/pjs4073/kapture_localsfm_h4tech/local_sfm/colmap/registered/reconstruction"
DEBUG   ::colmap: loading colmap reconstruction skipping Observations, Points3d, Keypoints
DEBUG   ::root: parsing cameras from:
        "cameras.txt"
DEBUG   ::root: loading images from:
        "images.txt"
INFO    ::root: 20220825-125110.603-11~12-Tr/stage_1/images/1170_1170_frame_000544.png was not localized
DEBUG   ::kapture: saving sensors ...
DEBUG   ::kapture: saving trajectories ...
DEBUG   ::kapture: wrote          938 <class 'kapture.core.Trajectories.Trajectories'> in 0.015 seconds
DEBUG   ::kapture: saving records_camera ...
DEBUG   ::kapture: wrote        1 065 <class 'kapture.core.Records.RecordsCamera'> in 0.002 seconds
DEBUG   ::kapture: saving keypoints : r2d2_WASF_N8_big ...
DEBUG   ::kapture: saving descriptors : r2d2_WASF_N8_big ...
DEBUG   ::kapture: saving global_features : Resnet101-AP-GeM-LM18 ...
INFO    ::kapture: Saved in 0.018 seconds in "/home/pjs4073/kapture_localsfm_h4tech/local_sfm/localized"

Tutorial error

Hi, I am trying out the kapture-localization tutorial on Google Colab. However, I keep getting errors (screenshot not reproduced here). Any idea why this is happening?

Pose estimation

The structure of the code is not clear. May I know how the pose is estimated in localization?
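
Not speaking for the authors, but conceptually the localization step estimates the query pose from 2D-3D correspondences between query keypoints and the 3D points of the COLMAP map. The sketch below (OpenCV with placeholder data and assumed intrinsics, not the toolbox's actual code path, which goes through COLMAP/pycolmap) only illustrates the underlying PnP + RANSAC idea:

import cv2
import numpy as np

# placeholder 2D-3D correspondences; in the real pipeline these come from
# matching query keypoints to triangulated map points
points_3d = np.random.rand(100, 3).astype(np.float64)
points_2d = (np.random.rand(100, 2) * 640).astype(np.float64)
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])  # assumed intrinsics

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    points_3d, points_2d, K, distCoeffs=None,
    reprojectionError=8.0, iterationsCount=1000, flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # world-to-camera rotation
    print('inliers:', 0 if inliers is None else len(inliers))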

RGBD pipeline

Hello,
I would want to test kapture RGBD pipeline. But i couldn't find any implementation details as mentioned in the research paper. Can someone help me with this.

Thanks
Mukula

colmap_gv does not work.

Hi! I have an issue with kapture_run_colmap_gv.py.
Here is the entire error log.

INFO    ::root: mapping and localization for 20220825-123634.769-9c~9d-Tr/stage_1/images/0089_0089_frame_005406.png
INFO    ::kapture: deleting already existing /home/pjs4073/h4tech_result_8000_30/local_sfm/colmap/colmap.db
INFO    ::kapture: deleting already existing /home/pjs4073/h4tech_result_8000_30/local_sfm/colmap/reconstruction
INFO    ::kapture: deleting already existing /home/pjs4073/h4tech_result_8000_30/local_sfm/colmap/priors_for_reconstruction
INFO    ::colmap_build_map: Using precomputed keypoints and matches
INFO    ::colmap_build_map: Step 1: Export kapture format to colmap
INFO    ::colmap: registering 1 sensors (cameras) in database...
INFO    ::colmap: registering 30 images in database...
INFO    ::colmap: registering 30 keypoints in database...
INFO    ::colmap: registering 435 matches in database...
INFO    ::colmap_build_map: Step 2: Run geometric verification - skipped
INFO    ::colmap_build_map: Step 3: Exporting priors for reconstruction.
INFO    ::colmap: creating colmap cameras.txt
INFO    ::colmap: creating colmap images.txt
DEBUG   ::root: timestamp:4859 not in trajectories
DEBUG   ::root: timestamp:4721 not in trajectories
INFO    ::root: creating colmap points3D.txt
INFO    ::colmap_build_map: Step 4: Triangulation
INFO    ::colmap: ['colmap', 'point_triangulator', '--database_path', '/home/pjs4073/h4tech_result_8000_30/local_sfm/colmap/colmap.db', '--image_path', '/home/pjs4073/h4tech_result_8000_30/kapture_inputs/proxy_map_plus_query_gv/sensors/records_data', '--input_path', '/home/pjs4073/h4tech_result_8000_30/local_sfm/colmap/priors_for_reconstruction', '--output_path', '/home/pjs4073/h4tech_result_8000_30/local_sfm/colmap/reconstruction']

==============================================================================
Loading model
==============================================================================


==============================================================================
Loading database
==============================================================================

Loading cameras... 1 in 0.000s
Loading matches... 435 in 0.007s
Loading images... 30 in 0.014s (connected 30)
Building correspondence graph... in 0.079s (ignored 0)

Elapsed time: 0.002 [minutes]


==============================================================================
Triangulating image #2 (0)
==============================================================================

  => Image sees 0 / 7484 points
  => Triangulated 0 points

==============================================================================
Triangulating image #3 (1)
==============================================================================

  => Image sees 0 / 7323 points
  => Triangulated 0 points

==============================================================================
Triangulating image #4 (2)
==============================================================================

  => Image sees 0 / 7860 points
  => Triangulated 0 points

==============================================================================
Triangulating image #5 (3)
==============================================================================

  => Image sees 0 / 7593 points
  => Triangulated 0 points

==============================================================================
Triangulating image #6 (4)
==============================================================================

  => Image sees 0 / 7453 points
  => Triangulated 0 points

==============================================================================
Triangulating image #7 (5)
==============================================================================

  => Image sees 0 / 7352 points
  => Triangulated 0 points

==============================================================================
Triangulating image #8 (6)
==============================================================================

  => Image sees 0 / 7513 points
  => Triangulated 0 points

==============================================================================
Triangulating image #9 (7)
==============================================================================

  => Image sees 0 / 6988 points
  => Triangulated 0 points

==============================================================================
Triangulating image #10 (8)
==============================================================================

  => Image sees 0 / 7724 points
  => Triangulated 0 points

==============================================================================
Triangulating image #11 (9)
==============================================================================

  => Image sees 0 / 7210 points
  => Triangulated 0 points

==============================================================================
Triangulating image #12 (10)
==============================================================================

  => Image sees 0 / 7501 points
  => Triangulated 0 points

==============================================================================
Triangulating image #13 (11)
==============================================================================

  => Image sees 0 / 6804 points
  => Triangulated 0 points

==============================================================================
Triangulating image #14 (12)
==============================================================================

  => Image sees 0 / 8133 points
  => Triangulated 0 points

==============================================================================
Triangulating image #15 (13)
==============================================================================

  => Image sees 0 / 6666 points
  => Triangulated 0 points

==============================================================================
Triangulating image #17 (14)
==============================================================================

  => Image sees 0 / 6714 points
  => Triangulated 0 points

==============================================================================
Triangulating image #18 (15)
==============================================================================

  => Image sees 0 / 7715 points
  => Triangulated 0 points

==============================================================================
Triangulating image #19 (16)
==============================================================================

  => Image sees 0 / 3140 points
  => Triangulated 0 points

==============================================================================
Triangulating image #20 (17)
==============================================================================

  => Image sees 0 / 7571 points
  => Triangulated 0 points

==============================================================================
Triangulating image #21 (18)
==============================================================================

  => Image sees 0 / 7784 points
  => Triangulated 0 points

==============================================================================
Triangulating image #22 (19)
==============================================================================

  => Image sees 0 / 7340 points
  => Triangulated 0 points

==============================================================================
Triangulating image #23 (20)
==============================================================================

  => Image sees 0 / 6405 points
  => Triangulated 0 points

==============================================================================
Triangulating image #24 (21)
==============================================================================

  => Image sees 0 / 3621 points
  => Triangulated 0 points

==============================================================================
Triangulating image #25 (22)
==============================================================================

  => Image sees 0 / 7095 points
  => Triangulated 0 points

==============================================================================
Triangulating image #26 (23)
==============================================================================

  => Image sees 0 / 7108 points
  => Triangulated 0 points

==============================================================================
Triangulating image #27 (24)
==============================================================================

  => Image sees 0 / 7642 points
  => Triangulated 0 points

==============================================================================
Triangulating image #28 (25)
==============================================================================

  => Image sees 0 / 6712 points
  => Triangulated 0 points

==============================================================================
Triangulating image #29 (26)
==============================================================================

  => Image sees 0 / 7474 points
  => Triangulated 0 points

==============================================================================
Triangulating image #30 (27)
==============================================================================

  => Image sees 0 / 3821 points
  => Triangulated 0 points

==============================================================================
Retriangulation
==============================================================================

  => Merged observations: 0
  => Completed observations: 0

==============================================================================
Bundle adjustment
==============================================================================

F0923 12:08:26.304461 307235 colmap.cc:1573] Check failed: bundle_adjuster.Solve(&reconstruction) 
*** Check failure stack trace: ***
    @     0x7fc2c569c1c3  google::LogMessage::Fail()
    @     0x7fc2c56a125b  google::LogMessage::SendToLog()
    @     0x7fc2c569bebf  google::LogMessage::Flush()
    @     0x7fc2c569c6ef  google::LogMessageFatal::~LogMessageFatal()
    @     0x558c922aa280  RunPointTriangulator()
    @     0x558c922a1eaf  main
    @     0x7fc2c3a9c083  __libc_start_main
    @     0x558c922a5f6e  _start
INFO    ::root: 20220825-123634.769-9c~9d-Tr/stage_1/images/0089_0089_frame_005406.png was not localized

I ran local SfM, so the number of matched images is 31.
The feature matching process goes well, but the triangulation is the problem.
How can I resolve this problem?

How to apply netVLAD + SIFT with kapture-localization?

Hi! I'm a student from South Korea!
Firstly, thanks for your amazing work. I'm a beginner in localization :)..
I want to obtain global features and local features using NetVLAD + SIFT on the dataset from here.
I have already read that kapture-localization only supports AP-GeM and R2D2 directly.
In the case of the global features, I can obtain them from the official NetVLAD code, but I don't know how to convert them appropriately to the kapture format. I know there is a format specification in kapture, but what is the right feature format for NetVLAD?
I also used the SIFT method from OpenCV, and there is the same problem as with the global features.
How do I match images with local features extracted with SIFT using kapture-localization?
Also, I extracted the keypoints and descriptors, but the number of keypoints is smaller than the number of keypoints from R2D2.
I want to reproduce the score from here.

I apologize if the questions are not clear.

I realized that I should write the information down in a text file such as global_features.txt.
Is there any problem if the dsize or type of the feature is changed?
For example, the dsize of the NetVLAD feature is not the same as AP-GeM's, and similarly for the local features (e.g. the number of keypoints).
What information should I put in the keypoint array obtained from SIFT?
According to the example here, a keypoint array contains [x y scale orientation], but I can't find the scale parameter in the keypoints obtained from SIFT in OpenCV.

I'm not sure if this additional explanation helps you understand my question.
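
Not an authoritative answer, but a format sketch that may help (assumptions are flagged in the comments; check the kapture format specification for the exact directory layout and the fields of keypoints.txt / descriptors.txt / global_features.txt): kapture stores each image's features as a raw binary dump of a numpy array whose dtype and dsize must match what the accompanying text file declares, and, as far as I understand the spec, only the first two keypoint columns (x, y in pixels) are mandatory, so extra columns such as scale and orientation are optional.

import cv2
import numpy as np

img = cv2.imread('example.jpg', cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kps, descs = sift.detectAndCompute(img, None)

# keypoints: first two columns are x, y; OpenCV's KeyPoint.size / KeyPoint.angle
# are used here as stand-ins for scale / orientation (an assumption, not a requirement)
kpt_array = np.array([[kp.pt[0], kp.pt[1], kp.size, kp.angle] for kp in kps],
                     dtype=np.float32)
kpt_array.tofile('example.jpg.kpt')                   # declare dtype=float32, dsize=4
descs.astype(np.float32).tofile('example.jpg.desc')   # declare dtype=float32, dsize=128

# global features (e.g. NetVLAD): one L2-normalized vector per image,
# dsize = its length, again declared in global_features.txt
netvlad_vector = np.random.rand(4096).astype(np.float32)   # placeholder vector
netvlad_vector /= np.linalg.norm(netvlad_vector)
netvlad_vector.tofile('example.jpg.gfeat')

# note: the placement of these files under reconstruction/keypoints/, descriptors/
# and global_features/ (mirroring the image paths) is omitted in this sketch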

Parameters for global descriptor extraction

Hi there,

I am writing regarding your 3DV paper "Benchmarking Image Retrieval for Visual Localization". Really cool paper with many insights! Thanks a lot for open-sourcing the code and for the extensive documentation.

I am looking for the pre- and post-processing parameters used to extract the global descriptors such as NetVLAD, AP-GeM, and DELG. I am particularly interested in the size of the input images, the exact scales (if multiscale extraction), whether whitening was applied, etc. I could not find such details in the paper and this repository does not seem to mention the arguments of the extraction scripts (e.g. extract_kapture.py for AP-GeM). These would be very useful to reproduce the benchmark results and compare them with other datasets.

Thanks a lot!

The pretrained feature extractor doesn't use multi GPU

Hi, I have an issue with GPU usage during the tutorial.
I am trying to benchmark the NAVERLABS dataset.
I set the GPU parameter of dirtorch.extract_kapture to 0 1 2 3, but the extractor still uses only one GPU (cuda:0).
I didn't change anything except the data paths.
I tried to modify the multi-GPU code (in dirtorch), but it did not resolve the issue.
How can I use multiple GPUs during the tutorial?
Extracting features from the dataset is too slow otherwise.
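
A workaround sketch (not a feature of the extractors themselves): run one extraction process per GPU on a disjoint slice of the images by pinning each process to a device with CUDA_VISIBLE_DEVICES. The command and the per-GPU image lists below are placeholders for whatever extraction invocation and data split you normally use.

import os
import subprocess

image_lists = ['images_gpu0.txt', 'images_gpu1.txt', 'images_gpu2.txt', 'images_gpu3.txt']
procs = []
for gpu_id, image_list in enumerate(image_lists):
    # each child process only sees the GPU assigned to it
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    cmd = ['python', 'extract_features.py', '--image-list', image_list]  # placeholder command
    procs.append(subprocess.Popen(cmd, env=env))
for p in procs:
    p.wait()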

Visualization of feature correspondence between query image to mapping image

Hi,

I am interested in seeing the keypoint correspondences between a query image and the associated mapping image from the top image-retrieval entry after performing a localization.

Is there a way I could visualize this using the COLMAP GUI?

Thank you for your guidance and help in advance!
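
Outside of the COLMAP GUI, a quick matplotlib sketch can also do the job. It assumes you have already loaded the query and map images as RGB uint8 arrays plus, for that pair, the two keypoint arrays (x, y in the first two columns) and a match array whose first two columns are keypoint indices (that is how I read kapture's match files, so treat it as an assumption):

import matplotlib.pyplot as plt
import numpy as np

def draw_matches(img_query, img_map, kpts_query, kpts_map, matches, max_lines=100):
    # place the two images side by side on one canvas
    h = max(img_query.shape[0], img_map.shape[0])
    canvas = np.zeros((h, img_query.shape[1] + img_map.shape[1], 3), dtype=np.uint8)
    canvas[:img_query.shape[0], :img_query.shape[1]] = img_query
    canvas[:img_map.shape[0], img_query.shape[1]:] = img_map
    offset = img_query.shape[1]

    plt.figure(figsize=(12, 6))
    plt.imshow(canvas)
    # draw a line per match, from the query keypoint to the shifted map keypoint
    for i, j in matches[:max_lines, :2].astype(int):
        x1, y1 = kpts_query[i, :2]
        x2, y2 = kpts_map[j, :2]
        plt.plot([x1, x2 + offset], [y1, y2], linewidth=0.5)
    plt.axis('off')
    plt.show()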

Permission denied when downloading "GangnamStation_B2" dataset

I ran the installation command and got the output below:

kapture_download_dataset.py install "GangnamStation_B2*"
INFO ::downloader: 11 dataset will be installed.
INFO ::downloader: GangnamStation_B2_release_mapping: starting installation ...
INFO ::downloader: /home/wanshanshan16/kapture_datasets already exists: skipped
INFO ::downloader: GangnamStation_B2_release_mapping install: successful
INFO ::downloader: GangnamStation_B2_release_mapping_lidar_only: starting installation ...
INFO ::downloader: /home/wanshanshan16/kapture_datasets already exists: skipped
INFO ::downloader: GangnamStation_B2_release_mapping_lidar_only install: successful
INFO ::downloader: GangnamStation_B2_release_test: starting installation ...
INFO ::downloader: GangnamStation_B2_release_test.tar.gz is already downloaded.
CRITICAL::downloader: [Errno 13] Permission denied: '/home/wanshanshan16/kapture_datasets/GangnamStation/LICENSE.txt'
Traceback (most recent call last):
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/bin/kapture_download_dataset.py", line 643, in <module>
sys.exit(kapture_download_dataset_cli())
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/bin/kapture_download_dataset.py", line 638, in kapture_download_dataset_cli
raise e
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/bin/kapture_download_dataset.py", line 633, in kapture_download_dataset_cli
return kapture_download_dataset(args, index_filepath)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/bin/kapture_download_dataset.py", line 540, in kapture_download_dataset
status = dataset.install(force_overwrite=args.force, no_cleaning=args.no_cleaning)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/bin/kapture_download_dataset.py", line 354, in install
untar_file(self._archive_filepath, self._install_dir.install_root_path)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/site-packages/kapture/converter/downloader/archives.py", line 23, in untar_file
archive.extractall(install_dirpath)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/tarfile.py", line 2002, in extractall
numeric_owner=numeric_owner)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/tarfile.py", line 2044, in extract
numeric_owner=numeric_owner)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/tarfile.py", line 2114, in _extract_member
self.makefile(tarinfo, targetpath)
File "/home/wanshanshan16/miniconda3/envs/py3.7_torch1.7.1/lib/python3.7/tarfile.py", line 2155, in makefile
with bltn_open(targetpath, "wb") as target:
PermissionError: [Errno 13] Permission denied: '/home/wanshanshan16/kapture_datasets/GangnamStation/LICENSE.txt'

After running the command, I got the directory structure shown below. Is there anything wrong with my installation? How can I fix it?
(screenshots of the resulting directory structure)

DELG models used in benchmark paper

Hello,

I just saw your paper "Benchmarking Image Retrieval for Visual Localization". Thanks for the nice study, the results are very interesting. I have a couple of questions regarding the DELG models which were used.

  1. In the paper, I saw that you mention using the RN101 DELG model that was trained on GLDv1 (as per footnote 10). However, Table 1 reports the retrieval mAP numbers for DELG's RN50 backbone; e.g., RO(m) for the RN101 version is 73.2 instead of 69.7. I am wondering if this was a typo, or if you were accidentally using the RN50 version.
  2. Have you attempted using the DELG model variants which were trained on GLDv2? They are available in our repository. I am just curious if training on this dataset could improve performance for your application.

Thanks, and again very nice work!

Andre

localize pipeline doesn't make trajectories.txt for localized result

Environment:

python 3.8.10
kapture==1.1.5
kapture-localization==0.1.4

Problem:

kapture_pipeline_localize.py doesn't create trajectories.txt in the kapture_localized_recover directory when I change the query's records_data structure.

I followed the virtual_gallery_tutorial and tried to modify the query kapture's records_data folder structure a little.
This is the original structure:
(screenshot)
And this is virtual_gallery_tutorial_2, which I modified:
(screenshot)
I also modified records_camera.txt for virtual_gallery_tutorial_2 accordingly:
(screenshot)

As shown in the screenshots, when I run kapture_pipeline_localize.py, virtual_gallery_tutorial_2 does not produce trajectories.txt for the query data.

Here is the terminal output for virtual_gallery_tutorial, the original version.

$ kapture_pipeline_localize.py -v info \
      -i ./mapping \
      --query ./query \
      -kpt ./local_features/r2d2_500/keypoints \
      -desc ./local_features/r2d2_500/descriptors \
      -gfeat ./global_features/AP-GeM-LM18/global_features \
      -matches ./local_features/r2d2_500/NN_no_gv/matches \
      -matches-gv ./local_features/r2d2_500/NN_colmap_gv/matches \
      --colmap-map ./colmap-sfm/r2d2_500/AP-GeM-LM18_top5 \
      -o ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/ \
      --topk 5 \
      --config 2 \
> --skip evaluate export_LTVL2020
INFO    ::compute_image_pairs: compute_image_pairs. loading mapping: ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_mapping
INFO    ::compute_image_pairs: computing pairs with AP-GeM-LM18...
INFO    ::compute_image_pairs: compute_image_pairs. loading query: ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_query
INFO    ::compute_image_pairs: saving to file  ...
INFO    ::compute_image_pairs: all done
INFO    ::merge: Loading ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_mapping
INFO    ::merge: Loading ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_query
INFO    ::merge: Writing merged kapture data...
INFO    ::compute_matches: compute_matches. loading input: ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_map_plus_query
INFO    ::compute_matches: compute_matches. entering main loop...
100%|███████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 142.20it/s]
INFO    ::compute_matches: all done
INFO    ::run_colmap_gv: run_colmap_gv...
INFO    ::run_colmap_gv: remove rigs notation.
INFO    ::colmap: registering 6 sensors (cameras) in database...
INFO    ::colmap: registering 16 images in database...
INFO    ::colmap: registering 16 keypoints in database...
INFO    ::colmap: registering 20 matches in database...
INFO    ::colmap: ['colmap', 'matches_importer', '--database_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_map_plus_query_gv/colmap.db', '--match_list_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_map_plus_query_gv/match_list.txt', '--match_type', 'pairs', '--SiftMatching.use_gpu', '0']

==============================================================================
Custom feature matching
==============================================================================

Matching block [1/1] in 0.050s
Elapsed time: 0.001 [minutes]
16it [00:00, 14134.13it/s]
INFO    ::colmap: keeps 100.0% of verified matches (20/20) ...
100%|██████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 2282.12it/s]
INFO    ::colmap_localize: loading kapture files...
WARNING ::colmap_localize: Input data contains trajectories: they will be ignored
INFO    ::colmap_localize: remove rigs notation.
12it [00:00, 41665.27it/s]
INFO    ::colmap_localize: Step 1: Add precomputed keypoints and matches to colmap db
INFO    ::colmap_localize: Step 2: Run geometric verification - skipped
INFO    ::colmap_localize: Step 3: Run image_registrator
INFO    ::colmap: ['colmap', 'image_registrator', '--database_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/colmap_localized/colmap.db', '--input_path', './colmap-sfm/r2d2_500/AP-GeM-LM18_top5/reconstruction', '--output_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/colmap_localized/reconstruction', '--Mapper.ba_refine_focal_length', '0', '--Mapper.ba_refine_principal_point', '0', '--Mapper.ba_refine_extra_params', '0', '--Mapper.min_num_matches', '4', '--Mapper.init_min_num_inliers', '4', '--Mapper.abs_pose_min_num_inliers', '4', '--Mapper.abs_pose_min_inlier_ratio', '0.05', '--Mapper.ba_local_max_num_iterations', '50', '--Mapper.abs_pose_max_error', '20', '--Mapper.filter_max_reproj_error', '12']

==============================================================================
Loading database
==============================================================================

Loading cameras... 6 in 0.000s
Loading matches... 50 in 0.000s
Loading images... 16 in 0.001s (connected 16)
Building correspondence graph... in 0.002s (ignored 0)

Elapsed time: 0.000 [minutes]


==============================================================================
Registering image #13 (13)
==============================================================================

  => Image sees 79 / 96 points

Pose refinement report
----------------------
    Residuals : 176
   Parameters : 6
   Iterations : 10
         Time : 0.00658488 [s]
 Initial cost : 0.627017 [px]
   Final cost : 0.617635 [px]
  Termination : Convergence


==============================================================================
Registering image #14 (14)
==============================================================================

  => Image sees 84 / 101 points

Pose refinement report
----------------------
    Residuals : 166
   Parameters : 6
   Iterations : 14
         Time : 0.00102401 [s]
 Initial cost : 0.707709 [px]
   Final cost : 0.664657 [px]
  Termination : Convergence


==============================================================================
Registering image #15 (15)
==============================================================================

  => Image sees 274 / 366 points

Pose refinement report
----------------------
    Residuals : 616
   Parameters : 6
   Iterations : 14
         Time : 0.00341797 [s]
 Initial cost : 0.755861 [px]
   Final cost : 0.653651 [px]
  Termination : Convergence


==============================================================================
Registering image #16 (16)
==============================================================================

  => Image sees 304 / 353 points

Pose refinement report
----------------------
    Residuals : 668
   Parameters : 6
   Iterations : 7
         Time : 0.00177789 [s]
 Initial cost : 0.514305 [px]
   Final cost : 0.512528 [px]
  Termination : Convergence

INFO    ::colmap_localize: Step 4: Export reconstruction results to txt
INFO    ::colmap: ['colmap', 'model_converter', '--input_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/colmap_localized/reconstruction', '--output_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/colmap_localized/reconstruction', '--output_type', 'TXT']
INFO    ::colmap: importing colmap ...
INFO    ::colmap: parsing cameras  ...
16it [00:00, 15505.74it/s]
INFO    ::colmap: saving to kapture  ...
INFO    ::recover_timestamps_and_ids: recover_timestamps_and_ids...
INFO    ::recover_timestamps_and_ids: loading data ...
INFO    ::recover_timestamps_and_ids: recover records and trajectories
INFO    ::recover_timestamps_and_ids: recover sensor ids in sensors
INFO    ::recover_timestamps_and_ids: recover rig ids in rigs
INFO    ::recover_timestamps_and_ids: saving results
INFO    ::recover_timestamps_and_ids: handle image files with a call to transfer_actions
INFO    ::recover_timestamps_and_ids: import image files ...
INFO    ::recover_timestamps_and_ids: done.

And this is the terminal output for virtual_gallery_tutorial_2, the modified version.

$ kapture_pipeline_localize.py -v info \
      -i ./mapping \
      --query ./query \
      -kpt ./local_features/r2d2_500/keypoints \
      -desc ./local_features/r2d2_500/descriptors \
      -gfeat ./global_features/AP-GeM-LM18/global_features \
      -matches ./local_features/r2d2_500/NN_no_gv/matches \
      -matches-gv ./local_features/r2d2_500/NN_colmap_gv/matches \
      --colmap-map ./colmap-sfm/r2d2_500/AP-GeM-LM18_top5 \
      -o ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/ \
      --topk 5 \
      --config 2 \
> --skip evaluate export_LTVL2020
INFO    ::compute_image_pairs: compute_image_pairs. loading mapping: ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_mapping
INFO    ::compute_image_pairs: computing pairs with AP-GeM-LM18...
INFO    ::compute_image_pairs: compute_image_pairs. loading query: ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_query
INFO    ::compute_image_pairs: saving to file  ...
INFO    ::compute_image_pairs: all done
INFO    ::merge: Loading ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_mapping
INFO    ::merge: Loading ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_query
INFO    ::merge: Writing merged kapture data...
INFO    ::compute_matches: compute_matches. loading input: ./colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_map_plus_query
INFO    ::compute_matches: compute_matches. entering main loop...
0it [00:00, ?it/s]
INFO    ::compute_matches: all done
INFO    ::run_colmap_gv: run_colmap_gv...
INFO    ::run_colmap_gv: remove rigs notation.
INFO    ::colmap: registering 6 sensors (cameras) in database...
INFO    ::colmap: registering 16 images in database...
INFO    ::colmap: registering 12 keypoints in database...
INFO    ::colmap: registering 0 matches in database...
INFO    ::colmap: ['colmap', 'matches_importer', '--database_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_map_plus_query_gv/colmap.db', '--match_list_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/kapture_inputs/proxy_map_plus_query_gv/match_list.txt', '--match_type', 'pairs', '--SiftMatching.use_gpu', '0']

==============================================================================
Custom feature matching
==============================================================================

Elapsed time: 0.000 [minutes]
16it [00:00, 12977.93it/s]
0it [00:00, ?it/s]
INFO    ::colmap_localize: loading kapture files...
WARNING ::colmap_localize: Input data contains trajectories: they will be ignored
INFO    ::colmap_localize: remove rigs notation.
12it [00:00, 100865.03it/s]
INFO    ::colmap_localize: Step 1: Add precomputed keypoints and matches to colmap db
INFO    ::colmap_localize: Step 2: Run geometric verification - skipped
INFO    ::colmap_localize: Step 3: Run image_registrator
INFO    ::colmap: ['colmap', 'image_registrator', '--database_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/colmap_localized/colmap.db', '--input_path', './colmap-sfm/r2d2_500/AP-GeM-LM18_top5/reconstruction', '--output_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/colmap_localized/reconstruction', '--Mapper.ba_refine_focal_length', '0', '--Mapper.ba_refine_principal_point', '0', '--Mapper.ba_refine_extra_params', '0', '--Mapper.min_num_matches', '4', '--Mapper.init_min_num_inliers', '4', '--Mapper.abs_pose_min_num_inliers', '4', '--Mapper.abs_pose_min_inlier_ratio', '0.05', '--Mapper.ba_local_max_num_iterations', '50', '--Mapper.abs_pose_max_error', '20', '--Mapper.filter_max_reproj_error', '12']

==============================================================================
Loading database
==============================================================================

Loading cameras... 6 in 0.000s
Loading matches... 30 in 0.000s
Loading images... 16 in 0.000s (connected 12)
Building correspondence graph... in 0.001s (ignored 0)

Elapsed time: 0.000 [minutes]

INFO    ::colmap_localize: Step 4: Export reconstruction results to txt
INFO    ::colmap: ['colmap', 'model_converter', '--input_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/colmap_localized/reconstruction', '--output_path', './colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/colmap_localized/reconstruction', '--output_type', 'TXT']
INFO    ::colmap: importing colmap ...
INFO    ::colmap: parsing cameras  ...
16it [00:00, 12230.52it/s]
INFO    ::colmap: saving to kapture  ...
INFO    ::recover_timestamps_and_ids: recover_timestamps_and_ids...
INFO    ::recover_timestamps_and_ids: loading data ...
INFO    ::recover_timestamps_and_ids: recover records and trajectories
INFO    ::recover_timestamps_and_ids: recover sensor ids in sensors
INFO    ::recover_timestamps_and_ids: recover rig ids in rigs
INFO    ::recover_timestamps_and_ids: saving results
INFO    ::recover_timestamps_and_ids: handle image files with a call to transfer_actions
INFO    ::recover_timestamps_and_ids: import image files ...
INFO    ::recover_timestamps_and_ids: done.

Why does this difference happen? Do the query data's folder structure and file names matter for the localization pipeline?
I think both query datasets are in proper kapture format for localization, and there are no error messages during the process.
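
One thing worth checking (a hedged consistency check, not official tooling): as far as I understand, the pipeline looks up keypoints, descriptors, and global features by each image's relative path inside records_data, so after renaming or restructuring the query images, every image listed in records_camera.txt still needs a matching feature file. If the feature files keep the old paths, the pairing and matching steps silently produce empty results (the "0it" and "registering 0 matches" lines above). A quick check could look like this; the directory roots and the .kpt/.desc/.gfeat suffixes are assumptions to adapt to your layout:

import os
import csv

# Hedged sketch: verify that every image referenced in records_camera.txt has
# corresponding feature files. Paths and suffixes below are assumptions.
QUERY_RECORDS = 'query/sensors/records_camera.txt'
FEATURE_DIRS = {
    'keypoints': ('local_features/r2d2_500/keypoints', '.kpt'),
    'descriptors': ('local_features/r2d2_500/descriptors', '.desc'),
    'global_features': ('global_features/AP-GeM-LM18/global_features', '.gfeat'),
}

with open(QUERY_RECORDS, 'r') as f:
    rows = [r for r in csv.reader(f) if r and not r[0].startswith('#')]
image_names = [r[2].strip() for r in rows]  # rows: timestamp, device_id, image_path

for name in image_names:
    for kind, (root, suffix) in FEATURE_DIRS.items():
        path = os.path.join(root, name + suffix)
        if not os.path.isfile(path):
            print(f'missing {kind} for {name}: {path}')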

errors in running

When I use the scripts on their own, for example kapture_compute_image_pairs.py, I always get errors. Do you have any idea what is going wrong? By the way, as a newcomer to visual localization, do you have any recommended books on coding?

/home/ljy/kapture/kapture-localization/tools/kapture_compute_image_pairs.py -v info --mapping /home/ljy/kapture/kapture-localization/samples/virtual_gallery_tutorial/mapping --query /home/ljy/kapture/kapture-localization/samples/virtual_gallery_tutorial/query -o /home/ljy/kapture/kapture-localization/samples/virtual_gallery_tutorial/colmap-localization/r2d2_500/AP-GeM-LM18_top5/AP-GeM-LM18_top5/ --topk 5
INFO ::compute_image_pairs: compute_image_pairs. loading mapping: /home/ljy/kapture/kapture-localization/samples/virtual_gallery_tutorial/mapping
Traceback (most recent call last):
File "/home/ljy/kapture/kapture-localization/tools/kapture_compute_image_pairs.py", line 158, in
compute_image_pairs_command_line()
File "/home/ljy/kapture/kapture-localization/tools/kapture_compute_image_pairs.py", line 154, in compute_image_pairs_command_line
compute_image_pairs(args.mapping, args.query, args.output, args.global_features_type, args.topk)
File "/home/ljy/kapture/kapture-localization/tools/kapture_compute_image_pairs.py", line 53, in compute_image_pairs
assert kdata_mapping.global_features is not None
AssertionError

Process finished with exit code 1
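
The assertion means the kapture folder passed as --mapping contains no global features. A hedged way to inspect what a kapture folder actually provides before running the tool (assuming kapture_from_dir is importable as shown, which should be the case in recent kapture versions):

from kapture.io.csv import kapture_from_dir  # assumed location of the standard kapture loader

# Hedged check: print which reconstruction data a kapture folder provides.
kdata = kapture_from_dir('/home/ljy/kapture/kapture-localization/samples/virtual_gallery_tutorial/mapping')
print('keypoints       :', kdata.keypoints)
print('descriptors     :', kdata.descriptors)
print('global_features :', kdata.global_features)  # None here triggers the AssertionError above

In the tutorial layout, the global features live in a separate directory and are only linked into proxy kaptures by the pipeline scripts, which would explain why running kapture_compute_image_pairs.py directly on samples/virtual_gallery_tutorial/mapping does not find them.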

ValueError: operation parameter must be str

Hi,
I ran into a problem when running kapture_pipeline_localize.py; do you know how to solve it? Thanks.
The error message is:
Traceback (most recent call last):
File "/home/xx/miniconda3/envs/kapture_env/bin/kapture_run_colmap_gv.py", line 4, in
import('pkg_resources').run_script('kapture-localization==0.0.3', 'kapture_run_colmap_gv.py')
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/pkg_resources/init.py", line 651, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/pkg_resources/init.py", line 1448, in run_script
exec(code, namespace, namespace)
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture_localization-0.0.3-py3.7.egg/EGG-INFO/scripts/kapture_run_colmap_gv.py", line 156, in
run_colmap_gv_command_line()
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture_localization-0.0.3-py3.7.egg/EGG-INFO/scripts/kapture_run_colmap_gv.py", line 152, in run_colmap_gv_command_line
run_colmap_gv(args.input, args.output, args.colmap_binary, args.pairsfile_path, args.skip, args.force)
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture_localization-0.0.3-py3.7.egg/EGG-INFO/scripts/kapture_run_colmap_gv.py", line 41, in run_colmap_gv
force)
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture_localization-0.0.3-py3.7.egg/EGG-INFO/scripts/kapture_run_colmap_gv.py", line 83, in run_colmap_gv_from_loaded_data
export_two_view_geometry=False)
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture/converter/colmap/database_extra.py", line 615, in kapture_to_colmap
colmap_camera_ids = add_cameras_to_database(kapture_data.sensors, database)
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture/converter/colmap/database_extra.py", line 346, in add_cameras_to_database
prior_focal_length=prior_focal_length)
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture/converter/colmap/database.py", line 166, in add_camera
prior_focal_length))
ValueError: operation parameter must be str

Traceback (most recent call last):
File "/home/xxx/miniconda3/envs/kapture_env/bin/kapture_pipeline_localize.py", line 4, in
import('pkg_resources').run_script('kapture-localization==0.0.3', 'kapture_pipeline_localize.py')
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/pkg_resources/init.py", line 651, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/pkg_resources/init.py", line 1448, in run_script
exec(code, namespace, namespace)
File "/home/xxx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture_localization-0.0.3-py3.7.egg/EGG-INFO/scripts/kapture_pipeline_localize.py", line 363, in
localize_pipeline_command_line()
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture_localization-0.0.3-py3.7.egg/EGG-INFO/scripts/kapture_pipeline_localize.py", line 353, in localize_pipeline_command_line
args.force)
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture_localization-0.0.3-py3.7.egg/EGG-INFO/scripts/kapture_pipeline_localize.py", line 184, in localize_pipeline
run_python_command(local_run_colmap_gv_path, run_colmap_gv_args, python_binary)
File "/home/xx/miniconda3/envs/kapture_env/lib/python3.7/site-packages/kapture_localization-0.0.3-py3.7.egg/kapture_localization/utils/subprocess.py", line 67, in run_python_command
raise ValueError('\nSubprocess Error (Return code:' f' {python_process.returncode} )')
ValueError:
Subprocess Error (Return code: 1 )

Any link to the paper

It seems quite interesting from the abstract; is there anywhere to access the full text?

Get higher Local SFM results than before

The image retrieval benchmark results for local SFM differ from those I got last year on another device.
The local SFM results for AP-GeM-LM18_top20 on Gangnam Station B2 are now:

Model: local_sfm

Found 856 / 916 image positions (93.45 %).
Found 856 / 916 image rotations (93.45 %).
Localized images: mean=(40.7525m, 42.4062 deg) / median=(0.1973m, 1.8176 deg)
All: median=(0.7966m, 2.1938 deg)
Min: 0.0068m; 0.0789 deg
Max: 3487.0642m; 179.9956 deg

(0.25m, 2.0 deg): 43.01%
(0.5m, 5.0 deg): 47.49%
(5.0m, 10.0 deg): 50.55%

whereas they used to be:

Model: local_sfm

Found 378 / 916 image positions (41.27 %).
Found 378 / 916 image rotations (41.27 %).
Localized images: mean=(23.2583m, 33.8776 deg) / median=(0.1011m, 1.5585 deg)
All: median=(infm, inf deg)
Min: 0.0063m; 0.1690 deg
Max: 245.3013m; 179.9680 deg

(0.25m, 2.0 deg): 22.38%
(0.5m, 5.0 deg): 25.55%
(5.0m, 10.0 deg): 27.18%

The higher results were obtained with kapture_colmap_localize_localsfm.py.

Pairsfile for Inloc dataset

Hi, authors. I would like to use the image retrieval benchmark with the InLoc dataset but get no pairsfile when creating the global map. Can you provide it?
Thanks a lot!

Question regarding pipeline_localization

I have a problem when trying to run the "pipeline_localization" script from C++ code; I call the script via the system command. The first time a script is called after launching the program, everything runs fine. On the second, third, etc. calls, COLMAP's image_registrator starts and then simply skips the pose refinement completely. Because of that, the trajectories.txt file is never created and the script fails with an error. It stays like that until the program is terminated and launched again.
The correct output after the first script call:

==============================================================================
[image_localization-1] Registering image #538 (538)
[image_localization-1] ==============================================================================
[image_localization-1]
[image_localization-1] => Image sees 4 / 1865 points
[image_localization-1]
[image_localization-1] Pose refinement report
[image_localization-1] ----------------------
[image_localization-1] Residuals : 8
[image_localization-1] Parameters : 6
[image_localization-1] Iterations : 6
[image_localization-1] Time : 0.000523492 [s]
[image_localization-1] Initial cost : 0.197273 [px]
[image_localization-1] Final cost : 0.156344 [px]
[image_localization-1] Termination : Convergence
[image_localization-1]
[image_localization-1] INFO ::colmap_localize: Step 4: Export reconstruction results to txt

The output after subsequent calls:

==============================================================================
[image_localization-1] Registering image #538 (538)
[image_localization-1] ==============================================================================
[image_localization-1]
[image_localization-1] => Image sees 4 / 1865 points
[image_localization-1] INFO ::colmap_localize: Step 4: Export reconstruction results to txt

accuracy problem at the late fusion step

Question

While reproducing the late fusion step on the Aachen Day-Night v1.1 dataset (with the 4 global features that kapture provides: AP-GeM-LM18, DELG, densevlad_multi, netvlad_vd16pitts), I found that the accuracy at night is much lower than the results in the paper.

With gharm, top 20, config 2, I got:

            day                    night
mine        90.7 / 97.1 / 99.5     68.6 / 83.8 / 95.8
your paper  90.5 / 96.8 / 99.4     74.9 / 90.1 / 98.4
diff        +0.2 / +0.3 / +0.1     -6.3 / -6.3 / -2.6

I wonder whether it's an implementation problem on my side or whether there is some other trick.

What did I do:

Step 1:
Use the full dataset to construct a single map following https://github.com/naver/kapture-localization/blob/main/pipeline/examples/run_aachen-v11.sh

Step 2:
Do late fusion using https://github.com/naver/kapture-localization/blob/823f85430c4739b398b5a1cf11ef7d942b0e917d/tools/kapture_image_retrieval_late_fusion.py
Since there is no example fusion script, I wrote one based on my understanding:

# 0a) Define paths and params
PYTHONBIN=python3.8
WORKING_DIR=${PWD}
DATASETS_PATH=${WORKING_DIR}/datasets
DATASET=Aachen-Day-Night-v1.1
mkdir -p ${DATASETS_PATH}

TOPK=20 # number of retrieved images for mapping and localization
KPTS=20000 # number of local features to extract

#-gfeat ${WORKING_DIR}/${DATASET}/global_features/AP-GeM-LM18/global_features ${WORKING_DIR}/${DATASET}/global_features/DELG/global_features ${WORKING_DIR}/${DATASET}/global_features/densevlad_multi/global_features ${WORKING_DIR}/${DATASET}/global_features/netvlad_vd16pitts/global_features \

#-gfeat ${WORKING_DIR}/${DATASET}/global_features/AP-GeM-LM18 ${WORKING_DIR}/${DATASET}/global_features/DELG ${WORKING_DIR}/${DATASET}/global_features/densevlad_multi ${WORKING_DIR}/${DATASET}/global_features/netvlad_vd16pitts \

# 1) kapture_image_retrieval_late_fusion
kapture_image_retrieval_late_fusion.py -v debug \
      -i ${WORKING_DIR}/${DATASET}/map_plus_query/ \
      --query ${WORKING_DIR}/${DATASET}/query/ \
      -o ${WORKING_DIR}/${DATASET}/pairs_fusion.txt \
      --topk ${TOPK} \
      'generalized_harmonic_mean' --weights 1 1 1 1

Step 3:
Using the pairs_fusion.txt obtained in step 2, I ran:
# 0a) Define paths and params
PYTHONBIN=python3.8
WORKING_DIR=${PWD}
DATASETS_PATH=${WORKING_DIR}/datasets
DATASET=Aachen-Day-Night-v1.1
mkdir -p ${DATASETS_PATH}

TOPK=20 # number of retrieved images for mapping and localization
KPTS=20000 # number of local features to extract

# 7) localization pipeline
LOCAL=r2d2_WASF_N8_big
GLOBAL=Fusion
kapture_pipeline_localize.py -v debug -f \
      -s compute_image_pairs compute_matches geometric_verification \
      -i ${WORKING_DIR}/${DATASET}/mapping \
      --query ${WORKING_DIR}/${DATASET}/query \
      -kpt ${WORKING_DIR}/${DATASET}/local_features/${LOCAL}/keypoints \
      -desc ${WORKING_DIR}/${DATASET}/local_features/${LOCAL}/descriptors \
      --pairsfile-path ${WORKING_DIR}/${DATASET}/pairs_fusion.txt \
      -matches ${WORKING_DIR}/${DATASET}/local_features/${LOCAL}/NN_no_gv/matches \
      -matches-gv ${WORKING_DIR}/${DATASET}/local_features/${LOCAL}/NN_colmap_gv/matches \
      --colmap-map ${WORKING_DIR}/${DATASET}/colmap-sfm/${LOCAL}/Resnet101-AP-GeM-LM18 \
      -o ${WORKING_DIR}/${DATASET}/colmap-localize/${LOCAL}/${GLOBAL} \
      --topk ${TOPK} \
      --config 2

Additional questions:

1. How do I correctly use kapture_image_retrieval_late_fusion.py in step 2? I am especially confused by the -gfeat parameter; as shown above (commented out), what I passed to -gfeat did not work properly, so I used the default.

Thanks in advance
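
For reference, here is a minimal sketch of what weighted generalized-harmonic-mean fusion of per-method similarity scores could look like. It only illustrates the math behind the 'generalized_harmonic_mean' --weights 1 1 1 1 option and assumes non-negative similarities per mapping image; the actual kapture_image_retrieval_late_fusion.py may normalize or order things differently:

import numpy as np

def generalized_harmonic_mean(scores, weights, eps=1e-8):
    """Fuse per-method similarity scores (methods x images) into one score per image.

    Sketch only: H = sum(w_i) / sum(w_i / (s_i + eps)), computed column-wise.
    """
    scores = np.asarray(scores, dtype=np.float64)            # shape (n_methods, n_images)
    weights = np.asarray(weights, dtype=np.float64)[:, None]
    return weights.sum() / (weights / (scores + eps)).sum(axis=0)

# Toy example: 4 retrieval methods, 3 mapping images, equal weights (as in --weights 1 1 1 1)
sims = [[0.9, 0.2, 0.5],
        [0.8, 0.3, 0.4],
        [0.7, 0.1, 0.6],
        [0.9, 0.2, 0.5]]
fused = generalized_harmonic_mean(sims, weights=[1, 1, 1, 1])
top_images = np.argsort(-fused)          # indices sorted by fused similarity, best first
print(fused, top_images)

Before fusing, it may also be worth confirming that the per-method scores are on comparable scales, since a harmonic mean is dominated by the smallest score.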

What's the role of depth information in rgbd pipeline?

I am learning the RGB-D pipeline from 'kapture-localization/pipeline/examples/run_7scenes_rgbd.sh', create_3D_model_from_depth.py, kapture_pipeline_mapping.py, and kapture_colmap_build_map.py.

I notice that create_3D_model_from_depth.py reads the depth images and stores each 2D point's depth. However, kapture_colmap_build_map.py then runs triangulation with the following code, and I think 'triangulation' not in skip_list evaluates to True in the RGB-D pipeline.

I'm confused: if I can get depth information from the depth images, why do I need to triangulate? What is the role of the depth information in the RGB-D pipeline?

Thank you!

    if kapture_data.trajectories is not None:
        # Generate priors for reconstruction
        os.makedirs(priors_txt_path, exist_ok=True)
        if 'priors_for_reconstruction' not in skip_list:
            logger.info('Step 3: Exporting priors for reconstruction.')
            colmap_db = COLMAPDatabase.connect(colmap_db_path)
            database_extra.generate_priors_for_reconstruction(kapture_data, colmap_db, priors_txt_path)
            colmap_db.close()

        # Point triangulator
        reconstruction_path = path.join(colmap_path, "reconstruction")
        os.makedirs(reconstruction_path, exist_ok=True)
        if 'triangulation' not in skip_list:
            logger.info("Step 4: Triangulation")
            colmap_lib.run_point_triangulator(
                colmap_binary,
                colmap_db_path,
                get_image_fullpath(kapture_path),
                priors_txt_path,
                reconstruction_path,
                point_triangulator_options
            )

Bug: FileNotFoundError: [WinError 206] The filename or extension is too long

When running the sample code in https://github.com/naver/kapture-localization/blob/main/doc/tutorial.adoc#1-mapping, i.e., running kapture_pipeline_mapping.py on the samples/virtual_gallery_tutorial, I get the following error:

FileNotFoundError: [WinError 206] The filename or extension is too long: 'colmap-sfm/r2d2_500/AP-GeM-LM18_top5/kapture_inputs/proxy_mapping/reconstruction/matches/r2d2_500/training/gallery_light1_loop1/frames/rgb/camera_0/rgb_00223.jpg.overlapping/training'

This is triggered inside

kapture_compute_matches.py", line 133, in compute_matches_from_loaded_data
image_matches_to_file(matches_path, matches)

(full traceback below).

It appears that the concatenation of directories for the different variants in kapture causes the default 260-character path-length limit on Windows to be exceeded.

I can work around this by using RegEdit to set the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem variable LongPathsEnabled to 1.

Is this the preferred solution? If so, it might be helpful to mention this in your documentation. (You already mention that the scripts need to be run in Administrator mode to correctly follow symlinks on Windows.)

I noticed that there's a symlink in this long path (the second r2d2_500 I believe), so if the code resolved these, the resulting pathname would be shorter, but I'm not sure this is doable or worth the bother.

Thanks.

Full stack trace:

Traceback (most recent call last):
File "C:\Temp\kapture-localization-main\tools\kapture_compute_matches.py", line 187, in
compute_matches_command_line()
File "C:\Temp\kapture-localization-main\tools\kapture_compute_matches.py", line 180, in compute_matches_command_line
compute_matches(args.input,
File "C:\Temp\kapture-localization-main\tools\kapture_compute_matches.py", line 72, in compute_matches
compute_matches_from_loaded_data(input_path,
File "C:\Temp\kapture-localization-main\tools\kapture_compute_matches.py", line 133, in compute_matches_from_loaded_data
image_matches_to_file(matches_path, matches)
File "C:\Users\szeli\AppData\Local\Programs\Python\Python39\lib\site-packages\kapture\io\features.py", line 464, in image_matches_to_file
array_to_file(filepath, image_matches)
File "C:\Users\szeli\AppData\Local\Programs\Python\Python39\lib\site-packages\kapture\io\binary.py", line 49, in array_to_file
os.makedirs(path.dirname(filepath), exist_ok=True)
File "C:\Users\szeli\AppData\Local\Programs\Python\Python39\lib\os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "C:\Users\szeli\AppData\Local\Programs\Python\Python39\lib\os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "C:\Users\szeli\AppData\Local\Programs\Python\Python39\lib\os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
[Previous line repeated 1 more time]
File "C:\Users\szeli\AppData\Local\Programs\Python\Python39\lib\os.py", line 225, in makedirs
mkdir(name, mode)
FileNotFoundError: [WinError 206] The filename or extension is too long: 'colmap-sfm/r2d2_500/AP-GeM-LM18_top5/kapture_inputs/proxy_mapping/reconstruction/matches/r2d2_500/training/gallery_light1_loop1/frames/rgb/camera_0/rgb_00223.jpg.overlapping/training'
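
Setting LongPathsEnabled is indeed the usual workaround on Windows. For reference, here is a small sketch that lists which generated paths exceed the legacy 260-character limit, which can help judge whether a shorter working directory would already be enough (the root below is just the example output folder from the traceback):

import os

# Hedged sketch: report paths that exceed the legacy Windows MAX_PATH limit.
MAX_PATH = 260
root = 'colmap-sfm/r2d2_500/AP-GeM-LM18_top5'  # example output directory

for dirpath, dirnames, filenames in os.walk(root):
    for name in dirnames + filenames:
        full = os.path.abspath(os.path.join(dirpath, name))
        if len(full) >= MAX_PATH:
            print(len(full), full)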

Time Profiling

Hello,
can anyone help me do time profiling for the kapture pipeline?

Thanks
Mukula
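
Since each pipeline script just runs the individual tool scripts as subprocesses, one simple option is to invoke the stages yourself and time each one. A hedged sketch follows; the commands and arguments are placeholders to replace with your actual tool invocations:

import subprocess
import time

# Hedged sketch: time each pipeline stage separately. The commands below are
# placeholders; substitute the actual tool invocations and arguments you use.
stages = {
    'pairs': ['kapture_compute_image_pairs.py', '--mapping', 'mapping', '--query', 'query',
              '-o', 'pairs.txt', '--topk', '5'],
    'matches': ['kapture_compute_matches.py', '-i', 'map_plus_query',
                '--pairsfile-path', 'pairs.txt'],
}

for name, cmd in stages.items():
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    print(f'{name}: {time.perf_counter() - start:.1f} s')

For per-function numbers inside a single tool, running it under python -m cProfile also works.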

Unsupported copy action link_absolute

Dear author:

While re-implementing the custom localization pipeline in a Jupyter notebook, the following call

merged_kapture = merge_remap(
        kapture_data_list, skip_list, kapture_path_list, merged_path, images_import_strategy)

there has a error:

Unsupported copy action link_absolute

(screenshot of the traceback)
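
The message suggests the image-transfer strategy passed to the merge is not supported in that environment (absolute symlinks typically require special privileges on some platforms). A hedged sketch to list the available strategies and pick a more portable one, assuming TransferAction lives in kapture.io.records as in recent kapture versions:

from kapture.io.records import TransferAction  # assumed location of the transfer-strategy enum

# Hedged check: list the image-transfer strategies this kapture version knows about.
print([action.name for action in TransferAction])

Passing TransferAction.copy (which duplicates the image files) or TransferAction.skip (which leaves them where they are) as images_import_strategy in the merge_remap call above is a possible workaround when link_absolute is not supported.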

Implementation of precision@k and recall@k evaluation

Hi, authors. I am impressed by your great work. Thanks a lot for open-sourcing the code and for the detailed toolbox documentation. I am now trying to evaluate some other image retrieval networks using your benchmark. I see the toolbox can evaluate pose-estimation accuracy with local or global SfM as a metric for the image retrieval method. In your paper, you also directly evaluate recall@k and precision@k of different image retrieval methods on all three datasets. Are these evaluation functions (recall@k, precision@k) included in this toolbox?
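
Whether or not such a function ships with the toolbox, precision@k and recall@k are straightforward to compute from a ranked pairs file plus a per-query set of relevant mapping images. A hedged sketch using the usual definitions (precision@k = hits in top-k / k, recall@k = hits in top-k / number of relevant images; the paper may define relevance differently, e.g. via co-visibility or pose distance):

def precision_recall_at_k(retrieved, relevant, k):
    """retrieved: ranked list of mapping image names for one query (best first);
    relevant: set of mapping image names considered relevant for that query."""
    top_k = retrieved[:k]
    hits = sum(1 for name in top_k if name in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Toy example
retrieved = ['db/img_12.jpg', 'db/img_07.jpg', 'db/img_33.jpg', 'db/img_02.jpg', 'db/img_19.jpg']
relevant = {'db/img_07.jpg', 'db/img_19.jpg', 'db/img_40.jpg'}
print(precision_recall_at_k(retrieved, relevant, k=5))  # (0.4, 0.666...)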

max-scale = 9999 in Aachen example script of R2D2 extraction

In the Aachen Day-Night v1.1 example, the shell script sets max-scale to 9999; how does this make sense?

${PYTHONBIN} extract_kapture.py --model models/r2d2_WASF_N8_big.pt --kapture-root ${WORKING_DIR}/${DATASET}/map_plus_query/ --min-scale 0.3 --min-size 128 --max-scale 9999 --top-k ${KPTS}
https://github.com/naver/kapture-localization/blob/main/pipeline/examples/run_aachen-v11.sh#L70

but in r2d2 (as used with kapture), max-scale can't be greater than 1:
assert max_scale <= 1
https://github.com/naver/r2d2/blob/d6777a9d6769448998e5abe11031ae05de28e49a/extract.py#L61

This will result in an error while running run_aachen-v11.sh

errors when running tutorial

While following the tutorial, I got the following error. How should I solve it? Thank you.
INFO    ::colmap: ['colmap', 'point_triangulator', '--database_path', './colmap-sfm/r2d2_500/AP-GeM-LM18_top5/colmap.db', '--image_path', 'colmap-sfm/r2d2_500/AP-GeM-LM18_top5/kapture_inputs/proxy_mapping_gv/sensors/records_data', '--input_path', './colmap-sfm/r2d2_500/AP-GeM-LM18_top5/priors_for_reconstruction', '--output_path', './colmap-sfm/r2d2_500/AP-GeM-LM18_top5/reconstruction', '--Mapper.ba_refine_focal_length', '0', '--Mapper.ba_refine_principal_point', '0', '--Mapper.ba_refine_extra_params', '0']
ERROR: Failed to parse options: unrecognised option '--input_path'.
Traceback (most recent call last):
File "/home/jty/.local/bin/kapture_colmap_build_map.py", line 270, in <module>
colmap_build_map_command_line()
File "/home/jty/.local/bin/kapture_colmap_build_map.py", line 266, in colmap_build_map_command_line
args.skip, args.force)
File "/home/jty/.local/bin/kapture_colmap_build_map.py", line 71, in colmap_build_map
force)
File "/home/jty/.local/bin/kapture_colmap_build_map.py", line 177, in colmap_build_map_from_loaded_data
point_triangulator_options
File "/home/jty/.local/lib/python3.6/site-packages/kapture_localization/colmap/colmap_command.py", line 275, in run_point_triangulator
run_colmap_command(colmap_binary_path, point_triangulator_args)
File "/home/jty/.local/lib/python3.6/site-packages/kapture_localization/colmap/colmap_command.py", line 70, in run_colmap_command
'\nSubprocess Error (Return code:'
ValueError: Subprocess Error (Return code: 1 )
Traceback (most recent call last):
File "/home/jty/.local/bin/kapture_pipeline_mapping.py", line 249, in <module>
mapping_pipeline_command_line()
File "/home/jty/.local/bin/kapture_pipeline_mapping.py", line 239, in mapping_pipeline_command_line
args.force)
File "/home/jty/.local/bin/kapture_pipeline_mapping.py", line 150, in mapping_pipeline
run_python_command(local_build_map_path, build_map_args, python_binary)
File "/home/jty/.local/lib/python3.6/site-packages/kapture_localization/utils/subprocess.py", line 67, in run_python_command
raise ValueError('\nSubprocess Error (Return code:' f' {python_process.returncode} )')
ValueError: Subprocess Error (Return code: 1 )
