KAPTURE

1. Overview

Kapture is a pivot file format, based on text and binary files, used to describe SfM (Structure from Motion) data and, more generally, sensor-acquired data.

It can be used to store sensor parameters and raw sensor data:

  • cameras

  • images

  • lidar and other sensor data

As well as computed data:

  • 2d features

  • 3d reconstruction

Finally, many popular datasets can be downloaded directly in kapture format using the convenient downloader!

2. Specifications

The format specification is detailed in the kapture format specifications document.

3. Example File Structure

This is an example file structure of a dataset in the kapture format.

my_dataset                 # Dataset root path
├─ sensors/                # Sensor data root path
│  ├─ sensors.txt          # list of all sensors with their specifications (e.g. camera intrinsics)
│  ├─ rigs.txt             # geometric relationship between sensors (optional)
│  ├─ trajectories.txt     # extrinsics (timestamp, sensor, pose)
│  ├─ records_camera.txt   # all records of type 'camera' (timestamp, sensor and path to image)
│  ├─ records_SENSOR_TYPE.txt # all records of type SENSOR_TYPE (other sensors, eg: 'magnetic', 'pressure'...)
│  └─ records_data/            # image and lidar data path
│     ├─ map/cam_01/00001.jpg  # image path used in records_camera.txt (example)
│     ├─ map/cam_01/00002.jpg
│     ├─ map/lidar_01/0001.pcd # lidar data path used in records_lidar.txt
│     ├─ query/query001.jpg    # image path used in records_camera.txt
│     ├─ ...
└─ reconstruction/
   ├─ keypoints/                       # 2D keypoints files
   │  ├─ r2d2_WASF-N8_20k              # identify the type of keypoints
   │  │  ├─ keypoints.txt              # type of keypoint (shape and dtype)
   │  │  ├─ map/cam_01/00001.jpg.kpt   # keypoints for corresponding image (example)
   │  │  ├─ query/query001.jpg.kpt     # keypoints for corresponding image (example)
   │  │  ├─ ...
   │  ├─ d2_tf                         # identify the type of keypoints
   │  │  ├─ keypoints.txt              # type of keypoint (shape and dtype)
   │  │  ├─ keypoints.tar              # instead of regular files, you can use an archive
   │  │  │  ├─ map/cam_01/00001.jpg.kpt   # keypoints for corresponding image (example)
   │  │  │  ├─ query/query001.jpg.kpt     # keypoints for corresponding image (example)
   │  │  │  ├─ ...
   │  ├─ ...
   ├─ descriptors/                     # keypoint descriptors files
   │  ├─ r2d2_WASF-N8_20k              # identify the type of descriptors
   │  │  ├─ descriptors.txt            # type of descriptor (keypoints type, shape and dtype)
   │  │  ├─ map/cam_01/00001.jpg.desc  # descriptors for corresponding image (example)
   │  │  ├─ query/query001.jpg.desc    # descriptors for corresponding image (example)
   │  │  ├─ ...
   │  ├─ d2_tf                         # identify the type of descriptors
   │  │  ├─ descriptors.txt            # type of descriptor
   │  │  ├─ descriptors.tar            # instead of regular files, you can use an archive
   │  │  │  ├─ map/cam_01/00001.jpg.desc  # descriptors for corresponding image (example)
   │  │  │  ├─ query/query001.jpg.desc    # descriptors for corresponding image (example)
   │  │  │  ├─ ...
   │  ├─ ...
   ├─ points3d.txt                  # 3D points of the reconstruction
   ├─ observations.txt              # 2D/3D points corespondences
   ├─ matches/                      # matches files.
   │  ├─ r2d2_WASF-N8_20k           # identify the type of keypoints that are matched
   │  │  ├─ map/cam_01/00001.jpg.overlapping/cam_01/00002.jpg.matches # example
   │  │  ├─  ...
   │  ├─ d2_tf                      # identify the type of keypoints that are matched
   │  │  ├─ matches.tar             # instead of regular files, you can use an archive
   │  │  │  ├─ map/cam_01/00001.jpg.overlapping/cam_01/00002.jpg.matches # example
   │  │  │  ├─  ...
   │  ├─ ...
   └─ global_features/                 # global feature files
      ├─ AP-GeM-LM18                   # identify the type of global_features
      │  ├─ global_features.txt        # type of global feature
      │  ├─ map/cam_01/00001.jpg.gfeat # example of global feature for corresponding image
      │  ├─ query/query001.jpg.gfeat   # example of global feature for corresponding image
      │  └─ ...
      ├─ DELG                          # identify the type of global_features
      │  ├─ global_features.txt        # type of global feature
      │  ├─ global_features.tar        # instead of regular files, you can use an archive
      │  │  ├─ map/cam_01/00001.jpg.gfeat # example of global feature for corresponding image
      │  │  ├─ query/query001.jpg.gfeat   # example of global feature for corresponding image
      │  │  └─ ...
      ├─ ...
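
The text files in the tree above share a simple line format: lines starting with '#' are comments (the first ones carry the format version and the column names), and data rows are comma-separated fields, possibly padded with spaces for alignment. As an editor's illustration only, here is a minimal stdlib sketch of a reader for such files; the sample sensor row and its parameters are made up for the example, not taken from a real dataset:

```python
import io

def read_kapture_table(text):
    """Parse a kapture-style text file into rows of stripped fields.

    '#' lines are comments/headers; data rows are comma-separated,
    possibly padded with spaces for alignment.
    """
    rows = []
    for line in io.StringIO(text):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        rows.append([field.strip() for field in line.split(",")])
    return rows

sample = """\
# kapture format: 1.1
# sensor_id, name, sensor_type, [sensor_params]+
cam_01, , camera, SIMPLE_PINHOLE, 1920, 1080, 1371.2, 960.0, 540.0
"""
```

read_kapture_table(sample) yields a single row whose first field is 'cam_01'; the library's own parser in kapture.io.csv additionally checks the format version declared in the header.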

4. Software

The kapture format is provided with a Python library, as well as several conversion tools.

Install

pip install kapture

or see installation for more detailed instructions.

Using docker

Build the docker image:

# build the docker image : if you have already cloned the repository
docker build . -t kapture/kapture
# OR build the docker image directly from github
docker build git://github.com/naver/kapture -t kapture/kapture
# run unit tests
docker run -it --rm kapture/kapture python3 -m unittest discover -s /opt/src/kapture/tests

If you want to process your own data, you can bind directories between the host and the container using the --volume or --mount options (see the docker documentation). The following example mounts /path/to/dataset/ from the host to /dataset inside the container.

# --rm: automatically remove the container when it exits
# :ro : mount the dataset read-only
docker run -it \
    --rm \
    --volume /path/to/dataset/:/dataset:ro \
    kapture/kapture

kapture Python library

See the tutorial for some examples using the kapture Python library.

kapture tools

In this repository, you will find a set of tools to convert to and from the kapture format: import converts other formats to kapture, and export converts kapture data to other formats. Depending on the format, some data may not be converted, either because the other format does not support it or because the conversion has not been implemented yet. Here is a table summarizing the conversion capabilities:

Table 1. conversion capabilities
(data columns in the original table: cam, rig, img, trj, gps, kpt, dsc, gft, p3D, obs, mch)

  • colmap: import, export (some data types only partially supported)

  • openmvg: import, export (some data types only partially supported)

  • OpenSfM: import, export

  • bundler: import

  • image_folder: import

  • image_list: import

  • nvm: import

  • IDL_dataset_cvpr17: import

  • RobotCar_Seasons: import

  • ROSbag cameras+trajectory: import (some data types only partially supported)

  • SILDa: import

  • virtual_gallery: import

  • stereolabs zed2: import

  • (✓) denotes partial support; data types absent for a format are either not implemented or not supported by that format.

  • cam: handle camera parameters, e.g. intrinsics.

  • rig: handle rig structure.

  • img: handle the path to images.

  • trj: handle trajectories, e.g. poses.

  • kpt: handle image keypoints locations.

  • dsc: handle image keypoints descriptors.

  • gft: handle global image feature descriptors.

  • p3D: handle 3D point clouds.

  • obs: handle observations, i.e. 3D-points / 2D keypoints correspondences.

  • mch: handle keypoints matches.

Here, you can also find a utility tool for cropping the input images of a kapture dataset. Thanks to Jonathan Chemla for the contribution.

5. kapture support in other packages

Local Features

  • R2D2 local features can be directly generated in kapture format. See here

  • D2-Net features can also be extracted in kapture format. See instructions here.

Global Features

  • AP-GeM global feature extractor in kapture format: here

6. Datasets

The kapture package provides conversion tools for several data formats and datasets used in the domain. But it also provides a tool to download datasets already converted to kapture. See the kapture tutorial for instructions to use the dataset downloader.

Here is a list of datasets you can directly download in kapture format with the downloader tool:

7. kapture-localization

Check out kapture-localization, our toolbox containing implementations of various localization-related algorithms:

  • mapping and localization pipelines with custom features

  • mapping and localization pipelines with SIFT and vocabulary tree matching (default colmap pipeline)

  • image retrieval benchmark (global sfm, local sfm, pose approximation)

8. Tutorial

See the kapture tutorial for a short introduction to:

  • conversion tools

  • using kapture in your code

  • dataset download

9. Contributing

There are many ways to contribute to the kapture project:

  • provide feedback and suggestions of improvements

  • submit bug reports in the project bug tracker

  • provide a dataset in kapture format that we can add to the downloader tool

  • implement a feature or bug-fix for an outstanding issue

  • add support for the kapture format in other software packages (e.g. SfM pipelines), thus adding support for more datasets

  • provide scripts to create data in kapture format (e.g. local/global feature extraction)

  • propose a new feature and implement it

If you wish to contribute, please refer to the CONTRIBUTING page.

10. License

Software license is detailed in the LICENSE file.

11. Contact Us

You can contact us through GitHub, or at kapture at naverlabs + com

kapture's People

Contributors: ducha-aiki, humenbergerm, jo-chemla, jujumo, keunmo, mhumenbe, nguerin, yocabon

kapture's Issues

Multiple types of keypoints and descriptors

Hi,

I am trying to fit kapture to my use case, which requires multiple types of local features, both detectors and descriptors, e.g. SuperPoint, R2D2, SIFT+HardNet.
The current kapture format assumes a single type of local feature -- which is hardcoded in the filenames "keypoints.txt" and "descriptors.txt". If I made a fork to support multiple feature types, would it be merged at some point, or do you not plan to change the format?

Best, Dmytro.

Bug: Visual Studio Code fails to run relative imports in kapture/io/csv.py

The following two lines in csv.py cause Visual Studio Code to crash (it fails to run):

from .tar import KAPTURE_TARABLE_TYPES, TarCollection, TarHandler
from .tar import get_feature_tar_fullpath, retrieve_tar_handler_from_collection

The fix is to use absolute imports:

from kapture.io.tar import KAPTURE_TARABLE_TYPES, TarCollection, TarHandler
from kapture.io.tar import get_feature_tar_fullpath, retrieve_tar_handler_from_collection

Gangnam Station Dataset Depthmap looks weird

Hi there,

I plotted the depth map of Gangnam with a jet colormap using the following code, but the plot looks weird to me.

import cv2
import numpy as np

def plot_depth_heatmap(image, depth):   # heatmap RGB
    H, W = depth.shape
    if image.shape != depth.shape:
        image = cv2.resize(image, (W, H))

    heatmapimg = np.array(depth * 255, dtype = np.uint8)
    heatmapimg = cv2.applyColorMap(heatmapimg, cv2.COLORMAP_JET)

    heatmapimg[depth==0, :] = [0, 0, 0]

    img = np.concatenate((image, heatmapimg))

    return img


depth_r = np.fromfile("*.depth", dtype=np.float32).reshape(H, W)
returned_img = plot_depth_heatmap(RGBimg, depth_r)

I got figures like the attached screenshots.

Please advise whether I did something incorrectly, or whether this dataset's depth info is simply not good enough. Thanks in advance.
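
As a hedged editorial aside: one plausible cause is that the .depth files store metric depths (floats well above 1.0), while depth * 255 with a uint8 cast assumes values in [0, 1] and wraps around, producing banded colors. A minimal sketch of a safer normalization, assuming metric float depths with 0 marking invalid pixels:

```python
import numpy as np

def depth_to_uint8(depth):
    # Scale by the maximum valid depth instead of assuming depth is in
    # [0, 1]; casting metric depths * 255 straight to uint8 wraps around
    # and produces banded, "weird"-looking colormaps.
    valid = depth > 0
    if not valid.any():
        return np.zeros(depth.shape, dtype=np.uint8)
    scaled = np.zeros_like(depth, dtype=np.float64)
    scaled[valid] = depth[valid] / depth[valid].max()
    return (scaled * 255).astype(np.uint8)
```

The result can then be fed to cv2.applyColorMap as in the snippet above.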

Fail to convert from COLMAP to openMVG by using import_colmap and export_openmvg

Firstly, I would like to ask if it is possible to use these functions to convert the output of a COLMAP sparse reconstruction to an openMVG sfm_data.json for a further reconstruction pipeline (planned to be used with openMVS).

I tried writing a Python script based on these tools:

import logging
import os

import kapture
import kapture.io.csv
from kapture.converter.colmap.import_colmap import import_colmap
from kapture.converter.openmvg.export_openmvg import export_openmvg

logger = logging.getLogger('colmap')

def colmap2kapture(kapture_output_dir,colamp_database_path,colmap_txt_dir,IMAGE_DIR) -> None:
    if not os.path.exists(kapture_output_dir):
        os.makedirs(kapture_output_dir)
    logger.info('importing colmap ...')
    kapture_data = import_colmap(kapture_output_dir,
                                 colamp_database_path,
                                 colmap_txt_dir,
                                 IMAGE_DIR,
                                 )
    logger.info('saving to kapture  ...')
    logger.debug(f'\t"{kapture_output_dir}"')
    kapture.io.csv.kapture_to_dir(kapture_output_dir, kapture_data)

def kapture2openMVG(openMVG_output_dir,kapture_input_dir,IMAGE_DIR):
    export_openmvg(kapture_input_dir,
                   openMVG_output_dir,
                   IMAGE_DIR)
            
if __name__ == '__main__':
    colmap2kapture(kapture_output_dir,colamp_database_path,colmap_txt_dir,IMAGE_DIR)
    kapture2openMVG(openMVG_output_dir,kapture_output_dir,IMAGE_DIR)

I was able to get the kapture output and the sfm_data.json file.

However, when I try to convert this sfm_data.json to the openMVS format with openMVG_main_openMVG2openMVS, I get this error:

ERROR: [sfm_data_io_cereal.cpp:217] JSON Parsing failed - provided NVP (width) not found
ERROR: [main_openMVG2openMVS.cpp:291] The input SfM_Data file "/home/peter/Documents/cpp_workspace/outputs/out_colmap_castle/openMVG/sfm_data.json" cannot be read.

What could possibly be the reason?
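
For what it's worth, the openMVG error names the missing JSON field ("width"). A small stdlib sketch can report which intrinsics entries lack the expected keys; the nesting below follows the usual sfm_data.json layout, which may differ between openMVG versions, and the sample document is hand-made for illustration:

```python
import json

def intrinsics_missing_keys(sfm_data, required=("width", "height")):
    # Walk the "intrinsics" array of an sfm_data dict and report entries
    # whose nested data block lacks keys openMVG's loader expects.
    missing = []
    for entry in sfm_data.get("intrinsics", []):
        data = entry.get("value", {}).get("ptr_wrapper", {}).get("data", {})
        absent = [k for k in required if k not in data]
        if absent:
            missing.append((entry.get("key"), absent))
    return missing

# A tiny, hand-made document standing in for a real sfm_data.json:
sample = json.loads(
    '{"intrinsics": [{"key": 0,'
    ' "value": {"ptr_wrapper": {"data": {"height": 480}}}}]}'
)
```

Here intrinsics_missing_keys(sample) reports that entry 0 lacks "width", which matches the cereal error above.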

Providing minimum versions for dependencies is recommended

Hi all, thank you for sharing your code with everyone!

A mild request/comment: it is generally considered good practice to specify minimum versions in your install_requires, especially when you can expect users to be using conda.

For instance, unless a conda environment already has the latest version of numpy (as available on PyPI), installing your package will install the latest numpy on top of the conda-supplied one. Not only does this lead to a mess, but the user also no longer benefits from the MKL-optimized numpy provided by conda.
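
To illustrate the request, a setup.py install_requires with explicit minimum versions might look like the following; the version numbers are purely illustrative, not the project's tested minima:

```python
# Illustrative only: minimum-version pins let pip keep a conda-provided
# numpy that already satisfies the constraint instead of reinstalling it.
install_requires = [
    "numpy>=1.16",
    "numpy-quaternion>=2019.3.18",
    "Pillow>=8.0",
]
```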

point clouds of datasets are missing

Hi there, I've just downloaded your great datasets of crowded indoor spaces. But after decompressing, I found only the images and depth files in the mapping directories, while the point clouds are missing.
Am I doing something wrong, or do I need to reconstruct the point clouds following the kapture-localization mapping pipeline?

Subprocess Error (Return code: -9 ) while bundle adjustment

When I run code at tutorial(https://github.com/naver/kapture/blob/master/doc/tutorial.adoc)

Custom local features and matching based on image retrieval

  1. Build map using COLMAP

kapture_colmap_build_map.py -v info -i ./mapping --pairsfile-path ./tutorial/mapping_pairs.txt \
    -o ./tutorial/mapping_colmap --use-colmap-matches-importer \
    --Mapper.ba_refine_focal_length 0 \
    --Mapper.ba_refine_principal_point 0 \
    --Mapper.ba_refine_extra_params 0

the following error occurs:

==============================================================================
Retriangulation
==============================================================================
=> Completed observations: 1261026
=> Merged observations: 44269393
==============================================================================
Bundle adjustment
==============================================================================
Traceback (most recent call last):
File "/home/hms/anaconda3/envs/kapture/bin/kapture_colmap_build_map.py", line 247, in <module>
colmap_build_map_command_line()
File "/home/hms/anaconda3/envs/kapture/bin/kapture_colmap_build_map.py", line 243, in colmap_build_map_command_line
args.skip, args.force)
File "/home/hms/anaconda3/envs/kapture/bin/kapture_colmap_build_map.py", line 62, in colmap_build_map
force)
File "/home/hms/anaconda3/envs/kapture/bin/kapture_colmap_build_map.py", line 157, in colmap_build_map_from_loaded_data
point_triangulator_options
File "/home/hms/anaconda3/envs/kapture/lib/python3.6/site-packages/kapture/converter/colmap/colmap_command.py", line 242, in run_point_triangulator
run_colmap_command(colmap_binary_path, point_triangulator_args)
File "/home/hms/anaconda3/envs/kapture/lib/python3.6/site-packages/kapture/converter/colmap/colmap_command.py", line 37, in run_colmap_command
'\nSubprocess Error (Return code:'
ValueError:
Subprocess Error (Return code: -9 )

Does it occur because of a lack of GPU memory, or something else?

Please tell me....

One more question is, how long does it take to do 'bundle adjustment'?

It took more than 10 hours with 14600 training images and 4000 test images (i7-9700k, RTX 2060).

format versioning

Hi, I've created a dataset as described in the documentation (using my own SW stack). I'd like to use the kapture Python toolbox for read operations on the dataset, but I've found that the toolbox expects additional information in the headers of the csv files describing the dataset format version. It is a no-brainer for me to add an extra line in my CSV generation code, but I think the documentation is now outdated and needs to be refreshed.

Gangnam dataset

Hi, I find it difficult to run the kapture-localization pipeline on the Gangnam dataset and get the same numbers as on the benchmark. I summarized my issues below; it would be great if you could take a look and help me!! Thanks.

  1. I downloaded the data using kapture_download_dataset.py following this instruction. I can do mapping using the kapture-loc pipeline (after fixing some dataset path issues). BUT I am unable to do localization for the query images, because some of the dataset's internal folder paths are inconsistent with your code. For example, your code believes that query images and map images are under the same folder, with only one trajectories.txt and records_camera.txt, but the downloaded Gangnam dataset has separate trajectories.txt and records_camera.txt files for the training/test/validation folders. Just fixing these myself doesn't feel right, as I didn't see any tutorial/documentation about how to convert the downloaded Gangnam dataset into a proper format so that kapture-loc can run on it. There are actually some other issues as well (e.g. input argument conflicts, assert k_data.trajectories is not None, etc.). Could you please let me know how to run kapture-loc on Gangnam successfully?

  2. Given the first issue, I tried to reproduce kapture-loc on the Gangnam dataset myself, but I just got [0, 0, 0] as the final evaluation result. I used the 3D model built with kapture-loc, and I used r2d2, APGeM, geometric verification and colmap registration for localization. Currently I don't see any bugs in the code (though I believe I must be missing something): r2d2 matching looks reasonable, gv works, registration works. Could you point me to some potential issues? And could you let me know how to get the result shown on the benchmark?

Thank you so much for the time!!
JS

Issue in installation

After the docker is built, I am facing an issue while testing the installation.
I tried upgrading the numpy version, but it didn't work. Can someone suggest a solution?

/kapture/tools$ python3 ./kapture_print.py -i ../samples/Aachen-Day-Night/kapture/training
RuntimeError: module compiled against API version 0xf but this version of numpy is 0xd
Traceback (most recent call last):
  File "./kapture_print.py", line 18, in <module>
    import kapture
  File "/home/mukula/kapture/kapture/__init__.py", line 8, in <module>
    from .core import *  # noqa: F401 F403
  File "/home/mukula/kapture/kapture/core/__init__.py", line 7, in <module>
    from .PoseTransform import PoseTransform  # noqa: F401
  File "/home/mukula/kapture/kapture/core/PoseTransform.py", line 9, in <module>
    import quaternion
  File "/home/mukula/.local/lib/python3.8/site-packages/quaternion/__init__.py", line 27, in <module>
    from .numpy_quaternion import (
ImportError: numpy.core.multiarray failed to import

Issue with setting up Kapture.

Hello Everyone!

I was setting up kapture on my system (Ubuntu 18.04) using docker.

I was following the Installation instructions page.

The installation itself showed no issues.

But running the command from Section 4, "test your installation":

python3 ./kapture_print.py -i ../samples/Aachen-Day-Night/kapture/training

This line throws the following error:

UnicodeEncodeError: 'ascii' codec can't encode characters in position 1-2: ordinal not in range(128)

What can I do to correct this?
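
A common workaround (an editor's suggestion, not the project's official fix) is to force UTF-8 output, e.g. by setting PYTHONIOENCODING=utf-8 before running the script: under an ASCII locale, print() fails on the tree-drawing characters ('└─') that kapture_print emits. A minimal sketch of why an explicit UTF-8 stream avoids the error:

```python
import io

def utf8_print(text):
    # Writing through an explicit UTF-8 wrapper succeeds where an
    # ASCII-configured stdout raises UnicodeEncodeError on '└─'.
    raw = io.BytesIO()
    stream = io.TextIOWrapper(raw, encoding="utf-8")
    print(text, file=stream)
    stream.flush()
    return raw.getvalue()
```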

module 'kapture.io.csv' has no attribute 'get_all_tar_handlers'

Hi!
I get the error "module 'kapture.io.csv' has no attribute 'get_all_tar_handlers'" after executing the following line in Python:

tar_handlers = csv.get_all_tar_handlers(dataset_path)

Here are my steps:

I cloned this repo and created the docker image according to the guide; then I had to adjust the docker container launch in order to run a jupyter notebook inside the container:

docker run -it --rm --runtime=nvidia \
    --privileged --net host -p 8888:8888 \
    --volume <path_to_all_my_dataset>:/dataset:rw \
    kapture/kapture

Then I had to install jupyterlab inside the docker container, since this package is absent from the kapture Dockerfile:

 pip install jupyterlab notebook

After that I launched jupyter notebook:

jupyter notebook --ip 0.0.0.0 --port 8888 --no-browser --allow-root

Then I opened a jupyter notebook and executed the following lines:

import sys
REPO_ROOT_PATH = '/opt/src/kapture/'
sys.path.insert(0, REPO_ROOT_PATH)
import kapture
import kapture.io.csv as csv

dataset_path = '/dataset/NAVER_Labs/'
tar_handlers = csv.get_all_tar_handlers(dataset_path)

After the last command I got the error :

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-7-958e2eb94904> in <module>
----> 1 tar_handlers = csv.get_all_tar_handlers(dataset_path)

AttributeError: module 'kapture.io.csv' has no attribute 'get_all_tar_handlers'

How can I resolve this error?

Conversion to colmap produces RADIAL

I'm converting a bundler dataset with undistorted images to kapture, and then to colmap model files.

The resulting colmap model file cameras.txt uses the RADIAL camera model.

As the original dataset is an undistorted bundler, is there a way to define the resulting colmap files to be PINHOLE instead?

I tried converting the file in colmap using the "image_undistorter" command, but this fails, I presume because the RADIAL model has no values for lens distortion.

Any help would be much appreciated!
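
Not an official answer, but since the distortion coefficients are zero here, the RADIAL parameters map losslessly onto PINHOLE. A sketch of that conversion; the parameter orders follow the COLMAP camera model documentation (RADIAL: f, cx, cy, k1, k2; PINHOLE: fx, fy, cx, cy):

```python
def radial_to_pinhole(params):
    # COLMAP RADIAL: f, cx, cy, k1, k2 -> PINHOLE: fx, fy, cx, cy.
    # Only valid when the distortion coefficients are (near) zero.
    f, cx, cy, k1, k2 = params
    if abs(k1) > 1e-12 or abs(k2) > 1e-12:
        raise ValueError("non-zero distortion: PINHOLE would be lossy")
    return [f, f, cx, cy]
```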

Exporting matches for bundle import

Hi there,
Context: I'm trying to convert a registration stored in the bundle.out to the colmap format, using kapture as an intermediate format.

When loaded into the Colmap GUI (File -> New Project, selecting the database colmap.db and the input images folder), the resulting colmap registration shows no 3D points in the colmap viewer. This is probably because the intermediate kapture files store no matches, so kapture_data.matches is None.

Each point in the bundle.out file stores this info, where the view list stores, for each keypoint, the number of matches and then, for each image, <camera> <key> <x> <y>, where key is the index of that keypoint in the image's feature points and x, y are its coordinates.

<position>      [a 3-vector describing the 3D position of the point]
<color>         [a 3-vector describing the RGB color of the point]
<view list>     [a list of views the point is visible in]

It would be great for the kapture_import_bundler.py to also export a match list in kapture parlance, made of all pairs within the dataset. The score could be the number of keypoints found for that pair. Would result in this reconstruction/matches/SIFT sample with overlapping files.

Edit:
What I need is actually simply a pairsfile-path file for kapture_export_colmap.py to export these matches, and my feeling is that it can be produced using only the content of the bundle.out, by aggregating a counter for each image pair over each observation - but I'm not sure this is the best thing to do, as this list can grow quadratically with the number of images.

--pairsfile-path PAIRSFILE_PATH: 
text file in the csv format; where each line is image_name1, image_name2, score which contains the matches to
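
The aggregation described in the edit above can be sketched in a few lines of stdlib Python; the image names and the observation structure here are illustrative stand-ins, not the bundler file format itself:

```python
from collections import Counter
from itertools import combinations

def pair_counts(observations):
    # observations: one list of image names per 3D point (its view list).
    # Each pair of images co-observing a point gets its counter bumped;
    # the final count can serve as the pair's score in a pairsfile.
    counts = Counter()
    for views in observations:
        for a, b in combinations(sorted(set(views)), 2):
            counts[(a, b)] += 1
    return counts
```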

matches-vs-inliers and F/E matrices

Hi,

Is there any way to store both tentative matches (which can be wrong) and inliers (after RANSAC, but not yet confirmed as 3D observations)?
And, related to this: is there a plan for storing intermediate pairwise image relationships, like essential/fundamental matrices? I can prepare a PR for this.

Best, Dmytro

Compatible output for visuallocalization benchmark

Hello,

Great work. One question: I was trying the whole localization pipeline.
However, the output for Aachen is not the same as in Torsten's official repo.
For the benchmark website (https://www.visuallocalization.net/),
there should be a txt file of the estimated query poses.
Since the benchmark does not provide ground truth, one should upload the query poses manually.
Step 5, evaluate (https://github.com/naver/kapture/blob/master/doc/tutorial.adoc#evaluate), does not seem to me to produce the same format.
Am I missing something here? Please let me know. Thank you!

Kind Regards,
Tsun-Yi

PyPi build and publish fails

Uploading ***-1.1.9-py3-none-any.whl
  0% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/298.8 kB • --:-- • ?
  0% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/298.8 kB • --:-- • ?
 27% ━━━━━━━━━━╸━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 81.9/298.8 kB • 00:01 • 102.7 MB/s
 88% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╺━━━━ 262.1/298.8 kB • 00:01 • 2.3 MB/s
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 298.8/298.8 kB • 00:00 • 1.8 MB/s
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 298.8/298.8 kB • 00:00 • 1.8 MB/s
INFO     Response from https://upload.pypi.org/legacy/:
         403 Invalid or non-existent authentication information. See            
         https://pypi.org/help/#invalid-auth for more information.              
INFO     <html>                                                                 
          <head>                                                                
           <title>403 Invalid or non-existent authentication information. See   
         https://pypi.org/help/#invalid-auth for more information.</title>      
          </head>                                                               
          <body>                                                                
           <h1>403 Invalid or non-existent authentication information. See      
         https://pypi.org/help/#invalid-auth for more information.</h1>         
           Access was denied to this resource.<br/><br/>                        
         Invalid or non-existent authentication information. See                
         https://pypi.org/help/#invalid-auth for more information.              
                                                                                
                                                                                
          </body>                                                               
         </html>                                                                
ERROR    HTTPError: 403 Forbidden from https://upload.pypi.org/legacy/          
         Invalid or non-existent authentication information. See                
         https://pypi.org/help/#invalid-auth for more information.              
Error: Process completed with exit code 1.

Permissions for scripts and dos formatting

Hi,

Thanks for all of your great work on this! I have a minor bug to report around the scripts in the tools/ folder. I am running Ubuntu 20.04 and have installed Kapture through pip using an empty conda environment with python=3.7.

Two issues:

  1. all scripts in the tools/ folder, e.g. kapture_download_dataset.py, are not executable upon installation. I can fix this manually of course, but that is not ideal every time kapture is installed/updated.
    kapture_download_dataset.py update
    zsh: permission denied: kapture_download_dataset.py
  2. After making the script executable within the package directory, executing the script again yields the following error:
    kapture_download_dataset.py update
    /usr/bin/env: ‘python3\r’: No such file or directory
    This is due to Windows-style (CRLF) line endings in the scripts (the '\r' in the error message). I can easily fix this by running dos2unix on all the tools, but again this is suboptimal every time kapture is installed/updated.

Is there any way to address this for future releases? I remember a few releases ago this was not an issue.

Bug: tools/kapture_print.py crashes

I'm following along the instructions in https://github.com/naver/kapture/blob/main/doc/installation.adoc#4-test-your-installation and when I run the following:

cd kapture # use path of your cloned repository
cd tools
python ./kapture_print.py -i ../samples/Aachen-Day-Night/kapture/training

I get the following output:

nb sensors               : 3
nb trajectories          : 3
nb records_camera        : 3
nb types keypoints       : 1
 └─ nb images sift       : 3
Traceback (most recent call last):
  File "C:\Temp\kapture\kapture-main\tools\kapture_print.py", line 381, in <module>
    print_command_line()
  File "C:\Temp\kapture\kapture-main\tools\kapture_print.py", line 340, in print_command_line
    do_print(kapture_data, args.input, args.output, args.detail, args.all,
  File "C:\Temp\kapture\kapture-main\tools\kapture_print.py", line 376, in do_print
    print_matches(kapture_data, output_stream, show_detail, show_all)
  File "C:\Temp\kapture\kapture-main\tools\kapture_print.py", line 261, in print_matches
    for kpt_type, matches in kapture_data.matches.items():
AttributeError: 'NoneType' object has no attribute 'items'

The output differs from the one in the documentation at the "nb images sift" line, and then the code crashes, I'm guessing because the matches.txt file is missing and this case isn't handled properly in this code path:

def print_matches(kapture_data, output_stream, show_detail, show_all) -> None:
    ...
    nb_kpts_types = None if kapture_data.matches is None else len(list(kapture_data.matches.keys()))
    if not show_detail:
        print_key_value('nb types ', nb_kpts_types, output_stream, show_all)
        for kpt_type, matches in kapture_data.matches.items():

BTW: thanks for producing such clean and extremely well documented code!
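
For reference, here is a hedged sketch of the kind of guard that would let the code path above tolerate a missing matches file; print_matches_safe is a hypothetical helper for illustration, not the project's actual fix:

```python
def print_matches_safe(matches):
    # Mirror the None-check already applied when computing nb_kpts_types:
    # with no matches file on disk, there is simply nothing to enumerate.
    if matches is None:
        return []
    return sorted(matches.keys())
```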

Kapture reading trajectory error when rig identifier two characters in size

Hi,
I've been using Kapture and have encountered a problem where the reading trajectory from file doesn't work when the rig id (also known as device_id) is only two characters or less.

From my debugging, what I've been able to figure out is that when the trajectory and rig files are created, a padding is defined which sets the minimum size of that field. By default, this is 3 elements (line 82 of csv.py). Therefore the new rig id becomes whitespace then the user defined rig id.

The problem is that when re-reading the trajectory file, the table_from_file function (line 195 of csv.py) removes all whitespace with the strip command (line 214). The intention of strip is to remove the whitespace following the comma in the CSV, but it also removes the padding whitespace that is now part of the rig id.

The error occurs when this stripped rig id is compared to the rig id returned by kapture_data.rigs.keys() (line 1487 of csv.py), which is never stripped and still contains the padding whitespace plus the user-defined characters.

I have not been able to work out a fix myself; hopefully you are able to find a solution to this problem.
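The mismatch can be reproduced in isolation (a sketch, not kapture code; it assumes the padding is added on the left, as described above):

```python
# A 2-character rig id is left-padded to a minimum width of 3 on write,
# then strip() on read removes that padding, so the id read back no longer
# matches the padded key kept in rigs.keys().
rig_id = 'r1'
padded = rig_id.rjust(3)     # written to file as ' r1'
read_back = padded.strip()   # table_from_file strips all whitespace -> 'r1'

print(padded == read_back)   # False: lookup against the padded key fails
```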

Improve kapture_import_image_folder.py

When I use kapture_import_image_folder.py, it feels a bit uncomfortable that I can't assign a specific camera model and its parameters. So I modified this code a little: it now lets the user assign specific sensor information to images.

I think people will store images in the same folder if they were taken with the same sensor. So, if the user writes a sensors.txt describing their image data, the script will build the kapture data with the specified sensor info. sensors.txt should be located at images_path, and each sensor_id in sensors.txt should match the name of an image folder.

For example:

images
├─ sensors.txt
├─ cam0
│  ├─ img1.jpg
│  ├─ img2.jpg         
│  ├─ ...
│  └─ imgN.jpg
├─ cam1   
│  ├─ img1.jpg
│  ├─ img2.jpg         
│  ├─ ...
│  └─ imgN.jpg
└─ cam2  
   ├─ img1.jpg
   ├─ img2.jpg         
   ├─ ...
   └─ imgN.jpg  

in this case, sensors.txt should look like this:

# kapture format: 1.1
# sensor_id, name, sensor_type, [sensor_params]+
cam0, , camera, SENSOR_TYPE, P1, P2, P3, ...
cam1, , camera, SENSOR_TYPE, P1, P2, P3, ...
cam2, , camera, SENSOR_TYPE, P1, P2, P3, ...

or

cam0
├─ sensors.txt
├─ img1.jpg
├─ img2.jpg         
├─ ...
└─ imgN.jpg

in this case, sensors.txt should look like this:

# kapture format: 1.1
# sensor_id, name, sensor_type, [sensor_params]+
cam0, , camera, SENSOR_TYPE, P1, P2, P3, ...

If the user doesn't know the sensor type and parameters of the images, that's okay: if the image folder doesn't have a sensors.txt, the script works in the old way. If sensors.txt only has partial info (e.g. only cam0 in the first case), it applies those params only to the matching images.

I've sent a PR about this issue, so I'd be happy to get any feedback.

Inconsistent file folder structure in documentation

If you look at the description of the file folder hierarchy at the top of the kapture data format v1.1 document at https://github.com/naver/kapture/blob/main/kapture_format.adoc, it shows the images as all living in sub-folders of sensors/records_data, with both map (sic) and query folders inside that one records_data folder.

But if you look at the folder structure of the localization datasets downloaded with kapture_download_dataset, mapping and query are top-level folders within which the sensors folders live.

Is the on-line documentation out of date with the new file folder structure?

Since the map(ping) and query images and metadata are in separate top-level folders (and hence, presumably, so would be the created reconstruction folders), what is the recommended strategy for matching mapping vs. query images and storing these matches?

Thanks.

Don't downgrade pytorch when installing?

Hi,

First, thank you for the great initiative and package. That is a really great piece of work.
Second, may I ask you to clean up the requirements? I was unlucky enough to run pip3 install kapture, which downgraded pytorch from 1.7 to 1.4 (?!). Here is a copy-paste from the install log:

Collecting bracex>=2.0
Downloading bracex-2.0.1-py3-none-any.whl (10 kB)
Installing collected packages: bracex, backrefs, wcmatch, torch, scipy, Pillow, piexif, numpy-quaternion, kapture
Attempting uninstall: torch
Found existing installation: torch 1.7.0
Uninstalling torch-1.7.0:
Successfully uninstalled torch-1.7.0
Attempting uninstall: scipy
Found existing installation: scipy 1.2.0
Uninstalling scipy-1.2.0:
Successfully uninstalled scipy-1.2.0
Attempting uninstall: Pillow
Found existing installation: Pillow 7.0.0
Uninstalling Pillow-7.0.0:
Successfully uninstalled Pillow-7.0.0
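Relaxing exact version pins to minimum-version specifiers in the package requirements would likely avoid such downgrades. A sketch only; the actual package list and versions pinned by kapture's setup may differ:

```text
torch>=1.4
scipy>=1.2
Pillow>=7.0
```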

About inloc 3d lidar scan

Hello,

In the doc, it says there is a readme_kapture.txt showing how to use the 3D scan in the kapture pipeline.
However, I couldn't find it anywhere. Would you mind pointing me to how to use the 3D scan data in the InLoc dataset evaluation? Thanks.

About the ScanNet dataset

Can this project use the RGB-D images and camera poses of ScanNet to generate ground truth for the match pairs?

Extended_CMU_Seasons convert error

When I convert Extended_CMU_Seasons with kapture_import_Extended_CMU_Seasons.py in docker, this error occurs:

TypeError: merge_keep_ids() missing 1 required positional argument: 'tarcollection_list'.

and when I check '/usr/local/lib/python3.6/dist-packages/kapture-1.1.8-py3.6.egg/EGG-INFO/scripts/kapture_import_Extended_CMU_Seasons.py', the details are as follows:

(screenshot of the relevant code, taken 2023-03-13, attached)

Can I directly access (import) the ID of a 3D point that exists in both a train image and a query image (image pair)?

Hi,

I'm trying to obtain the 6D pose of a camera using the PnP algorithm.

Can I directly access (import) the ID of a 3D point that is in both the train image and the query image (image pair)?

I ran kapture_compute_matches.py -v info -i ./tutorial/mapping_query --pairsfile-path ./tutorial/query_pairs.txt, and I got matches in the kapture data format. Does it contain data about 3D point IDs?

One more question:
In cameras.txt, does PARAMS[] mean fx, fy, cx, cy?

I'm sorry to bother you...

Using custom local keypoints and descriptors

Hi,
I am trying to replace the keypoints generated by R2D2 with keypoints generated by other custom local feature extraction methods. I'm just wondering if this is possible in kapture? My approach is to prepare the keypoints folder and descriptors folder using the same structure provided by kapture.

I would appreciate it a lot if you could let me know whether this approach is feasible.

Thank you very much! Best!

How you generate the depthmaps in GangnamStation?

Hi, I wonder how the depth maps in the GangnamStation and HyundaiDepartmentStore datasets were generated. In trajectories.txt, you mention that images and depth maps come from a single sensor. So were the images and depth maps collected with RGB-D sensors? If not, how were the intrinsic params for the depth sensors in sensors.txt obtained?

Reproducing the results of Aachen v1.1

Hi, thanks for the great project for building a VL pipeline. It is really helpful for my research.
I want to ask some questions about reproducing the results on Aachen Day-Night v1.1.

I'm using off-the-shelf R2D2 and AP-GeM; would you please share the specific parameters to reproduce the results?

For AP-GeM, Resnet101-AP-GeM-LM18 is used, and
I extract keypoints using the following R2D2 script.
Also, I'm using config 2 from the kapture paper for kapture_colmap_build_map.py and kapture_colmap_localize.py.
Please tell me if I'm doing something wrong above.

Could you also share the number of retrieved images for each query? And I wonder if you used global bundle adjustment after building the map.
