
tanksandtemples's Introduction

Tanks and Temples

This repository is used for discussing issues regarding the website that hosts the Tanks and Temples dataset.
http://www.tanksandtemples.org

In order to evaluate your reconstruction algorithm on our benchmark, you need to download the dataset, reconstruct 3D geometry, submit your results, get evaluated, and be put on the leaderboard. Please follow the instructions on the website. If you encounter a problem, first check whether it is listed in the FAQ. If not, search the issues page for a duplicate of your problem. If there is none, file an issue and we will respond as quickly as we can. Alternatively, you can send an email to [email protected].

Python scripts

The python_toolbox folder includes the Python scripts for downloading the dataset and uploading reconstruction results. The Python scripts are under the MIT license. The dataset itself has a different license; see this page for details.

Usage of downloader:

> python download_t2_dataset.py [-h] [-s] [--modality MODALITY] [--group GROUP] [--unpack_off] [--calc_md5_off]

Example 1: download all videos for intermediate and advanced scenes
> python download_t2_dataset.py --modality video --group both

Example 2: download image sets for intermediate scenes (quick start setting)
> python download_t2_dataset.py --modality image --group intermediate

Example 3: show the status of downloaded data
> python download_t2_dataset.py -s

Usage of uploader:

> python upload_t2_results.py [-h] [--group GROUP]

Example 1: upload intermediate and advanced reconstruction results
> python upload_t2_results.py --group both

Example 2: upload only intermediate results
> python upload_t2_results.py --group intermediate

tanksandtemples's People

Contributors

arknapit, griegler, polarnick239, qianyizh, syncle, yxlao


tanksandtemples's Issues

Existing submissions

When the page is refreshed after the Python upload program has finished running, the submission does not appear to be committed.

requests.exceptions.ConnectionError

Hello, thank you very much for your excellent work. I want to submit my own test data through your website, but after many attempts I keep getting:

requests.exceptions.ConnectionError: HTTPConnectionPool(host='t2-website-userdata.storage.googleapis.com', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbb6da55220>: Failed to establish a new connection: [Errno 110] Connection timed out'))

Please provide the necessary help. I am connecting from China.

Upload script does not work with Python 3.7

I've tried to use Python 3.7 to upload my results (Intermediate sets only) for evaluation. However, the script throws two errors. If I change line 146 to md5_check = open(md5_check_fn, 'w'), and lines 153 & 157 to md5_ply_file = b'', the errors are fixed. However, when I then upload my results, the server complains that the MD5 checksums are wrong.

The only way I've been able to upload the results with correct MD5 checksums is to use the original script with Python 2.7.
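For reference, the text-mode pitfall described above can be avoided by computing checksums from a file opened in binary mode. This is a minimal sketch (not the official upload script, and the function name is illustrative) that produces the same digest under Python 2.7 and 3.x, and on Windows and Linux:

```python
import hashlib


def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading in binary mode in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:  # 'rb' avoids newline translation on Windows
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Opening with 'rb' matters because text mode can translate line endings, which silently changes the digest of binary files such as .ply.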

Incorrect poses for Courthouse scene?

Hi, I was visualizing the camera poses for the courthouse scene after converting the .log file into Colmap format. However, I noticed that some of the camera poses don't look right. For example, these are two images that are supposed to be from roughly the same viewpoint (the image planes highlighted in pink), but that doesn't seem to be the case.

The flag pole in the first image seems to suggest that the photo is taken from the left side of the scene, but the visualized camera pose is on the right side instead. (In fact there are no camera poses in the left side of the scene, which seems to be problematic)


I'm wondering if you could check if this is actually an issue with the dataset, or if I'm doing something incorrectly during the conversion. Thanks!

not all poses recovered

Hey, thanks for the great effort!

I am trying to evaluate my dense reconstruction pipeline.
However, for some scenes of the advanced set I cannot successfully recover all camera poses with either COLMAP or OpenMVG.
I am missing 5 poses in the Palace set and 10 poses in the Auditorium.

My pipeline focuses on the dense reconstruction part, so I need to start from good poses, but I cannot recover them successfully with the available SfM pipelines.

Can I create the .log file anyway, so that I can upload and get the scores?

MD5 does not match in all the files

Hello,
I followed your instructions like this,

python download_t2_dataset.py [-h] [-s] [--modality MODALITY] [--group GROUP] [--unpack_off] [--calc_md5_off]

Example 1: download all videos for intermediate and advanced scenes

python download_t2_dataset.py --modality video --group both

Example 2: download image sets for intermediate scenes (quick start setting)

python download_t2_dataset.py --modality image --group intermediate

and in all of my tries, the MD5 checksums have not matched. I've tried on Windows and Linux, and used different encodings.

Now I'm using the '--calc_md5_off' mode. Is there a known problem, or am I missing something?

Thank you.

Confusion with alignment matrix

Hi, while I was going through the code in python_toolbox/evaluation/ to better understand how the evaluation metrics are computed I got a little confused by the way alignment / transformation matrices are applied.

From what I understand, the adopted convention is that the matrices align the reconstructed pose to the ground-truth (as mentioned in #12 (comment) and on section 3-1. of the tutorial), i.e., using Open3D's parameter names: "source = reconstructed / estimate" and "target = ground-truth".

Hence, in run_evaluation() the transformation matrix gt_trans should align the reconstruction to the ground-truth (right?).

However, in trajectory_alignment() the transformation is applied to the ground-truth trajectory:
https://github.com/intel-isl/TanksAndTemples/blob/90cd206d6991acec775cf8a2788517d7ecc30c2f/python_toolbox/evaluation/registration.py#L65-L69

Does it make sense to apply a "reference to ground-truth" transform to data in the ground-truth coordinate frame? Shouldn't this use the inverse transform, effectively taking "ground-truth to reference" (i.e. traj_pcd_col.transform(np.linalg.inv(gt_trans)))? Or instead, apply the transformation to the reference data (traj_to_register_pcd in this case)?
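The inverse-transform suggestion can be illustrated with plain NumPy (a sketch with made-up values, not the evaluation code): if a 4x4 matrix maps the reconstruction into the ground-truth frame, its inverse maps ground-truth data back the other way, and the two round-trip exactly.

```python
import numpy as np

# Hypothetical rigid transform T: source (reconstruction) -> ground-truth frame.
T = np.array([[0., -1., 0., 1.],
              [1.,  0., 0., 2.],
              [0.,  0., 1., 3.],
              [0.,  0., 0., 1.]])


def apply_transform(points, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]


p = np.array([[1., 2., 3.]])
q = apply_transform(p, T)                    # into the ground-truth frame
back = apply_transform(q, np.linalg.inv(T))  # back into the source frame
```

This matches Open3D's convention, where pcd.transform(T) left-multiplies homogeneous points by T.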

Thank you.

Church scene provided images and poses do not match

Dear authors,

Thank you very much for the dataset! In the provided training scene "Church", the number of images does not match the number of poses (507 vs 644). Quite a few images are missing in the provided training set.

I tried to extract the images myself using the given timestamp file. However, the extracted images look different from the provided images even if they have the same index, and COLMAP doesn't give a meaningful reconstruction result. I also tried ffmpeg extraction specified in this link (https://www.tanksandtemples.org/tutorial/), but again the number of images (only 643 images) and poses do not match.

I wonder if you know what the issue is. Thank you very much!

Training data not included in download script

I am in the process of downloading the tanks and temples data, but it seems like the download script download_t2_dataset.py includes only the testing set.

--group GROUP        (intermediate|advanced|both) choose if you want to
                       download intermediate or advanced dataset

Should there not be a parameter for the training data as well, or am I missing something? The workspace-setup tutorial does urge me to use the download guidelines for the Ignatius dataset, which is part of the training data.

Change of API in Open3D, use release v0.1.1

Hi,

The evaluation scripts don't work with the newest version of Open3D. There have been some API changes (mainly, the Python module is now called open3d instead of py3d).

It is sufficient to check out release v0.1.1 for the scripts to work. You may want to update the readme :)

Camera matrices alignments

Hi,

Thanks for sharing the dataset. I have a few questions:

  1. In the image sets provided, the images start with 000001.jpg, but the camera poses start at index 0. How do they correspond to each other?
  2. I tried rendering the mesh with the camera poses and the suggested principal point offset & focal length, but the result is not aligned with the image sets (i.e., 000001.jpg). Is this expected?
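If the obvious off-by-one convention holds (an assumption on my part, not confirmed by the maintainers), the correspondence in question 1 would simply be pose index = image number - 1:

```python
import os


def pose_index_for_image(filename):
    """Assumed mapping: image NNNNNN.jpg corresponds to pose index NNNNNN - 1,
    since images start at 000001.jpg while poses are 0-indexed."""
    stem = os.path.splitext(os.path.basename(filename))[0]
    return int(stem) - 1
```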

Thanks a lot!

Cannot upload .ply files with more than 50 MB

Hi,
I am using the Python uploader. The Family, Francis, and Horse scenes from the intermediate set upload perfectly, but I cannot upload Lighthouse, Train, Panther, etc., whose .ply files are larger than 50 MB. Their log files upload fine. I am using Python 2.7.

Any help is highly appreciated.

Thanks.

Download Google drive link

Hi
I'm thinking of doing a project using your dataset, but it seems like the Google drive link in download_t2_dataset.py is not valid.
I get 404 error when I try to access via Chrome browser / fails to download when I run the script.

Is there a new link replacing this old one?

thanks

Ground truth point clouds for intermediate and advanced test sets

Thank you for providing the evaluation code along with the ground truth data for training sets. In the python evaluation code, you provide links for downloading point clouds created with COLMAP and ground truth ply files. For example, for Ignatius set, you provide Ignatius.ply & Ignatius_COLMAP.ply. Here the Ignatius.ply is the ground truth point cloud while Ignatius_COLMAP.ply is created with COLMAP's 3D reconstruction. The evaluation code for calculating F-Score works fine with provided training sets.

However, I cannot find the ground truth data for intermediate and advanced test sets anywhere. Can you please suggest how/where to look for it?
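For context, the F-score mentioned above combines precision (the fraction of reconstructed points within a distance threshold tau of the ground truth) and recall (the reverse direction). A brute-force NumPy sketch of the idea, not the official evaluation code, which uses efficient nearest-neighbor search and a per-scene tau:

```python
import numpy as np


def fscore(rec, gt, tau):
    """F-score between a reconstructed and a ground-truth point set (N, 3)."""
    # Distance from each reconstructed point to its nearest ground-truth point,
    # and vice versa (O(N*M) pairwise distances; fine for a sketch).
    d_rec = np.min(np.linalg.norm(rec[:, None] - gt[None], axis=2), axis=1)
    d_gt = np.min(np.linalg.norm(gt[:, None] - rec[None], axis=2), axis=1)
    precision = float(np.mean(d_rec <= tau))
    recall = float(np.mean(d_gt <= tau))
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


rec = np.array([[0., 0., 0.], [1., 0., 0.]])
gt = np.array([[0., 0., 0.], [5., 0., 0.]])
score = fscore(rec, gt, tau=0.5)
```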

Self occlusion in ground truth

Hi, and thanks for the great dataset.

I noticed that in some scenes of the training dataset there is a significant amount of self occlusion in the LiDAR ground truth (e.g. the chest in Ignatius, or the load floor on the truck. Some of these areas are however visible in (potentially overlapping) images. If I understand the evaluation procedure correctly, if points are reconstructed in these areas, it will drive down the F-Score.

Additionally, I noticed that in the Truck scene, there are some sort of fairy lights above the truck that are also not present in the ground truth. Since the bounding volume for clipping is only 2D (a polygon), this area is still evaluated in the reconstruction.

Are my assumptions correct and do these problems also occur in the test datasets (where we do not have the ground truth)?

Outdated get_colmap_reconstruction.sh

Hi,
Just fyi.
It seems there have been some name changes in COLMAP, so the code in get_colmap_reconstruction.sh needs to be updated, at least for my environment (Ubuntu Linux):

colmap dense_stereo now becomes patch_match_stereo
colmap dense_fuser now becomes stereo_fusion

Upload error

When an upload fails and the remaining files are uploaded again, the next .ply file gives an MD5 error. Is there any way to solve this problem? Thank you!

Can't pass MD5 Check

Hi, thanks for your nice code! After uploading all intermediate reconstruction files, every MD5 check column shows "MD5 wrong, repeat upload". I have tried making a new submission and running the upload tool with both Python 2 and Python 3, but it did not work. Could you help me find a solution? Looking forward to your reply!

ModuleNotFoundError: No module named 'read_model'

Hello, I followed the tutorial and tried to obtain a *.log file from a COLMAP result by running the Python script:
> python convert_to_logfile.py Ignatius_COLMAP/sparse/0/ cameras.log Ignatius_COLMAP COLMAP jpg
but there is an error:
Traceback (most recent call last):
  File "convert_to_logfile.py", line 48, in <module>
    import read_model
ModuleNotFoundError: No module named 'read_model'
I don't know where to find the module read_model. Can you help me? Thank you very much.

Offset in provided mapping files?

From the 'Download' page:

The image sets are sampled at a frame rate of 1 fps from the video, while the video was recorded at 29.97 fps. To find the corresponding frame F for image I you need to calculate F = int(I * 29.97), starting with I = 0.

But when looking into the 'training_data.zip' package, the values often disagree with this formula. For instance, in the 'Courthouse_mapping_reference.txt' file:
...
19 568 <-- should be int(19 * 29.97) = 569
20 598 <-- should be int(20 * 29.97) = 599
21 628 <-- should be int(21 * 29.97) = 629
...

Is this offset intentional? Am I missing something here?
I'm trying to test my own functions for generating a mapping file, but am having trouble recreating the mapping files that have been provided.
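The documented mapping is easy to reproduce in a few lines (function and constant names are illustrative). For the three Courthouse rows quoted above, plain float multiplication of the stated formula indeed yields 569, 599, and 629, each one above the value in the shipped file, consistent with the reported offset:

```python
FPS = 29.97  # video frame rate stated on the Download page


def image_to_frame(i):
    """Frame index F for image index I, per the stated formula F = int(I * FPS)."""
    return int(i * FPS)


# The shipped Courthouse mapping lists 568, 598, 628 for these image indices.
frames = [image_to_frame(i) for i in (19, 20, 21)]
```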

ConnectionResetError(104, 'Connection reset by peer')

Hello, over the past week I have been encountering the above error while uploading results.
During each upload attempt, the issue occurs after 1 to 3 log files have been uploaded.
I have tried uploading repeatedly, changing servers, and updating the credentials, but the problem persists.
I have uploaded results before without encountering it. Can you tell me how to solve it?

Ground truth object mesh

Hey

I recently started experimenting with NeRF, and my project requires a ground-truth mesh file of the given object. Is that available for any or all of the Tanks and Temples dataset objects?

Thanks

Offline evaluation reference alignment

Hi,
I am trying to evaluate a 3D model produced with COLMAP, and I am a bit confused about the files needed to run the evaluation correctly.
I ran COLMAP on the Barn dataset, producing Barn_COLMAP_my.ply and Barn_COLMAP_SfM_my.log (converted from the binary files with the script "convert_to_logfile.py").
Then I set up the parameters in setup.py, but I have two doubts:

  1. Should I provide both .log files (i.e., the Python variables "colmap_ref_logfile" and "new_logfile" in run.py) to align the trajectory?
  2. Is the file "Barn_trans.txt" relative only to the provided Barn_COLMAP_SfM.log, or can I also use it with my own log file (Barn_COLMAP_SfM_my.log)?

Thank you in advance, and congrats on your work.

Alessandro

Why is the uploading slow sometimes?

Hi,

I'm submitting results to your benchmark website. The first upload was fast, but subsequent uploads are very slow, taking several hours.
Do you know of any problems here?

Thank you.
Khang.

Connection Error

Thank you for your great work!
I found that these days I cannot upload point clouds because of a connection error. I changed the network and the PC but still could not upload the files. Some other researchers cannot either. Could you please check whether the website is working?
Thank you for your time!

start Lighthouse.ply upload
Exiting due to receiving 502 status code when expecting 204.

Incorrect alignment after trajectory_alignment?

I was trying to evaluate the COLMAP meshes for the Truck scene using the evaluation script, and I found that the transformed COLMAP mesh (Truck.precision.ply) is not correctly aligned with the ground-truth mesh, and the resulting F1 score is very low.


However, I manually verified that the transformation provided in the Truck_trans.txt file is correct, and indeed after removing the line trajectory_transform = trajectory_alignment(map_file, traj_to_register, gt_traj_col, gt_trans, scene) and directly using gt_trans instead of trajectory_transform, the alignment looks correct.


I'm wondering if this is caused by some issue with the alignment code, or if I'm doing something wrong. Moreover, given that we already have the ground truth transformation from Colmap, why do we still need to perform ICP registration on the point cloud? Thanks!
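For anyone reproducing this, applying the ground-truth alignment directly is straightforward. This sketch assumes the *_trans.txt files store a 4x4 row-major matrix readable by np.loadtxt (here replaced by an in-memory example with made-up values):

```python
from io import StringIO

import numpy as np

# Stand-in for np.loadtxt("Truck_trans.txt"): a 4x4 row-major transform,
# here a pure translation of 0.5 along x.
txt = StringIO("1 0 0 0.5\n0 1 0 0\n0 0 1 0\n0 0 0 1\n")
gt_trans = np.loadtxt(txt)

# Apply the alignment directly to a homogeneous point, skipping ICP refinement.
point = np.array([1.0, 2.0, 3.0, 1.0])
moved = gt_trans @ point
```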
