
rpg_trajectory_evaluation

This repository implements commonly used trajectory evaluation methods for visual(-inertial) odometry. Specifically, it includes

  • Different trajectory alignment methods (rigid-body, similarity and yaw-only rotation)
  • Commonly used error metrics: Absolute Trajectory Error (ATE) and Relative/Odometry Error (RE)

The relative error is implemented in the same way as in KITTI since it is the most widely used version.
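
As a rough illustration of the idea (not the toolbox's actual implementation, which compares full relative poses over several sub-trajectory lengths), a translation-only sketch of a KITTI-style relative error could look like:

```python
import numpy as np

def relative_translation_errors(p_gt, p_es, subtraj_len):
    """Translation-only sketch of a KITTI-style relative error.

    For every starting pose, find the first later pose whose accumulated
    groundtruth path length reaches `subtraj_len`, then compare the
    estimated displacement over that sub-trajectory with the
    groundtruth displacement.
    """
    # accumulated path length along the groundtruth trajectory
    dists = np.cumsum(
        np.r_[0.0, np.linalg.norm(np.diff(p_gt, axis=0), axis=1)])
    errors = []
    for i in range(len(p_gt)):
        j = np.searchsorted(dists, dists[i] + subtraj_len)
        if j >= len(p_gt):
            break
        errors.append(np.linalg.norm(
            (p_es[j] - p_es[i]) - (p_gt[j] - p_gt[i])))
    return np.array(errors)
```

A full implementation would additionally compare the relative rotations and report statistics over several sub-trajectory lengths.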

Since trajectory evaluation involves many details, the toolbox is designed for easy use. It can be used to analyze a single trajectory estimate, as well as to compare different algorithms on many datasets (e.g., in this paper and in the IROS 2019 VIO competition) with one command. The user only needs to provide the groundtruth and estimates in the desired format and specify the trajectory alignment method. The toolbox generates (almost) paper-ready plots and tables. In addition, the evaluation can be easily customized.

If you use this code in an academic context, please cite the following paper:

Zichao Zhang, Davide Scaramuzza: A Tutorial on Quantitative Trajectory Evaluation for Visual(-Inertial) Odometry, IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), 2018. PDF

@InProceedings{Zhang18iros,
  author = {Zhang, Zichao and Scaramuzza, Davide},
  title = {A Tutorial on Quantitative Trajectory Evaluation for Visual(-Inertial) Odometry},
  booktitle = {IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS)},
  year = {2018}
}
  1. Install
  2. Prepare the Data
  3. Run the Evaluation
  4. Utilities
  5. Customization: Trajectory class
  6. Credits

Install

The package is written in Python and tested on Ubuntu 16.04 and 18.04. Currently only Python 2 is supported. The package can be used as a ROS package as well as a standalone tool. To use it as a ROS package, simply clone it into your workspace. It only depends on catkin_simple to build.

Dependencies: You will need to install the following:

Prepare the Data

Each trajectory estimate (e.g., output of a visual-inertial odometry algorithm) to evaluate is organized as a self-contained folder. Each folder needs to contain at least two text files specifying the groundtruth and estimated poses with timestamps.

  • stamped_groundtruth.txt: groundtruth poses with timestamps
  • stamped_traj_estimate.txt: estimated poses with timestamps
  • (optional) eval_cfg.yaml: specify evaluation parameters
  • (optional) start_end_time.yaml: specify the start and end time (in seconds) for analysis.

For analyzing results from N runs, the estimated poses should have suffixes 0 to N-1. You can see the folders under results for examples. These files contain all the essential information to reproduce quantitative trajectory evaluation results with the toolbox.
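
The suffix convention can be sketched as follows (file names are from the source; the helper itself is hypothetical):

```python
import os

def estimate_files(result_dir, n_trials, est_name="stamped_traj_estimate"):
    """Expected estimate file names: no suffix for a single run,
    suffixes 0..N-1 for N runs (sketch)."""
    if n_trials <= 1:
        return [os.path.join(result_dir, est_name + ".txt")]
    return [os.path.join(result_dir, "%s%d.txt" % (est_name, i))
            for i in range(n_trials)]
```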

Poses

The groundtruth (stamped_groundtruth.txt) and estimated poses (stamped_traj_estimate.txt) are specified in the following format

# timestamp tx ty tz qx qy qz qw
1.403636580013555527e+09 1.258278699999999979e-02 -1.561510199999999963e-03 -4.015300900000000339e-02 -5.131151899999999988e-02 -8.092916900000000080e-01 8.562779200000000248e-04 5.851609599999999523e-01
......

Note that the file is space-separated and the quaternion has the w component at the end. The timestamps are in seconds and are used to establish temporal correspondences.
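
A minimal loader for this format might look like the following (a sketch, not the toolbox's internal reader):

```python
import numpy as np

def load_stamped_poses(fn):
    """Load a space-separated stamped pose file (sketch).

    Returns timestamps (N,), positions (N, 3), and quaternions (N, 4)
    in [qx, qy, qz, qw] order, i.e. with w last, as in the file format.
    """
    # the '#' header line is skipped by loadtxt; atleast_2d guards
    # against single-row files collapsing to a 1-D array
    data = np.atleast_2d(np.loadtxt(fn))
    return data[:, 0], data[:, 1:4], data[:, 4:8]
```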

There are some scripts under scripts/dataset_tools to help you convert your data format (EuRoC style, ROS bag) to the above format. See the corresponding section below for details.

Evaluation parameters

Currently eval_cfg.yaml specifies two parameters for trajectory alignment (used in absolute errors):

  • align_type:
    • sim3: a similarity transformation (for vision-only monocular case)
    • se3: a rigid body transformation (for vision-only stereo case)
    • posyaw: a translation plus a rotation around gravity (for visual-inertial case)
    • none: do not align the trajectory
  • align_num_frames: the number of poses (starting from the beginning) that will be used in the trajectory alignment. -1 means all poses will be used.

If this file does not exist, trajectory alignment will be done using sim3 and all the poses.
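
For example, an eval_cfg.yaml that reproduces these defaults explicitly would be:

```yaml
align_type: sim3      # one of: sim3, se3, posyaw, none
align_num_frames: -1  # -1 means all poses are used for the alignment
```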

Start and end times

start_end_time.yaml can specify the following (according to groundtruth time):

  • start_time_sec: only poses after this time will be used for analysis
  • end_time_sec: only poses before this time will be used for analysis

If this file does not exist, the analysis will be done for all the poses in stamped_traj_estimate.txt.
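
The effect of these two keys can be sketched as a simple mask over the timestamps (illustration only; filter_by_time is a hypothetical helper, not part of the toolbox):

```python
import numpy as np

# Hypothetical values that would come from start_end_time.yaml:
#   start_time_sec: 2.0
#   end_time_sec: 5.0
def filter_by_time(timestamps, poses, start_time_sec, end_time_sec):
    """Keep only poses with start_time_sec <= t <= end_time_sec (sketch)."""
    mask = (timestamps >= start_time_sec) & (timestamps <= end_time_sec)
    return timestamps[mask], poses[mask]
```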

Run the Evaluation

We can run the evaluation on a single estimate result or for multiple algorithms and datasets.

Single trajectory estimate

As a ROS package, run

rosrun rpg_trajectory_evaluation analyze_trajectory_single.py <result_folder>

or as a standalone package, run

python2 analyze_trajectory_single.py <result_folder> 

<result_folder> should contain the groundtruth, trajectory estimate and optionally the evaluation configuration as mentioned above.

Output

After the evaluation is done, you will find two folders under <result_folder>:

  • saved_results/traj_est: text files that contain the statistics of different errors
    • absolute_err_statistics_<align_type>_<align_frames>.yaml: the statistics of the absolute error using the specified alignment.
    • relative_error_statistics_<len>.yaml: the statistics of the relative error calculated using the sub-trajectories of length <len>.
    • cached_rel_err.pickle: since the relative error is time-consuming to compute, we cache the relative errors for different sub-trajectory lengths so that they can be reused directly next time.
  • plots: plots of absolute errors, relative (odometry) errors and the trajectories.

For multiple trials, the result for trial n will have the corresponding suffix, and the statistics of multiple trials will be summarized in files with the name mt_*.

As an example, after executing:

python2 scripts/analyze_trajectory_single.py results/euroc_mono_stereo/laptop/vio_mono/laptop_vio_mono_MH_05

you will find the following in plots:

[example plots: top_traj, abs_err, rel_traj]

Parameters

  • --recalculate_errors: will remove the error cache file mentioned above and re-calculate everything. Default: False.
  • --png: save plots as png instead of pdf. Default: False
  • --mul_trials: will analyze n runs. In the case of n > 1, the estimate files should end with a number suffix (e.g., stamped_traj_estimate0.txt). Default: None

Advanced: Different estimation type

Sometimes, a SLAM algorithm outputs different types of trajectories, such as real-time poses and optimized keyframe poses (e.g., pose graph, bundle adjustment). By specifying the estimation type (at the end of the command line), you can ask the script to analyze different files, for example

  • --est_type traj_est: analyze stamped_traj_estimate.txt
  • --est_type pose_graph: analyze stamped_pose_graph_estimate.txt
  • --est_type traj_est pose_graph: analyze both. In this case you can find the results in corresponding sub-directories in saved_results and plots.

The mapping from the est_type to file names (i.e., stamped_*.txt) is defined in scripts/fn_constants.py. This is also used for analyzing multiple trajectories. You can find an example in results/euroc_vislam_mono for comparing real-time poses and bundle adjustment estimates.

Multiple trajectory estimates

Similar to the case of single trajectory evaluation, for ROS, run

rosrun rpg_trajectory_evaluation analyze_trajectories.py \
  euroc_vislam_mono.yaml --output_dir=./results/euroc_vislam_mono --results_dir=./results/euroc_vislam_mono --platform laptop --odometry_error_per_dataset --plot_trajectories --rmse_table --rmse_boxplot --mul_trials=10

otherwise, run

python2 scripts/analyze_trajectories.py \
  euroc_vislam_mono.yaml --output_dir=./results/euroc_vislam_mono --results_dir=./results/euroc_vislam_mono --platform laptop --odometry_error_per_dataset --plot_trajectories --rmse_table --rmse_boxplot --mul_trials=10

These commands will look for the <platform> folder under results_dir and analyze the algorithm and dataset combinations specified in the configuration for analyze_trajectories.py, as described below.

Datasets organization

The datasets under results are organized as

<platform>
├── <alg1>
│   ├── <platform>_<alg1>_<dataset1>
│   ├── <platform>_<alg1>_<dataset2>
│   └── ......
└── <alg2>
│   ├── <platform>_<alg2>_<dataset1>
│   ├── <platform>_<alg2>_<dataset2>
    ├── ......
......

Each sub-folder is of the same format as mentioned above. You need to specify the algorithms and datasets to analyze for the script analyze_trajectories.py. We use a configuration file under scripts/analyze_trajectories_config to specify the details. For example, in euroc_vio_mono_stereo.yaml

Datasets:
  MH_01:        ---> dataset name
    label: MH01 ---> plot label for the dataset
  MH_03:
    label: MH03
  MH_05:
    label: MH05
  V2_01:
    label: V201
  V2_02:
    label: V202
  V2_03:
    label: V203
Algorithms:
  vio_mono:         ---> algorithm name
    fn: traj_est    ---> estimation type to find the correct file name
    label: vio_mono ---> plot label for the algorithm
  vio_stereo: 
    fn: traj_est
    label: vio_stereo
RelDistances: []   ---> used to specify the sub-trajectory lengths in the relative error, see below.
RelDistancePercentages: []

will analyze the following folders

├── vio_mono
│   ├── laptop_vio_mono_MH_01
│   ├── laptop_vio_mono_MH_03
│   ├── laptop_vio_mono_MH_05
│   ├── laptop_vio_mono_V2_01
│   ├── laptop_vio_mono_V2_02
│   └── laptop_vio_mono_V2_03
└── vio_stereo
    ├── laptop_vio_stereo_MH_01
    ├── laptop_vio_stereo_MH_03
    ├── laptop_vio_stereo_MH_05
    ├── laptop_vio_stereo_V2_01
    ├── laptop_vio_stereo_V2_02
    └── laptop_vio_stereo_V2_03
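
The folder layout above can be reproduced from the configuration with a small helper (a hypothetical sketch, not part of the toolbox):

```python
import os

def result_folders(results_dir, platform, algorithms, datasets):
    """Folders of the form <platform>/<alg>/<platform>_<alg>_<dataset>
    that the evaluation would look for (sketch)."""
    return [os.path.join(results_dir, platform, alg,
                         "{}_{}_{}".format(platform, alg, ds))
            for alg in algorithms for ds in datasets]
```
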
Specifying sub-trajectory lengths for relative pose errors

The relative pose error is calculated from subtrajectories of different lengths, which can be specified by the following fields in the configuration file

  • RelDistances: a set of lengths that will be used for all the datasets
  • RelDistancePercentages: the lengths will be selected independently for each dataset as certain percentages of its total length. This is overridden by RelDistances. If neither is specified, the default percentages (see src/trajectory.py) will be used.
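
For instance, with percentages [0.1, 0.2, 0.3, 0.4, 0.5] and an 80 m trajectory, the percentage-based selection amounts to (a sketch; the rounding to one decimal is an assumption):

```python
def subtraj_lengths_from_percentages(traj_length, percentages):
    """Sub-trajectory lengths as percentages of the total length (sketch)."""
    return [round(traj_length * p, 1) for p in percentages]
```

subtraj_lengths_from_percentages(80.0, [0.1, 0.2, 0.3, 0.4, 0.5]) yields [8.0, 16.0, 24.0, 32.0, 40.0].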

Output

The evaluation process will generate the saved_results folder in each result folder, the same as the evaluation of a single trajectory estimate. In addition, it will generate plots and text files under the results folder comparing the different algorithms:

  • Under the <platform>_<dataset>_results folder you can find the trajectory top/side views and the boxplots of the relative pose errors on this dataset.
  • all_translation/rotation_rmse.pdf: boxplots of the RMSE over multiple runs for all datasets.
  • <platform>_translation_rmse<algorithm_dataset_string>.txt: a LaTeX table summarizing the RMSE of all configurations. <algorithm_dataset_string> is an identifier generated from the algorithms and datasets evaluated. The values in the table are formatted as mean, median (min, max) over the errors of multiple trials.

The tables can be readily used in LaTeX files.

Several example plots (from analyze_trajectories_config/euroc_vislam_mono.yaml) comparing the performance of different algorithms are

[example plots: overall_top_traj_mh03, overall_rel_err_mh03]

Parameters

Configuration:

  • config: the configuration file under scripts/analyze_trajectories_config

Paths:

  • --results_dir: the folder where the <platform> folder will be found. Default: results folder in the toolbox folder.
  • --output_dir: the folder for all the plots and text files. Default: results folder in the toolbox folder.
  • --platform: the folder of results to be found under the <results_dir>. Default: laptop

Analysis options:

  • --mul_trials: how many trials we want to analyze. Default: None. If some algorithm-dataset configuration has fewer runs, only the available ones will be considered.

  • --odometry_error_per_dataset: whether to compute the relative error for each dataset. Default: False.

  • --overall_odometry_error: whether to compute and compare the overall odometry error on all datasets for different algorithms. Default: False.

  • --rmse_table: whether to generate the table of translation RMSE (absolute error). Default: False.

    • --rmse_table_median_only: per default, the --rmse_table option saves median/mean/min/max of multiple runs. Use this option to only save the median.
    • --rmse_boxplot: whether to plot a boxplot of the RMSE for different datasets (only valid when analyzing results from multiple trials). Default: False.
  • --plot_trajectories: whether to plot trajectories. Default: False. By default, many plots are generated, and some of them can be turned off by the following options.

    • --no_plot_side: do not plot side views
    • --no_plot_aligned: do not plot the alignment connection between the groundtruth and estimate
    • --no_plot_traj_per_alg: do not generate plots for each algorithm

Misc:

  • --recalculate_errors: whether to clear the cache and recalculate everything. Default: False.
  • --png: save plots as png instead of pdf. Default: False
  • --dpi: allow saving at a higher dpi. Default: 300
  • --no_sort_names: do not sort the names of datasets and algorithms (using the order in the configuration file) when plotting/writing the results. Default: names will be sorted.

Utilities

Dataset tools

Under scripts/dataset_tools, we provide several scripts to prepare your dataset for analysis. Specifically:

  • asl_groundtruth_to_pose.py: convert EuRoC style format to the format used in this toolbox.
  • bag_to_pose.py: extract PoseStamped/PoseWithCovarianceStamped in a ROS bag to the desired format.
  • transform_trajectory.py: transform a pose file of our format by a given transformation, useful for applying a hand-eye calibration to the groundtruth/estimate before analysis.
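
The kind of operation transform_trajectory.py performs can be sketched as follows (a translation-only illustration; the actual script also transforms the orientations):

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a quaternion [qx, qy, qz, qw] (w last)."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)]])

def transform_positions(positions, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) position array."""
    R, t = T[:3, :3], T[:3, 3]
    return positions @ R.T + t
```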

Misc scripts

Under scripts, there are also some scripts for convenience:

  • recursive_clean_results_dir.py: remove all saved_results directories recursively.
  • change_eval_cfg_recursive.py: recursively change the evaluation parameters within a given result folder.

Customization

Most of the error computation is done via the class Trajectory (src/rpg_trajectory_evaluation/trajectory.py). If you would like to customize your evaluation, you can use this class directly. The API of this class is quite simple:

# init with the result folder.
# You can also specify the subtrajectory lengths and alignment parameters in the initialization.
traj = Trajectory(result_dir)

# compute the absolute error
traj.compute_absolute_error()

# compute the relative errors at `subtraj_lengths`.
traj.compute_relative_errors(subtraj_lengths)

# compute the relative error at sub-trajectory lengths computed from the whole trajectory length.
traj.compute_relative_errors()

# save the relative error to `cached_rel_err.pickle`
traj.cache_current_error()

# write the error statistics to yaml files
traj.write_errors_to_yaml()

# static method to remove the cached error from a result folder
Trajectory.remove_cached_error(result_dir)

After the error is computed, the absolute and relative errors are stored in the trajectory object and can be accessed afterwards:

  • absolute error: traj.abs_errors. It is a dictionary that stores the absolute errors of all poses as well as their statistics. See the Trajectory.compute_absolute_error function for details.

  • relative error: traj.rel_errors. It is a dictionary that stores the relative errors at different distances (using the distance as the key). See the Trajectory.compute_relative_error_at_subtraj_len function for details.

With this interface, it should be easy to access all the computed errors for customized analysis.

Credits

See package.yaml for the list of authors that have contributed to this toolbox.

It might happen that some open-source code is incorporated into the toolbox but we missed the license/copyright information. If you recognize such a situation, please open an issue.

Acknowledgements

The development of this toolbox was supported by the Swiss National Science Foundation (SNSF) through the National Centre of Competence in Research (NCCR) Robotics, the SNSF-ERC Starting Grant, and the DARPA FLA program.

Contributors

danielgehrig18, davsca, jhidalgocarrio, zhangzichao


rpg_trajectory_evaluation's Issues

Don't have timestamps for ground truth, only image numbers

Hi, thanks for your contribution. I am generating my own dataset in which I have the image number and its pose. I then create a bag file with the images and send it to ORB-SLAM, which generates the keyframe trajectory with timestamps. However, I don't understand how to use this tool to compare against a groundtruth that doesn't have timestamps. The groundtruth pose file is like the one in the attached screenshot, but the last three values are pitch, roll and yaw instead of quaternions. Any help will be greatly appreciated. Many thanks.

[screenshot of the groundtruth pose file]

Empty estimate file

I have a problem where my stamped_est_gt_matches.txt file is empty when I try to evaluate results from VINS-Fusion and VINS-Mono. The terminal shows:
UserWarning: loadtxt: Empty input file: "plot_folder/saved_results/traj_est/stamped_est_gt_matches.txt"
matches = np.loadtxt(fn_matches, dtype=int)
Saved matching results to plot_folder/saved_results/traj_est/stamped_est_gt_matches.txt.
Empty estimate file.
Loading data failed.
Trials 0 fails, will not count.
===> MulTrajError: summarized 0 trajectories.

  • Successs indices: []
    No success runs, not plotting.

Plotting absolute error for one trajectory...
Traceback (most recent call last):
File "catkin_rpg_trajectory_evaluation/src/rpg_trajectory_evaluation/scripts/analyze_trajectory_single.py", line 176, in
pu.plot_trajectory_top(ax, plot_traj.p_es_aligned, 'b', 'Estimate')
NameError: name 'plot_traj' is not defined

I have also checked my stamped_groundtruth.txt and stamped_traj_estimate.txt; the format is correct and contains a timestamp, translation vector and quaternion.

Empty estimate file error

I did everything as proposed in the instructions to run the code; however, I get an "Empty estimate file" error and it does not plot the trajectories. I hope you can help me solve this issue.

Best regards

Inconsistency between RTE and ATE

@zhangzichao
First, thank you for open-sourcing this package.
When analyzing my work, I get a highly contradictory report between RTE and ATE, as in the attached figures (aligned_trajectory, ate, rte).
Given that the two trajectories are well aligned, I don't understand why the ATE and RTE reports contradict each other. Thanks for your help!

Computing square root of sum of squared errors vs square root of mean squared errors

Hi, in the paper referenced here: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6385773&tag=1, the absolute trajectory error is defined as the root mean squared error, but in the code it is computed as the square root of the sum of squared errors, as referenced below.

Is there a specific reason for doing this?
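
For concreteness, the two quantities compared here differ by a factor of sqrt(N) (a small numeric illustration, not the toolbox's code):

```python
import numpy as np

errors = np.array([1.0, 2.0, 2.0])     # per-pose translation errors
rmse = np.sqrt(np.mean(errors**2))     # root *mean* squared error
rsse = np.sqrt(np.sum(errors**2))      # root *sum* of squared errors
# rsse = rmse * sqrt(N), so the two differ by a factor of sqrt(N)
```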

Why are there two more lines when using 'Multiple trajectory estimates'?

Just like the red and orange lines in the attached picture.

This is the command I used in Ubuntu 18.04:
python2 analyze_trajectories.py euroc_vislam_mono.yaml --output_dir=../results/euroc_vislam_mono/ --results_dir=../results/euroc_vislam_mono/ --platform laptop --odometry_error_per_dataset --plot_trajectories --rmse_table --rmse_boxplot --mul_trials=10

One question about the visual-inertial case

Hi, I have read the paper A Tutorial on Quantitative Trajectory Evaluation for Visual(-Inertial) Odometry, which is a great paper with a useful tool written in Python.

But I have one question about the visual-inertial case:
the paper recommends using a 4-DOF transform, but the groundtruth may not be provided in the world frame (e.g., VICON). Should I transform the groundtruth to the world frame, or would it be all right to use a 6-DOF transform? How different would it be?

IndexError: too many indices for array

I'm currently running SVO on a subset of the EuRoC datasets, skipping the first 10 seconds to avoid initialization errors, i.e. rosbag play <file.bag> -s 10.
I record the estimated trajectory and extract it to a text file.
For the ground truth I use the supplied ground truth in ASL format and parse it using the supplied MATLAB code.
As a result, there is a mismatch in the number of poses between the estimate and the ground truth.

I assume that for the trajectory evaluation to work, there must be an equal number of poses in both?
Can anyone confirm whether this is the case?

The aligned plot is not being created

Thanks a lot for the library :)

After executing the example:

python2 scripts/analyze_trajectory_single.py results/euroc_mono_stereo/laptop/vio_mono/laptop_vio_mono_MH_05

I got the following plot:

[plot attached]

After seeing the output from the example, I guess that the groundtruth trajectory visually matches the aligned one. But I am not sure whether the aligned trajectory has been created and taken into account to compute the error metrics. Is there a way to check this?

Thanks in advance!

How is it possible to generate the datasets for multiple runs/trials?

I'd like to use your feature of running the trajectory evaluation for multiple trials of a given algorithm and dataset by enabling the argument --mul_trials. What I want to ask is: how did you generate the estimate files of several trials (e.g., "stamped_traj_estimate0.txt", "stamped_traj_estimate1.txt", "stamped_traj_estimate2.txt", ...) corresponding to the same ground-truth file? After checking your template trajectory files, I still have no clue how to generate them in an experiment; I am only able to generate one trial as usual. Or is it only possible to generate such datasets in simulation?
Many thanks in advance!

A question about equation (23) in the paper

In the paper, the absolute trajectory error can be calculated as in Equation (23). Note that in that equation, the position error includes the rotation error multiplying the aligned position.

However, according to the code, the position error is calculated as:

e_trans_vec = (p_gt-p_es_aligned)
e_trans = np.sqrt(np.sum(e_trans_vec**2, 1))

without the multiplication of the rotation error.

I wonder which one is correct.

Do we have to install LaTeX before using this tool?

The following errors occurred at runtime:

$ python2 analyze_trajectory_single.py --png ./laptop_vio_mono_MH_01/
Going to analyze the results in ./laptop_vio_mono_MH_01/.
The plots will saved in ./laptop_vio_mono_MH_01/plots.

Calculating errors...
Find evaluation configuration, will overwrite default.
The current evaluation configuration is {'align_type': 'posyaw', 'align_num_frames': -1}
Loading trajectory data...
loading dataset in ./laptop_vio_mono_MH_01/
Loading cached relative (odometry) errors from ./laptop_vio_mono_MH_01/saved_results/cached_rel_err.pickle
Loaded odometry error calcualted at [8.0, 16.0, 32.0, 24.0, 40.0]
...done.
Computing preset subtrajectory lengths for relative errors...
Use percentage [0.1, 0.2, 0.3, 0.4, 0.5] of trajectory length.
...done. Computed preset subtrajecory lengths: [8.0, 16.0, 24.0, 32.0, 40.0]
Aliging the trajectory estimate to the groundtruth...
Alignment type is posyaw.
To align all frames.
... trajectory alignment done.
Calculating RMSE...
Trajectory already aligned
...RMSE calculated.
Computing the relative errors based on preset subtrajectory lengths...
Relative error at sub-trajectory length 8.0 is already computed or loaded from cache.
Relative error at sub-trajectory length 16.0 is already computed or loaded from cache.
Relative error at sub-trajectory length 24.0 is already computed or loaded from cache.
Relative error at sub-trajectory length 32.0 is already computed or loaded from cache.
Relative error at sub-trajectory length 40.0 is already computed or loaded from cache.
Plotting absolute error...
sh: 1: latex: not found
Traceback (most recent call last):
File "analyze_trajectory_single.py", line 69, in
fig.tight_layout()
File "/usr/lib/python2.7/dist-packages/matplotlib/figure.py", line 1754, in tight_layout
rect=rect)
File "/usr/lib/python2.7/dist-packages/matplotlib/tight_layout.py", line 349, in get_tight_layout_figure
pad=pad, h_pad=h_pad, w_pad=w_pad)
File "/usr/lib/python2.7/dist-packages/matplotlib/tight_layout.py", line 126, in auto_adjust_subplotpars
tight_bbox_raw = union([ax.get_tightbbox(renderer) for ax in subplots])
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 3679, in get_tightbbox
bb_xaxis = self.xaxis.get_tightbbox(renderer)
File "/usr/lib/python2.7/dist-packages/matplotlib/axis.py", line 1075, in get_tightbbox
renderer)
File "/usr/lib/python2.7/dist-packages/matplotlib/axis.py", line 1058, in _get_tick_bboxes
extent = tick.label1.get_window_extent(renderer)
File "/usr/lib/python2.7/dist-packages/matplotlib/text.py", line 961, in get_window_extent
bbox, info, descent = self._get_layout(self._renderer)
File "/usr/lib/python2.7/dist-packages/matplotlib/text.py", line 352, in _get_layout
ismath=False)
File "/usr/lib/python2.7/dist-packages/matplotlib/backends/backend_agg.py", line 229, in get_text_width_height_descent
renderer=self)
File "/usr/lib/python2.7/dist-packages/matplotlib/texmanager.py", line 675, in get_text_width_height_descent
dvifile = self.make_dvi(tex, fontsize)
File "/usr/lib/python2.7/dist-packages/matplotlib/texmanager.py", line 422, in make_dvi
report))
RuntimeError: LaTeX was not able to process the following string:
'lp'
Here is the full report generated by LaTeX:

plot failed.

@zhangzichao
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:1331: UserWarning: findfont: Font family [u'serif'] not found. Falling back to DejaVu Sans
....
OSError: [Errno 2] No such file or directory: 'latex'

I hope to get your advice.

How to use the parameters "recalculate_errors" and "png"?

Hi, I'm trying to use the parameters "recalculate_errors" and "png" with python2 analyze_trajectory_single.py --recalculate_errors=True --png=True MyResultDir, but it shows the following error:
usage: analyze_trajectory_single.py [-h] [--plots_dir PLOTS_DIR]
[--recalculate_errors] [--png]
result_dir
analyze_trajectory_single.py: error: argument --recalculate_errors: ignored explicit argument 'True'
So, can you show me how to use these two parameters correctly?

Plots are not created and getting errors while evaluating

Hi,
once I run the evaluation results are saved into the daved_results folder as expected, which seems to be running well, however plots are not generated. Also, I attach the whole terminal output which shows some errors. Also another question, I would like to compare the estimated trajectory not only with the ground truth trayectory but also with the odometry trayectory, is this possible at once? or should I just use as estimated trayectory either the real estimated trayectory and after the odometry trayectory?
image
I hope I can get some help, thanks!!

Port over to Python 3

Hi,
I really like this package, much better than EVO, but it is no longer easy to run since most existing hardware runs Ubuntu 20.04 / Python 3. How complex would it be to upgrade to Python 3?

Plot of trajectory: where is the magenta color?

From the plotted trajectory, I don't really see the ground truth in magenta as specified in the legend of the plot. Is it simply covered up by the aligned trajectory? And does the grey line indicate the aligned points?
Why is that?

How to modify the order of labels?

Thank you very much for your excellent work. I would like to ask how to modify the label order displayed on the graph when validating multiple algorithms.

How to use the dataset_tools "transform_trajectory.py"?

Hello, this tool is really useful for us, but it is my first time using it. From our odometry we can obtain the absolute coordinates in the world coordinate system, and we can also get the 4x4 relative transformation matrix. To use this tool, we need to convert our 4x4 matrix to the quaternion format, but we do not know how to use the tool "transform_trajectory.py": what is the expected input format (the code only says "Comma separated rows") and how do we obtain the usable data? Could you please give a detailed example of its use? Thank you!

How do I fix the scaling issue when plotting the results?

Hello!
I was trying to run this library but am facing issues regarding scaling. For simplicity I made a small example using a square: I generated a blue square, then rotated it and scaled it up by 2. All the points for these squares are saved into their respective files like so:
Ground Truth (Original Square):
0 -2 -2 0 0 0 0 1
1 -2 2 0 0 0 0 1
2 2 2 0 0 0 0 1
3 2 -2 0 0 0 0 1
4 -2 -2 0 0 0 0 1

Estimate (Rotated and up-scaled Square):
0 5.36828111892 -1.78369218989 0.0 0.0 0.0 0.850903524534 0.525321988818
1 -1.78369218989 -5.36828111892 0.0 0.0 0.0 0.850903524534 0.525321988818
2 -5.36828111892 1.78369218989 0.0 0.0 0.0 0.850903524534 0.525321988818
3 1.78369218989 5.36828111892 0.0 0.0 0.0 0.850903524534 0.525321988818
4 5.36828111892 -1.78369218989 0.0 0.0 0.0 0.850903524534 0.525321988818

After I run the library on these files, it gives me the output in the attached screenshot: it manages to correct the rotation but does not resize the square.

I have run the library on one of the examples (the one mentioned in the README) and I get the correct result.

Inconsistency between latex table and boxplot for rel_trans_perc

Hi Zichao,

I am using the master branch to compare several algorithms in the multi trial mode on the EUROC dataset. In particular, every method runs 5 times on one data mission. I find that the number in "laptop_rel_err....txt" does not agree with the boxplots in overall_rel_trans_perc.pdf.
Below are the numbers in "laptop_rel_err...txt". Notice the translation entry for OKVIS is about 70%.

                  Translation (%)   Rotation (deg/meter)
MSCKF                   0.720              0.030
MSCKF_async             0.751              0.032
MSCKF_brisk_b2b         1.001              0.028
MSCKF_klt               2.030              0.047
OKVIS                  69.306              0.074
OKVIS_nframe            0.543              0.025

The overall_rel_trans_perc.pdf is attached. Notice that the relative translation errors for OKVIS at the different distance intervals are all below 1%.

I am curious why the translation error for OKVIS reported in "laptop_rel_err...txt" shoots up to 70%.

As I understand it, a number in the Translation column of "laptop_rel_err...txt" is the mean of all relative translation errors for a method, while the middle line of a box in the overall_rel_trans_perc.pdf boxplot shows the median of those errors. Is it possible that the mean deviates so much from the median?

How to build this tool? I have read the README, but I need more details.

Hi,
I want to use this trajectory evaluation tool and need some guidance. I have installed all dependencies and cloned "catkin_simple" as mentioned. I then ran catkin_make on the src of "catkin_simple", cloned "rpg_trajectory_evaluation", and changed into the rpg_trajectory_evaluation directory. But when I enter the command "catkin_simple", it says "command not found". What is the next step or command? What should I do?
Thank you!

Unexpected relative error.

Hi, thanks for providing this useful tool, but something went wrong. I evaluated VINS-Mono with this tool. The alignment is correct, but the translation errors seem larger than what the aligned trajectory suggests.
Could anyone offer some advice? Many thanks!
