
driving_log_replayer's Introduction

Driving Log Replayer for ROS 2 Autoware.Universe

Driving Log Replayer is a ROS 2 package that evaluates the functionality of Autoware.Universe.

Requirements

  • ROS 2 Humble
  • Python 3.10
  • pipx
    • pipx is installed automatically by the Autoware setup.

Optional

Required only if you want to convert rosbag files from the ROS 1 format to the ROS 2 format.

Installation

You need to install both the driving_log_replayer and driving_log_replayer_cli packages.

How to install driving_log_replayer package

Use colcon build

colcon build --symlink-install --cmake-args -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Release --packages-up-to driving_log_replayer

How to install driving_log_replayer_cli package

Use pipx. Do not use pip.

# install
pipx install git+https://github.com/tier4/driving_log_replayer.git

# upgrade
pipx upgrade driving-log-replayer

# uninstall
pipx uninstall driving-log-replayer

Why pipx, not pip

For ROS, driving_log_replayer uses numpy 1.22.0 (see requirements.txt).

On the other hand, the CLI uses pandas, and the numpy version required by pandas differs from the version needed on the ROS side. If you install with pip, the pandas-dependent numpy is installed under $HOME/.local/lib/python3.10/site-packages, which causes a version mismatch. Therefore, install the CLI in an isolated venv using pipx.
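The kind of conflict described above can be checked programmatically. A minimal sketch, assuming only the pinned version string from the text (the helper name `version_matches` is hypothetical):

```python
from importlib import metadata


def version_matches(package: str, required: str) -> bool:
    """Return True only if `package` is installed at exactly `required`.

    Illustrates the pinning problem described above: the ROS side pins
    numpy 1.22.0, while pandas may pull in a different numpy release
    into the same site-packages when installed with plain pip.
    """
    try:
        return metadata.version(package) == required
    except metadata.PackageNotFoundError:
        return False
```

Because pipx installs the CLI into its own virtual environment, the ROS-pinned numpy and the pandas-compatible numpy never share a site-packages directory, so such a check never fails at runtime.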

Shell Completion

Run the following commands to enable shell completion for the dlr and dlr-analyzer commands.

bash

_DLR_COMPLETE=bash_source dlr > $HOME/.dlr-complete.bash
_DLR_ANALYZER_COMPLETE=bash_source dlr-analyzer > $HOME/.dlr-analyzer-complete.bash

echo "source $HOME/.dlr-complete.bash" >> ~/.bashrc
echo "source $HOME/.dlr-analyzer-complete.bash" >> ~/.bashrc

fish

_DLR_COMPLETE=fish_source dlr > $HOME/.config/fish/completions/dlr.fish
_DLR_ANALYZER_COMPLETE=fish_source dlr-analyzer > $HOME/.config/fish/completions/dlr-analyzer.fish

Usage

Refer to the documentation.

(For Developer) Release Process

This package uses catkin_pkg to manage releases.

Refer to this page.

Release command

This can only be executed by users with repository maintainer privileges.

# create change log
catkin_generate_changelog
# edit CHANGELOG.rst
# update package version in pyproject.toml
# edit ReleaseNotes.md
# commit and create pull request
# merge pull request
catkin_prepare_release
# When you type the command, it automatically updates CHANGELOG.rst and creates a git tag
git checkout main
git merge develop
git push origin main

driving_log_replayer's People

Contributors

hayato-m126, ktro2828, kminoda, kosuke55, yoshiri, dependabot[bot], keisukeshima, sakodashintaro, motsu-san, vios-fish, takahironishioka


driving_log_replayer's Issues

dlr installation

According to the installation docs there is a dlr executable available. However, the package installed via pip does not provide such a file:

pip install dlr
Collecting dlr
  Downloading dlr-1.10.0-py3-none-any.whl (18 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from dlr) (1.26.4)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from dlr) (2.31.0)
Requirement already satisfied: distro in /usr/lib/python3/dist-packages (from dlr) (1.7.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->dlr) (3.6)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->dlr) (2024.2.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->dlr) (3.3.2)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->dlr) (2.2.0)
Installing collected packages: dlr
Successfully installed dlr-1.10.0
gigi@e5806e681675:~/workspace/autoware$ dlr simulation run -p perception  -l "play_rate:=0.5"
bash: dlr: command not found

How can I install it?

update diagnostic status name

/autoware/sensing/node_alive_monitoring/topic_status/ad_service_state_monitor: sensing_topic_status

/autoware/perception/node_alive_monitoring/topic_status/topic_state_monitor_obstacle_segmentation_pointcloud: perception_topic_status


refactor: analyzer

  • Currently, only OBSTACLE_SEGMENTATION is supported.
  • Reorganize so that the same mechanism can be used for the analysis of other modules.
  • Add documentation.

localization: fix initial result value

Fixes a problem where the result was reported as Success even when the initial pose did not converge and was never evaluated.

self.__convergence_result = True
# reliability
self.__reliability_condition: Dict = condition["Reliability"]
self.__reliability_ng_seq = 0
self.__reliability_total = 0
self.__reliability_msg = "NotTested"
self.__reliability_result = True
self.__reliability_list = []
# availability
self.__ndt_availability_error_status_list = ["Timeout", "NotReceived"]
self.__ndt_availability_msg = "NotTested"
self.__ndt_availability_result = True

The initial value of each result must be False; otherwise untested items are reported as Passed, as in the output below:

test case 1 / 1 : use case: sample
--------------------------------------------------
TestResult: Passed
Passed: Convergence (Passed): NotTested, Reliability (Passed): NotTested, NDT Availability (Passed): NDT available
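A minimal sketch of the intended fix (the class and attribute names are hypothetical, not the actual evaluator code): each sub-result starts as not-passed with a "NotTested" summary, and only becomes True once at least one frame has actually been evaluated:

```python
class SubResult:
    """Evaluation sub-result that defaults to failure until evidence arrives."""

    def __init__(self) -> None:
        self.success = False      # initial value must be False, not True
        self.summary = "NotTested"
        self.frames = 0

    def add_frame(self, passed: bool) -> None:
        self.frames += 1
        # the first frame sets the result; any later failure makes it fail
        self.success = passed if self.frames == 1 else (self.success and passed)
        self.summary = "Passed" if self.success else "Failed"
```

With this initialization, a run in which no frames are evaluated reports `False` / "NotTested" instead of the spurious Passed shown above.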

add error handling for `footprint_from_ros_msg`

[perception_evaluator_node.py-78]   File "/home/autoware/autoware.proj/install/driving_log_replayer/lib/driving_log_replayer/perception_evaluator_node.py", line 220, in perception_cb
[perception_evaluator_node.py-78]     estimated_objects: list[DynamicObject] = self.list_dynamic_object_from_ros_msg(
[perception_evaluator_node.py-78]   File "/home/autoware/autoware.proj/install/driving_log_replayer/lib/driving_log_replayer/perception_evaluator_node.py", line 196, in list_dynamic_object_from_ros_msg
[perception_evaluator_node.py-78]     footprint=eval_conversions.footprint_from_ros_msg(
[perception_evaluator_node.py-78]   File "/home/autoware/autoware.proj/install/driving_log_replayer/local/lib/python3.10/dist-packages/driving_log_replayer/perception_eval_conversions.py", line 65, in footprint_from_ros_msg
[perception_evaluator_node.py-78]     return Polygon(coords)
[perception_evaluator_node.py-78]   File "/usr/local/lib/python3.10/dist-packages/shapely/geometry/polygon.py", line 261, in __init__
[perception_evaluator_node.py-78]     ret = geos_polygon_from_py(shell, holes)
[perception_evaluator_node.py-78]   File "/usr/local/lib/python3.10/dist-packages/shapely/geometry/polygon.py", line 539, in geos_polygon_from_py
[perception_evaluator_node.py-78]     ret = geos_linearring_from_py(shell)
[perception_evaluator_node.py-78]   File "shapely/speedups/_speedups.pyx", line 346, in shapely.speedups._speedups.geos_linearring_from_py
[perception_evaluator_node.py-78] ValueError: A LinearRing must have at least 3 coordinate tuples
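One possible guard, sketched here as an illustration rather than the actual upstream fix: validate the ring length before handing it to shapely, so a degenerate footprint is skipped instead of crashing the node with the ValueError above.

```python
def safe_footprint(coords):
    """Return a shapely Polygon, or None when the ring is degenerate.

    A LinearRing needs at least 3 coordinate tuples, so short or empty
    inputs are rejected up front instead of raising inside shapely.
    """
    if coords is None or len(coords) < 3:
        return None
    # imported lazily: only needed for valid rings
    from shapely.geometry import Polygon

    return Polygon(coords)
```

The caller can then treat a `None` footprint as "no footprint available" and continue evaluating the remaining objects.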

perception: output database evaluation result

The current implementation only outputs evaluation results for each individual dataset, even if multiple datasets are described in the perception scenario.
In database evaluation, we would like to output the results of multiple datasets together.
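Aggregating across datasets could look like the following sketch (the per-dataset `(passed, total)` dict shape is an assumption, not the actual result format):

```python
def aggregate_datasets(per_dataset: dict[str, tuple[int, int]]) -> tuple[int, int, float]:
    """Combine (passed, total) frame counts from multiple datasets into one summary."""
    passed = sum(p for p, _ in per_dataset.values())
    total = sum(t for _, t in per_dataset.values())
    rate = 100.0 * passed / total if total else 0.0
    return passed, total, rate
```

The combined pass rate would then be reported once for the whole database evaluation, alongside the per-dataset breakdowns.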

perception: Remove waiting for conversion of onnx files

The trained machine-learning models used in PERCEPTION are specified in the launch file, which defines the output directory and file name.

https://github.com/autowarefoundation/autoware.universe/blob/main/launch/tier4_perception_launch/launch/object_recognition/detection/lidar_based_detection.launch.xml#L14-L15

The current implementation checks the lidar_centerpoint directory, and if a different directory is specified, it waits forever.
So, remove the file-conversion wait from the launch file and assume the conversion has already been done in advance.

perception: When all TP/FP/FN objects are missing, the evaluation result is Success.


  • Success is displayed when all TP/FP/FN objects are missing.

  • In this case, the TP/FP/FN objects in PerceptionFrameResult.pass_fail_result in autoware_perception_evaluation are missing, so no analysis can be performed, but because the fail count is 0, Success is printed.

  • Ideally, an error should be raised when outputting the pickle if all TP/FP/FN objects are missing (because the config settings are likely inappropriate).
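A sketch of the suggested guard (the function name and list-based arguments are hypothetical): treat an evaluation with no TP/FP/FN objects at all as a configuration error rather than a pass:

```python
def frame_success(tp: list, fp: list, fn: list) -> bool:
    """Fail loudly when there is nothing to analyze instead of defaulting to Success."""
    if not tp and not fp and not fn:
        raise ValueError("no TP/FP/FN objects; the evaluation config is likely wrong")
    # a frame passes only when there are no false positives and no false negatives
    return not fp and not fn
```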

dump evaluation results into pickle with version info

What

Currently, driving_log_replayer saves evaluation results in pickle format in result_archive, but the pickle does not contain the version of perception_eval.
As a result, a version mismatch sometimes causes errors when users call PerceptionAnalyzer3D.add_from_pkl(...).

Therefore, I want to dump Python objects to pickle together with the perception_eval version, using the perception_eval.util.dump_to_pkl(...) function.
This function serializes objects as a dict of the form {'version': str, 'data': Any}.
This lets users notice a mismatch between the perception_eval version they are currently using and the one that was used for evaluation.
The pickle can be deserialized with perception_eval.util.load_pkl(...).
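The proposed format can be sketched with the standard library. This is a simplified stand-in for perception_eval.util.dump_to_pkl / load_pkl, not their actual implementation, and the version constant is a placeholder:

```python
import pickle

EVAL_VERSION = "1.0.0"  # placeholder for the perception_eval package version


def dump_to_pkl(obj, path, version=EVAL_VERSION):
    """Wrap the payload as {'version': str, 'data': Any} before pickling."""
    with open(path, "wb") as f:
        pickle.dump({"version": version, "data": obj}, f)


def load_pkl(path, expected=EVAL_VERSION):
    """Unpickle and fail fast on a version mismatch."""
    with open(path, "rb") as f:
        wrapped = pickle.load(f)
    if wrapped["version"] != expected:
        raise ValueError(
            f"pickle written with version {wrapped['version']}, expected {expected}"
        )
    return wrapped["data"]
```

Storing the version alongside the data turns a silent deserialization failure into an explicit, actionable error message.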

As a rule for releases of perception_eval, I'm planning the following policy.

  • Major updates: breaking changes or directory tree updates
  • Minor updates: changes that make Python objects in the pickle incompatible
  • Micro updates: everything else (tiny changes)

I'm working on this in the branch shown below.

Perception evaluation with sample_dataset

Hello everybody,

I followed the installation guide to set up the perception evaluation and used the sample_dataset to check the evaluation, as described in Quick Start/ Perception Evaluation.

The results of the simulation are always like this:

$ driving_log_replayer simulation run -p perception --rate 0.5
.
.
.
test case 1 / 1 : use case: sample_dataset


TestResult: Failed
Failed: 6 / 19 -> 31.58%

The number of executed tests varies between "NoData" and roughly 50, but the result is "Failed" every time. The point cloud seems to refresh only sporadically in rviz2.
The /sensing/lidar/concatenated/pointcloud Topic has an average rate of 0.3-0.4 Hz while testing.

The workload of the PC being used is not the problem.
Am I doing something wrong?

Thank you!

obstacle segmentation: Set evaluation periods for each bounding box

If a bounding box exists, the frame is judged NG when no point cloud appears inside it.
However, annotated but distant objects may not be a problem even if they are not visible.

Therefore, it should be possible to specify in the scenario, on a time basis, whether each bounding box is enabled for evaluation.
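Such a time-based switch could be sketched as follows (the scenario keys and the `periods` mapping are hypothetical):

```python
def bbox_enabled(uuid: str, stamp: float, periods: dict[str, tuple[float, float]]) -> bool:
    """Return whether a bounding box participates in evaluation at time `stamp`.

    `periods` maps a box uuid to its (start, end) evaluation window; boxes
    without an entry are always evaluated, preserving the current behavior.
    """
    if uuid not in periods:
        return True
    start, end = periods[uuid]
    return start <= stamp <= end
```

Distant boxes would then simply be given no window (or a narrow one) in the scenario, so their invisibility outside that window no longer produces an NG.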

test mode for obstacle_segmentation

Support the test modes below and switch between them via the scenario:

  • detection only
  • non_detection only
  • detection and non_detection (Currently implemented)
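The mode switch might look like this sketch (the mode names follow the list above; the enum and judging function are hypothetical):

```python
from enum import Enum


class TestMode(Enum):
    DETECTION = "detection"
    NON_DETECTION = "non_detection"
    BOTH = "detection_and_non_detection"  # current behavior


def frame_passes(mode: TestMode, detection_ok: bool, non_detection_ok: bool) -> bool:
    """Judge one frame according to the selected test mode."""
    if mode is TestMode.DETECTION:
        return detection_ok
    if mode is TestMode.NON_DETECTION:
        return non_detection_ok
    return detection_ok and non_detection_ok
```

The scenario would carry the mode string, which the evaluator parses into `TestMode` once at startup.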
