tier4 / driving_log_replayer
An evaluation package for Autoware
License: Apache License 2.0
/planning/scenario_planning/status/stop_reasons
↓
/awapi/autoware/get/status --field stop_reason
Note: this requires launching the deprecated API.
Add exception handling when doing a lookup transform.
Catch TransformException in an except clause.
Sample code:
https://docs.ros.org/en/rolling/Tutorials/Intermediate/Tf2/Writing-A-Tf2-Listener-Py.html
Hello everybody,
I followed the installation guide to set up the perception evaluation and used the sample_dataset to check the evaluation, as described in Quick Start/ Perception Evaluation.
The results of the simulation are always like this:
$ driving_log_replayer simulation run -p perception --rate 0.5
.
.
.
test case 1 / 1 : use case: sample_dataset
TestResult: Failed
Failed: 6 / 19 -> 31.58%
The number of executed tests varies between "NoData" and ~ 50. But the result is "Failed" every time. It seems like the point cloud only refreshes sporadically in rviz2.
The /sensing/lidar/concatenated/pointcloud Topic has an average rate of 0.3-0.4 Hz while testing.
The load on the PC being used is not the problem.
Am I doing something wrong?
Thank you!
The current implementation only outputs evaluation results for each individual dataset, even when multiple datasets are described in the perception scenario.
For database evaluation, we would like to output the results of multiple datasets together.
Fixes a problem that reported success even when the initial pose did not converge and no evaluation was performed.
The initial value of result must be false.
test case 1 / 1 : use case: sample
--------------------------------------------------
TestResult: Passed
Passed: Convergence (Passed): NotTested, Reliability (Passed): NotTested, NDT Availability (Passed): NDT available
tier4/autoware_perception_evaluation#15
A 2D perception evaluation function has been added.
Allow existing PERCEPTION functionality to work in the PR branch.
Currently, only OBSTACLE_SEGMENTATION is supported.
Reorganize so that the same mechanism can be used for analysis of other modules.
Add documentation
Success is displayed when all TP/FP/FN objects are missing.
In this case, the TP/FP/FN objects in PerceptionFrameResult.pass_fail_result in autoware_perception_evaluation are missing and analysis cannot be performed, but since the fail count is 0, Success is printed.
Ideally, when outputting a pickle, an error should be raised if all TP/FP/FN objects are missing (because the config settings are probably not appropriate).
Remove the options that were added to work around the following issue.
Correct the z-coordinate with map_height_fitter when sending the initial pose via the API.
Implementing this avoids lidar_centerpoint errors.
The convergence condition is no longer necessary because of the PR below.
autowarefoundation/autoware.universe#1873
According to the installation docs there should be a dlr
executable available. However, the pip package does not provide such a file:
pip install dlr
Collecting dlr
Downloading dlr-1.10.0-py3-none-any.whl (18 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from dlr) (1.26.4)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from dlr) (2.31.0)
Requirement already satisfied: distro in /usr/lib/python3/dist-packages (from dlr) (1.7.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->dlr) (3.6)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->dlr) (2024.2.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->dlr) (3.3.2)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->dlr) (2.2.0)
Installing collected packages: dlr
Successfully installed dlr-1.10.0
gigi@e5806e681675:~/workspace/autoware$ dlr simulation run -p perception -l "play_rate:=0.5"
bash: dlr: command not found
How can I install it?
Currently, driving_log_replayer saves evaluation results in pickle format in result_archive, but the pickle does not contain the version information of perception_eval.
Therefore, a version mismatch sometimes causes an error when users want to use PerceptionAnalyzer3D.add_from_pkl(...).
I therefore want to dump Python objects into a pickle that contains the version information of perception_eval, using a perception_eval.util.dump_to_pkl(...) function.
This function serializes objects as a dict whose keys and values are {'version': str, 'data': Any}.
This lets users notice a mismatch between the perception_eval version they are using and the one that was used for the evaluation.
The pickle can be deserialized with perception_eval.util.load_pkl(...).
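A stand-alone sketch of the proposed helpers (the real functions are planned as perception_eval.util.dump_to_pkl / load_pkl; the version constant here is a placeholder for perception_eval's own version string):

```python
import pickle
import warnings

PERCEPTION_EVAL_VERSION = "1.0.0"  # placeholder for perception_eval.__version__


def dump_to_pkl(obj, path: str) -> None:
    """Serialize obj together with the library version."""
    with open(path, "wb") as f:
        pickle.dump({"version": PERCEPTION_EVAL_VERSION, "data": obj}, f)


def load_pkl(path: str):
    """Deserialize the wrapped object, warning on a version mismatch."""
    with open(path, "rb") as f:
        wrapped = pickle.load(f)
    if wrapped["version"] != PERCEPTION_EVAL_VERSION:
        warnings.warn(
            f"pickle was written with perception_eval {wrapped['version']}, "
            f"but {PERCEPTION_EVAL_VERSION} is installed"
        )
    return wrapped["data"]
```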
As a rule for releases of perception_eval, I'm planning the following policy.
I'm working on this in the branch shown below.
In the current implementation, the pose passed from the scenario is sent directly to the initialpose service.
Since initialpose is supposed to be set from a 2D estimated pose with z=0, it is assumed to be used with z correction.
Fix so that different colors are applied to tp_gt, tp_est, fp and fn
To compare the timestamp of the point cloud with the timestamp of the bounding box, record the respective timestamps in result.jsonl.
obstacle_segmentation is currently a two-node configuration split between C++ and Python.
Use lanelet2_extension_python to make it a single-node Python configuration.
Support the test modes below and switch the test mode via the scenario.
[perception_evaluator_node.py-78] File "/home/autoware/autoware.proj/install/driving_log_replayer/lib/driving_log_replayer/perception_evaluator_node.py", line 220, in perception_cb
[perception_evaluator_node.py-78] estimated_objects: list[DynamicObject] = self.list_dynamic_object_from_ros_msg(
[perception_evaluator_node.py-78] File "/home/autoware/autoware.proj/install/driving_log_replayer/lib/driving_log_replayer/perception_evaluator_node.py", line 196, in list_dynamic_object_from_ros_msg
[perception_evaluator_node.py-78] footprint=eval_conversions.footprint_from_ros_msg(
[perception_evaluator_node.py-78] File "/home/autoware/autoware.proj/install/driving_log_replayer/local/lib/python3.10/dist-packages/driving_log_replayer/perception_eval_conversions.py", line 65, in footprint_from_ros_msg
[perception_evaluator_node.py-78] return Polygon(coords)
[perception_evaluator_node.py-78] File "/usr/local/lib/python3.10/dist-packages/shapely/geometry/polygon.py", line 261, in __init__
[perception_evaluator_node.py-78] ret = geos_polygon_from_py(shell, holes)
[perception_evaluator_node.py-78] File "/usr/local/lib/python3.10/dist-packages/shapely/geometry/polygon.py", line 539, in geos_polygon_from_py
[perception_evaluator_node.py-78] ret = geos_linearring_from_py(shell)
[perception_evaluator_node.py-78] File "shapely/speedups/_speedups.pyx", line 346, in shapely.speedups._speedups.geos_linearring_from_py
[perception_evaluator_node.py-78] ValueError: A LinearRing must have at least 3 coordinate tuples
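One possible guard, sketched under the assumption that the footprint arrives as a list of point coordinates (the function name is illustrative, not the actual driving_log_replayer code):

```python
from shapely.geometry import Polygon


def footprint_from_points(points):
    """Return a Polygon, or None when there are too few footprint points."""
    coords = [(p[0], p[1]) for p in points]
    if len(coords) < 3:
        # A shapely LinearRing needs at least 3 coordinate tuples,
        # so skip degenerate footprints instead of raising ValueError
        return None
    return Polygon(coords)
```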
https://github.com/tier4/driving_log_replayer/actions/runs/6428222697
The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
The trained machine learning models used in PERCEPTION are specified in the launch file, which also specifies the output directory and file name.
The current implementation checks the lidar_centerpoint directory, and if a different directory is specified, an infinite wait occurs.
So, delete the file-conversion wait in the launch file and assume the conversion has already been done in advance.
In the current implementation, /initialpose is published for initial positioning; change it to use the AD API service instead.
The perception jsonl output can contain NaN and Infinity values, which are not allowed in JSON.
Use simplejson and set ignore_nan.
https://stackoverflow.com/questions/6601812/sending-nan-in-json
In the current implementation, if the ground truth is zero and the number of recognized results is also zero, the frame is judged as a failure.
https://github.com/tier4/driving_log_replayer/blob/develop/driving_log_replayer/driving_log_replayer/criteria/perception.py#L178-L179
When the ground truth is zero, SUCCESS should be returned if the number of recognized results is also zero.
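A hedged sketch of the proposed rule; the real check lives in driving_log_replayer's criteria/perception.py, and the function signature and pass-ratio threshold here are simplified illustrations:

```python
def judge_frame(num_ground_truth: int, num_detected: int, num_success: int) -> bool:
    """Return True when the frame should count as a success."""
    if num_ground_truth == 0:
        # No ground truth: zero detections is the correct outcome
        return num_detected == 0
    # Otherwise judge by an illustrative pass ratio over ground truth
    return num_success / num_ground_truth >= 0.5
```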
Translate Japanese documents under docs into English
To be implemented after galactic support ends.
Use perception_online_evaluator.
Update for autowarefoundation/autoware.universe#6493.
If a bounding box exists, the point cloud is judged NG if it does not appear inside the bounding box.
However, distant objects that are annotated may not be problematic even if they are not visible.
Therefore, it should be possible to set from the scenario, based on time, whether a bounding box is enabled in the evaluation.
As explained in this issue, the interface of the traffic light message in Autoware will be updated.
I'm working in this branch.
Fix a runtime error that occurs when a value parsed from a scenario should be a float but the actual value is an integer.
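One way the fix could look (a sketch; the helper name and the "PassRate" key are illustrative, since YAML parses `1` as an int even where a float is expected):

```python
def get_float_param(scenario: dict, key: str) -> float:
    """Read a numeric scenario value and coerce it to float."""
    value = scenario[key]
    if not isinstance(value, (int, float)):
        raise TypeError(f"{key} must be a number, got {type(value).__name__}")
    # int is accepted and widened to float, avoiding the runtime error
    return float(value)
```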
Add topic publish and json output for results visualisation
Even when localization is false, the evaluation node waits for the map-fit services to start, resulting in an infinite wait.
Porting features developed in private repositories
Use the same documentation tools as Autoware.
Currently, obstacle segmentation has multiple required nodes, and the exit status is 1.
https://github.com/hayato-m126/launch_exit_status
Scenario Simulator v2 avoids this problem by using ShutdownOnce instead of Shutdown.