
autoware_ai_perception's People

Contributors

amc-nu, andoh104, aohsato, atinfinity, dejanpan, drwnz, esteve, gbiggs, hakuturu583, hatem-darweesh, icolwell-as, k0suke-murakami, kfunaoka, kuriking, luca-fancellu, masa1228masa, mitsudome-r, n-patiphon, ompugao, pdsljp, sgermanserrano, syohex, syouji, takahoribe, tomohitoando, yamatoando, yk-fujii, yn-mrse, ypicchi-arm, yukkysaito


autoware_ai_perception's Issues

Support for Noetic

Feature request

Support for ROS Noetic

Description

We're using the points_preprocessor package and are transitioning to ROS Noetic.

Implementation considerations

A separate branch for Noetic changes would already help. I've made some changes for the new velodyne release (Noetic uses 1.6.1). It would be nice if I could target these changes against a noetic branch so other people don't have to go to the same trouble.

Alternatives

Everybody repeats this effort 👓

Additional information

pcl_omp_registration not building on Ubuntu 20.04 / Noetic

Bug report

Required information:

  • Operating system and version:
    • Ubuntu 20.04
  • Autoware installation type:
    • from source
  • Autoware version or commit hash
  • ROS distribution and version:
    • ROS Noetic
  • ROS installation type:
    • binary installation (add ppa, sudo apt install)
  • Package or library, if applicable:
    • pcl_omp_registration

Description of the bug

After cloning the repo and installing dependencies with rosdep, I would expect everything to compile. Instead, catkin_make hits an error in the pcl_omp_registration package and the build stalls.

Steps to reproduce the bug

  1. git clone this repo & the repos stated in the build_depends.repo file
  2. rosdep install --from-paths src --ignore-src -r -y
  3. one dependency is still missing, so clone tmv_vendor to fix it
  4. run catkin_make

Expected behavior

builds clean

Actual behavior

errors out on pcl_omp_registration

Additional information

Copy & pasted some of the errors for reference:

code/ros/noetic/src/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:181:23: error: expected identifier before string constant
  181 |       PCL_DEPRECATED ("[pcl::registration::Registration::setInputCloud] setInputCloud is deprecated. Please use setInputSource instead.")
      |                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error: ‘int pcl_omp::Registration<PointSource, PointTarget, Scalar>::PCL_DEPRECATED(int)’ cannot be overloaded with ‘int pcl_omp::Registration<PointSource, PointTarget, Scalar>::PCL_DEPRECATED(int)’
  186 |       PCL_DEPRECATED ("[pcl::registration::Registration::getInputCloud] getInputCloud is deprecated. Please use getInputSource instead.")
code/ros/noetic/src/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:181:7: note: previous declaration ‘int pcl_omp::Registration<PointSource, PointTarget, Scalar>::PCL_DEPRECATED(int)’
  181 |       PCL_DEPRECATED ("[pcl::registration::Registration::setInputCloud] setInputCloud is deprecated. Please use setInputSource instead.")
code/ros/noetic/src/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:375:45: error: ‘boost::function’ has not been declared
  375 |       registerVisualizationCallback (boost::function<FunctionSignature> &visualizerCallback)
      |                                             ^~~~~~~~
code/ros/noetic/src/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:575:14: error: ‘function’ in namespace ‘boost’ does not name a template type; did you mean ‘is_function’?
  575 |       boost::function<void(const pcl::PointCloud<PointSource> &cloud_src,
      |              ^~~~~~~~
      |              is_function
code/ros/noetic/src/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:377:13: error: ‘visualizerCallback’ was not declared in this scope
  377 |         if (visualizerCallback != NULL)
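These errors are consistent with building this header (a copy of an older PCL registration API) against the PCL 1.10 that ships with Noetic: the PCL_DEPRECATED macro no longer accepts a bare message string, and the PCL headers no longer pull in boost::function transitively, since PCL itself migrated to the standard library. A minimal, self-contained sketch of the two fixes (hypothetical names, not the actual pcl_omp header):

#include <functional>
#include <vector>

template <typename PointSource>
class Registration
{
public:
  // A plain C++14 attribute replaces the stale PCL_DEPRECATED invocation:
  [[deprecated("setInputCloud is deprecated. Please use setInputSource instead.")]]
  void setInputCloud(const std::vector<PointSource>& cloud) { setInputSource(cloud); }

  void setInputSource(const std::vector<PointSource>& cloud) { source_ = cloud; }

  // std::function replaces boost::function (alternatively, include
  // <boost/function.hpp> explicitly and keep the Boost type):
  using VisualizerCallback = std::function<void(const std::vector<PointSource>&)>;
  void registerVisualizationCallback(const VisualizerCallback& cb) { update_visualizer_ = cb; }

private:
  std::vector<PointSource> source_;
  VisualizerCallback update_visualizer_;
};

int main()
{
  Registration<float> reg;
  reg.registerVisualizationCallback([](const std::vector<float>&) {});
  reg.setInputSource({1.0f, 2.0f, 3.0f});
  return 0;
}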

Release to ros buildfarm

Feature request

Release points_preprocessor to the ROS buildfarm.

Description

We're using the ray_ground_filter from the points_preprocessor package, but installation and compilation take quite some time because of the dependencies. Would it be possible to release it for Melodic and Noetic?

Implementation considerations

I can imagine it might be undesirable to release a single package from this complete repository, but it might be a nice starting point.

Alternatives

Fork the single points_preprocessor package and strip the Autoware dependencies such as autoware_health_checker, which pulls in heavy dependencies and is not required for operation.
But I'd rather stick to the original version here, of course, for credibility and maintenance reasons :)

Additional information

ray_ground_filter crashes on empty clouds

Bug report

Required information:

  • Operating system and version:
    • Ubuntu 18.04
  • Autoware installation type:
    • None
  • Autoware version or commit hash
    • core_perception latest master
  • ROS distribution and version:
    • melodic
  • ROS installation type:
    • git clone
  • Package or library, if applicable:
    • points_preprocessor

Description of the bug

ray_ground_filter crashes on empty clouds

Steps to reproduce the bug

Send an empty cloud to the ray_ground_filter

Expected behavior

Not crashing; just forward the cloud (there is no ground in an empty message). See the sketch under Additional information below.

Actual behavior

Crash

Screenshots

Additional information
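A minimal sketch of the kind of guard that would avoid the crash, assuming a typical callback and illustrative publisher names (the actual members in ray_ground_filter.cpp may differ):

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

ros::Publisher ground_pub;     // hypothetical: ground points output
ros::Publisher no_ground_pub;  // hypothetical: no-ground points output

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& in_msg)
{
  // An empty cloud contains no ground: forward it unchanged on both
  // outputs and skip the ray-based classification that currently crashes.
  if (in_msg->width * in_msg->height == 0)
  {
    ground_pub.publish(*in_msg);
    no_ground_pub.publish(*in_msg);
    return;
  }
  // ... existing ground/no-ground classification ...
}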

ndt_mapping doesn't compute transformation matrix

Bug report

Required information:

  • Operating system and version:
    • Ubuntu 18.04
  • Autoware installation type:
    • From source
  • Autoware version or commit hash
    • V1.14 master
  • ROS distribution and version:
    • melodic
  • ROS installation type:
    • From source
  • Package or library, if applicable:
    • ndt_mapping

Description of the bug

I ran ndt_mapping on the demo bag file (sample_moriyama_150324.bag). It shows /points_raw in rviz, but /points_map cannot be visualized and /current_pose is always zero (the transformation matrix is a zero matrix). Does anyone have an idea why?

Steps to reproduce the bug

  1. Launch runtime_manager
  2. Launch tf of velodyne and map
  3. Launch ndt_mapping
  4. Launch Rviz
  5. Cancel the pause of .bag file

Expected behavior

have mapping output

Actual behavior

Console output: ....

"-----------------------------------------------------------------"
(Processed/Input): (306 / 308)
"-----------------------------------------------------------------"
Sequence number: 86918
Number of scan points: 39581 points.
Number of filtered scan points: 3091 points.
transformed_scan_ptr: 39581 points.
map: 39928 points.
NDT has converged: 1
Fitness score: 0
Number of iteration: 0
(x,y,z,roll,pitch,yaw):
(0, 0, 0, 0, -0, 0)
Transformation Matrix:
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
shift: 0

Additional information

https://answers.ros.org/question/365415/autoware-ndt_mapping-doesnt-compute-transformation-matrix/?answer=373505#post-id-373505

Ray ground filter : Spotty ground removal

Bug report

Required information:

  • Operating system and version:
  • Autoware installation type: Source
  • Autoware version or commit hash : 411330c
  • ROS distribution and version: Melodic
  • ROS installation type: apt
  • Package or library, if applicable: ray_ground_filter

Description of the bug

I have noticed that the no-ground point cloud is very sparse and that a lot of points on obstacles are classified as ground even though they shouldn't be.

Steps to reproduce the bug

  • Use ray_ground_filter on a high density pointcloud

Expected behavior

No 'spots' of ground points between vertical points on obvious obstacles.

Actual behavior

The obstacles are scattered with ground points and vertical points.

Screenshots

Source of pointcloud : Ouster OS1 128 beams.

  • Man at 40m : (Orange = vertical points, grey : raw cloud)
    image

  • Man at 3m : (green = ground, pink = vertical)
    image

I have tried reducing radial_divider_angle to 0.006, since the sensor runs at 1024 samples per rotation and 2π / 1024 ≈ 0.006.

  • Man at 40m : (green = ground, pink = vertical)
    image

  • Man at 3m : (green = ground, pink = vertical)
    image

Additional information

I have tried before and after PR #12 and the result is the same.

Here is the config we use :

lidar_center:
  ray_ground_filter:
    base_frame: base_link
    ground_point_topic: /lidar_center/points_ground_test
    input_point_topic: /lidar_center/points_raw
    no_ground_point_topic: /lidar_center/points_no_ground_test
    clipping_height: 1.8
    # min_point_distance in platform_params
    radial_divider_angle: 0.05
    concentric_divider_distance: 0
    local_max_slope: 8
    general_max_slope: 8
    min_height_threshold: 0.4
    reclass_distance_threshold: 0.5
    sensor_height: 2.23
    min_point_distance: 2.0

I can provide a bag if needed.

Thanks !

Ndt_mapping not working

Required information:

  • Operating system and version:
    • Ubuntu 18.04
  • Autoware installation type:
    • From source
  • Autoware version or commit hash
    • Master version
  • ROS distribution and version:
    • Melodic
  • ROS installation type:
    • Binary
  • Package or library, if applicable:
    • ndt_mapping

Description of the bug
ndt_mapping did not work. When I play a bag file that includes the /points_raw and /vehicle_twist topics, the transformation matrix stays empty (a zero matrix). When the ndt_mapping process is terminated by requesting a PCD output from the Runtime Manager, the generated PCD cannot be loaded with points_map_loader, which returns the warning "no loaded points".

Steps to reproduce the bug
Start ndt_mapping from the Runtime Manager GUI after starting the bag file.

Expected behavior
ndt_mapping should create a PCD file.

Actual behavior
The created PCD file is broken or empty.

Screenshots
Autoware-1.15 behavior:
Screenshot from 2021-04-08 17-43-34

Autoware-1.12 behavior:
Screenshot from 2021-04-08 17-51-07

Additional information
I've tried ndt_mapping in other versions of autoware.ai (1.12 and 1.14) with the same bag file and it worked.

Bug: Publish the wrong map message.

Bug report

Hi! Thanks for your excellent work!

I don't understand why the map from the last frame is published. Would it be better to publish the current map (map) instead of the previous map (*map_ptr)? If so, the code at line 788 of ndt_mapping.cpp could be changed to
pcl::toROSMsg(map, *map_msg_ptr);
Or maybe I misunderstand something. :)
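For context, a sketch of the suspected pattern (the surrounding lines are assumed from the report, not quoted from ndt_mapping.cpp; only the toROSMsg call is the reporter's):

// Hypothetical context around line 788: `map` is the map updated in this
// frame, `map_ptr` a pointer to the copy from the previous frame.
sensor_msgs::PointCloud2::Ptr map_msg_ptr(new sensor_msgs::PointCloud2);

// Before (publishes the previous frame's map):
pcl::toROSMsg(*map_ptr, *map_msg_ptr);

// After (publishes the map updated in the current frame):
pcl::toROSMsg(map, *map_msg_ptr);

ndt_map_pub.publish(*map_msg_ptr);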

dead lock in ndt_gpu

Bug report

Required information:

  • Operating system and version:
    • ubuntu 16.04
  • Autoware installation type:
    • source
  • Autoware version or commit hash
    • master
  • ROS distribution and version:
    • kinetic
  • ROS installation type:
    • http://wiki.ros.org/kinetic/Installation
  • Package or library, if applicable:
    • ndt_gpu

Description of the bug

Using the ndt_gpu package, the process sometimes gets trapped in buildParent.

It looks like a deadlock, caused by an unnecessary __syncthreads() at line 1303 that sits inside an if block. The sketch below illustrates the pitfall.
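For illustration: __syncthreads() is a block-wide barrier that every thread of the block must reach, and placing it inside a divergent branch is undefined behavior, so the kernel may hang on one GPU and appear to work on another, which would match the report. A generic CUDA sketch (hypothetical kernels, not the actual buildParent code):

// Broken: threads with i >= n skip the branch and never reach the
// barrier, so the threads inside it can wait forever.
__global__ void brokenKernel(int* data, int n)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n)
  {
    data[i] *= 2;
    __syncthreads();  // divergent barrier: potential deadlock
  }
}

// Fixed: hoist the barrier out of the conditional so every thread in the
// block executes it.
__global__ void fixedKernel(int* data, int n)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n)
  {
    data[i] *= 2;
  }
  __syncthreads();  // uniform barrier
}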

Steps to reproduce the bug

I met this bug running on my own data on one computer, but not on another. It may depend on the hardware.

Expected behavior

Actual behavior

Screenshots

Additional information

Autoware fails to recognize small objects on the road

Bug report

Required information:

  • Operating system and version: Ubuntu 18.04
  • Autoware installation type: Docker image autoware/autoware:1.14.0-melodic-cuda, as specified in the Dockerfile of https://github.com/carla-simulator/carla-autoware at commit 975b6f3.
  • Autoware version or commit hash - 1.14.0
  • ROS distribution and version:
    • ROS Melodic (pre-installed in the docker image)
  • ROS installation type:
    • Pre-installed in the docker image

Description of the bug

  • I am running Autoware 1.14.0 (docker image: autoware/autoware:1.14.0-melodic-cuda), Carla 0.9.10.1 (docker image: carlasim/carla:0.9.10.1) as a simulator and carla-autoware-bridge at 975b6f3 as a bridge.
  • There is an accident that leaves debris on the path, which is large enough that it should be avoided.
  • The vehicle doesn't recognize the debris and hits it: Camera.
  • Based on the point clouds, neither the LiDAR nor the fusion data seems to have recognized the object: rviz.
  • Please check the rosbag file for a detailed trace.

Steps to reproduce the bug

  1. Replay the attached rosbag files.

Expected behavior

Vehicle recognizes the debris and avoids hitting it.

Actual behavior

Vehicle fails to recognize the debris and hits it.

Screenshots

Videos attached above.

Additional information

traffic_light_recognition died.

Bug report

Required information:

  • Operating system and version:

    • Ubuntu 18.04
  • Autoware installation type:

    • Docker with mounted source code directory
  • Autoware version or commit hash

    • 0.14.0 version
  • ROS distribution and version:

    • melodic
  • ROS installation type:

    • Docker
  • Package or library, if applicable:

    • trafficlight_recognizer

Description of the bug

The traffic_light_recognition process died as follows:

$ roslaunch traffic_light_recognition.launch                            
...

started roslaunch server http://ARC-TaeHyung-19:38169/

SUMMARY
========

PARAMETERS
 * /rosdistro: melodic
 * /rosversion: 1.14.6
 * /tl_switch/ams_light_color_topic: /ams_light_color
 * /tl_switch/camera_light_color_topic: //camera_light_color
 * /tl_switch/light_color_topic: /light_color
 * /traffic_light_recognition/camera_light_color_topic: //camera_light_color
 * /traffic_light_recognition/image_raw_topic: //image_raw

NODES
  /
    tl_switch (trafficlight_recognizer/tl_switch)
    traffic_light_recognition (trafficlight_recognizer/region_tlr)

ROS_MASTER_URI=http://localhost:11311

process[traffic_light_recognition-1]: started with pid [8943]

(superimpose result:8943): dbind-WARNING **: 20:21:39.405: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files
process[tl_switch-2]: started with pid [8944]
Attempt to unlock mutex that was not locked
[traffic_light_recognition-1] process has died [pid 8943, exit code -6, cmd /home/autoware/Autoware/install/trafficlight_recognizer/lib/trafficlight_recognizer/region_tlr __name:=traffic_light_recognition __log:=/home/autoware/.ros/log/9da67ef8-dc03-11ea-a975-54bf647d3a59/traffic_light_recognition-1.log].
log file: /home/autoware/.ros/log/9da67ef8-dc03-11ea-a975-54bf647d3a59/traffic_light_recognition-1*.log
^C[tl_switch-2] killing on exit
^Cshutting down processing monitor...
... shutting down processing monitor complete
done

Steps to reproduce the bug

1. Download the vector and point maps.

$ cd ~/shared_dir
$ git clone https://github.com/lgsvl/autoware-data.git
$ git checkout test_0.14.0

2. Run the docker container

# Change directory to docker/generic
$ ./run.sh -t local # in my case, I used local tag. Instead you may use 1.14.0 tag.

3. Start the Runtime Manager

$ roslaunch runtime_manager runtime_manager.launch

4. Start the LGSVL Simulator
4-1. Download the LGSVL Simulator:
https://github.com/lgsvl/simulator/releases/download/2020.05/lgsvlsimulator-linux64-2020.05.zip
4-2. Unzip it.
4-3. Run the simulator.
4-4. Click Open Browser.
4-5. In the Maps tab, add the BorregasAve map as follows:

Map Name: BorregasAve
Map URL: https://assets.lgsvlsimulator.com/65fac0d499d716bb9a7ca89803f672d97b366cf9/environment_BorregasAve

4-6. In the Vehicles tab, add the Jaguar2015XE vehicle as follows:

Vehicle Name: Jaguar2015XE
Vehicle URL: https://assets.lgsvlsimulator.com/d49c88a93b11b047a3e21e0c7d4b2ec6964fb16d/vehicle_Jaguar2015XE

4-7. Add the JSON sensor configuration
In the Vehicles tab, click the wrench button on Jaguar2015XE, select Bridge Type as ROS, and copy and paste the following JSON configuration into the Sensors window.

[
  {
    "type": "GPS Device",
    "name": "GPS",
    "params": {
      "Frequency": 12.5,
      "Topic": "/nmea_sentence",
      "Frame": "gps",
      "IgnoreMapOrigin": true
    },
    "transform": {
      "x": 0,
      "y": 0,
      "z": 0,
      "pitch": 0,
      "yaw": 0,
      "roll": 0
    }
  },
  {
    "type": "GPS Odometry",
    "name": "GPS Odometry",
    "params": {
      "Frequency": 12.5,
      "Topic": "/odom",
      "Frame": "gps",
      "IgnoreMapOrigin": true
    },
    "transform": {
      "x": 0,
      "y": 0,
      "z": 0,
      "pitch": 0,
      "yaw": 0,
      "roll": 0
    }
  },
  {
    "type": "IMU",
    "name": "IMU",
    "params": {
      "Topic": "/imu_raw",
      "Frame": "imu"
    },
    "transform": {
      "x": 0,
      "y": 0,
      "z": 0,
      "pitch": 0,
      "yaw": 0,
      "roll": 0
    }
  },
  {
    "type": "Lidar",
    "name": "Lidar",
    "params": {
      "LaserCount": 32,
      "MinDistance": 0.5,
      "MaxDistance": 100,
      "RotationFrequency": 10,
      "MeasurementsPerRotation": 360,
      "FieldOfView": 41.33,
      "CenterAngle": 10,
      "Compensated": true,
      "PointColor": "#ff000000",
      "Topic": "/points_raw",
      "Frame": "velodyne"
    },
    "transform": {
      "x": 0,
      "y": 2.312,
      "z": -0.3679201,
      "pitch": 0,
      "yaw": 0,
      "roll": 0
    }
  },
  {
    "type": "Color Camera",
    "name": "Main Camera",
    "params": {
      "Width": 1920,
      "Height": 1080,
      "Frequency": 15,
      "JpegQuality": 75,
      "FieldOfView": 50,
      "MinDistance": 0.1,
      "MaxDistance": 1000,
      "Topic": "/simulator/camera_node/image/compressed",
      "Frame": "camera"
    },
    "transform": {
      "x": 0,
      "y": 1.7,
      "z": -0.2,
      "pitch": 0,
      "yaw": 0,
      "roll": 0
    }
  },
  {
    "type": "Manual Control",
    "name": "Manual Car Control"
  },
  {
    "type": "Vehicle Control",
    "name": "Autoware Car Control",
    "params": {
      "Topic": "/vehicle_cmd"
    }
  }
]

4-8. In the Simulations tab, add one as follows:

In General tab, Simulation Name: BorregasAve(Autoware)
In General tab, Select Cluster: Local Machine
In Map & Vehicle tab, Select Map: BorregasAve
In Select Vehicles, Select Jaguar2015XE and enter localhost:9090 next to it.
# Click Submit

4-9. Select the BorregasAve (Autoware) simulation and click the play button at the bottom.

5. Set the launch files in Runtime Manager as follows:

In Quick Start tab, 
Map: /home/autoware/shared_dir/autoware-data/BorregasAve/my_launch/my_map.launch
Sensing: /home/autoware/shared_dir/autoware-data/BorregasAve/my_launch/my_sensing_simulator.launch
Localization: /home/autoware/shared_dir/autoware-data/BorregasAve/my_launch/my_localization.launch
6. Start the Map, Sensing, and Localization modules in Runtime Manager
Click the Map, Sensing, and Localization buttons.

7. In the terminal, start tmux.

In one window,
$ roslaunch trafficlight_recognizer feat_proj_option.launch
In the other window,
$ roslaunch trafficlight_recognizer traffic_light_recognition.launch
# You can see the error message that traffic_light_recognition died.

Expected behavior

traffic_light_recognition node runs without error.

Actual behavior

traffic_light_recognition node died.

Screenshots

image

Additional information

lidar_euclidean_cluster_detect output topics are not transformed properly.

Bug report

Required information:

  • Operating system and version: Ubuntu 18.04
  • Autoware installation type: Source
  • Autoware version or commit hash OAP7
  • ROS distribution and version: Ros melodic
  • ROS installation type: APT
  • Package or library, if applicable: lidar_euclidean_cluster_detect

Description of the bug

lidar_euclidean_cluster_detect output topics are not fully transformed.

CloudCluster in /detection/lidar_detector/cloud_clusters

Transformed properly :

  • min_point
  • max_point
  • avg_point
  • centroid_point

Not transformed :

  • convex_hull
  • bounding_box

Because this message is not TFed properly, the messages published in /detection/lidar_detector/objects are also wrong:

DetectedObject in /detection/lidar_detector/objects

Transformed properly :

  • None

Not transformed :

  • pose
  • convex_hull

Steps to reproduce the bug

  • Configure lidar_euclidean_cluster_detect output frame different from the input frame.
  • Publish pointclouds to the /points_raw topic.
  • Open rqt and notice that some data is not tfed.

Screenshots

image
The points are still in the lidar frame rather than the map frame, even though the header says otherwise! It really looks like a bug.

Additional information

I have corrected the bug.
One question though: what is the best way to TF a Point32? The TF listener has no transformPoint32 function. (A possible workaround is sketched below.)
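A sketch of one possible helper using the classic tf API (assuming a tf::StampedTransform already looked up by the node; the function name is illustrative, it is not an existing tf call):

#include <tf/transform_listener.h>
#include <geometry_msgs/Point32.h>

// Hypothetical workaround for the missing transformPoint32: route the
// Point32 through tf::Vector3, which tf::Transform can multiply directly.
geometry_msgs::Point32 transformPoint32(const geometry_msgs::Point32& in,
                                        const tf::StampedTransform& transform)
{
  const tf::Vector3 transformed = transform * tf::Vector3(in.x, in.y, in.z);
  geometry_msgs::Point32 out;
  out.x = transformed.x();
  out.y = transformed.y();
  out.z = transformed.z();
  return out;
}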

Thanks !

Using only the local cone height threshold while checking whether a point is a ground point

Hello,

Required information:

Operating system and version

  • Ubuntu 18.

Autoware installation type:

  • Source

Autoware version or commit:

  • 0.13.0 version

ROS distribution and version

  • melodic

ROS installation :

  • Source build

Package or library, if applicable

  • ray_ground_filter

If the current point is not in the local cone of the previous point:

  • If they are closer than reclass_distance_threshold_, the current point is not ground.
  • If they are far from each other and the current point's height is within general_height_threshold (the global cone), the current point is a ground point.

But the logic above is not what is implemented in the code. You can see that only height_threshold is used, rather than general_height_threshold, when checking against the global cone.

https://github.com/Autoware-AI/core_perception/blob/402ea985234c0fe30c3f3f944cb8663039295912/points_preprocessor/nodes/ray_ground_filter/ray_ground_filter.cpp#L195-L197

Shouldn't it be:

// check if the current point is too far from the previous one; if so, classify again
if (points_distance > reclass_distance_threshold_ &&
    (current_height <= general_height_threshold && current_height >= -general_height_threshold))

Am I missing something?

Have a great day.

error: could not convert ‘cv::Scalar_<double>((double)0, (double)255, (double)255, (double)0)’ from ‘cv::Scalar {aka cv::Scalar_<double>}’ to ‘CvScalar’

Hello, I ran into the following problem while building vision_lane_detect. How can I solve it?

In file included from /usr/local/include/opencv2/imgproc/imgproc.hpp:48:0,
from /opt/ros/melodic/include/cv_bridge/cv_bridge.h:44,
from /home/alpha/autoware.ai-1.14.0/src/autoware/core_perception/vision_lane_detect/nodes/vision_lane_detect/vision_lane_detect.cpp:37:
/home/alpha/autoware.ai-1.14.0/src/autoware/core_perception/vision_lane_detect/nodes/vision_lane_detect

/vision_lane_detect.cpp: In function ‘void process_image_common(IplImage*)’:
/home/alpha/autoware.ai-1.14.0/src/autoware/core_perception/vision_lane_detect/nodes/vision_lane_detect/vision_lane_detect.cpp:514:58: error: could not convert ‘cv::Scalar_((double)0, (double)255, (double)255, (double)0)’ from ‘cv::Scalar {aka cv::Scalar_}’ to ‘CvScalar’
cvPoint(frame_size.width/2, frame_size.height), CV_RGB(255, 255, 0), 1);
^
/home/alpha/autoware.ai-1.14.0/src/autoware/core_perception/vision_lane_detect/nodes/vision_lane_detect/vision_lane_detect.cpp: In function ‘void lane_cannyhough_callback(const Image&)’:
/home/alpha/autoware.ai-1.14.0/src/autoware/core_perception/vision_lane_detect/nodes/vision_lane_detect/vision_lane_detect.cpp:544:30: error: conversion from ‘cv::Mat’ to non-scalar type ‘IplImage {aka _IplImage}’ requested
IplImage frame = cv_image->image;
~~~~~~~~~~^~~~~
make[2]: *** [CMakeFiles/vision_lane_detect.dir/nodes/vision_lane_detect/vision_lane_detect.cpp.o] Error 1
make[1]: *** [CMakeFiles/vision_lane_detect.dir/all] Error 2
make: *** [all] Error 2
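The include path /usr/local/include/opencv2 in the log suggests a locally installed OpenCV (likely 4.x) is being picked up instead of the one shipped with Melodic. In OpenCV 4 the implicit conversions from the C++ types to the legacy C API types (CvScalar, IplImage) were removed, which produces exactly these errors. A self-contained sketch of the usual fixes (a hypothetical example, not the actual vision_lane_detect code):

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc_c.h>  // legacy C API: cvLine, CvScalar, IplImage

int main()
{
  cv::Mat image(480, 640, CV_8UC3, cv::Scalar::all(0));

  // OpenCV >= 4: cv::Mat no longer converts implicitly to IplImage;
  // use the explicit helper.
  IplImage frame = cvIplImage(image);

  // Legacy C functions want CvScalar, so build one explicitly instead of
  // passing the cv::Scalar that CV_RGB now produces...
  cvLine(&frame, cvPoint(0, 0), cvPoint(320, 480), cvScalar(0, 255, 255, 0), 1);

  // ...or, better, port the call to the C++ API and drop IplImage entirely:
  cv::line(image, cv::Point(0, 0), cv::Point(320, 480), CV_RGB(255, 255, 0), 1);
  return 0;
}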

open planner can't make lane change

Required information:

  • Operating system and version: Ubuntu 18.04
  • Autoware installation type: from source
  • Autoware version: 1.14
  • ROS distribution and version: melodic 1.14.12
  • ROS installation type: from source (apt)
  • Package or library, if applicable: op_global_planner, op_local_planner

Description of the bug

open planner doesn't work correctly on the vehicle

Steps to reproduce the bug

  1. Launch the runtime_manager node using roslaunch runtime_manager runtime_manager.launch
  2. start the nodes by runtime_manager(displayed in screenshot below).

Expected behavior

the global planner should help the vehicle make a lane change to reach a given goal

Actual behavior

the vehicle doesn't change lanes; it just drives to the end of the current lane

Screenshots

  • the nodes we launch below:
    0
  • the rviz below:
    1

How to tune ekf_localizer

Hello,
I want to tune ekf_localizer's pose_stddev_x and pose_stddev_y. Is there a suggested range? For example, the default value is 0.05; if I want to increase it, should I try 0.1, 1, or 100?

TF breakdown during turns

Bug report

Required information:

  • Operating system and version: Ubuntu 18.04
  • Autoware installation type: Docker image autoware/autoware:1.14.0-melodic-cuda, as specified in the Dockerfile of https://github.com/carla-simulator/carla-autoware at commit 975b6f3.
  • Autoware version or commit hash - 1.14.0
  • ROS distribution and version:
    • ROS Melodic (pre-installed in the docker image)
  • ROS installation type:
    • Pre-installed in the docker image

Description of the bug

  • I am running Autoware 1.14.0 (docker image: autoware/autoware:1.14.0-melodic-cuda), Carla 0.9.10.1 (docker image: carlasim/carla:0.9.10.1) as a simulator and carla-autoware-bridge at 975b6f3 as a bridge.
  • There are multiple instances where frames get "shattered" during turns.
  • The effects are immediate: the vehicle makes a weird, clumsy turn, gets disoriented, and often invades other lanes. In the worst case, it keeps turning at an intersection.
  • One thing I noticed is that the TF breakdown mostly happens at the last corner before reaching the destination.
  • Instance 1
  • Instance 2
  • Instance 3
  • Instance 4
  • Instance 5
  • Instance 6
  • Instance 7
  • Instance 8
  • Instance 9 - in this case, the TFs get re-aligned after a while, but then gets shattered again
  • Instance 10 - chaotic case. The vehicle keeps looping at an intersection

Steps to reproduce the bug

  1. Replay the attached rosbag files.

Expected behavior

TFs should be aligned, moving together as a piece.

Actual behavior

TFs gets separated, and vehicle gets lost.

Screenshots

Videos attached above.

Additional information

Autoware interpreting the road ahead as an object at a steep downhill

Bug report

Required information:

  • Operating system and version: Ubuntu 18.04
  • Autoware installation type: Docker image autoware/autoware:1.14.0-melodic-cuda, as specified in the Dockerfile of https://github.com/carla-simulator/carla-autoware at commit 975b6f3.
  • Autoware version or commit hash - 1.14.0
  • ROS distribution and version:
    • ROS Melodic (pre-installed in the docker image)
  • ROS installation type:
    • Pre-installed in the docker image

Description of the bug

  • I am running Autoware 1.14.0 (docker image: autoware/autoware:1.14.0-melodic-cuda), Carla 0.9.10.1 (docker image: carlasim/carla:0.9.10.1) as a simulator and carla-autoware-bridge at 975b6f3 as a bridge.
  • When the vehicle reaches the end of a steep downhill, the lidar misinterprets the road ahead as an object and the vehicle stops moving.
  • Please see the camera data from 00:10.
  • This is the rosbag file that has the entire data. We can find that the road is perceived as an object by looking at the lidar point clouds.
  • Map files can be downloaded at: autoware-contents.

Steps to reproduce the bug

  1. Replay the self-contained rosbag file.

Expected behavior

The street should not be perceived as an object.

Actual behavior

The street is perceived as an object, and Autoware vehicle refuses to move forward as it thinks its way is blocked.

Screenshots

Additional information

[pixel-cloud-fusion] How to keep uncolored points ?

Hello everyone!
Thanks for taking the time to check my post. I wonder what modifications would be necessary to keep the non-projected/uncolored points in the point cloud; I ask this because I do not know C++ programming. Thanks for any ideas or tips!
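A sketch of one way to do it (a hypothetical fragment modeled on the projection loop in pixel_cloud_fusion, not the actual code): instead of skipping points whose projection falls outside the image, append them with a sentinel color.

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv2/core.hpp>

// Hypothetical helper: keep every input point; color it only when its
// projection (u, v) lands inside the image.
void addFusedPoint(const pcl::PointXYZ& pt, int u, int v, const cv::Mat& image,
                   pcl::PointCloud<pcl::PointXYZRGB>& out_cloud)
{
  pcl::PointXYZRGB colored;
  colored.x = pt.x; colored.y = pt.y; colored.z = pt.z;
  if (u >= 0 && u < image.cols && v >= 0 && v < image.rows)
  {
    const cv::Vec3b bgr = image.at<cv::Vec3b>(v, u);
    colored.b = bgr[0]; colored.g = bgr[1]; colored.r = bgr[2];
  }
  else
  {
    colored.r = colored.g = colored.b = 0;  // sentinel: uncolored point
  }
  out_cloud.push_back(colored);  // the point is kept either way
}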

Where is the .clang-format for this repository?

Hi. I have used a bit of the code here and would like to contribute back (a little refactoring of the shape fitter, if the repository owner does not mind). However, I cannot find a public .clang-format file anywhere, even though the changelog says clang-format has been applied:
https://github.com/Autoware-AI/core_perception/blob/f14a0ca32d57ab87477cd5fb3e96656320bc497e/lidar_fake_perception/CHANGELOG.rst#180-2018-08-31

If possible, could you add a public .clang-format file? It would make things easier for contributors.

Sorry if it is wrong to label this issue as "bug" :)

naive_motion_predict subscribed topic

Bug report

Required information:

  • Operating system and version:
    Ubuntu 18.04
  • Autoware installation type: Source
  • Autoware version or commit hash: 1.14.0
  • ROS distribution and version: Melodic
  • ROS installation type: Package manager
  • Package or library, if applicable: naive_motion_predict

Description of the bug

The subscribed topic on the documentation page of naive_motion_predict is outdated. It should be updated from /detection/lidar_tracker/objects to /detection/objects.

Localization error underneath a bridge

Bug report

Required information:

  • Operating system and version: Ubuntu 18.04
  • Autoware installation type: Docker image autoware/autoware:1.14.0-melodic-cuda, as specified in the Dockerfile of https://github.com/carla-simulator/carla-autoware at commit 975b6f3.
  • Autoware version or commit hash - 1.14.0
  • ROS distribution and version:
    • ROS Melodic (pre-installed in the docker image)
  • ROS installation type:
    • Pre-installed in the docker image

Description of the bug

  • I am running Autoware 1.14.0 (docker image: autoware/autoware:1.14.0-melodic-cuda), Carla 0.9.10.1 (docker image: carlasim/carla:0.9.10.1) as a simulator and carla-autoware-bridge at 975b6f3 as a bridge.
  • When the ego_vehicle passes underneath a bridge in Carla's "Town 04" map, the base_link frame (i.e., the velodyne frame) gets stranded, while all the other frames, including camera, gnss, and imu, keep moving forward along the global path. As a result of this discrepancy, the vehicle gets disoriented and crashes into a fence.
  • The camera data (from 1:40) shows that the camera sensor is moving forward without any problem underneath the bridge.
  • However, rviz (from 2:34) shows that the vehicle's estimated position (pink arrow) stops moving right before the bridge.
  • Please check the rosbag file for a detailed trace.
  • Map files can be downloaded at: autoware-contents.

Steps to reproduce the bug

  1. Replay the self-contained rosbag file.

Expected behavior

All TFs move along the global path without any of them getting stranded.

Actual behavior

base_link stops moving, resulting in a localization error and a collision.

Screenshots

Additional information

Unknown CMake command "AW_CHECK_CUDA".

Starting >>> lidar_point_pillars
--- stderr: lidar_point_pillars
CMake Error at CMakeLists.txt:9 (AW_CHECK_CUDA):
Unknown CMake command "AW_CHECK_CUDA".


Failed <<< lidar_point_pillars [0.13s, exited with code 1]

Summary: 0 packages finished [0.45s]
1 package failed: lidar_point_pillars
1 package had stderr output: lidar_point_pillars

vision_darknet_detect fails to "tag" traffic lights, stop signs, and speed signs

Bug report

Required information:

  • Operating system and version: Ubuntu 18.04
  • Autoware installation type: Docker image autoware/autoware:1.14.0-melodic-cuda, as specified in the Dockerfile of https://github.com/carla-simulator/carla-autoware at commit 975b6f3.
  • Autoware version or commit hash - 1.14.0
  • ROS distribution and version:
    • ROS Melodic (pre-installed in the docker image)
  • ROS installation type:
    • Pre-installed in the docker image

Description of the bug

  • I am running Autoware 1.14.0 (docker image: autoware/autoware:1.14.0-melodic-cuda), Carla 0.9.10.1 (docker image: carlasim/carla:0.9.10.1) as a simulator and carla-autoware-bridge at 975b6f3 as a bridge.
  • Using vision_darknet_detect with the yolov3.cfg, yolov3.weights, and coco.names files provided in the repository, object detection works well for cars and walkers; they get detected and tagged properly. However, speed signs, stop signs, and traffic lights are recognized but not semantically tagged; they are labeled "unknown #".
    • This can be observed in the captured rviz.
  • Please refer to the rosbag file for detailed trace.

Steps to reproduce the bug

  1. Replay the self-contained rosbag file.

Expected behavior

Traffic lights, stop signs, and speed signs are recognized, and tagged as what they are.

Actual behavior

They are recognized, but get "unknown" tags.

Screenshots

speed_signs
traffic_light

Additional information

P.S. Thank you for prompt replies!

nmea2tfpose sim mode not working.

Bug report

Required information:

  • Operating system and version:

    • Ubuntu 18.04
  • Autoware installation type:

    • Locally generated docker container
  • Autoware version or commit hash

  • ROS distribution and version:

    • melodic
  • ROS installation type:

    • ROS was installed by docker script.
  • Package or library, if applicable:

    • gnss_localizer

Description of the bug

In gnss_localizer, nmea2tfpose has a Simulation Mode. When I clicked it, the /gnss_pose topic did not switch to simulation mode. With simulation mode activated, plane number 0 should be applied internally; however, regardless of whether simulation mode is active, the existing plane number is still applied to the gnss_localizer node.

Steps to reproduce the bug

  1. Launch the lgsvl simulator.
  2. Launch runtime_manager:
$ roslaunch runtime_manager runtime_manager.launch
  3. On the Quick Start tab, activate Map, Sensing, Localization.
  4. On the Computing tab, click [app] for nmea2tfpose in gnss_localizer.
  5. Click Simulation Mode.
  6. Check the topic with rostopic echo, like the following:
$ rostopic echo /gnss_pose -n 1

Expected behavior

With plane number 7 set and simulation mode activated, /gnss_pose has the pose without the origin coordinate added:

$ rostopic echo /gnss_pose -n 1
header: 
  seq: 78
  stamp: 
    secs: 1595346468
    nsecs: 615610624
  frame_id: "map"
pose: 
  position: 
    x: -9.03081818061
    y: 50.3633886972
    z: -1.122381
  orientation: 
    x: 0.550936004788
    y: 0.569652790116
    z: 0.42817032443
    w: 0.434321759319
---

Actual behavior

With plane number 7 set and simulation mode activated, the origin offset is still applied:

$ rostopic echo /gnss_pose -n 1
header: 
  seq: 0
  stamp: 
    secs: 1595346281
    nsecs: 445789184
  frame_id: "map"
pose: 
  position: 
    x: -86029096.0418
    y: -3981885.24795
    z: -1.122381
  orientation: 
    x: 0.550936004788
    y: 0.569652790116
    z: 0.42817032443
    w: 0.434321759319
---

Screenshots

Additional information

Is the IMU angular velocity an Euler angle rate in ndt_mapping.cpp?

In line 298 of core_perception/lidar_localizer/nodes/ndt_mapping/ndt_mapping.cpp, the function static void imu_calc(ros::Time current_time) calculates diff_imu_roll, diff_imu_pitch, and diff_imu_yaw with the following statements:

double diff_imu_roll = imu.angular_velocity.x * diff_time;
double diff_imu_pitch = imu.angular_velocity.y * diff_time;
double diff_imu_yaw = imu.angular_velocity.z * diff_time;

I am confused by imu.angular_velocity: is it an Euler angle rate? Generally speaking it is not. Could someone tell me why the IMU angular velocity is treated here as an Euler angle rate?
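For reference, this is general rigid-body kinematics rather than a statement about the authors' intent: body angular rates (p, q, r) from a gyro relate to Euler angle rates by

\begin{pmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{pmatrix} =
\begin{pmatrix}
1 & \sin\phi \tan\theta & \cos\phi \tan\theta \\
0 & \cos\phi & -\sin\phi \\
0 & \sin\phi / \cos\theta & \cos\phi / \cos\theta
\end{pmatrix}
\begin{pmatrix} p \\ q \\ r \end{pmatrix},

which reduces to the identity when roll and pitch are near zero. A plausible reading is that the code accepts this small-angle approximation over the short diff_time between IMU samples, treating the raw rates as Euler rates.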

Pixel Cloud Fusion Unnecessary Iterations

Bug report

Required information:

  • Operating system and version:
    • Ubuntu 18.04
  • Autoware installation type:
    • Source Build
  • Autoware version or commit hash
    • 15.0 - master branch
  • ROS distribution and version:
    • Melodic
  • ROS installation type:
    • Binaries

Description of the bug

  • After associating the 3D LiDAR points with their respective pixels, the node iterates over all the pixels in the frame, which I am confident is not necessary.

Steps to reproduce the bug

  1. Launch the pixel_cloud_fusion node
  2. Launch rviz using rviz
  3. Add the /points_fused topic

Expected behavior

  1. You will see that it is slow handling even mid-size images and point clouds, especially images, since the main iteration runs over the image coordinates. (A point-driven alternative is sketched below.)
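A sketch of the point-driven loop the report is asking for (a hypothetical fragment; the names and camera model do not match the actual pixel_cloud_fusion source): iterate over the N cloud points and index into the image, instead of scanning all width × height pixels.

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv2/core.hpp>

// Hypothetical pinhole intrinsics standing in for the node's camera model.
struct PinholeModel { double fx, fy, cx, cy; };

void fuse(const pcl::PointCloud<pcl::PointXYZ>& in_cloud, const cv::Mat& image,
          const PinholeModel& cam, pcl::PointCloud<pcl::PointXYZRGB>& out_cloud)
{
  for (const auto& pt : in_cloud)  // O(points), not O(width * height)
  {
    if (pt.z <= 0.0f) continue;  // behind the image plane
    const int u = static_cast<int>(cam.fx * pt.x / pt.z + cam.cx);
    const int v = static_cast<int>(cam.fy * pt.y / pt.z + cam.cy);
    if (u < 0 || u >= image.cols || v < 0 || v >= image.rows) continue;
    const cv::Vec3b bgr = image.at<cv::Vec3b>(v, u);
    pcl::PointXYZRGB p;
    p.x = pt.x; p.y = pt.y; p.z = pt.z;
    p.b = bgr[0]; p.g = bgr[1]; p.r = bgr[2];
    out_cloud.push_back(p);
  }
}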

Have a great day.

The link for the weights file in trafficlight_recognizer/region_tlr_ssd is unavailable

Originally reported as Autoware-AI/autoware.ai#2348

Bug report

Required information:

  • Operating system and version:
    • Ubuntu 18.04
  • Autoware installation type:
    • source
  • Autoware version or commit hash
    • 1.14.0
  • ROS distribution and version:
    • melodic
  • ROS installation type:
    • binary
  • Package or library, if applicable:
    • trafficlight_recognizer/region_tlr_ssd

Description of the bug

The link for the ssd model is broken.
doc: https://github.com/Autoware-AI/core_perception/tree/master/trafficlight_recognizer/nodes/region_tlr_ssd
link: http://db3.ertl.jp/autoware/tlr_trained_model/data.tar.bz2

What does mult = (points_num - 1.0) / points_num mean in line 408 of VoxelGrid.cu in ndt_gpu?

I am studying the ndt_gpu package of autoware_ai_perception. I found that a factor mult is multiplied into the covariance matrix at line 408 of VoxelGrid.cu in ndt_gpu. The snippet of source code that includes mult, from lines 408 to 411, is as follows:

double mult = (points_num - 1.0) / points_num;

cov(0, 0) = ((cov(0, 0) - 2.0 * p0 * c0) / points_num + c0 * c0) * mult;
cov(0, 1) = ((cov(0, 1) - 2.0 * p0 * c1) / points_num + c0 * c1) * mult;

Does someone know what this mult means?
Thank you very much!
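One plausible reading, offered as general statistics rather than the authors' documented intent: the factor (points_num - 1) / points_num is exactly the ratio between the maximum-likelihood covariance estimate (denominator n) and the unbiased sample covariance (denominator n - 1),

\Sigma_{\mathrm{ML}} = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^{\top}
                     = \frac{n-1}{n} \, \Sigma_{\mathrm{unbiased}},

so mult would rescale the single-pass estimate in the bracket from one convention to the other for NDT's per-voxel Gaussian fit.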
