autowarefoundation / autoware_ai_perception
License: Apache License 2.0
Support for ROS Noetic
We're using the points_preprocessor package and are transitioning to ROS Noetic. A separate branch for Noetic changes would already help. I've made some changes for the new velodyne release (Noetic uses 1.6.1). It would be nice if I could target these changes against a noetic branch so that other people don't have to go through the same trouble. Right now everybody repeats this effort. 👓
pcl_omp_registration fails to build on Noetic
After cloning and installing dependencies using rosdep, I would expect everything to compile fine. Instead, running catkin_make hits an error in the pcl_omp_registration package and stalls out.
(I also had to adjust the build_depends.repo file for tmv_vendor to fix an earlier problem; with that in place, catkin_make builds clean except that it errors out on pcl_omp_registration.)
Copied and pasted some of the errors for reference:
code/ros/noetic/src/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:181:23: error: expected identifier before string constant
181 | PCL_DEPRECATED ("[pcl::registration::Registration::setInputCloud] setInputCloud is deprecated. Please use setInputSource instead.")
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error: ‘int pcl_omp::Registration<PointSource, PointTarget, Scalar>::PCL_DEPRECATED(int)’ cannot be overloaded with ‘int pcl_omp::Registration<PointSource, PointTarget, Scalar>::PCL_DEPRECATED(int)’
186 | PCL_DEPRECATED ("[pcl::registration::Registration::getInputCloud] getInputCloud is deprecated. Please use getInputSource instead.")
code/ros/noetic/src/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:181:7: note: previous declaration ‘int pcl_omp::Registration<PointSource, PointTarget, Scalar>::PCL_DEPRECATED(int)’
181 | PCL_DEPRECATED ("[pcl::registration::Registration::setInputCloud] setInputCloud is deprecated. Please use setInputSource instead.")
code/ros/noetic/src/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:375:45: error: ‘boost::function’ has not been declared
375 | registerVisualizationCallback (boost::function<FunctionSignature> &visualizerCallback)
| ^~~~~~~~
code/ros/noetic/src/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:575:14: error: ‘function’ in namespace ‘boost’ does not name a template type; did you mean ‘is_function’?
575 | boost::function<void(const pcl::PointCloud<PointSource> &cloud_src,
| ^~~~~~~~
| is_function
code/ros/noetic/src/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:377:13: error: ‘visualizerCallback’ was not declared in this scope
377 | if (visualizerCallback != NULL)
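For context, Noetic ships PCL 1.10, where the PCL_DEPRECATED macro changed to take version bounds before the message, and the callback types moved from boost::function to std::function. A minimal sketch of the kind of header change that resolves all three errors (the class body is heavily abbreviated and the member names are illustrative, not the exact pcl_omp source):

#include <functional>        // std::function replaces boost::function
#include <pcl/pcl_macros.h>  // PCL_DEPRECATED (3-argument form since PCL 1.10)
#include <pcl/point_cloud.h>

template <typename PointSource, typename PointTarget, typename Scalar = float>
class Registration
{
public:
  using PointCloudSource = pcl::PointCloud<PointSource>;
  using PointCloudSourceConstPtr = typename PointCloudSource::ConstPtr;

  // PCL 1.10 expects PCL_DEPRECATED(major, minor, "message"):
  PCL_DEPRECATED(1, 12, "use setInputSource instead")
  void setInputCloud(const PointCloudSourceConstPtr &cloud) { setInputSource(cloud); }

  void setInputSource(const PointCloudSourceConstPtr &cloud) { input_ = cloud; }

  // boost::function no longer compiles here; std::function is a drop-in,
  // and an empty std::function is tested with its bool conversion, not != NULL:
  using VisualizerCallback = std::function<void(const pcl::PointCloud<PointSource> &)>;
  bool registerVisualizationCallback(const VisualizerCallback &visualizerCallback)
  {
    if (visualizerCallback)
    {
      update_visualizer_ = visualizerCallback;
      return true;
    }
    return false;
  }

protected:
  PointCloudSourceConstPtr input_;
  VisualizerCallback update_visualizer_;
};

The same pattern (swap boost::function for std::function, update PCL_DEPRECATED call sites, replace != NULL checks with a boolean test) applies throughout the header.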
Release points_preprocessor to the ROS buildfarm
We're using the ray_ground_filter from the points_preprocessor package, but installation and compilation take quite some time because of the dependencies. Would it be possible to release the package for Melodic and Noetic?
I can imagine it might be undesirable to release a single package from this complete repository, but it might be a nice starting point.
An alternative would be to fork the single points_preprocessor package and strip the Autoware dependencies such as autoware_health_checker; that last one pulls in heavy dependencies and is not required for operation.
But I'd rather stick to the original version here, of course, for credibility and maintenance reasons. :)
ray_ground_filter crashes on empty clouds
Steps to reproduce: send an empty cloud to the ray_ground_filter.
Expected behavior: no crash; the cloud is simply forwarded (there is no ground in an empty message).
Actual behavior: crash.
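A minimal sketch of the guard that would give this behavior (hypothetical node and topic names; the real ray_ground_filter callback is structured differently):

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Sketch: forward empty clouds untouched instead of running the classifier.
class EmptyCloudGuard
{
public:
  EmptyCloudGuard()
  {
    no_ground_pub_ = nh_.advertise<sensor_msgs::PointCloud2>("points_no_ground", 1);
    sub_ = nh_.subscribe("points_raw", 1, &EmptyCloudGuard::CloudCallback, this);
  }

private:
  void CloudCallback(const sensor_msgs::PointCloud2::ConstPtr &msg)
  {
    // An empty message contains no ground: publish it as-is and return early.
    if (msg->width * msg->height == 0)
    {
      no_ground_pub_.publish(*msg);
      return;
    }
    // ... normal ray-ground classification would continue here ...
  }

  ros::NodeHandle nh_;
  ros::Subscriber sub_;
  ros::Publisher no_ground_pub_;
};

int main(int argc, char **argv)
{
  ros::init(argc, argv, "empty_cloud_guard");
  EmptyCloudGuard guard;
  ros::spin();
  return 0;
}

Checking width * height covers both organized and unorganized clouds, since an empty unorganized cloud has width 0.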
I ran ndt_mapping on the demo bag file (sample_moriyama_150324.bag). It shows /points_raw in rviz, but /points_map can't be visualized and /current_pose is always zero. Does anyone have an idea why?
Expected behavior: have mapping output.
Console output: ....
"-----------------------------------------------------------------"
(Processed/Input): (306 / 308)
"-----------------------------------------------------------------"
Sequence number: 86918
Number of scan points: 39581 points.
Number of filtered scan points: 3091 points.
transformed_scan_ptr: 39581 points.
map: 39928 points.
NDT has converged: 1
Fitness score: 0
Number of iteration: 0
(x,y,z,roll,pitch,yaw):
(0, 0, 0, 0, -0, 0)
Transformation Matrix:
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
shift: 0
ray_ground_filter: sparse no_ground output (at commit 411330c)
I have noticed that the no_ground point cloud is very sparse and that a lot of points on the obstacles are classified as ground even though they shouldn't be.
Steps to reproduce: run ray_ground_filter on a high-density point cloud.
Expected behavior: no 'spots' of ground points between vertical points on obvious obstacles.
Actual behavior: the obstacles are scattered with both ground points and vertical points.
Source of pointcloud : Ouster OS1 128 beams.
I have tried reducing radial_divider_angle to 0.006, since the sensor runs at 1024 samples per rotation (2π / 0.006 ≈ 1024).
I have tried before and after PR #12 and the result is the same.
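A side note, hedged because it is based on my reading of the source rather than any documentation: ray_ground_filter appears to compute its number of radial divisions as ceil(360 / radial_divider_angle), i.e. the parameter is in degrees, so a value matched to 1024 samples per rotation would be 360 / 1024 ≈ 0.352 rather than 2π / 1024 ≈ 0.006. A quick sketch of the two interpretations:

#include <cmath>
#include <cstdio>

// Compares the degree and radian interpretations of radial_divider_angle
// for a sensor with 1024 samples per rotation (Ouster OS1 in 1024 mode).
int main()
{
  const double samples_per_rotation = 1024.0;
  const double divider_deg = 360.0 / samples_per_rotation;       // ~0.3516 deg
  const double divider_rad = 2.0 * M_PI / samples_per_rotation;  // ~0.00614 rad
  std::printf("degrees: %.4f  radians: %.5f\n", divider_deg, divider_rad);
  return 0;
}

If the parameter really is in degrees, 0.006 would request roughly 60,000 radial divisions, which could itself explain odd classification behavior.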
Here is the config we use:
lidar_center:
ray_ground_filter:
base_frame: base_link
ground_point_topic: /lidar_center/points_ground_test
input_point_topic: /lidar_center/points_raw
no_ground_point_topic: /lidar_center/points_no_ground_test
clipping_height: 1.8
# min_point_distance in platform_params
radial_divider_angle: 0.05
concentric_divider_distance: 0
local_max_slope: 8
general_max_slope: 8
min_height_threshold: 0.4
reclass_distance_threshold: 0.5
sensor_height: 2.23
min_point_distance: 2.0
I can provide a bag if needed.
Thanks !
Required information:
• Operating system and version:
◦ Ubuntu 18.04
• Autoware installation type:
◦ From source
• Autoware version or commit hash
◦ Master version
• ROS distribution and version:
◦ Melodic
• ROS installation type:
◦ Binary
• Package or library, if applicable:
◦ Ndt_mapping
Description of the bug
ndt_mapping did not work. When I play the bag file, which includes the /points_raw and /vehicle_twist topics, the transformation matrix seems to be empty (a zero matrix). When the ndt_mapping process is terminated by requesting a PCD output from the Runtime Manager, the generated PCD cannot be loaded with points_map_loader, which returns "no loaded points" as a warning.
Steps to reproduce the bug
Start ndt_mapping via the Runtime Manager GUI after starting the bag file.
Expected behavior
ndt_mapping should create a PCD file.
Actual behavior
The created PCD file is broken or empty.
Screenshots
Autoware-1.15 behavior:
Additional information
I’ve tried ndt_mapping with different versions of autoware.ai (1.12 and 1.14) using the same bag file, and it worked.
Hi! Thanks for your excellent work!
I don't understand why the map from the last frame is published. Wouldn't it be better to publish the current map (map) instead of the previous map (*map_ptr)? If so, the code at line 788 could be changed to
pcl::toROSMsg(map, *map_msg_ptr);
Or maybe I misunderstand something. :)
Using the ndt_gpu package, it sometimes gets trapped in buildParent.
It seems to be a deadlock, caused by an unnecessary __syncthreads() at line 1303, inside an if block: __syncthreads() must be reached by every thread of the block, so calling it under a divergent branch can hang.
I hit this bug running on my own data on one computer, but not on another; it may depend on the hardware.
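To illustrate the failure mode (a hypothetical kernel, not the actual ndt_gpu code):

#include <cuda_runtime.h>

// Wrong vs. safe placement of __syncthreads() relative to a divergent branch.
__global__ void buildParentLike(float *out, const float *in, int n)
{
  __shared__ float buf[256];
  const int tid = threadIdx.x;

  // WRONG: if n < blockDim.x, only some threads reach the barrier,
  // and the block deadlocks waiting for the ones that skipped it:
  // if (tid < n) { buf[tid] = in[tid]; __syncthreads(); }

  // Safe: every thread executes the barrier unconditionally.
  if (tid < n)
    buf[tid] = in[tid];
  __syncthreads();

  if (tid < n)
    out[tid] = buf[tid];
}

Hoisting the __syncthreads() out of the if block, as suggested above, is the standard fix.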
Environment: Autoware docker image autoware/autoware:1.14.0-melodic-cuda (as specified in the Dockerfile of https://github.com/carla-simulator/carla-autoware at commit 975b6f3), Carla 0.9.10.1 (docker image: carlasim/carla:0.9.10.1) as the simulator, and carla-autoware-bridge at 975b6f3 as the bridge.
Expected behavior: the vehicle recognizes the debris and avoids hitting it.
Actual behavior: the vehicle fails to recognize the debris and hits it.
Videos attached above.
Operating system and version:
Autoware installation type:
Autoware version or commit hash
ROS distribution and version:
ROS installation type:
Package or library, if applicable:
traffic_light_recognition process has died as follows:
$ roslaunch traffic_light_recognition.launch
...
started roslaunch server http://ARC-TaeHyung-19:38169/
SUMMARY
========
PARAMETERS
* /rosdistro: melodic
* /rosversion: 1.14.6
* /tl_switch/ams_light_color_topic: /ams_light_color
* /tl_switch/camera_light_color_topic: //camera_light_color
* /tl_switch/light_color_topic: /light_color
* /traffic_light_recognition/camera_light_color_topic: //camera_light_color
* /traffic_light_recognition/image_raw_topic: //image_raw
NODES
/
tl_switch (trafficlight_recognizer/tl_switch)
traffic_light_recognition (trafficlight_recognizer/region_tlr)
ROS_MASTER_URI=http://localhost:11311
process[traffic_light_recognition-1]: started with pid [8943]
(superimpose result:8943): dbind-WARNING **: 20:21:39.405: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files
process[tl_switch-2]: started with pid [8944]
Attempt to unlock mutex that was not locked
[traffic_light_recognition-1] process has died [pid 8943, exit code -6, cmd /home/autoware/Autoware/install/trafficlight_recognizer/lib/trafficlight_recognizer/region_tlr __name:=traffic_light_recognition __log:=/home/autoware/.ros/log/9da67ef8-dc03-11ea-a975-54bf647d3a59/traffic_light_recognition-1.log].
log file: /home/autoware/.ros/log/9da67ef8-dc03-11ea-a975-54bf647d3a59/traffic_light_recognition-1*.log
^C[tl_switch-2] killing on exit
^Cshutting down processing monitor...
... shutting down processing monitor complete
done
1.Download the vector and point maps.
$ cd ~/shared_dir
$ git clone https://github.com/lgsvl/autoware-data.git
$ cd autoware-data   # needed before the checkout below
$ git checkout test_0.14.0
2.Run docker container
# Change directory to docker/generic
$ ./run.sh -t local # in my case, I used local tag. Instead you may use 1.14.0 tag.
3.Start Runtime Manager
$ roslaunch runtime_manager runtime_manager.launch
4.Start LGSVL Simulator
4-1.Download LGSVL Simulator
https://github.com/lgsvl/simulator/releases/download/2020.05/lgsvlsimulator-linux64-2020.05.zip
4-2.Unzip it.
4-3.Run simulator.
4-4.Click Open Browser
4-5.In Maps tab, add BorregasAve map as follows:
Map Name: BorregasAve
Map URL: https://assets.lgsvlsimulator.com/65fac0d499d716bb9a7ca89803f672d97b366cf9/environment_BorregasAve
4-6.In Vehicles tab, add Jaguar2015XE vehicle as follows:
Vehicle Name: Jaguar2015XE
Vehicle URL: https://assets.lgsvlsimulator.com/d49c88a93b11b047a3e21e0c7d4b2ec6964fb16d/vehicle_Jaguar2015XE
4-7.Add JSON sensor configuration
In Vehicles tab, click wrench button on Jaguar2015XE and select Bridge Type as ROS and copy and paste the following JSON configuration into Sensors window.
[
{
"type": "GPS Device",
"name": "GPS",
"params": {
"Frequency": 12.5,
"Topic": "/nmea_sentence",
"Frame": "gps",
"IgnoreMapOrigin": true
},
"transform": {
"x": 0,
"y": 0,
"z": 0,
"pitch": 0,
"yaw": 0,
"roll": 0
}
},
{
"type": "GPS Odometry",
"name": "GPS Odometry",
"params": {
"Frequency": 12.5,
"Topic": "/odom",
"Frame": "gps",
"IgnoreMapOrigin": true
},
"transform": {
"x": 0,
"y": 0,
"z": 0,
"pitch": 0,
"yaw": 0,
"roll": 0
}
},
{
"type": "IMU",
"name": "IMU",
"params": {
"Topic": "/imu_raw",
"Frame": "imu"
},
"transform": {
"x": 0,
"y": 0,
"z": 0,
"pitch": 0,
"yaw": 0,
"roll": 0
}
},
{
"type": "Lidar",
"name": "Lidar",
"params": {
"LaserCount": 32,
"MinDistance": 0.5,
"MaxDistance": 100,
"RotationFrequency": 10,
"MeasurementsPerRotation": 360,
"FieldOfView": 41.33,
"CenterAngle": 10,
"Compensated": true,
"PointColor": "#ff000000",
"Topic": "/points_raw",
"Frame": "velodyne"
},
"transform": {
"x": 0,
"y": 2.312,
"z": -0.3679201,
"pitch": 0,
"yaw": 0,
"roll": 0
}
},
{
"type": "Color Camera",
"name": "Main Camera",
"params": {
"Width": 1920,
"Height": 1080,
"Frequency": 15,
"JpegQuality": 75,
"FieldOfView": 50,
"MinDistance": 0.1,
"MaxDistance": 1000,
"Topic": "/simulator/camera_node/image/compressed",
"Frame": "camera"
},
"transform": {
"x": 0,
"y": 1.7,
"z": -0.2,
"pitch": 0,
"yaw": 0,
"roll": 0
}
},
{
"type": "Manual Control",
"name": "Manual Car Control"
},
{
"type": "Vehicle Control",
"name": "Autoware Car Control",
"params": {
"Topic": "/vehicle_cmd"
}
}
]
4-8.In Simulations tab, add one as follows:
In General tab, Simulation Name: BorregasAve(Autoware)
In General tab, Select Cluster: Local Machine
In Map & Vehicle tab, Select Map: BorregasAve
In Select Vehicles, Select Jaguar2015XE and enter localhost:9090 next to it.
# Click Submit
4-9.Select BorregasAve(Autoware) simulation and click play button in the bottom.
5.Set launch files in Runtime Manager as follows:
In Quick Start tab,
Map: /home/autoware/shared_dir/autoware-data/BorregasAve/my_launch/my_map.launch
Sensing: /home/autoware/shared_dir/autoware-data/BorregasAve/my_launch/my_sensing_simulator.launch
Localization: /home/autoware/shared_dir/autoware-data/BorregasAve/my_launch/my_localization.launch
7.In the terminal, start tmux
In one window,
$ roslaunch trafficlight_recognizer feat_proj_option.launch
In the other window,
$ roslaunch trafficlight_recognizer traffic_light_recognition.launch
# You will see the error message that traffic_light_recognition died.
Expected behavior: the traffic_light_recognition node runs without error.
Actual behavior: the traffic_light_recognition node died.
lidar_euclidean_cluster_detect output topics are not fully transformed.
In /detection/lidar_detector/cloud_clusters:
Transformed properly:
min_point
max_point
avg_point
centroid_point
Not transformed:
convex_hull
bounding_box
Because this message is not TFed properly, the messages published in /detection/lidar_detector/objects are also wrong:
In /detection/lidar_detector/objects:
Transformed properly:
Not transformed:
pose
convex_hull
The input was the /points_raw topic. The points are still in the lidar frame and not the map frame, even though the header says so! It really looks like a bug.
I have corrected the bug.
One question though: what is the best way to TF a Point32? The TF listener has no transformPoint32 function.
Thanks !
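One common workaround, sketched under the assumption that you already have a tf::StampedTransform from lookupTransform (this is a small hand-rolled helper, not an official TF API):

#include <geometry_msgs/Point32.h>
#include <tf/transform_datatypes.h>

// tf::TransformListener has no transformPoint32(), but a tf::Transform can
// be applied to a tf::Vector3 directly: convert, transform, convert back.
geometry_msgs::Point32 transformPoint32(const tf::Transform &transform,
                                        const geometry_msgs::Point32 &in)
{
  const tf::Vector3 out_v = transform * tf::Vector3(in.x, in.y, in.z);
  geometry_msgs::Point32 out;
  out.x = static_cast<float>(out_v.x());
  out.y = static_cast<float>(out_v.y());
  out.z = static_cast<float>(out_v.z());
  return out;
}

Applying this per vertex also covers the convex_hull polygon, whose points are Point32.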
The overview introduces the fusion_detector module (with MV3D), but I cannot find the source code.
Hello,
If the current point is not in the local cone of the previous point:
But the above logic is not what is implemented in the code. You can see we are using only height_threshold rather than general_height_threshold while checking the global cone.
Shouldn't it be:
// check if previous point is too far from previous one, if so classify again
if (points_distance > reclass_distance_threshold_ &&
(current_height <= general_height_threshold && current_height >= -general_height_threshold))
Am I missing something?
Have a great day.
Hello, I ran into the following problem. How can I solve it?
In file included from /usr/local/include/opencv2/imgproc/imgproc.hpp:48:0,
from /opt/ros/melodic/include/cv_bridge/cv_bridge.h:44,
from /home/alpha/autoware.ai-1.14.0/src/autoware/core_perception/vision_lane_detect/nodes/vision_lane_detect/vision_lane_detect.cpp:37:
/home/alpha/autoware.ai-1.14.0/src/autoware/core_perception/vision_lane_detect/nodes/vision_lane_detect
/vision_lane_detect.cpp: In function ‘void process_image_common(IplImage*)’:
/home/alpha/autoware.ai-1.14.0/src/autoware/core_perception/vision_lane_detect/nodes/vision_lane_detect/vision_lane_detect.cpp:514:58: error: could not convert ‘cv::Scalar_((double)0, (double)255, (double)255, (double)0)’ from ‘cv::Scalar {aka cv::Scalar_}’ to ‘CvScalar’
cvPoint(frame_size.width/2, frame_size.height), CV_RGB(255, 255, 0), 1);
^
/home/alpha/autoware.ai-1.14.0/src/autoware/core_perception/vision_lane_detect/nodes/vision_lane_detect/vision_lane_detect.cpp: In function ‘void lane_cannyhough_callback(const Image&)’:
/home/alpha/autoware.ai-1.14.0/src/autoware/core_perception/vision_lane_detect/nodes/vision_lane_detect/vision_lane_detect.cpp:544:30: error: conversion from ‘cv::Mat’ to non-scalar type ‘IplImage {aka _IplImage}’ requested
IplImage frame = cv_image->image;
~~~~~~~~~~^~~~~
make[2]: *** [CMakeFiles/vision_lane_detect.dir/nodes/vision_lane_detect/vision_lane_detect.cpp.o] Error 1
make[1]: *** [CMakeFiles/vision_lane_detect.dir/all] Error 2
make: *** [all] Error 2
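These errors come from the legacy OpenCV C API (IplImage, CvScalar) colliding with the newer OpenCV shipped on Melodic, where CV_RGB yields a cv::Scalar that no longer converts implicitly to CvScalar, and cv::Mat no longer converts to IplImage. A hedged sketch of the usual port to the C++ API (frame_size and the drawing call are modeled on the snippet above, not the exact file):

#include <opencv2/imgproc.hpp>

// Port of the failing constructs to the cv:: C++ API.
void draw_center_line(cv::Mat &frame, const cv::Size &frame_size)
{
  // Was: cvLine(..., cvPoint(w / 2, h), CV_RGB(255, 255, 0), 1);
  // cv::line takes cv::Point and cv::Scalar (in BGR order) directly.
  cv::line(frame,
           cv::Point(frame_size.width / 2, 0),
           cv::Point(frame_size.width / 2, frame_size.height),
           cv::Scalar(0, 255, 255),  // BGR equivalent of CV_RGB(255, 255, 0)
           1);
}

// Was: IplImage frame = cv_image->image;   // no implicit conversion anymore
// Keep the image as a cv::Mat instead and pass it to cv:: functions:
// cv::Mat frame = cv_image->image;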
open_planner doesn't work correctly on the vehicle
Steps to reproduce: roslaunch runtime_manager runtime_manager.launch
Expected behavior: the global planner can make the vehicle change lanes toward a given goal.
Actual behavior: the vehicle doesn't change lanes and just drives to the end of the current lane.
Hello,
I want to tune the ekf_localizer parameters pose_stddev_x and pose_stddev_y. Is there a suggested range to try? For example, the default value is 0.05; if I want to increase it, should I try 0.1, 1, or 100?
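One general observation, offered as standard Kalman-filter practice rather than ekf_localizer-specific guidance: these standard deviations enter the filter squared, as measurement variances (roughly R = diag(σx², σy²)), so going from 0.05 to 0.1 already quadruples the pose measurement covariance. A multiplicative sweep (0.05 → 0.1 → 0.2 → 0.5 → 1.0) is therefore usually more informative than jumping straight to 100; a larger σ means the filter trusts the measured pose less and its prediction model more.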
Environment: Autoware docker image autoware/autoware:1.14.0-melodic-cuda (as specified in the Dockerfile of https://github.com/carla-simulator/carla-autoware at commit 975b6f3), Carla 0.9.10.1 (docker image: carlasim/carla:0.9.10.1) as the simulator, and carla-autoware-bridge at 975b6f3 as the bridge.
Expected behavior: the TFs should stay aligned, moving together as one piece.
Actual behavior: the TFs get separated, and the vehicle gets lost.
Videos attached above.
Environment: Autoware docker image autoware/autoware:1.14.0-melodic-cuda (as specified in the Dockerfile of https://github.com/carla-simulator/carla-autoware at commit 975b6f3), Carla 0.9.10.1 (docker image: carlasim/carla:0.9.10.1) as the simulator, and carla-autoware-bridge at 975b6f3 as the bridge.
Expected behavior: the street should not be perceived as an object.
Actual behavior: the street is perceived as an object, and the Autoware vehicle refuses to move forward because it thinks its way is blocked.
Hello everyone!
Thanks for taking the time to read my post. I wonder what modifications would be necessary to keep the non-projected/uncolored points in the point cloud. I ask because I don't know C++ programming. Thanks for any ideas or tips!
Hi. I have used a bit of the code here and would like to contribute back (a little refactoring of the shape fitter, if the repository owner doesn't mind). However, I cannot find a clang-format configuration anywhere, even though the changelog says clang-format has been applied:
https://github.com/Autoware-AI/core_perception/blob/f14a0ca32d57ab87477cd5fb3e96656320bc497e/lidar_fake_perception/CHANGELOG.rst#180-2018-08-31
If possible, could you add a public clang-format configuration? That would make things easier for contributors.
Sorry if it is wrong to label this issue as "bug". :)
The subscribed topic is outdated on the documentation page of naive_motion_predict: it should be updated from /detection/lidar_tracker/objects to /detection/objects.
Environment: Autoware docker image autoware/autoware:1.14.0-melodic-cuda (as specified in the Dockerfile of https://github.com/carla-simulator/carla-autoware at commit 975b6f3), Carla 0.9.10.1 (docker image: carlasim/carla:0.9.10.1) as the simulator, and carla-autoware-bridge at 975b6f3 as the bridge.
Description: the base_link frame (i.e., the velodyne frame) gets stranded, while all the other frames, including the camera, GNSS, and IMU, keep moving forward along the global path. As a result of this discrepancy, the vehicle gets disoriented and crashes into a fence.
Expected behavior: all TFs move along the global path without any of them getting stranded.
Actual behavior: the base_link frame stops moving, resulting in a localization error and a collision.
Starting >>> lidar_point_pillars
--- stderr: lidar_point_pillars
CMake Error at CMakeLists.txt:9 (AW_CHECK_CUDA):
Unknown CMake command "AW_CHECK_CUDA".
Failed <<< lidar_point_pillars [0.13s, exited with code 1]
Summary: 0 packages finished [0.45s]
1 package failed: lidar_point_pillars
1 package had stderr output: lidar_point_pillars
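If it helps: AW_CHECK_CUDA is a CMake macro that comes from Autoware's autoware_build_flags package, so this error usually means that package was not built and sourced before lidar_point_pillars. Building autoware_build_flags first (or building the whole workspace so colcon resolves the dependency order) is typically enough to get past it.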
Environment: Autoware docker image autoware/autoware:1.14.0-melodic-cuda (as specified in the Dockerfile of https://github.com/carla-simulator/carla-autoware at commit 975b6f3), Carla 0.9.10.1 (docker image: carlasim/carla:0.9.10.1) as the simulator, and carla-autoware-bridge at 975b6f3 as the bridge.
Description: running vision_darknet_detect with the yolov3.cfg, yolov3.weights, and coco.names files provided in the repository, object detection works well for cars and walkers; they get detected and tagged properly. However, speed signs, stop signs, and traffic lights are recognized but not semantically tagged; they come out as "unknown #".
Expected behavior: traffic lights, stop signs, and speed signs are recognized and tagged as what they are.
Actual behavior: they are recognized but get "unknown" tags.
P.S. Thank you for prompt replies!
Operating system and version:
Autoware installation type:
Autoware version or commit hash
ROS distribution and version:
ROS installation type:
Package or library, if applicable:
In gnss_localizer, nmea2tfpose has a Simulation Mode. When I clicked it, the /gnss_pose topic did not reflect simulation mode. With simulation mode activated, plane number 0 should be applied internally. However, regardless of simulation-mode activation, the existing plane number is still applied to the gnss_localizer node.
$ roslaunch runtime_manager runtime_manager.launch
$ rostopic echo /gnss_pose -n 1
Expected behavior: /gnss_pose has the pose without the plane-origin coordinate added.
With plane number 7 and simulation mode activated:
$ rostopic echo /gnss_pose -n 1
header:
seq: 78
stamp:
secs: 1595346468
nsecs: 615610624
frame_id: "map"
pose:
position:
x: -9.03081818061
y: 50.3633886972
z: -1.122381
orientation:
x: 0.550936004788
y: 0.569652790116
z: 0.42817032443
w: 0.434321759319
---
Actual behavior: with plane number 7 and simulation mode activated:
$ rostopic echo /gnss_pose -n 1
header:
seq: 0
stamp:
secs: 1595346281
nsecs: 445789184
frame_id: "map"
pose:
position:
x: -86029096.0418
y: -3981885.24795
z: -1.122381
orientation:
x: 0.550936004788
y: 0.569652790116
z: 0.42817032443
w: 0.434321759319
---
At line 298 of core_perception/lidar_localizer/nodes/ndt_mapping/ndt_mapping.cpp, the function static void imu_calc(ros::Time current_time) calculates diff_imu_roll, diff_imu_pitch, and diff_imu_yaw with the following statements:
double diff_imu_roll = imu.angular_velocity.x * diff_time;
double diff_imu_pitch = imu.angular_velocity.y * diff_time;
double diff_imu_yaw = imu.angular_velocity.z * diff_time;
I am confused by imu.angular_velocity: is it a Euler-angle representation? I think generally speaking it is not. Could someone tell me why the angular velocity of the IMU is treated as Euler-angle rates here?
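A hedged note on the likely rationale (my reading, not documented in the code): the exact relation between body angular rates $(\omega_x, \omega_y, \omega_z)$ and ZYX Euler-angle rates is

$$
\begin{aligned}
\dot{\phi}   &= \omega_x + \omega_y \sin\phi \tan\theta + \omega_z \cos\phi \tan\theta, \\
\dot{\theta} &= \omega_y \cos\phi - \omega_z \sin\phi, \\
\dot{\psi}   &= \omega_y \frac{\sin\phi}{\cos\theta} + \omega_z \frac{\cos\phi}{\cos\theta},
\end{aligned}
$$

which reduces to $\dot{\phi} \approx \omega_x$, $\dot{\theta} \approx \omega_y$, $\dot{\psi} \approx \omega_z$ when roll $\phi$ and pitch $\theta$ are small. So multiplying the raw angular velocities by diff_time is a small-angle approximation that holds for a roughly level vehicle over short time steps, not an exact Euler-angle update.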
View the /points_fused topic in rviz.
Have a great day.
Originally reported as Autoware-AI/autoware.ai#2348
The link for the ssd model is broken.
doc: https://github.com/Autoware-AI/core_perception/tree/master/trafficlight_recognizer/nodes/region_tlr_ssd
link: http://db3.ertl.jp/autoware/tlr_trained_model/data.tar.bz2
I am studying the ndt_gpu package of autoware_ai_perception. I found that a factor mult is multiplied into the covariance matrix at line 408 of VoxelGrid.cu in ndt_gpu. The snippet of source code that includes mult (lines 408 to 411) is as follows:
double mult = (points_num - 1.0) / points_num;
cov(0, 0) = ((cov(0, 0) - 2.0 * p0 * c0) / points_num + c0 * c0) * mult;
cov(0, 1) = ((cov(0, 1) - 2.0 * p0 * c1) / points_num + c0 * c1) * mult;
Does someone know what this mult means?
Thank you very much!
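A hedged reading (not an authoritative answer): $\text{mult} = (n-1)/n$ is the standard Bessel factor relating the two common covariance estimators,

$$
\hat{\Sigma}_{1/n} \;=\; \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})(x_i-\bar{x})^{\top}
\;=\; \frac{n-1}{n}\,\hat{\Sigma}_{1/(n-1)},
$$

and the same multiplication appears at the equivalent spot in PCL's pcl::VoxelGridCovariance (ndt_gpu is a CUDA port of PCL's NDT). So the most likely answer is simply that the port reproduces PCL's single-pass per-voxel covariance computation, rescaling the accumulated sums into a covariance estimate in one pass.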