
dig-into-apollo's Introduction

Dig into Apollo

Dig into Apollo is a project to help you learn the Apollo autonomous driving system. We first introduce the functions of each module in detail, and then analyze the code of each module. If you like autonomous driving and want to learn it, join this project and discuss anything you want to know!

  • readthedocs
  • google groups

Friendly tips

  • If "git clone" on github is too slow, pls try apollo mirror.
  • If you have any questions, please feel free to ask in git issue.
  • If you need to add new documents or give suggestions, welcome to participate in git discussion

Table of Contents

Getting Started

If you are like I once was and don't know how to start with the Apollo project, here are some suggestions.

  1. First understand the basic module functions. If you are not clear about the general function of a module, it is difficult to understand what its code is doing. Here is a beginner-level tutorial.
  2. Then study the specific methods module by module; they are documented in this tutorial. We will analyze the code in depth next, and you can learn step by step by following our tutorial.
  3. I know it will be a painful process, especially when you are first learning Apollo. If you persist in asking questions and studying, it will become easier in 1-2 months.
  4. Last but not least, the methods in Apollo are almost perfect, but there are still some problems. Try to implement and improve them, find papers, try the latest methods, and hone your skills. I believe you will enjoy this process.

How to learn?

Basic learning

  1. Watching some introductory tutorials will help you understand the Apollo autonomous driving system more quickly. I highly recommend this tutorial.
  2. Try to ask questions, read blogs and papers, or open issues on GitHub.
  3. If you don't have a self-driving car, you can deploy a simulation environment; I highly recommend lgsvl. Its community is very friendly!
  4. If you run into problems with the simulation environment, such as map creation, you are welcome to participate in the Flycars project. We will help as much as possible!

Code learning

  1. First of all, you must understand C++. If you are not very familiar with it, I recommend the book "C++ Primer". This book is very good, but a bit thick. If you just want to get started quickly, try a simpler tutorial; here I recommend teacher Hou Jie.
  2. After learning C++, it is best to have a basic understanding of the modules, which will help you read the code. This has been explained many times.
  3. Use code-reading tools to help you read the code. I highly recommend vscode. It supports both Windows and Linux, and its wealth of plug-ins helps you navigate code, search, and find call relationships.
  4. Of course, a lot of professional knowledge and many professional libraries are involved. I can't list the best tutorials one by one here, but I can recommend a few: Hongyi Li's deep learning course, and even a math tutorial, 3Blue1Brown's math videos.
  5. Do some experiments with the simulator. As we say, "get your hands dirty". You can't just watch; you need to modify some configurations and see whether they take effect. If possible, you can also try to answer some questions.
  6. Remember that autonomous driving is still far from mature. Read some papers; I have read a lot of papers, and they helped me greatly.

Hope you have fun!

Contributing

This project welcomes contributions and suggestions. Please see our contribution guidelines.

References & resources

dig-into-apollo's People

Contributors

daohu527, jackfu123


dig-into-apollo's Issues

Apollo code reading

Thanks for the detailed code sharing. As a beginner, I have a few questions to ask:

  1. Which module should I start learning Apollo from?
  2. How do I debug Apollo using an IDE (like vscode or others)?
     I hope you can write a blog about basic debugging and learning of Apollo!
     Best wishes!

Questions about lane line detection

Hello, thank you very much for your previous reply.
I am currently looking at lane line detection. The network used is DarkSCNN, and I don't quite understand what processing the process2D function performs on the lane lines. Have you studied this part? Thanks!

Complete planning scenario code analysis

rule-based

  • bare_intersection
  • dead_end
  • emergency
  • lane_follow
  • narrow_street_u_turn
  • park
  • park_and_go
  • stop_sign
  • traffic_light
  • yield_sign

learning-based

  • learning_model

V2X

I have been looking at this for a few days and still haven't figured out what the inputs and outputs of the V2X module are. The official developer documentation has hands-on tutorials for perception, fusion, planning, and several other modules; how do I debug V2X hands-on? (Sorry if this question is a bit basic; I hope you can answer it.)

Proto file changed, but cyber_monitor still displays the default one?

I changed the proto file "canbus/proto/chassis.proto" and deleted the fields "high_beam_signal/low_beam_signal/left_turn_signal/right_turn_signal", but they are still shown in cyber_monitor. Below is the log:

ChannelName: /apollo/canbus/chassis
MessageType: apollo.canbus.Chassis
FrameRatio: 0.00
RawMessage Size: 178 Bytes
engine_started: 1
engine_rpm: 799.999878
speed_mps: 0.000025
odometer_m: 0.000000
fuel_range_m: 0
throttle_percentage: 0.000000
brake_percentage: 100.000000
steering_percentage: -0.000000
parking_brake: 0
high_beam_signal: 0
low_beam_signal: 1
right_turn_signal: 1
Have Unknown Fields  // Here is the error

How do GPS and IMU work together, and why is interpolation needed?

In the function PrepareLocalizationMsg(), why is interpolation needed?

  CorrectedImu imu_msg;
  FindMatchingIMU(gps_time_stamp, &imu_msg);
  ComposeLocalizationMsg(gps_msg, imu_msg, localization);

Find the nearest IMU message and put it into the localization message. The IMU contributes several fields: linear acceleration, angular velocity, and Euler angles.

关于"Perception/CNN/如何构建CNN"这一章节当中的一点疑问

你好,刚开始接触CNN的小白,对于issue标题中的那一章节中全连接层的尺寸有一些疑惑,已知卷积层2的尺寸为7 * 7,那么池化层2中每张特征图的尺寸是不是应该是(7-2)/2+1下取整为3,即为3 * 3,而非7 * 7,所以扁平化特征图(pool2)得到的pool2_flat的尺寸就不是7 * 7 * 64,而是3 * 3 * 64了?
今天才开始接触学习的CNN,所以也不太清楚自己的关注点正确与否, 还望指点[抱拳]

Answering a question from the cyber notes

uint32_t pid = cr->processor_id();
In this line, the default value of processor_id is -1 (all 1s in binary), but when it is assigned to the unsigned 32-bit variable pid, pid becomes the maximum value of uint32_t (2^32 - 1). That is certainly larger than the number of logical CPUs (processors), so the coroutine goes into the global queue.

coordinate transformation

I want to know the mathematical formulas for the coordinate conversions between radar, lidar, camera, and world coordinates, but I can't find them in the transform module. Do you have any suggestions? Thank you!

cyber talker and listener problem

When I send two messages without a time delay between them, the messages are not sent out. I used "tos\examples\talker.cc" and "tos\examples\listener.cc" for testing and found that the channel is sometimes idle. I am not sure whether this is a bug.

Can you confirm that if the sending frequency is very high, the data will not be sent out?

HD map

 1. The HD map uses WGS84 coordinates; why are the coordinates converted to the UTM coordinate system, and why are the proj4 parameters like this:
    "+proj=tmerc +lat_0=37.413082 +lon_0=-122.013332 +k=0.9999999996 +ellps=WGS84 +no_defs"
    "+proj=utm +zone=50 +ellps=WGS84 +datum=WGS84 +units=m +no_defs"
 2. The coordinates produced by pj_transform are three-dimensional; why do some places use only two-dimensional coordinates (for example ParsePointSet and ParseGeometry in Apollo 6.0), and why are the vectors also two-dimensional? In Chongqing it feels like using only 2D still cannot locate oneself.
 3. When using the UTM coordinate system, don't you need to consider crossing zones? I am confused; how is this handled?

New planning algorithm when braking?

It is necessary to plan the braking curve when parking. The current PID is tuned according to speed and position, which does not match human braking habits; there can be slipping, excessive braking, etc.

When parking, I suggest using the following acceleration curve, and letting the control module apply the brakes according to the acceleration.

        --------
      /
     / 
-----

Current implementation

For now there is a fix in the control module (modules\control\controller\lon_controller.cc), but I don't think it is very elegant:

  // At near-stop stage, replace the brake control command with the standstill
  // acceleration if the former is even softer than the latter
  if ((trajectory_message_->trajectory_type() ==
       apollo::planning::ADCTrajectory::NORMAL) &&
      ((std::fabs(debug->preview_acceleration_reference()) <=
            control_conf_->max_acceleration_when_stopped() &&
        std::fabs(debug->preview_speed_reference()) <=
            vehicle_param_.max_abs_speed_when_stopped()) ||
       std::abs(debug->path_remain()) <
           control_conf_->max_path_remain_when_stopped())) {
    acceleration_cmd =
        (chassis->gear_location() == canbus::Chassis::GEAR_REVERSE)
            ? std::max(acceleration_cmd,
                       -lon_controller_conf.standstill_acceleration())
            : std::min(acceleration_cmd,
                       lon_controller_conf.standstill_acceleration());
    ADEBUG << "Stop location reached";
    debug->set_is_full_stop(true);
  }

The data structures used by LidarLocator in the localization module seem not to be released

Thank you for taking time out of your busy schedule to answer my question on Zhihu.
In the localization module, the LidarLocator type does not seem to have a concrete definition; the header reference is "include/LidarLocator.h", but that file cannot be found. The same goes for the Sins type. The Update function in Localization_lidar_impl_.cc uses LidarLocator functions, possibly to match the point cloud map against the HD map (I am not sure my understanding is correct).

The planning module's readme is missing a picture

First of all, thank you for sharing so many Apollo notes. While reading, I found the following:

"You can see that OpenSpacePlanning, NaviPlanning, and OnLanePlanning all inherit from the same base class, and a specific implementation is selected via configuration and registered in PlanningComponent."

The image above this paragraph is broken. I am not sure whether it is just me. Also, clicking the planning hyperlink gives a 404.

Control module analysis

  1. How are the multiple inputs synchronized in time, which time is used as the reference, and how is unsynchronized input handled?
  2. What happens if there is no control signal for a period of time?
  3. Is the reset complete? Are there states that can become locked?
  4. How is gear shifting done, and when does it need to be considered?
  5. The control module controls the vehicle through canbus.
  6. Longitudinal control is described without the heading angle; is the position here the longitudinal distance along the heading?
  7. Is lateral control also done through position? Does an initial angle need to be computed in advance?
  8. Please confirm that the control module uses the Frenet coordinate system; otherwise the lateral/longitudinal decomposition cannot be explained.

Missing angular_velocity and linear_acceleration when simulating in LGSVL

Ubuntu 18.04
Apollo v6.0_edu
LGSVL linux64-2021.3

When simulating in LGSVL, the angular_velocity and linear_acceleration messages in the channel localization/pose are all zeros.
However, these messages are correct in the channel sensor/gnss/imu, though not in the channel sensor/gnss/corrected_imu.
Dreamview also shows "Raw IMU message delay, which is usually 10 seconds behind current time".

We need a way to obtain the linear acceleration and angular velocity through the localization.estimate message, since they are necessary for the control component.

What do you want to know more about Apollo?

Hi guys,

Update!

We have a discussion about autonomous driving every Friday at 8 pm. #TencentConference: 812-6581-8776

This document has not been updated for 2 years, so some content is outdated or needs to be added. I would like to know how to improve the document, and I need suggestions.

I know that everyone here loves autonomous driving and keeps improving their knowledge. The follow-up plan is to host the documentation on Here, which is friendlier to read.

Any advice would be appreciated; it would be even better if we could participate together!

