goodrobots / vision_landing
Precision landing using visual targets
License: GNU General Public License v3.0
patrickpoirier51 @patrickpoirier51 12:18
@fnoop Concerning the runaway behaviour, may I suggest that for the moment we set a limit on the tracking range (-0.2 to 0.2) and return to zero if there is no reading for more than N milliseconds (300-500 max)? Here is an example of the system losing tracking for a few seconds - on a live system that is equivalent to pushing the sticks to an extreme position!!
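The suggestion above can be sketched as a small limiter: clamp the tracking offsets to ±0.2 and return to zero once no reading has arrived within the timeout. This is a minimal sketch, assuming offsets arrive per-frame; the class and method names are illustrative, not from the codebase.

```python
import time

# Sketch of the suggested limiter: clamp tracking offsets to +/-limit and
# fall back to zero if no reading arrives within timeout_s (300-500 ms
# suggested; 400 ms used here). Names are illustrative.
class OffsetLimiter:
    def __init__(self, limit=0.2, timeout_s=0.4):
        self.limit = limit          # clamp range: -limit .. +limit
        self.timeout_s = timeout_s  # staleness timeout in seconds
        self.last_reading = None    # time of last valid reading
        self.last_value = 0.0       # last clamped offset

    def update(self, offset, now=None):
        """Return a safe offset: clamped, or 0.0 once tracking is stale."""
        now = time.monotonic() if now is None else now
        if offset is not None:
            self.last_reading = now
            self.last_value = max(-self.limit, min(self.limit, offset))
            return self.last_value
        # No fresh reading: hold the last value briefly, then return to zero
        if self.last_reading is None or now - self.last_reading > self.timeout_s:
            return 0.0
        return self.last_value
```

Holding the last value only inside the timeout window means a brief tracking dropout doesn't jerk the controller, while a longer loss centres the "sticks".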
Opencv VideoWriter can take a fourcc option which influences how ffmpeg compresses the output stream. Normally this is set to 0 and a gstreamer output is used, or a simple filename. However if using a simple output string, using a fourcc code is the only way to specify a codec.
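For reference, a fourcc code is just four ASCII characters packed little-endian into a 32-bit integer; this is what OpenCV's `cv2.VideoWriter_fourcc` computes. A pure-Python sketch of the packing, with hedged usage shown in comments:

```python
# fourcc codes pack four ASCII characters little-endian into a 32-bit int;
# equivalent to cv2.VideoWriter_fourcc(*'XVID'). Shown in pure Python so
# the packing is explicit.
def fourcc(code):
    assert len(code) == 4
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))

def fourcc_decode(value):
    return ''.join(chr((value >> (8 * i)) & 0xFF) for i in range(4))

# With OpenCV available, a file writer using this codec would look like:
#   writer = cv2.VideoWriter('out.avi', fourcc('XVID'), 30.0, (640, 480))
# Passing 0 instead of a fourcc leaves codec selection to the backend,
# which is why a fourcc option is the only way to pick a codec for a
# plain filename output string.
```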
Currently only distance rendered, also render angular offsets
track_targets isn't found by vision_landing if not started in current directory.
CMakeLists.txt tries to find aruco using pkg-config. pkg-config support is broken in aruco, and it's better to use cmake hints anyway.
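A minimal sketch of the cmake-hints approach, assuming a conventional aruco install layout; `ARUCO_ROOT` and the fallback paths are assumptions to adjust per system:

```cmake
# Sketch: locate aruco with CMake hints instead of pkg-config.
# ARUCO_ROOT and the default paths are assumptions; adjust to the install.
find_path(ARUCO_INCLUDE_DIR aruco/aruco.h
          HINTS ${ARUCO_ROOT}/include /usr/local/include)
find_library(ARUCO_LIBRARY aruco
             HINTS ${ARUCO_ROOT}/lib /usr/local/lib)
include_directories(${ARUCO_INCLUDE_DIR})
target_link_libraries(track_targets ${ARUCO_LIBRARY})
```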
target:1:0.16301:0.142168:0.666498
Segmentation fault (core dumped)
cmake doesn't find aruco library
Only activate opencv/aruco vision processing when in landing/precloiter mode, to save (a lot of) CPU cycles. This will need changes to the comms between the python/c++ processes so the c++ code can receive instructions, as it's not directly connected to mavlink.
Try to gracefully handle ctrl-c, kill etc by trapping signals and handling.
Output is sent to dynamic log files if logdir is set in config. If not, the output should go to console.
Currently locked to 640x480
[dev] [mav@fnoop-joule /var/tmp/vision_landing]$ /srv/maverick/code/dronekit-apps/vision_landing/track_targets -d TAG36h11 -o 'appsrc ! videoconvert ! video/x-raw,width=640,height=480,format=NV12 ! vaapih264enc ! matrokamux ! filesink location=/var/tmp/vision-2017-03-21-15-36-42.mkv' -w 640 -g 480 -f 30 -v 'v4l2src device=/dev/video2 ! videoconvert ! appsink' /srv/maverick/code/dronekit-apps/vision_landing/calibration/r200-calibration-640x480.yml 0.235
warning: Error opening file (/srv/maverick/var/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:779)
warning: v4l2src device=/dev/video2 ! videoconvert ! appsink (/srv/maverick/var/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:780)
(DEBUG) try_init_v4l2 open "v4l2src device=/dev/video2 ! videoconvert ! appsink": No such file or directory
Killing vision_landing process leaves track_targets dangling. Create a graceful mechanism for shutting down, either sighup or daemon stop.
track_targets only skips processing frames when the stateflag is off. This saves CPU, which is good; ideally it should also release the camera (or input) so other processes can use it.
Detection is very poor under low light and other difficult conditions. Add options to alter gain/brightness, and also the detection thresholds.
Refactored code enables/disables vision processing based on landing or precloiter modes and armed status. Might be useful to have a config flag that activates/deactivates this control mechanism, for example to log and video capture the entire flight.
Every time vision_landing is started it creates a new log file dated by date/time. Helpful to have a symlink to the latest logfile.
track_targets sometimes takes a while to initialise, probably due to camera startup. When this happens, the main vision_landing while loop exits immediately because it thinks track_targets is dead.
Need a better mechanism for the while loop.
Currently there is an attempt to sync timestamps between the FC and OBC by extracting timestamps from mavlink messages and adding microseconds from the local clock tick. This suffers from out-of-order mavlink message processing and doesn't take transmission latency into account.
Instead, use mavlink timesync messaging to establish the round-trip latency/offset, and use that as the basis for adding the local ticker. Call it regularly to counteract drift.
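The offset/latency math for mavlink TIMESYNC can be sketched as follows. We send a TIMESYNC with ts1 set to our local time (nanoseconds) and tc1 = 0; the FC replies echoing ts1 and setting tc1 to its own clock. On receipt, assuming symmetric link latency:

```python
# Sketch of the mavlink TIMESYNC round-trip/offset calculation.
def timesync_offset(ts1_ns, tc1_ns, now_ns):
    """Return (round_trip_ns, fc_clock_offset_ns).

    ts1_ns: local time we stamped into the request
    tc1_ns: FC time from the reply
    now_ns: local time when the reply arrived
    """
    round_trip = now_ns - ts1_ns
    # Assume symmetric latency: the FC stamped tc1 half a round trip ago
    offset = tc1_ns - (ts1_ns + round_trip // 2)
    return round_trip, offset

# fc_time ~= local_time + offset. Run this regularly and smooth the
# offset (e.g. a moving average) to counteract clock drift.
```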
Only long options currently shown
Translation vectors are in input units; they need to be converted to radians for the landing_target mavlink message.
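The conversion can be sketched as below: the LANDING_TARGET message carries angular offsets in radians, which follow from the camera-frame translation vector (x, y, z) by simple trigonometry. Function names are illustrative.

```python
import math

# Sketch: convert a translation vector (x, y, z, camera frame, input
# units such as metres) into the angular offsets (radians) expected by
# the LANDING_TARGET mavlink message.
def tvec_to_angles(x, y, z):
    """Return (angle_x, angle_y) in radians."""
    angle_x = math.atan2(x, z)  # horizontal offset from camera centre
    angle_y = math.atan2(y, z)  # vertical offset from camera centre
    return angle_x, angle_y

# The distance to the target is still the straight-line norm:
#   distance = math.sqrt(x*x + y*y + z*z)
```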
Get weird xy values from PL log entries.
rmckay9 will look at logs:
ok. i wonder if maybe you could turn on logging-while-disarmed, and then hold the vehicle so that it's nose up and tilted to the left, maybe 10~20 degrees, and send me a dataflash log? Then I'll compare that to what I get out of IR Lock.
Output of track_targets is currently done through a gstreamer pipeline. For various reasons this isn't always desirable; an option to display output through imshow should also be available.
Config is currently sourced from the shell, which causes all sorts of problems with escaping and variable expansion.
Currently any errors (i.e. stderr) from track_targets are ignored.
Using -o videofile.avi results in a video file that plays back too fast.
When in actual flight, sometimes tail -f on a log shows output, but later that output is not present in the file, and the file size is larger than the contents indicate. The file must be getting corrupted somehow, probably by loss of power.
The vision_landing python script is essentially a wrapper around track_targets that connects to the flight controller using dronekit. Another option is to connect track_targets with ROS and then consume the vectors through mavros.
track_targets currently outputs the marker id and translation vectors; also add the ability to output info messages that can be picked up and displayed by the vision_landing script.
Add pre-calibrated data for raspberry camera
Adapt track_targets so that if no id is set, it tracks multiple markers and locks on to the closest one.
Refactored code only turns vision processing on/off when in landing mode; it should also detect the loiter+precisionloiter switch.
Choose the smallest detected target by default, so that as altitude is reduced, more accurate targets are chosen.
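The selection rule above can be sketched as a simple minimum over detected markers; here markers are represented as (id, size) pairs, which is an assumption for illustration rather than the codebase's actual data structure.

```python
# Sketch of the proposed selection rule: among detected markers, lock on
# to the smallest, which stays usable as altitude drops. Markers here are
# (id, size) tuples; the representation is illustrative.
def choose_target(markers, wanted_id=None):
    """Return the marker to track, or None if nothing suitable detected."""
    if not markers:
        return None
    if wanted_id is not None:
        matches = [m for m in markers if m[0] == wanted_id]
        return matches[0] if matches else None
    return min(markers, key=lambda m: m[1])  # smallest marker size
```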
Opencv seems to intercept gstreamer pipelines for anything with filesink or a filename ending in .avi, and writes raw video to it. Need to provide an option to specify a fourcc algorithm to compress video when going to file.
Vision logs have a dynamic filename with a timestamp, but the video output doesn't, so it is overwritten on every invocation.
Add an option for file output with a dynamic (timestamped) filename.
Currently we have to send 'fake' rangefinder mavlink messages (distance_sensor) at a predefined frequency in order to present a 'healthy' generic rangefinder. This is a dependency of precision landing, so it has to be done in conjunction with landing_target messages.
It would potentially be easier and safer to alter AC_PrecLand::update to consume the distance sent in landing_target rather than requiring a rangefinder.
How do you start vision_landing? It requires an argument, but it's not obvious what that argument should be.
vision_landing running under systemd keeps only the logs of the last running instance under 'journalctl -u vision_landing.service -l'.
Need a better logging system, specify path in config file.
track_targets should only be doing a 1 ms nanosleep() per loop when not vision processing, so it should not use a lot of CPU.
Add an fps counter to the main loop; it would be very useful to tell how fast the tracker is running.
Add a check in vision_landing that track_targets exists. If it doesn't, print an error and quit.
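A sketch of that startup check, which also addresses the current-directory issue above by searching the script's own directory before falling back to PATH; the function name is illustrative.

```python
import os
import shutil

# Sketch: verify the track_targets binary exists and is executable before
# launching, looking in the script directory first, then PATH.
def find_track_targets(script_dir):
    candidate = os.path.join(script_dir, "track_targets")
    if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
        return candidate
    return shutil.which("track_targets")  # PATH fallback; None if absent

# In vision_landing this might be used as:
#   path = find_track_targets(os.path.dirname(os.path.abspath(__file__)))
#   if not path:
#       print error to stderr and exit(1)
```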
The FPS count is currently derived from detected frames in vision_landing. Useful to know, but it would also be useful to know how the main loop in track_targets is performing.
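A loop-rate counter for either process can be sketched like this, with the clock injected so the behaviour is easy to verify; names are illustrative.

```python
# Sketch of a simple FPS counter for a main loop. Times are passed in so
# the logic is testable; in the real loop you'd pass time.monotonic().
class FpsCounter:
    def __init__(self, window=1.0):
        self.window = window  # reporting period in seconds
        self.frames = 0
        self.start = None

    def tick(self, now):
        """Count one iteration; return FPS once per window, else None."""
        if self.start is None:
            self.start = now
        self.frames += 1
        elapsed = now - self.start
        if elapsed >= self.window:
            fps = self.frames / elapsed
            self.frames = 0
            self.start = now
            return fps
        return None
```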
The vision_landing wrapper consumes 100% of a CPU core. It's not actually doing any hard work, so it shouldn't take all those CPU cycles.
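One likely cause is busy-polling the child's stdout. A blocking readline sleeps in the kernel until data arrives instead of spinning; a minimal sketch, with the command shown as a stand-in rather than the real track_targets invocation:

```python
import subprocess

# Sketch: consume track_targets output with blocking reads instead of a
# busy poll, so the wrapper uses ~0% CPU while waiting for lines.
def consume_lines(cmd, handle_line):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:  # blocks until a line arrives; no spinning
        handle_line(line.rstrip("\n"))
    return proc.wait()
```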
No Makefile currently; add a simple CMake build.