skyline's People

Contributors

armandpl, rian-t

Forkers

k2m5t2

skyline's Issues

configure car sim w/ hydra

  • to be able to tweak e.g. max speed and still be able to re-instantiate the env with the right config
  • also to clean up the car code setup which is a bit messy
  • don't recursively instantiate env
    • hydra.utils.instantiate on env reset
    • this one can be recursive to also set up the tire model!

don't know where to put the config though? can we have sub-configs with hydra? yes, looks like we can (a minimal sketch below)
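
A minimal sketch of what the sub-config setup could look like, assuming a conf/ directory with an env config group and a hypothetical skyline.envs.CarEnv target (module paths and values are placeholders):

    # conf/config.yaml
    #   defaults:
    #     - env: car
    #
    # conf/env/car.yaml
    #   _target_: skyline.envs.CarEnv      # hypothetical module path
    #   max_speed: 8.0
    #   tire_model:
    #     _target_: skyline.tires.LinearTireModel
    #     cornering_stiffness: 50.0

    import hydra
    from hydra.utils import instantiate
    from omegaconf import DictConfig, OmegaConf

    @hydra.main(config_path="conf", config_name="config")
    def main(cfg: DictConfig) -> None:
        print(OmegaConf.to_yaml(cfg))
        # recursive instantiation also builds the nested tire model
        env = instantiate(cfg.env, _recursive_=True)
        env.reset()

    if __name__ == "__main__":
        main()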

ensure data lineage

once I get a first version working, go back and clean stuff up

  • configure rl experiment w/ hydra and log config to wandb (see the sketch after this list)
  • log trained rl agents to wandb
  • load trained agent and log trajectories to wandb
  • log generated images to wandb
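
A rough sketch of the lineage logging, assuming a stable-baselines3-style agent saved to disk; project, artifact and file names are placeholders:

    import wandb
    from omegaconf import OmegaConf

    def log_training_run(cfg, agent, agent_path="agent.zip"):
        # log the full config so the run can be reproduced
        run = wandb.init(project="skyline", job_type="train-rl",
                         config=OmegaConf.to_container(cfg, resolve=True))
        agent.save(agent_path)

        # version the trained agent; downstream jobs (traj export, blender gen)
        # can call run.use_artifact("rl-agent:latest") so lineage shows up in wandb
        artifact = wandb.Artifact("rl-agent", type="model")
        artifact.add_file(agent_path)
        run.log_artifact(artifact)
        run.finish()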

calibrate camera

  • it looks like blender doesn't simulate lens distortion. there is a node to do so, but I don't know what unit it uses. it would probably be better to correct the actual images coming from the camera (see the sketch after this list)
    • check how to do this
    • check how long it takes to do it on the jetson nano
  • for now I roughly measured the translation and rotation of the camera relative to the car. we could use PnP to measure it more precisely: build a rig to secure the car and put known points on the floor
  • the output of the two steps above should be committed to github so we can pull it, e.g. from the blender script or the inference script (to correct distortion)
    • we might want to track it w/ artifacts? e.g. to know which one was used by blender and therefore what params we should use to project the points when visualizing preds
    • or just hardcode it, that works as well
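
A minimal sketch of undistorting on the Jetson side with OpenCV, assuming the camera matrix and distortion coefficients come out of a standard checkerboard calibration (the numbers below are placeholders):

    import cv2
    import numpy as np

    # placeholder intrinsics; the real ones should come from calibration
    # and be committed to the repo (or tracked as a wandb artifact)
    K = np.array([[700.0, 0.0, 640.0],
                  [0.0, 700.0, 360.0],
                  [0.0, 0.0, 1.0]])
    dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

    def undistort(frame):
        # if this is too slow on the nano, precompute the maps once with
        # cv2.initUndistortRectifyMap and use cv2.remap per frame
        return cv2.undistort(frame, K, dist)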

race day: protocol and tools

  • record a video on the track with nvargus capture
    • make sure the exposure and cam settings work in this setting
    • run inference with the model on it and gauge whether the predictions work
  • run the car and tune the controls

trained resnet18 seems slower?

  • trained resnet18 seems slower than base resnet18? is the torch.tanh slowing it down? wasn't it properly fused?
  • or is it the preprocessing?
  • need a way to quickly debug this, e.g. the timing harness sketched below
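
One way to quickly debug this: a rough timing harness comparing the base torchvision resnet18 against the trained one on the same input (the checkpoint path is a placeholder):

    import time
    import torch
    import torchvision

    def bench(model, x, n=50, warmup=10):
        model.eval()
        with torch.no_grad():
            for _ in range(warmup):
                model(x)
            if x.is_cuda:
                torch.cuda.synchronize()
            t0 = time.perf_counter()
            for _ in range(n):
                model(x)
            if x.is_cuda:
                torch.cuda.synchronize()
        return (time.perf_counter() - t0) / n

    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1, 3, 224, 224, device=device)
    print("base resnet18:", bench(torchvision.models.resnet18().to(device), x))
    # trained = torch.load("trained_resnet18.pt", map_location=device)  # placeholder path
    # print("trained:", bench(trained, x))
    # timing the preprocessing separately would tell us if that's the culprit instead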

reduce weight

I tried adding my GoPro (~100 g) on top of the car and it almost flipped. This makes me think we might want to reduce the weight of the car. Potential things we could remove:

  • usb battery and power the jetson nano with a BEC: -192g (10% of the full weight)
  • wifi antennas: -32g (1.7%)

blender script bug

  • cam changes traj before aug (off-by-one index error?)
  • augs interpolate between each keyframe instead of being fixed
    • alright for now because we train on single images

find the optimal trajectories for a bunch of tracks

  • make a sim using the bicycle model (see the sketch after this list)
    • bicycle model with linear tires
    • more advanced bicycle model??
    • measure car dimensions
      • width, length, wheel_width, wheel_length=66mm
    • measure car params and update the sim
    • render sim
    • load and display track
    • raytrace to find the edges of the track
    • project the current pos onto the center line and get track progress
    • find if the car went outside the track
  • randomize initial speeds
  • randomize model params?
    • would increase the chance that the simulated traj can actually be executed on the car
    • but I think the current ones should be doable, go back to this if needed
  • add max accel and servo steering rate for the sim. currently we can go from 0 to 100km/h in 0 sec and steer from left to right instantly. in reality that's not the case
    • measure max accel with accelerometer
    • measure steering rate with camera
  • is the position of the bicycle model the center of mass or the geometric center?
    • it's the center of mass, update the sim accordingly!
  • import data dir and load stuff from there? e.g model config and track?
  • add other tracks
  • add option for random track loading
    • later, don't need it for now
  • log videos to wandb
    • fix the render system: add back rgb array, check the car racing env
  • log lap time!! in evaluate?
  • run sweep to see if we can train faster, if the crash reward is useful, how important gamma is
    • add early stopping? on lap time? on reward? but reward will change with crash reward
    • actually not urgent just let it run overnight
  • make script to export trajectories that we can then use in blender
    • input: model saved on wandb, traj len, total steps
    • output traj with: speed + position + orientation + end of traj delimiter
    • need way to go from global traj to local traj
    • metadata: track name
  • plot traj on track w/ speed and/or accel
  • config w/hydra so that e.g car params are saved to wandb
    • this is important bc trained models depend on this!
    • from a trained model we want to be able to retrieve
      • algo parameters
      • car parameters
      • which track(s?) it was trained on
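
A rough sketch of the sim step and the track-progress projection mentioned above, using a kinematic bicycle model referenced at the center of mass (the linear-tire dynamic model would replace the slip-angle line); lengths and limits are placeholders:

    import numpy as np

    LF, LR = 0.16, 0.16  # placeholder distances from the CoM to the front/rear axle (m)

    def bicycle_step(state, accel, steer, dt=0.02):
        """Kinematic bicycle model with the position taken at the center of mass."""
        x, y, yaw, v = state
        beta = np.arctan(LR / (LF + LR) * np.tan(steer))  # slip angle at the CoM
        x += v * np.cos(yaw + beta) * dt
        y += v * np.sin(yaw + beta) * dt
        yaw += v / LR * np.sin(beta) * dt
        v += accel * dt  # max accel and steering-rate limits would be applied here
        return np.array([x, y, yaw, v])

    def track_progress(pos, centerline):
        """Project pos (2,) onto a polyline centerline (N, 2), return arc-length progress."""
        seg = np.diff(centerline, axis=0)
        rel = pos[None, :] - centerline[:-1]
        t = np.clip((rel * seg).sum(1) / (seg * seg).sum(1), 0.0, 1.0)
        proj = centerline[:-1] + t[:, None] * seg
        i = int(np.argmin(np.linalg.norm(pos[None, :] - proj, axis=1)))
        seg_len = np.linalg.norm(seg, axis=1)
        return seg_len[:i].sum() + t[i] * seg_len[i]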

generate synthetic dataset from optimal trajectories

alright, so we have trajectories and we can import them into blender. here is what's left.
Augment the data in sim:

  • augment data: between each trajectory:
    • randomize the lighting intensity
    • randomize the line colors?
    • swap between a few floor textures? or the carpet (moquette) texture and random colors?
    • swap the wall for random colors, or maybe noise?
    • add noise to camera offset/rotation
  • add ceiling
  • match camera and sim
    • choose camera position for the car, measure it
    • match fov and sensor size. set this from the bpy script (see the sketch after this list)
    • fix the camera profile
      • revert to the original one, see if it fixes the black line
      • then find a new one to remove the purple tint. commit it somewhere?
    • look into rolling shutter and motion blur. compare sim and recorded data
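
A sketch of setting the Blender camera from the measured values inside the bpy generation script; the object name, sensor size, focal length and pose below are placeholders to replace with the real measurements:

    import math
    import bpy

    cam = bpy.data.objects["Camera"]       # placeholder object name
    cam.data.sensor_fit = 'HORIZONTAL'
    cam.data.sensor_width = 3.68           # placeholder sensor width in mm
    cam.data.lens = 2.96                   # placeholder focal length in mm
    # alternatively set the horizontal FOV directly:
    # cam.data.lens_unit = 'FOV'
    # cam.data.angle = math.radians(62.0)  # placeholder HFOV

    # camera pose relative to the car, from the rough measurement (placeholders)
    cam.location = (0.0, 0.08, 0.12)                      # meters
    cam.rotation_euler = (math.radians(75.0), 0.0, 0.0)   # radians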

execute trajectories

  • stanley controller? (see the sketch after this list)
  • gain scheduling?
  • calibrate cam?
  • calibration rig? relative to car axles? then we'll know relative to center of mass
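
A minimal sketch of the Stanley steering law, assuming we can compute a cross-track error and a heading error from the predicted trajectory; the gain and steering limit are placeholders to tune:

    import numpy as np

    def stanley_steer(cross_track_error, heading_error, speed,
                      k=1.0, softening=1e-2, max_steer=0.45):
        """Stanley control: heading term plus a cross-track term scaled by speed."""
        steer = heading_error + np.arctan2(k * cross_track_error, speed + softening)
        return float(np.clip(steer, -max_steer, max_steer))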

initial plan and ideas

  • #5
    • use the bicycle model
    • or learn a model using deep learning and data from the actual car (requires localization)
  • #6
    • use a render engine/game engine. I think blender is a good choice here
    • gen script should log its settings to wandb?
    • and/or execute the trajectories on an actual track with the car (requires localization + executing the trajectories)
    • make sure to record extrinsic and intrinsic cam parameters and match those to our hardware. for synthetic data not a big deal since we can probably re-generate it
    • make sure to add noise to the trajectories or to the sim, or else the neural net won't be able to recover when outside the ideal trajectory.
    • think about augmentation. e.g diffusion, pix2pix? speed blur? lighting? warp to change pitch/yaw?
  • #7
    • match frequency at which we sample images with the control frequency we're shooting for
      • think about what "trajectory" means, e.g. if we go faster or sample differently the points are going to be closer or further apart. Does the distance between points encode speed? basically think about how to handle the longitudinal aspect
    • look into camera calibration. maybe we want to calibrate. start without it imo
      • or maybe we can avoid calibrating since we generate synthetic data and can match the intrinsic and extrinsic params
    • maybe also predict pose from the NN! might be a bit difficult and we can start without it; see the comment below re: state estimator
    • https://kornia.readthedocs.io/en/v0.1.4/geometry.camera.perspective.html use that to project the traj onto images for viz (see the sketch after this list)
  • #2
    • get global shutter cam
    • stay on jetson nano? make sure we can install dependencies? python3.7?
    • make sure our IMU works, maybe get a second one
    • rework the way the electronics fit on the car
    • make sure the cam doesn't see the car? or we'll have to mask it to the nn? or add it to the synth data?
    • re-check the reduction ratio. I feel the car doesn't accelerate/brake fast enough on the new chassis
  • #9
    • add a messaging system, possibly cereal
    • have daemons running to:
      • fetch data from sensors (IMU, motor encoder, camera)
      • run model inference
      • log data to disk using mcap
      • execute trajectories
      • estimate state (kalman rednose)
    • actually could we use openpilot? and save a bunch of engineering
  • validate state estimator (needs localization)
    • drive manually on the track to get sensor data + ground truth localization
      • from this dataset tune a kalman filter to estimate system state
  • validate speed estimator?
    • can we validate it against the motor speed?
      • estimated speeds along each axis should sum to motor speed * reduction ratio?
  • learn mpc
    • try executing the trajectories by using external localization
    • good time to collect an image test set for the trajectory preds
      • maybe introduce voluntary noise (maybe in the same way we introduce noise for the synthetic data?) to get trajectories outside the optimal ones
  • test the whole stack in simulation
    • e.g carla or smaller sims (check comma ai bounty)
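
A sketch of the visualization projection with the kornia helper linked above, assuming the trajectory is first moved into the camera frame with a placeholder extrinsic transform (intrinsics and extrinsics below are made up):

    import torch
    from kornia.geometry.camera.perspective import project_points

    # placeholder intrinsics (pixels) and extrinsics (car frame -> camera frame)
    K = torch.tensor([[700.0, 0.0, 640.0],
                      [0.0, 700.0, 360.0],
                      [0.0, 0.0, 1.0]])
    R = torch.eye(3)                        # placeholder rotation
    t = torch.tensor([0.0, 0.12, 0.08])     # placeholder translation (m)

    def project_traj(traj_car_frame):
        """traj_car_frame: (N, 3) points in the car frame -> (N, 2) pixel coordinates."""
        pts_cam = traj_car_frame @ R.T + t
        return project_points(pts_cam, K.expand(len(pts_cam), 3, 3))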

update electronics + car

Mechanical setup

  • Last time we used a Tamiya TT-01 chassis, which is super great, I love it. However, a few times when taking sharp turns the car almost rolled over. For this iteration, we switched to a Kyosho Fazer Mk2 chassis which is more modern, wider and closer to the ground. I think it should be better.
  • However, when driving the car manually, I feel like it doesn't accelerate as fast as the previous one. Change the pinion gear from 30T to 37T. This will reduce the torque but increase the speed. I think we don't need that much torque. Let's try and see. If we have time we could also gather data: see how much current the motor draws, how much time it takes to cross X meters, motor rpm etc... very low priority
    • that said, the ESC mid point is set for the PWM module, should try max throttle using this, maybe it's more?
  • #3
    • Since this is a new chassis, we need to design a new board to hold the electronics. include the cad in this repo.

Electronics:

  • Decide if we want to power the jetson nano using the car battery + BEC, or an external USB battery.
    • last time we tried BEC + lipo, we only got about 2 hours of runtime and then the jetson nano crashed.
    • using an external battery adds weight, easily half a kilo, but it keeps the two power sources separate.
    • using one source, if the car draws too much power maybe it won't leave enough for the jetson and cause it to crash?
    • currently leaning towards using an external usb battery for the above reasons
  • decide if we want to order components for redundancy
    • sometimes (very rarely) the ESC stops working and we can't use the motor. I don't know if it's a motor issue, a cable issue or an ESC issue
      • I modified the cable to snoop on the signal, but that looks alright to me
      • got the ESC second hand so I don't know if it's in good shape? it doesn't appear damaged though
      • the motor is new? I did solder the cables like a caveman using the max temp of my iron (400 °C) but I feel this should be fine
    • so, new ESC maybe?
    • maybe a second lipo battery? I don't store the current one at the right voltage so it could be slightly damaged. It could also be good to have a second one to rotate during VivaTech
  • Replace ESC
    • order new ESC + cable
    • replace the old one
  • solder and calibrate new mpu9250
    • I soldered the last one badly and I think this is why it doesn't work anymore?
    • I also didn't know you could and SHOULD calibrate it
  • swap the arduino nano every for a xiao samd21
    • the nano every has a micro USB port, absolute stone age energy, it's going to break for sure
    • the xiao samd21 has a nice USB-C connector, but it's 3.3V so I need to check if it will work with the optocoupler
  • get a global shutter camera just in case we go 2 FAST

improve speed control

  • filter measurements
    • try an exponential moving average (see the sketch after this list)
    • try a Kalman filter (what equations to use?)
  • take esc dead zone into account?
  • try higher control freq? like 50Hz
  • get bytes instead of string from arduino
  • change baudrate?
  • use timer instead of time.sleep on the arduino
    • compute instantaneous speed from time between interrupts
    • compute the timeout and threshold from the slowest possible speed of the car
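
A minimal sketch of the exponential moving average option (a Kalman filter would replace this if it's not enough); alpha is a placeholder to tune against the noise we actually see:

    class SpeedEMA:
        """Exponential moving average of the wheel-speed measurements."""

        def __init__(self, alpha=0.3):
            self.alpha = alpha
            self.value = None

        def update(self, measurement):
            if self.value is None:
                self.value = measurement
            else:
                self.value = self.alpha * measurement + (1 - self.alpha) * self.value
            return self.value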

make arduino code cleaner/more robust?

communication between the jetson and the arduino is a bit archaic: the arduino just writes to the serial port continuously and the jetson reads continuously. I feel like framed packets with a checksum would be more robust. the jetson could send a read command and the arduino would respond with a packet containing the answer; it would also allow changing the read rate from the jetson side (see the sketch below).
that said it ain't broke so maybe don't fix it?
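
A rough sketch of what the framed exchange could look like on the Jetson side with pyserial; the command id, frame layout and XOR checksum are made-up choices, and the Arduino side would have to mirror them:

    import struct
    import serial

    CMD_READ_SPEED = 0x01  # hypothetical command id

    def xor_checksum(payload: bytes) -> int:
        c = 0
        for b in payload:
            c ^= b
        return c

    def read_speed(port: serial.Serial) -> float:
        port.write(bytes([CMD_READ_SPEED]))
        frame = port.read(6)  # assumed frame: 0xAA | float32 speed | checksum
        if len(frame) != 6 or frame[0] != 0xAA:
            raise IOError("bad frame")
        if xor_checksum(frame[1:5]) != frame[5]:
            raise IOError("checksum mismatch")
        return struct.unpack("<f", frame[1:5])[0]

    # port = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)  # placeholder device
    # print(read_speed(port))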

train nn for trajectory planning

  • setup new model
    • speed should be an input to the last layer
    • predicted traj should include x,y coords as well as speed.
      • think about how to normalize. the rl script should output real values I think, e.g. degrees for steering and m/s for speed, based on the model
    • add tanh or sigmoid or smth to help (see the model head sketch after this list)
  • write code to load the trajectories
  • write code to project trajectories on the images
  • add augmentations, start with basic ones
  • investigate other models. e.g efficientnet
    • maybe there are lighter and faster models
    • probably stick with torchvision, no need for smth too complicated
  • after setting up the new cam position and choosing a crop, record video on the local track
    • use nvargus, probably easier and cleaner than python script
    • this will be our test set to viz the model prediction
  • at the end (or even during?) of training, plot model predictions on this video
    • choose where to store camera params, maybe this should be the output of the blender script
      • maybe the blender script should output those where it outputs the images
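
A sketch of the model head described above, concatenating the current speed with the image features before the last layer of a torchvision resnet18 and squashing the output with tanh; the number of trajectory points and the rescaling to real units are placeholders:

    import torch
    import torch.nn as nn
    import torchvision

    class TrajPlanner(nn.Module):
        def __init__(self, n_points=10):
            super().__init__()
            self.backbone = torchvision.models.resnet18()
            feat_dim = self.backbone.fc.in_features
            self.backbone.fc = nn.Identity()   # keep the pooled 512-d features
            # last layer sees image features + current speed, predicts x, y, speed per point
            self.head = nn.Sequential(
                nn.Linear(feat_dim + 1, n_points * 3),
                nn.Tanh(),  # outputs in [-1, 1]; rescale to meters and m/s outside
            )

        def forward(self, image, speed):
            feats = self.backbone(image)        # (B, 512)
            return self.head(torch.cat([feats, speed.unsqueeze(1)], dim=1))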

RL controller?

reading https://rpg.ifi.uzh.ch/docs/RAL21_Fuchs.pdf
maybe we could extract the same features they feed their nn from images? and then have a nn control the car? could we train it irl though? unsure how to compute the reward
can we learn the vehicle dynamics without estimating the vehicle position?

write robust embedded code

The current embedded code is a single spaghetti Python script. It served us well and it's time to let it go.

The ideal system has a few components:

  • camera capturing images
  • neural net making predictions
  • logger logging sensor and control data
    • could we also use it for profiling?
  • reading motor speed from arduino
  • pid to control the car speed
  • reading imu data
  • control module executing the trajectory?
    • merge with pid?

Now I have no idea how to write this. Couple of ideas:

  • would be nice to have a bus so components can run at different frequencies + cleaner, more readable code
  • containers would be useful to have different dependencies per component
    • need to scope what needs to be a container and what needs to be a process

Exploratory tasks:

  • try out cereal/zeromq on the jetson nano, can I have multiple processes communicating? (see the pyzmq sketch after this list)
    • multiprocessing could be good enough? maybe no need for containers and shit?
    • or maybe just one container for nn stuff and the rest is one python codebase that we setup w/ poetry?
  • try out containers on jetson nano, how easy is it to download model from wandb, optimize and inference it?
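
A minimal pyzmq sketch of the kind of bus to try out on the nano: one PUB socket per daemon, subscribers filtering by topic (the socket address and topic names are placeholders):

    import zmq

    ADDR = "ipc:///tmp/skyline_bus"  # placeholder; could also be tcp://127.0.0.1:5555

    def publisher():
        sock = zmq.Context.instance().socket(zmq.PUB)
        sock.bind(ADDR)
        return sock  # e.g. sock.send_multipart([b"wheel_speed", payload_bytes])

    def subscriber(topic: bytes):
        sock = zmq.Context.instance().socket(zmq.SUB)
        sock.connect(ADDR)
        sock.setsockopt(zmq.SUBSCRIBE, topic)
        return sock  # topic, payload = sock.recv_multipart()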

configure camera

tried taking a picture using Jetcam and ended up with an overexposed image, tried nvargus capture and got a decent image.

  • understand what I can configure. look at nvarguscamerasrc --help, look at the sensor doc (see the capture sketch after this list)
  • add options to jetcam
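
A sketch of grabbing frames through nvarguscamerasrc from OpenCV, which makes it easier to pin the exposure and gain that were blowing out the JetCam image; the property values are placeholders, and the exact property names should be checked against nvarguscamerasrc --help on our L4T version:

    import cv2

    # fixing exposuretimerange/gainrange disables the auto-exposure behaviour
    pipeline = (
        'nvarguscamerasrc sensor-id=0 '
        'exposuretimerange="5000000 5000000" gainrange="1 4" aelock=true ! '
        "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink drop=true"
    )
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    ok, frame = cap.read()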

build local track

There is only one training race left before the actual race, it probably won't be enough to tune everything. We need to build our own track.

  • finish CAD of the track. Can we use only 1m and 3m for the turn radius?
  • Decide if we want to try and find a CNC or if we're okay doing it by hand
  • buy wood
  • if we do it by hand, design a 3D-printed mechanism to connect the pieces

longitudinal control

  • the nn predicts a target speed, or a traj from which we then deduce the speed; either way it needs speed info
    • feed it two images at once? can it run fast enough on the jetson nano?
    • feed it the speed sensor value

simulate diff drive

  • make 3900 harney 2nd floor track
  • read blog post about sim, simulate the "raw robot"
  • stick comma's pid controller on top (see the PID sketch after this list)
  • train an agent
  • add obstacles handling
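
Not comma's actual controller, just a generic PID in the same spirit that could sit on top of the raw diff-drive sim; the gains and output limit are placeholders:

    class PID:
        def __init__(self, kp, ki, kd, out_limit=1.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.out_limit = out_limit
            self.integral = 0.0
            self.prev_error = None

        def update(self, setpoint, measurement, dt):
            error = setpoint - measurement
            self.integral += error * dt
            deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            out = self.kp * error + self.ki * self.integral + self.kd * deriv
            return max(-self.out_limit, min(self.out_limit, out))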

race debrief 2022_05_13

  • Silicon Highway for the new Jetsons

  • replace accelerate with https://lightning.ai/docs/fabric/stable/ ? or drop it entirely?

  • add tanh for the regression? also check on the comma challenge

  • couldn't read the speed sensor, test with the old arduino

    • if that doesn't work: test with the old optocoupler
  • visualize the labels on an image sequence as a sanity check (after RL)

  • we need to sort out the dependencies and write a new script

    • I feel like there's something fundamental I don't understand about the dependencies, like what is in C and what isn't, why I don't have TensorRT on ubuntu20, all that
