
2019-2706-Robot-Code

The main robot code for the Merge Robotics (2706) robot for the 2019 FIRST Deep Space challenge.


Attribution and license

Robot Overlord's (Shep) license applies.

We release our software under the MIT license in the hopes that other teams use and/or modify our software.

Our one request is that if you do find our code helpful, please send us an email at [email protected] letting us know. We love to hear when we've helped somebody, and we'd like to be able to measure our impact on the community.

Thanks, and enjoy!

Run Code

Ensure that JDK 11 is installed on the computer before proceeding.

From VS Code

  1. Make sure that the Java and WPILib plugins are both installed.
  2. Open the project in VS Code.
  3. Open the command palette with Ctrl + Shift + P.
  4. Search for the desired run configuration.
  5. Select the command to run.

From Eclipse

  1. Import the project as a Gradle Project.
  2. Select the desired run configuration from the dropdown menu beside the green "run" triangle.

From IntelliJ

  1. Import the project as a Gradle Project, ensuring that the project format is directory based.
  2. Premade run configurations can be found in the runConfigurations folder. Copy that folder into the .idea folder.
  3. After copying, go to File → Close Project, then reopen the project so IntelliJ detects the new run configurations.
  4. Select the desired run configuration from the dropdown menu beside the green "run" triangle.
  5. To run regular mode, click the green "run" triangle.
  6. To run debug mode, click the bug to the right of the green "run" triangle.

From Terminal

  1. Open a terminal in the root of this project.
  2. Run the Gradle wrapper with the deploy task (e.g. "./gradlew deploy").
  3. To build offline, add "--offline".
  4. To deploy in debug mode, add "-PdebugMode".
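
For example, an offline deploy in debug mode would be:

./gradlew deploy --offline -PdebugMode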

Want to help write robot code?

We have a lot of programmers on the team this year, so we've split the code out into chunks so each group can be in charge of a piece. Talk to your group's mentor or project leader to see which chunk you can work on.

2019-2706-robot-code's People

Contributors

abhij2706, carmen1234, dimatsa, georgetzavelas, kyleranderson, rachellucyshyn, robertlucyshyn, ryanlarkin, zinka010


2019-2706-robot-code's Issues

Setpoints do nothing when the lift is on the limit switch

Currently, the setpoints don't move the lift while it is sitting on the limit switch. Often, after going down, we want to go back up to a setpoint, and manually nudging the lift up first is slower.

If we can't find a solution to this problem, there are a couple of workarounds we could try:

  1. Move the lift slightly up automatically and then go to the setpoint (less desirable).
  2. Temporarily disable the limit switch, but only when going up from a height of 0.

Create Framework for interruptible autonomous modes

One of the new things that comes with this year's sandstorm period is the ability for the human drivers to interrupt autonomous modes.

We will need some code common to all auto modes to help implement this.

Continuous Integration

It would be super useful to get some sort of continuous integration service up and running for the team to begin benefiting from automated testing.

What needs to be researched:

  • Continuous integration services for GitHub.
  • Integration and Unit testing (and maybe other types of testing as well...)
  • Testing frameworks we could benefit from.
    • See if there are any that are specific to FIRST robotics that might be useful.

Smoother elevator motion on manual control

Currently, the elevator uses some really nice motion magic on setpoints.
For manual control, however, the operator directly controls the motor output. This is bad because:

  • There is no smooth stopping.
  • Elevator velocities are inconsistent, especially on a low battery.
  • Jerkier motion could be bad for handling game pieces.

The intent is to use velocity control on the lift in manual mode; however, this only solves the second item. We will need to look into the other two.
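
A sketch of the velocity-control idea using the CTRE Phoenix API (the CAN ID and maximum speed constant are hypothetical placeholders):

import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.can.TalonSRX;

public class LiftVelocityControl {
    // Hypothetical constants; real values depend on the lift gearing.
    private static final double MAX_SPEED_TICKS_PER_100MS = 2000;
    private final TalonSRX liftTalon = new TalonSRX(5); // hypothetical CAN ID

    // input is the operator stick value in [-1, 1]. Instead of raw percent
    // output, the Talon's velocity closed loop holds a consistent speed.
    public void setManualSpeed(double input) {
        liftTalon.set(ControlMode.Velocity, input * MAX_SPEED_TICKS_PER_100MS);
    }
}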

Set up main Robot class

  • Hold references to all subsystems
    • Easy to add new subsystems
    • Clean way to access subsystem references (avoid public static fields)
  • Minimal
  • Possibly hold autonomous commands to run on selector
  • Callbacks for robot init/enable/disable/shutdown
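
A minimal sketch of what this could look like (hypothetical subsystem names; assumes WPILib's TimedRobot base class):

import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {

    // Hypothetical stand-ins; real subsystems would extend
    // edu.wpi.first.wpilibj.command.Subsystem.
    public static class Drivetrain {}
    public static class Lift {}

    // Private fields with accessors instead of public static fields.
    private Drivetrain drivetrain;
    private Lift lift;

    @Override
    public void robotInit() {
        // Adding a new subsystem means one field and one line here.
        drivetrain = new Drivetrain();
        lift = new Lift();
    }

    public Drivetrain getDrivetrain() { return drivetrain; }
    public Lift getLift() { return lift; }

    @Override
    public void disabledInit() {
        // Callback on disable; similar hooks exist for autonomous/teleop init.
    }
}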

Create Tests

Now that continuous integration and the dependencies required for testing have been set up in #27, it's time to create tests for all existing classes that should be tested.

New classes should come with tests already written when they are merged to the master branch.

Classes which should be tested:

  • All subsystems
  • All commands
  • Anything else which can be tested.

Refer to previously created tests, or ask @KyleRAnderson for help with creating tests.

Absolute Gyro Rotation

Instead of rotating to a relative gyro heading, rotate to one that is field oriented.
Also ensure that the rotation is minimized (e.g. 70 degrees clockwise instead of 290 degrees counterclockwise).
Should also implement IMirrorable<Command> in case it can be used in auto modes.
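
A sketch of the angle-minimizing math (plain Java, no WPILib dependencies):

public class GyroMath {
    // Normalizes the error between target and current heading to
    // (-180, 180] so the rotation is always the short way around.
    public static double shortestRotation(double targetDegrees, double currentDegrees) {
        double error = (targetDegrees - currentDegrees) % 360.0;
        if (error > 180.0) {
            error -= 360.0;
        } else if (error <= -180.0) {
            error += 360.0;
        }
        // e.g. a 290 degree counterclockwise error becomes -70 (70 clockwise).
        return error;
    }
}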

Document our code processes

It's a good idea for us to document how we do things in software, as well as the logic behind a lot of our code. This is especially important this year because we have a lot of rookies who would benefit from knowing how the code works, as well as a rookie team that we're attempting to mentor.

For example, we could explain the autonomous math in detail in the repository's wiki pages, or something to that effect. As long as it's easily accessible.

Things to include in documentation:

  • How we got the repo set up (with gradle and such) and how we're going to structure it.
  • How autonomous works (perhaps with some math explanations)

Support Additional Talon Features

  • Motion magic
  • Auxiliary PID
  • Motion profiles

Also set this up so that it will be easier to support additional features and modes of movement in the future.

Integrate Test Mode

When the robot is run in test mode, it should work without changes to the robot code.
It should be possible to test each motor and sensor individually in LiveWindow.

Calibrate Setpoints

We currently have some nice setpoints working (except when the limit switch is pressed, see #103), but they aren't actually going to useful heights.

What we need to do is find the useful heights and use them for the setpoints.

Get GradleRIO set up and running

It's a good idea to get used to the new robot framework before the actual kickoff date, so we should start using it and seeing what it's like. Observations and issues found with the new framework should be documented.

Check for encoder presence on startup

The autonomous failure at Ryerson was due to the encoders being wired to different Talons on the practice and competition robots. However, a similar failure could occur for any number of reasons, including encoders falling off (which happened in Stronghold) or being damaged (Steamworks); the same problem would occur if the encoder wires were cut or broken, or if the Talon's Gadgeteer port were damaged.

Check out what 254 does: https://github.com/Team254/FRC-2018-Public/blob/master/src/main/java/com/team254/frc2018/subsystems/Drive.java

final ErrorCode sensorPresent = talon.configSelectedFeedbackSensor(
    FeedbackDevice.CTRE_MagEncoder_Relative, 0, 100); // primary closed-loop, 100 ms timeout
if (sensorPresent != ErrorCode.OK) {
    DriverStation.reportError("Could not detect " + (left ? "left" : "right")
        + " encoder: " + sensorPresent, false);
}

Please implement a similar check on our drivetrain and lift encoders. Test by temporarily swapping the Talon IDs and verifying that the missing encoder is detected.

We might even want to disallow autonomous driving commands if we think we've lost an encoder.

Mechanisms

The robot will have mechanisms! We need code to run those mechanisms, which means new subsystems and new commands for certain actions.

How will we determine the starting configuration of the intake arms?

One of the things that came up while I was working on the intake subsystem was the following question: how do we know what configuration the intake arms are starting in?

If we wanted to preload a ball, we would need the arms to be down. If we have a hatch, they would be up. This question can then be divided into two questions:

  1. Can the arms be down in the robot's starting configuration?
  2. If the arms can be down, how will the robot know this?

When held trigger

For drivers to have more control, it is important that they can easily stop any command run through the joystick. To do this last year, we had the command end when the joystick button was released. This functionality isn't provided by WPILib.

  • JoystickButton or FluidJoystickButton to include a runWhenHeld() method (see the sketch below)
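
A sketch of what that could look like in the old command framework (the class name is hypothetical):

import edu.wpi.first.wpilibj.GenericHID;
import edu.wpi.first.wpilibj.buttons.JoystickButton;
import edu.wpi.first.wpilibj.command.Command;
import edu.wpi.first.wpilibj.command.InstantCommand;

// Hypothetical sketch: start the command on press, cancel it on release.
public class HeldJoystickButton extends JoystickButton {

    public HeldJoystickButton(GenericHID joystick, int buttonNumber) {
        super(joystick, buttonNumber);
    }

    public void runWhenHeld(final Command command) {
        whenPressed(command);
        // Cancel the running command as soon as the button is released.
        whenReleased(new InstantCommand() {
            @Override
            protected void initialize() {
                command.cancel();
            }
        });
    }
}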

Robot Configuration Manager

We need to have some sort of method for easily accessing robot configurations, such as the button mappings for controllers as well as robot-dependent variables.

This is kind of like a merge between RobotConfig and RobotMap from last year.

The frontend portion of this needs to be easy to use.

Better control scheme for operator

After competition there are a couple of things that I would like to do with the operator controls:

  • Make setpoint controls smarter.
  • Make override lift smarter.

For the first point, setpoint controls rely on the D-Pad button bound to the setpoint being in exactly the right position, which doesn't always happen even with a good controller. The intended solution is to keep track of the last cardinal direction pressed (UP, DOWN, LEFT, RIGHT); if the operator moves to one of the in-between positions, the action for the last cardinal direction keeps running. For example, if the operator pressed DOWN and then accidentally moved to DOWN-RIGHT, the action bound to DOWN would still be run.

For override lift, this may not be something I want to continue with for sure, but it would increase the available controls if the action were bound to holding down the manual lift control (the left stick) while moving the joystick. Speed would still be fixed to something small, but it would free up two buttons.
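
A sketch of the D-Pad fallback described above (hypothetical helper class; WPILib's GenericHID.getPOV() returns the D-Pad angle in degrees, or -1 when released):

import edu.wpi.first.wpilibj.GenericHID;

public class PovTracker {
    private int lastCardinal = -1;

    // Returns the effective D-Pad direction, falling back to the last
    // cardinal direction (0/90/180/270) when a diagonal is pressed.
    public int getEffectivePov(GenericHID controller) {
        int pov = controller.getPOV();
        if (pov >= 0 && pov % 90 == 0) {
            lastCardinal = pov; // exact UP/RIGHT/DOWN/LEFT
            return pov;
        }
        // Diagonal (45/135/...): keep running the last cardinal action.
        return pov < 0 ? -1 : lastCardinal;
    }
}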

Camera Server

The camera server needs to be non-latent (use last year's camera as a reference for what is considered "non-latent") and reliable.

Field of view should be good for driving blind (more of a fab spec).

Button Assignments for DriverAssist

RCVI needs to assign three buttons on driver controls to initiate 3 actions.
We want the top three buttons that are part of the four-button group on the right-hand side.

Assignments:
DriverAssist with vision (rocket): Right (B)
DriverAssist with vision (cargo ship): Left (X)
DriverAssist with laser rangefinder: Top middle (Y)

Autonomous Voltage Compensation

A Talon feature that can be used to make Talon output consistent regardless of battery voltage. Since we don't run at 100% speed, this could be useful for us when testing auto modes.

Problems with current instantiation pattern of subsystems

As we have it currently, all subsystems are instantiated with the following two methods:

public class Subsystem {
    private static Subsystem currentInstance;

    public static void init() {
        if (currentInstance == null) {
            currentInstance = new Subsystem();
        }
    }

    public static Subsystem getInstance() {
        init();
        return currentInstance;
    }
}

The problem with this design, from a testing perspective, is that you would want to re-initialize subsystems for each test, which will never happen because they won't re-initialize if there is already an instance.

There are two solutions I see to this problem:

  1. Make init() re-initialize the subsystem even if it already exists, and have getInstance() just return currentInstance without initializing anything (see the sketch after this list).
    • This is easy, but I don't know if we want anyone to have the power to re-initialize a subsystem.
  2. Create a helper method in tests which resets everything before each test.
    • This could actually be better, because we could do it by instantiating another robot, which is more accurate.
    • However, we would need to use reflection extensively.
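
A rough sketch of option 1 against the class above:

public static void init() {
    // Always re-initialize, even if an instance already exists.
    currentInstance = new Subsystem();
}

public static Subsystem getInstance() {
    // No longer initializes; just returns whatever init() created.
    return currentInstance;
}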

Which of these solutions do you prefer @eandr127?

I have already done #1 in the subsystem-factory branch if you want to see the full extent of what would change.

Explore Floating Pneumatics

Don't really know what floating pneumatics do, but they may be worth exploring if they allow us to aim the cargo more precisely.

Plyboy testmode integration into main robot code

Since software and controls need to share a robot, it would be useful to combine their test mode code into the main code. That way there is no need to redeploy code when the controls team needs to use the robot.

  • Allocate data ports
    • Do last, and skip allocation if already allocated somewhere else
    • 4 drive Talons
    • 4 test Talons
    • 4 PWM
    • 2 analog ports
    • 4 DIO ports
    • 2 Relays
  • Include with test mode

Need method to return angle of robot using Pigeon IMU

The Robot Code Vision Interface subteam needs a method that will return the angle of the robot relative to a horizontal line going across the field (based on the Pigeon IMU sensor). The angle can be in degrees or radians, but to avoid unnecessary conversions, use the same unit as the Pigeon IMU.
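
A minimal sketch of such a method, assuming the CTRE Phoenix PigeonIMU class (the CAN ID and wrapper name are hypothetical):

import com.ctre.phoenix.sensors.PigeonIMU;

public class HeadingProvider {
    private final PigeonIMU pigeon = new PigeonIMU(0); // hypothetical CAN ID

    // Returns the robot heading in degrees, the Pigeon's native unit,
    // to avoid unnecessary conversions.
    public double getHeadingDegrees() {
        double[] ypr = new double[3];
        pigeon.getYawPitchRoll(ypr); // yaw = ypr[0], in degrees
        return ypr[0];
    }
}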

Create a Sharp IR Distance Sensor class

We have used Sharp IR sensors in our robots for the last several years. They provide an analog voltage which varies according to the distance to the nearest object. A graph of voltage versus distance is provided in the datasheets for these sensors.

It would be useful to have a wrapper class that knows how to interpret the values and turn them into a distance, e.g. 2.3 volts = 17 cm, or whatever. Instead of using if/else or case statements, we could come up with a mathematical formula that fits the curve.

Hint: people have already written some of these for Arduino - Google them up, find and port them over!
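
As a starting point, a sketch of such a wrapper, assuming a power-law fit d = a * v^b with coefficients fitted to the datasheet curve (the class name and fit form are hypothetical):

import edu.wpi.first.wpilibj.AnalogInput;

public class SharpIRSensor {
    private final AnalogInput input;
    private final double a;
    private final double b;

    // a and b come from fitting the datasheet's voltage-vs-distance curve.
    public SharpIRSensor(int channel, double a, double b) {
        this.input = new AnalogInput(channel);
        this.a = a;
        this.b = b;
    }

    // Converts the analog voltage to a distance in centimetres.
    public double getDistanceCm() {
        return a * Math.pow(input.getVoltage(), b);
    }
}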

Possible double-initialization of subsystems

Steps to reproduce:

  • Get an instance of the subsystem before robotInit() finishes (subsystem.getInstance())
    • Will initialize the subsystem if not already done (subsystem.init())
    • Returns the subsystem
  • Have robotInit() initialize subsystem (subsystem.init())
    • Subsystem will have been initialized twice

Figure out differences between XBox 360 and XBox One controllers

We are getting pretty good at using XBox 360 controllers and mappings for all of our robot code.

However, the XBox 360 controller is discontinued, and XBox One controllers are plentiful.

It would be useful to know how exactly these two map in the robot code. Do they behave identically? (I think the answer is no.) If not, can we create a mapping and/or subclass so we know how to work with them?

I think this is a good little project for a rookie student.

Poor Config and Robot static design

The static design of both the Robot class and the Config class is making it hard to test. By having no config object, we have to rely on manually setting up each class before each test, which is a pain for maintainability as well as for the reliability of tests.

Bling Patterns

Currently the robot's bling subsystem is ready for patterns, and has only one: nothing.

It would be great to get some patterns going, however there are a couple of things that need to be done as part of this.

  • The Intake and Lift subsystems have to be merged to master, or to their own master branch.
  • Need feedback from drive team on what patterns they would like to see.
  • Need information from controls about where the LED strips are mounted and how big they are (to determine how useful they're going to be).

Write up testing procedures

Based on @robertlucyshyn's comment on #1, we think it would be a good idea for our testing procedures and/or methods to be documented (a "quick start guide", if you will) to aid with on-boarding rookies, as well as to keep testing procedures consistent within the robot development team.

Would need to include:

  • How we test for multiple hardware configurations (if we test this at all).
  • When we test and how we do it so that everything that needs to be tested is tested.
  • How we ensure that code merged to the master branch is tested (so maybe a tie in with how pull requests work and what we expect of code that is ready enough to have a pull request made).

Port over rumbler

We're probably going to use the rumbler again this year, so it's a good idea to either port the existing rumbler command or make a new one in the new code base.

Requirements for this system:

  • Provide live, haptic feedback to the driver and operator (if they want it).
  • Only have one instance of a rumbler class at a time.
  • Tight integration with the bling subsystem so that signals from bling are mirrored with controller rumbles.
  • Prioritization of signals (although this should be handled by bling) so that when multiple messages are sent at the same time, the highest-priority signal is the one sent.
  • Clear, documented patterns for rumbles so that the drivers know what it's doing. Without this, this entire thing is essentially useless.

Drivetrain Subsystem

  • Needs to be able to run the robot's motors
  • Able to integrate Talons with PIDs
  • Move joystick movement logic out of Drivetrain
  • Easily choose movement mode
    • Brake mode toggle
    • Current vs voltage drive

Port over autonomous selector logic

Every year we have a selector switch for selecting autonomous modes. This year we would like to have it again. The code exists already in the 2018 and the 2017 code repos and can be ported over to this year's repo.

It might be nice to separate the commands from the switch positions themselves.
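
A hedged sketch of reading the switch, assuming it is wired as a voltage divider on an analog input (hypothetical wiring and channel; the ported 2017/2018 code should be the reference for how it's actually done):

import edu.wpi.first.wpilibj.AnalogInput;

public class AutoSelector {
    private final AnalogInput selector = new AnalogInput(0); // hypothetical channel

    // Maps the measured voltage onto one of `positions` evenly sized bands,
    // each band corresponding to one switch position.
    public int getSelectedMode(int positions) {
        double voltage = selector.getVoltage(); // 0..5 V
        int index = (int) (voltage / 5.0 * positions);
        return Math.min(index, positions - 1);
    }
}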

Port over bling

We need to take the bling program from the old robot code and port it over to the new robot.

This may require deciding whether or not we're actually going to use bling, but it's really easy to remove bling if we don't need it.

User Input

  • Use joystick buttons
  • Keep bindings outside of code (in a JSON file)
  • Run commands on button press
  • Look at previous years' code for inspiration

Change to Talon PIDs for Movement

Last year, PIDs built into WPILib were used, but Talon PIDs offer better control.

  • Integrate auxiliary PID for rotation (might require the Pigeon IMU)
  • Implement all of last year's movements using Talons:
    • Drive forward, no gyro
    • Drive forward with gyro
    • Rotate with gyro
    • Motion magic drive forward with gyro
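
A sketch of the Talon-side closed loop using CTRE Phoenix (the CAN ID, gains, and cruise values are hypothetical placeholders):

import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.can.TalonSRX;

public class TalonDrive {
    private final TalonSRX leftMaster = new TalonSRX(1); // hypothetical CAN ID

    public void configure() {
        leftMaster.config_kP(0, 0.2, 10);                // slot 0, 10 ms timeout
        leftMaster.configMotionCruiseVelocity(2000, 10); // sensor units / 100 ms
        leftMaster.configMotionAcceleration(1500, 10);   // sensor units / 100 ms / s
    }

    // Motion magic drive to a target sensor position; the Talon runs the
    // loop itself instead of WPILib's PID running on the RoboRIO.
    public void driveTo(double targetTicks) {
        leftMaster.set(ControlMode.MotionMagic, targetTicks);
    }
}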

Deal with possibility of ejecting hatch while elevator is at its lowest position

One of the things requested in the priority email was to automatically lift the elevator when it hits the limit switch. I would like clarification on two things:
A. Why is this useful?
B. How much does the lift need to be raised?

Otherwise this should be pretty easy to implement. It would most likely be done in the HoldLift command, which would check whether the limit switch is being pressed and, if so, raise the hold position a little. The operator would always be able to lower the lift, however.
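
The check inside HoldLift might look roughly like this (a fragment sketch; lift, holdPosition, and SMALL_RAISE_HEIGHT are hypothetical names):

// Inside HoldLift (sketch):
@Override
protected void execute() {
    if (lift.isOnLimitSwitch()) {
        // Raise the hold position a little so the hatch can be ejected.
        holdPosition = Math.max(holdPosition, SMALL_RAISE_HEIGHT);
    }
    lift.setPosition(holdPosition);
}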

A mechanism to override the ring lights on for testing

The RCVI team would like a way to tell the ring lights to turn on and stay on, for example to record the camera images during a full practice match at North Bay in order to get sample images for testing.

A potential way to do this is to short the ring light power wires across the relay so they are wired to always-on.

Can we do it in software with a dashboard selector? It doesn't need to be checked every teleop tick; it could just be a setting that gets checked when the robot is initialized and then sends a single call to your ringlight.enable() method.
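
A sketch of the software approach (the SmartDashboard key and ringLight field are hypothetical; ringlight.enable() is the method mentioned above):

import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

// Inside robotInit() (sketch):
if (SmartDashboard.getBoolean("Force ring lights on", false)) {
    ringLight.enable(); // single call at init; stays on afterwards
}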

Need functionality in motion control system to follow Pathfinder trajectory

Robot Code Vision Interface needs functionality in the motion control system to follow the trajectory generated by our Pathfinder code. We are thinking of a method such as followTraj(traj), where traj is a Trajectory object generated by Pathfinder. A Trajectory object contains an array of Segment objects, where each Segment object has the following double attributes:

traj.segments[i].dt             // time since start of trajectory (sec)
traj.segments[i].x              // x coordinate in robot frame (dist)
traj.segments[i].y              // y coordinate in robot frame (dist)
traj.segments[i].pos            // distance along trajectory (dist)
traj.segments[i].velocity       // velocity along trajectory (dist/sec)
traj.segments[i].acceleration   // acceleration along trajectory (dist/sec^2)
traj.segments[i].jerk           // jerk along trajectory (dist/sec^3)
traj.segments[i].heading        // angle of tangent to trajectory with respect  
                                // to x axis in robot frame (rad)


In the above descriptions of the units of measurement, "dist" represents the
unit used for distance (e.g. feet, inches, metres, centimetres, etc.),
which at this moment is TBD. The units will be decided shortly according to
what is most compatible with the motion control system and the vision system.

It is assumed that the robot frame is located at the centre of the robot base,
with the y axis pointing to the front of the robot and the x axis pointing
to the right of the robot when looking from the back to the front.
It is assumed that this frame is compatible with that used by the motion
control system. (Please advise us if this is not correct.)

The x and y attributes are in the robot frame as positioned and oriented
at the start of the trajectory.
 
In order to test the trajectory following functionality, it will be easiest to have your test code generate a trajectory using Pathfinder. To install Pathfinder, go to Jaci’s Pathfinder: https://github.com/JacisNonsense/Pathfinder

Here is some sample test code that can be used. It will generate a trajectory that starts from the origin with the robot at 0 degrees to the horizontal and ends up at position (x=3, y=5) at 45 degrees to the horizontal. Note that Pathfinder wants angles in radians, so Pathfinder.d2r(45) converts 45 degrees to radians.

package frc.robot;

import jaci.pathfinder.Pathfinder;
import jaci.pathfinder.Trajectory;
import jaci.pathfinder.Waypoint;

public class Basic {

    public void testTraj() {
        System.out.println("RCVI: testTraj(): Entered");
        Trajectory.Config config =
            new Trajectory.Config(Trajectory.FitMethod.HERMITE_CUBIC,
                Trajectory.Config.SAMPLES_HIGH, 0.05, 1.7, 2.0, 60.0);
        Waypoint[] points = new Waypoint[] {
            new Waypoint(0, 0, 0),
            new Waypoint(3, 5, Pathfinder.d2r(45))
        };

        Trajectory traj = Pathfinder.generate(points, config);

        // Here is the method followTraj we are asking you to write
        SomeRobotClass.followTraj(traj);
    }
}
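
For reference, a minimal sketch of what followTraj might be built on, using Pathfinder's EncoderFollower (the encoder constants and gains are placeholders; real values depend on the TBD units above):

import jaci.pathfinder.Trajectory;
import jaci.pathfinder.followers.EncoderFollower;

public class TrajectoryFollower {
    private final EncoderFollower follower;

    public TrajectoryFollower(Trajectory traj) {
        follower = new EncoderFollower(traj);
        // 4096 ticks/rev and a 0.1524 m wheel diameter are placeholders.
        follower.configureEncoder(0, 4096, 0.1524);
        // kp, ki, kd, kv (roughly 1 / max velocity), ka.
        follower.configurePIDVA(1.0, 0.0, 0.0, 1.0 / 1.7, 0.0);
    }

    // Call once per control loop with the current encoder reading; the
    // return value is the motor output, and follower.getHeading() gives
    // the desired heading (radians) for a gyro correction term.
    public double calculate(int encoderTicks) {
        return follower.calculate(encoderTicks);
    }

    public boolean isFinished() {
        return follower.isFinished();
    }
}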

Dashboard set up

One of the large requirements for this year (given the sandstorm period) is some sort of nice dashboard set up. It would need the following features:

  • Shows the current/torque on the lift motor.
  • Two resizable camera feeds for the two cameras on the robot.

We can probably accomplish this using Shuffleboard.
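
A rough sketch of the pieces involved, assuming Shuffleboard widgets fed by CameraServer and SmartDashboard (the Talon ID and camera names are hypothetical):

import com.ctre.phoenix.motorcontrol.can.TalonSRX;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class DashboardSetup {
    private final TalonSRX liftTalon = new TalonSRX(5); // hypothetical CAN ID

    public void start() {
        // Camera streams show up in Shuffleboard as resizable widgets.
        CameraServer.getInstance().startAutomaticCapture("Front", 0);
        CameraServer.getInstance().startAutomaticCapture("Rear", 1);
    }

    // Call periodically to publish the lift motor current.
    public void update() {
        SmartDashboard.putNumber("Lift current (A)", liftTalon.getOutputCurrent());
    }
}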

I'm going to be very busy with mechanisms and I think @eandr127 is going to be doing auto modes so @carmen1234 or @Zinka010 if either of you are willing to do this that would be great.

Determine and solve risk of plunger collision

As @GabbyGenereux brought up, there is a possible risk of hitting the "eject hatch" plunger while lowering the intake arms.

Currently we need to know a couple more things before we can begin resolving this issue.

  1. How long does the plunger take to retract? Is the time consistent?
  2. How long do the arms take to raise? Is the time consistent?
  3. In what conditions is a collision possible?

Once we receive a practice bot or feedback from someone who has worked with the robot, we can begin working on it. For now, there is a basic check and a call to the retract method, but since so much is unknown, it might not do enough.

Vision integration

Would be cool to have a dedicated vision subsystem for integrating the separate vision processor.

This could take the form of a subsystem which calculates the path to be taken for the robot to hit a target and then for the autonomous subsystem to follow this path.

Note that this is simply presenting an idea; it may not actually make sense to separate the processing like this.

Logging

  • Rewrite the logging system.
  • The logging system needs to be able to be fully disabled.
  • Keep the frontend simple (so that calls to it are easy).
  • Fix the backend portion of the logging to make it less clunky.
  • Maybe have levels of logging so we can disable nuisance logs.
  • Log to USB.
  • Maybe log to the RoboRIO.

Specific Requirements

  1. Debug
  • Log to file
    • New log for each instance of the application
    • Name includes date/time
    • Rollover suffix that indicates the number of the log for that run
  • Log to console
    • Standard Out
  2. Info
  • Log to file
  • Log to console
  3. Warning
  • Log to file
  • Log to console
  • Log to driver station
    • DriverStation.reportWarning()
  4. Error+
  • Log to file
  • Log to console
    • Standard Error
  • Log to driver station
    • DriverStation.reportError()
  5. Wrapper
  • Disable logging
  • Wrappers for logs
  6. Async logger

Driver Assist for Target Approach Distance Using Sharp IR Distance Sensor

Issue #15 mentioned that the club has a Sharp IR Distance Sensor. Such a sensor could be useful for an autonomous mode that allows the robot to get (very) close to a hatch or cargo target. It can complement the autonomous mode using the vision system (Issue #25) and may actually be more accurate in the final portion of the approach, when the robot is getting close to the target. This issue covers investigating the feasibility of such functionality, developing the prototype code, testing the prototype (initially using the small RoboRIO wheeled robot, if available), and, if the initial tests are successful, integrating the prototype code into the actual robot code. Steve from Design and Fab has been advised about the need to have the Sharp IR Distance Sensor on the robot; this needs to be followed up when the final decision about the robot mechanical design is made.
