
matthias-mayr / cartesian-impedance-controller
199 stars · 3 watchers · 31 forks · 3.43 MB

A C++ implementation of Cartesian impedance control for torque-controlled manipulators with ROS bindings.

Home Page: https://matthias-mayr.github.io/Cartesian-Impedance-Controller/

License: BSD 3-Clause "New" or "Revised" License

CMake 3.78% C++ 85.57% Shell 1.20% C 1.03% TeX 8.42%
franka-emika franka-panda gazebo iiwa kuka-iiwa kuka-lbr-iiwa manipulators robotics ros compliant-control

cartesian-impedance-controller's People

Contributors

faseehcs, jsaltducaju, kyleniemeyer, matthias-mayr


cartesian-impedance-controller's Issues

Bimanual Cartesian controller

Hello,

First, I have been testing your controller and I really like it, so thank you for your work!

My question is whether you have ever loaded two robots in Gazebo and tried to control them by loading your current controller twice, once per robot.

Also, I'm considering creating a dual Cartesian controller to control both a Panda and an iiwa14 robot, using your work as a base. The idea would be to control both robots with one controller. Is there anything I should consider before starting on this?

Regards,
Mikel

Controller manager freezing when loading the controller

Hi Matthias, great work! I am trying to run the controller on a Franka Emika Panda/FR3 with MoveIt. I followed the installation instructions, but the controller manager freezes when it tries to load the controller. Sorry if this is not a problem with your controller directly but rather something on my end. Do you have any idea what causes this? I think https://answers.ros.org/question/284099/controller-spawner-stuck-while-loading/ is related; it seems to be caused by a multi-threaded spinner in the hardware-interface node leading to a deadlock.
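For illustration, the workaround I have in mind is to run a ros::AsyncSpinner in the hardware-interface node so that the controller manager's load/switch services are handled in their own threads. This is only a minimal sketch with a stub RobotHW, not code from franka_ros or from this controller:

    #include <ros/ros.h>
    #include <controller_manager/controller_manager.h>
    #include <hardware_interface/robot_hw.h>

    // Stub hardware interface, only to make the sketch self-contained.
    class StubRobotHW : public hardware_interface::RobotHW {};

    int main(int argc, char** argv) {
      ros::init(argc, argv, "stub_hw_node");
      ros::NodeHandle nh;

      // Handle service callbacks (e.g. load_controller from the spawner) in
      // separate threads so they cannot deadlock with the control loop below.
      ros::AsyncSpinner spinner(2);
      spinner.start();

      StubRobotHW hw;
      controller_manager::ControllerManager cm(&hw, nh);

      ros::Rate rate(1000.0);
      ros::Time last = ros::Time::now();
      while (ros::ok()) {
        const ros::Time now = ros::Time::now();
        cm.update(now, now - last);  // runs the loaded controllers
        last = now;
        rate.sleep();
      }
      return 0;
    }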

Slow execution of (moveit) trajectory

The execution of trajectories generated by MoveIt seems really slow, even with high stiffness (translational stiffness up to 2000). They work well when executed by a position controller. The movement is slower than commanded, and after finishing, the controller reports a successful execution. However, the arm is not at the final position yet and still moves closer to the goal, even after 'finishing' the trajectory. (Tried on a Franka Panda/FR3.)

Integration with Dart simulator

Hey Matthias,
You have mentioned that this library can be integrated with the DART simulator. I am very new to DART, as I have just started the tutorials.
Can you shed some light on how to integrate this library with the DART simulator?

Secondly:
I want to use the iiwa_ros package with the DART simulator using the CI controller. Only setting <arg name="physics" default="dart"/> in the Gazebo launch file would not work, right?

Would it be possible to support UR e-series arms?

This is more of a conceptual question. I am happy to jump in and try to get it running on a UR e-series arm, but I am wondering whether that would be technically feasible in your opinion (or whether it would just be a waste of effort).

Implement compatibility with rqt_joint_trajectory_controller

This controller is currently not compatible with the rqt_joint_trajectory_controller from ros_control.

To be compatible, these prerequisites must be fulfilled:

For a controller to be compatible with this plugin, it must comply with
the following requisites:
    - The controller type contains the C{JointTrajectoryController}
    substring, e.g., C{position_controllers/JointTrajectoryController}
    - The controller exposes the C{command} and C{state} topics in its
    ROS interface.

Furthermore, this rqt plugin sends a trajectory with a single joint value. This means that real interpolation needs to be implemented by this controller. Currently we assume trajectories with many points and just update the nullspace configuration.

Feel free to send a PR to implement this, and/or react in this issue if this feature is needed.
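For reference, the "real interpolation" required here could be as simple as the sketch below: linearly blending from the joint configuration at the time the command arrives to the single commanded point over its time_from_start. This is an illustration only; the function and names are not part of the controller:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Linearly interpolate each joint from q_start to q_goal; t is the time
    // since the command arrived and `duration` the commanded time_from_start.
    std::vector<double> interpolateJoints(const std::vector<double>& q_start,
                                          const std::vector<double>& q_goal,
                                          double t, double duration) {
      double s = duration > 0.0 ? t / duration : 1.0;
      s = std::max(0.0, std::min(1.0, s));  // clamp the blend factor to [0, 1]
      std::vector<double> q(q_start.size());
      for (std::size_t i = 0; i < q_start.size(); ++i) {
        q[i] = q_start[i] + s * (q_goal[i] - q_start[i]);
      }
      return q;
    }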

Running instructions?

Hi Matthias,

You have made a great contribution to the community :)
I followed your suggestion from your comment in another issue and came to have a look.

I have some questions regarding running your code.

  1. Does it work directly with the FRI-ROS driver on Ubuntu 18 and ROS Melodic?
  2. Could you elaborate on which scripts are to be run, in what order, and what behaviour is to be expected?
    -- For example, start lbr_server (with which option?), then rosrun/roslaunch something something...

Thank you very much!

Passing license check with ros_license_linter

Ideally the repo should pass the ros_license_linter check.

Right now it does not, because of these files:

The following files contain licenses that are not covered by any license tag:
  'README.md': ['CC-BY-SA-4.0']                --> The readme contains a bib entry for the paper
  'res/orcidlink.sty': ['LPPL-1.3c']           --> The stylefile of the JOSS paper
  'src/pseudo_inversion.h': ['LicenseRef-scancode-public-domain']  --> valid

The issue is handled here: boschresearch/ros_license_toolkit#24
Once it is fixed, the GitHub action could be integrated.

ROS time restrictions

Greetings,
My field of interest is robotics, and I develop software in ROS for impedance control. My goal is to test my code on two robots, one active and one passive. The active robot will gently "push" the passive robot, as if by mistake, and then move accordingly so that it does not actually push it. The interesting part is that the surfaces of the robots that will be in contact are made of metal, i.e. they are hard. For that reason, the computation/refresh time of the package needs to be on the order of milliseconds. My worry is that the ROS environment would not thrive under these restrictions. That's why I would like to ask whether you have tested your package (and consequently the ROS capabilities) in contact situations between hard surfaces, and what your timing constraints were.
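For context, what I would first check on my setup is whether a plain ROS loop can hold a ~1 kHz period with acceptable jitter, roughly as in this sketch of my own (not part of this package):

    #include <algorithm>
    #include <ros/ros.h>

    int main(int argc, char** argv) {
      ros::init(argc, argv, "rate_check");
      ros::NodeHandle nh;
      ros::Rate rate(1000.0);               // target period: 1 ms
      ros::WallTime last = ros::WallTime::now();
      double worst = 0.0;
      for (int i = 0; i < 10000 && ros::ok(); ++i) {
        rate.sleep();
        const ros::WallTime now = ros::WallTime::now();
        worst = std::max(worst, (now - last).toSec());  // track the largest gap
        last = now;
      }
      ROS_INFO("Worst observed loop period: %.3f ms", worst * 1e3);
      return 0;
    }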

Run in Gazebo and on the real Panda?

Hey,
Thanks for your nice project! I want to try my own low-level controller (Cartesian impedance controller and so on) in C++ as a controller plugin, just like you have done in this project, but for Gazebo simulation. This would let me familiarize myself with the Franka robot. I read the Franka Control Interface documentation. The reason for opening this issue is that the interfaces used in Franka Gazebo (FrankaHWSim) and on the real Franka (FrankaHW) are not the same. Did you run the simulation in Gazebo? Did you run the controller on the real Panda? If I want to run my controller on the real Panda, do I also have to change my code because of the interface (FrankaHW), even though the controller runs in simulation? I noticed that you don't use FrankaHW in your controller; it's not the same approach as the ones I've seen. I am quite confused about these issues right now. I'm looking forward to any experience you can share. Thanks a lot for your time!
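For context, what I mean by "not the same approach" is something like the skeleton below: a controller written only against the generic hardware_interface::EffortJointInterface from ros_control, which, as far as I understand, both franka_gazebo and franka_hw expose. This is my own illustrative skeleton, not code from this repository:

    #include <ros/ros.h>
    #include <controller_interface/controller.h>
    #include <hardware_interface/joint_command_interface.h>
    #include <pluginlib/class_list_macros.h>

    // Skeleton of a torque controller that only depends on the generic effort
    // interface, so the same plugin can be loaded in simulation and on hardware.
    class ExampleEffortController
        : public controller_interface::Controller<hardware_interface::EffortJointInterface> {
     public:
      bool init(hardware_interface::EffortJointInterface* hw, ros::NodeHandle& nh) override {
        joint_ = hw->getHandle("panda_joint1");  // example joint name
        return true;
      }
      void update(const ros::Time& /*time*/, const ros::Duration& /*period*/) override {
        joint_.setCommand(0.0);  // zero torque on top of gravity compensation
      }

     private:
      hardware_interface::JointHandle joint_;
    };

    PLUGINLIB_EXPORT_CLASS(ExampleEffortController, controller_interface::ControllerBase)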

Safety and reliability measures when using the Cartesian Impedance Controller

Hi Matthias,

I have been using the Cartesian Impedance Controller with iiwa_ros, sending commands to the robot via CI_trajectory_controller/reference_pose or the Cartesian trajectory planner. The functioning is very impressive, thanks to you.
I want to gain confidence for regular use of this functionality without having to stay alert to press the emergency stop :p, just in case the robot behaves weirdly.
Are there any reliability and safety measures I can set to keep the robot trustworthy even when working very close to humans?
There are some safety configurations which I have set up in the KUKA controller, such as joint limits and maximum Cartesian velocities. Can you recommend additional measures, please?
Sometimes the following happens:

  1. The FRI connection breaks while running, and restarting the application gives a java.lang error. This leaves the robot fully in torque-control mode with zero stiffness. Due to errors in the load data determination (multiple trials; additionally because some wires run down from the end effector), the robot then tends to move faster.
  2. When sending reference_poses, the robot sometimes tries to move very fast. The parameters I'm using are as follows (see the filtering sketch after this list):

        delta_tau_max: 1.0
        filtering:
          nullspace_config: 0.1
          pose: 0.3
          stiffness: 0.1
          wrench: 0.5

  3. What happens if an IK solution is not available at certain poses? In simulation the robot moves very fast in these situations. Can you please shed some light on this? I tried to check the IK using iiwa_tools over the path computed by the cartesian_trajectory_generator, but I am getting mirrored solutions for every next pose (with the seed value set to the joint values of the last pose).
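For reference, my understanding of the filtering parameters is that they act like a per-cycle exponential (low-pass) filter on the commanded targets, roughly as in this sketch of my own (not the controller's actual code):

    #include <Eigen/Dense>

    // One filtering step: alpha = 1.0 applies the new target immediately,
    // smaller values blend it in gradually over several control cycles.
    Eigen::Vector3d filterTarget(const Eigen::Vector3d& filtered,
                                 const Eigen::Vector3d& target, double alpha) {
      return filtered + alpha * (target - filtered);
    }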

Any ideas for gravity compensation for gripper and workpiece weight?

I have been using the Cartesian-Impedance-Controller; however, I observed that there is always an error relative to the goal state. From my understanding of the repo, we need to command wrenches to compensate for the gravity of the tool and workpiece.

Any ideas on how to calculate wrenches dynamically during motion to compensate for the gravity due to the weights of the tool and workpiece?
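For discussion, the kind of thing I have in mind is a small node like the sketch below that keeps publishing a compensating wrench. The topic name, frame, and mass are assumptions of mine and not taken from the controller:

    #include <ros/ros.h>
    #include <geometry_msgs/WrenchStamped.h>

    int main(int argc, char** argv) {
      ros::init(argc, argv, "tool_gravity_compensation");
      ros::NodeHandle nh;
      // Hypothetical wrench command topic; use whatever topic the controller
      // actually subscribes to for commanded wrenches.
      ros::Publisher pub =
          nh.advertise<geometry_msgs::WrenchStamped>("set_cartesian_wrench", 1);

      const double mass = 1.2;  // tool + workpiece mass in kg (example value)
      const double g = 9.81;

      ros::Rate rate(50.0);
      while (ros::ok()) {
        geometry_msgs::WrenchStamped msg;
        msg.header.stamp = ros::Time::now();
        msg.header.frame_id = "world";  // a gravity-aligned frame
        // Upward force cancelling the tool weight. If the centre of mass is
        // offset from the end-effector frame, the corresponding torque (r x F)
        // would have to be recomputed as the orientation changes during motion.
        msg.wrench.force.z = mass * g;
        pub.publish(msg);
        rate.sleep();
      }
      return 0;
    }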

Test

Hey! nice work.

I'm developing an impedance controller as well, but the robot itself is not often available.
Did you test the impedance controller in simulation or only in real life?

I want to do it in simulation; is that possible?

Gazebo segfault error

Hi Matthias,
Thank you for the controller. I am trying to use it with the IIWA. When I launch Gazebo with the impedance controller using the command 'roslaunch iiwa_gazebo iiwa_gazebo.launch controller:=CartesianImpedance_trajectory_controller', I get a segfault and the process dies. I am using ROS Noetic.

[screenshot: error_controller]

Do you have any inputs on what might be causing the issue?

Thank you

How to control the duration of single step motion in impedance control mode?

Hi Dr. Matthias, thank you very much for your work. I have tried this controller on the iiwa both in simulation and on the real robot. But now I have a question: how do I control the movement duration of a single step in torque-control mode? This mainly has the following two purposes:

  1. The speed can be controlled over a wide range of motion to ensure a balance between safety and efficiency.

  2. Strike a balance between pose-convergence accuracy and movement duration.
    At present, I have simply written a class to publish the reference pose, as follows:

    #include <ros/ros.h>
    #include <geometry_msgs/PoseStamped.h>
    #include <kdl/frame.hpp>

    // Publisher created in the class constructor:
    pose_pub = nh->advertise<geometry_msgs::PoseStamped>(
        "/iiwa/CartesianImpedance_trajectory_controller/reference_pose", 10);

    // Fill the PoseStamped message from the stored KDL position and rotation.
    void iiwa_pose_param_update::set_iiwa_pose_msg() {
      ref_rotation.GetQuaternion(quaternion_x, quaternion_y, quaternion_z, quaternion_w);
      pose_msg.pose.position.x = ref_pose.x();
      pose_msg.pose.position.y = ref_pose.y();
      pose_msg.pose.position.z = ref_pose.z();
      pose_msg.pose.orientation.x = quaternion_x;
      pose_msg.pose.orientation.y = quaternion_y;
      pose_msg.pose.orientation.z = quaternion_z;
      pose_msg.pose.orientation.w = quaternion_w;
      pose_msg.header.stamp = ros::Time::now();
    }

    // Publish a single reference pose taken from a KDL frame.
    void iiwa_pose_param_update::publish_pose_config(const KDL::Frame& ref_frame) {
      ref_pose = ref_frame.p;
      ref_rotation = ref_frame.M;
      this->set_iiwa_pose_msg();
      pose_pub.publish(pose_msg);
      ros::spinOnce();
      ROS_INFO("Published pose message.");
    }

However, in the main program I have to keep publishing the topic continuously for the robot to converge well to the desired pose, and the convergence time is not fixed (it depends on the movement distance), as follows:

    while (ros::ok()) {
      ipu->publish_pose_config(KDL::Frame(KDL::Rotation::RPY(0, M_PI, 0),
                                          KDL::Vector(0.5, 0.0, 0.7)));
    }

My questions are as follows:

  1. I don't know whether this is related to the choice of communication mechanism. Maybe an action-based interface could be used to control the single-step duration? I'm not sure about this yet and still need to study the action communication mechanism.
  2. Is it possible to manually set a pose-convergence threshold and a maximum movement duration? That way, even if the movement time cannot be controlled exactly, the problem of movement accuracy could still be solved (see the sketch after this list).

I would be very grateful if you could provide some suggestions or solutions. Also, please correct me if I have misunderstood your code.
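For question 2, what I have in mind is something like the sketch below: publish the reference once, then monitor the measured end-effector pose (here via TF, with example frame names) until it is within a tolerance or a timeout expires. This is only my own sketch for discussion, not something taken from the controller:

    #include <cmath>
    #include <ros/ros.h>
    #include <tf2/exceptions.h>
    #include <tf2_ros/transform_listener.h>
    #include <geometry_msgs/TransformStamped.h>

    // Wait until the end effector is within `tol` metres of (x, y, z) or until
    // `timeout` seconds have passed. Frame names are examples for the iiwa.
    bool waitForConvergence(double x, double y, double z, double tol, double timeout) {
      tf2_ros::Buffer buffer;
      tf2_ros::TransformListener listener(buffer);
      const ros::Time start = ros::Time::now();
      ros::Rate rate(100.0);
      while (ros::ok() && (ros::Time::now() - start).toSec() < timeout) {
        try {
          const geometry_msgs::TransformStamped tf =
              buffer.lookupTransform("iiwa_link_0", "iiwa_link_ee", ros::Time(0));
          const double dx = tf.transform.translation.x - x;
          const double dy = tf.transform.translation.y - y;
          const double dz = tf.transform.translation.z - z;
          if (std::sqrt(dx * dx + dy * dy + dz * dz) < tol) {
            return true;  // close enough to the commanded position
          }
        } catch (const tf2::TransformException&) {
          // Transform not available yet; keep waiting.
        }
        rate.sleep();
      }
      return false;  // timed out before converging
    }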
