
relaxed_ik's Introduction

relaxed_ik

📌 We provide a separate repository called relaxed_ik_core, which is compatible with more recent Rust versions and Ubuntu 20.04 / ROS Noetic.

Development update 10/26/21

Hi all, we are excited to share some updates to our relaxed_ik library. Apologies for the delay, I have been attending to many unforeseen circumstances over the past few months.

  • The original RelaxedIK code was written as somewhat messy “research code” in 2016, and each iteration of the code after that focused on code cleanup and computation speed (e.g., the port to Julia, and subsequently to Rust). This long-term development on a single codebase has been nice for some use cases, e.g., everything is connected well with ROS and everything has maintained compatibility with legacy code in Python and Julia, but this large monolithic structure has also made it difficult to port the RelaxedIK solver to other applications in a lightweight manner. Thus, much of our recent development has focused on improving the portability of RelaxedIK. To this end, we introduce a new repository called relaxed_ik_core, which contains just the central kernel of the RelaxedIK runtime without the extra ROS and robot-setup baggage. The relaxed_ik_core repo comes prepackaged with a common set of pre-compiled and pre-trained robot models (UR5, Sawyer, etc.), allowing users with one of those robots to skip the setup steps entirely. We are also hoping to grow that set of pre-compiled robots, so feel free to make pull requests with additional robot info files. The lightweight relaxed_ik_core package has allowed us to port the code to other applications and languages, such as the Unity game engine, Mujoco, CoppeliaSim, Python 2 & 3, and ROS1, with ROS2 development ongoing. Note that going forward, this current repository (i.e., github/uwgraphics/relaxed_ik) should ONLY be used for robot setup and compilation or to maintain older legacy setups. For anything else, it is essentially deprecated, and further development will shift over to relaxed_ik_core. For additional information, please consult the documentation.

  • We recently presented a paper at ICRA 2021 on a new method called CollisionIK (video). CollisionIK is a per-instant pose optimization method that can generate configurations that achieve specified pose or motion objectives as best as possible over a sequence of solutions, while simultaneously avoiding collisions with static or dynamic obstacles in the environment. This is in contrast to RelaxedIK, which could only avoid self-collisions. The current research code for CollisionIK is available in relaxed_ik_core, and I am also working on an improved implementation of CollisionIK that will be released in the coming months. The citation for this paper is:

@article{rakita2021collisionik,
  title={CollisionIK: A Per-Instant Pose Optimization Method for Generating Robot Motions with Environment Collision Avoidance},
  author={Rakita, Daniel and Shi, Haochen and Mutlu, Bilge and Gleicher, Michael},
  journal={arXiv preprint arXiv:2102.13187},
  year={2021}
}

Please email or post here if any questions come up!


Development update 1/3/20

RelaxedIK has been substantially rewritten in the Rust programming language. Everything is still completely ROS compatible and should serve as a drop-in replacement for older versions of the solver.

The Rust RelaxedIK solver is MUCH faster than its Python and Julia alternatives. Testing on my laptop has indicated that the solver can run at over 3000 Hz for single-arm robots (tested on UR3, UR5, Jaco, Sawyer, Panda, KUKA iiwa, etc.) and at about 2500 Hz for bimanual robots (tested on the ABB YuMi and Rainbow Robotics DRC-Hubo+). All of the new code has been pushed to the dev branch and will be pushed to the main branch after a brief testing phase. It is highly recommended that the dev branch be used at this point, as it has many more features and options than the main branch.

 git clone -b dev https://github.com/uwgraphics/relaxed_ik.git 

If you are working with an older version of RelaxedIK, note that you will have to start from a fresh repo and go through the start_here.py procedures again to work with the Rust version of the solver.

If you have any comments or questions on any of this, or if you encounter any bugs in the new rust version of the solver, feel free to post an issue or email me directly at [email protected]


RelaxedIK Solver

Welcome to RelaxedIK! This solver implements the methods discussed in our paper RelaxedIK: Real-time Synthesis of Accurate and Feasible Robot Arm Motion (http://www.roboticsproceedings.org/rss14/p43.html)

Video of presentation at RSS 2018 (RelaxedIK part starts around 12:00) : https://youtu.be/bih5e9MHc88?t=737

Video explaining RelaxedIK: https://youtu.be/AhsQFJzB8WQ

RelaxedIK is an inverse kinematics (IK) solver designed so that the conversion from Cartesian end-effector pose goals (such as "move the robot's right arm end-effector to position X, while maintaining an end-effector orientation Y") to joint space (i.e., the robot's rotation values for each joint degree-of-freedom at a particular time-point) is done both ACCURATELY and FEASIBLY. By this, we mean that RelaxedIK attempts to find the closest possible solution to the desired end-effector pose goals without exhibiting negative effects such as self-collisions, environment collisions, kinematic singularities, or joint-space discontinuities.
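As a rough illustration of the accuracy-vs-feasibility tradeoff described above, the solver can be thought of as minimizing a weighted sum of objective terms: an end-effector accuracy term plus feasibility terms such as motion continuity. The sketch below is illustrative only; the toy fk(), the weights, and the single continuity penalty are simplified placeholders, not the repo's actual objectives.

```python
import numpy as np

def fk(q):
    """Toy planar 2-link forward kinematics (both link lengths 1.0)."""
    x = np.cos(q[0]) + np.cos(q[0] + q[1])
    y = np.sin(q[0]) + np.sin(q[0] + q[1])
    return np.array([x, y])

def relaxed_objective(q, q_prev, goal_pos, w_pos=50.0, w_motion=1.0):
    """Weighted sum: match the end-effector goal as closely as possible
    while penalizing large jumps from the previous solution (continuity)."""
    pos_err = np.linalg.norm(fk(q) - goal_pos)      # accuracy term
    motion_err = np.linalg.norm(q - q_prev)         # feasibility term
    return w_pos * pos_err + w_motion * motion_err
```

An optimizer then searches over q for the configuration with the lowest weighted score; raising w_motion trades end-effector accuracy for smoother joint motion.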

To start using the solver, please follow the step-by-step instructions in the file start_here.py (in the root directory)

If anything with the solver is not working as expected, or if you have any feedback, feel free to let us know! (email: [email protected], website: http://pages.cs.wisc.edu/~rakita) We are actively supporting and extending this code, so we are interested to hear about how the solver is being used and any positive or negative experiences in using it.

Citation

If you use our solver, please cite our RSS paper RelaxedIK: Real-time Synthesis of Accurate and Feasible Robot Arm Motion http://www.roboticsproceedings.org/rss14/p43.html

@INPROCEEDINGS{Rakita-RSS-18, 
    AUTHOR    = {Daniel Rakita AND Bilge Mutlu AND Michael Gleicher}, 
    TITLE     = {{RelaxedIK: Real-time Synthesis of Accurate and Feasible Robot Arm Motion}}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2018}, 
    ADDRESS   = {Pittsburgh, Pennsylvania}, 
    MONTH     = {June}, 
    DOI       = {10.15607/RSS.2018.XIV.043} 
} 

If you use our solver for a robot teleoperation interface, also consider citing our prior work that shows the effectiveness of RelaxedIK in this setting:

A Motion Retargeting Method for Effective Mimicry-based Teleoperation of Robot Arms https://dl.acm.org/citation.cfm?id=3020254

@inproceedings{rakita2017motion,
  title={A motion retargeting method for effective mimicry-based teleoperation of robot arms},
  author={Rakita, Daniel and Mutlu, Bilge and Gleicher, Michael},
  booktitle={Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction},
  pages={361--370},
  year={2017},
  organization={ACM}
}

An Autonomous Dynamic Camera Method for Effective Remote Teleoperation https://dl.acm.org/citation.cfm?id=3171221.3171279

@inproceedings{rakita2018autonomous,
  title={An autonomous dynamic camera method for effective remote teleoperation},
  author={Rakita, Daniel and Mutlu, Bilge and Gleicher, Michael},
  booktitle={Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction},
  pages={325--333},
  year={2018},
  organization={ACM}
}

Dependencies

kdl urdf parser:

>> sudo apt-get install ros-[your ros distro]-urdfdom-py
>> sudo apt-get install ros-[your ros distro]-kdl-parser-py
>> sudo apt-get install ros-[your ros distro]-kdl-conversions

fcl collision library: https://github.com/BerkeleyAutomation/python-fcl

scikit learn: http://scikit-learn.org/stable/index.html

Tutorial

For full setup and usage details, please refer to start_here.py in the src directory.

relaxed_ik's People

Contributors

djrakita, yepw


relaxed_ik's Issues

The software assumes that the robot begins on the Fixed Frame

I have had this problem (screenshot attached):

As you can see, the self-collision shapes are spawned at (0, 0, 0) of the common_world frame. Unfortunately, I have not been able to find the part of the code that creates the shapes for the self-collisions.

In my robot configuration, all the collision surfaces are expressed in the world frame which is my fixed frame. I am not able to configure the software to accommodate my situation.

I think this can be fixed by adding another parameter to the config file start_here.py

Question about collision detection

Hi!

Thanks for your work and generous sharing.

I have a little confusion about collision detection and hope to get your reply.

I found the code snippet here where you modify the poses of the collision objects that are used to wrap the robot links.

What I understand is that you only use the direction vector between the joints to reconstruct the poses of the collision objects.

So, my questions are

  1. Is this because the default base frame of the fcl collision object is different from the robot base frame?

  2. How did you check that the fcl geometry wraps the robot links correctly? The collision detection results are important for constructing the training set.

Thanks!
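For reference, the joint-to-joint reconstruction described in the question above can be sketched as follows. This is an illustrative guess at the geometry, not the repo's actual code: place each link-wrapping shape at the midpoint between two consecutive joint positions and align its axis with the normalized joint-to-joint vector.

```python
import numpy as np

# Hedged sketch (not the repo's code): derive a link-wrapping cylinder's
# pose from the positions of the two joints it spans.
def link_capsule_pose(p_a, p_b):
    p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
    center = (p_a + p_b) / 2.0          # translation of the collision shape
    axis = p_b - p_a
    length = np.linalg.norm(axis)       # cylinder height = joint-to-joint distance
    direction = axis / length           # unit axis direction
    return center, direction, length

center, direction, length = link_capsule_pose([0, 0, 0], [0, 0, 2])
# center -> [0, 0, 1], direction -> [0, 0, 1], length -> 2.0
```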

Development update

The RelaxedIK solver has presented two main options over the past six months: (1) For a stable, reliable, yet relatively slow version of the solver, use the python implementation on the main branch; or (2) for a less stable, less reliable, yet much faster version of the solver, use the Julia implementation on the dev branch. My goal was always to iterate on the Julia version to become just as stable and reliable as its Python counterpart, and eventually unite these two camps into one on the main branch with the Julia version as the centerpiece.

However, a few idiosyncrasies of the Julia programming language have precluded those plans. First, the Julia version of RelaxedIK takes at least a minute to do its JIT compilation and start up from scratch each time. This is unacceptable for a stable release of a solver that should be convenient and lightweight to use. It also makes development on top of the base solver incredibly tedious, since each little change requires a full JIT recompilation. While the developers of the Julia language seem well aware of this issue for large projects, and may have a sufficient solution down the road, just waiting around and hoping it gets fixed does not seem like a good option. Second, while ROS is somewhat supported in Julia through RobotOS.jl, this library leads to mysterious errors too often to be the foundation of a stable release. With both of these issues in mind, I'd rather be proactive and turn the page away from Julia, planning future development elsewhere.

So, over the next couple of months, I will be re-implementing RelaxedIK in a fully statically compiled language (still TBD). Given recent testing, I anticipate the new solver will be even faster than the Julia version, without the JIT compilation hassle and stability issues. Everything will remain ROS compatible. The Julia and Python versions will remain in the new release for backwards compatibility, but I anticipate new features will only be officially added and supported in the new language.

Along with the new language, I plan to add a few other features as part of a "major" new release, such as better support for fast, standard IK solver options. The machinery underlying RelaxedIK actually affords incredibly fast standard IK solving (i.e., "standard" meaning just hitting end-effector pose goals accurately, without the motion continuity, collision avoidance, and other feasibility considerations included in RelaxedIK). We've used these "standard" IK options in our prior work and, in our testing, they are faster and more reliable than other numerical IK solvers we have tried. I plan to make these features more publicly accessible and well documented.

If you have any comments or questions on any of this, feel free to use this thread to foster conversation. Or, if you prefer, feel free to email me directly at [email protected]

Thanks!

Danny

collision viewer fails to launch

Hi, I was trying to follow the tutorial given in start_here.py with UR5 example. But I am stuck at Step 5b because of an error shown below. Would you be able to take a look and help me out? I really appreciate it and thanks for sharing the code.

SUMMARY

PARAMETERS

  • /robot_state_publisher/publish_frequency: 50.0
  • /robot_state_publisher/tf_prefix:
  • /rosdistro: kinetic
  • /rosversion: 1.12.14

NODES
/
robot_state_publisher (robot_state_publisher/robot_state_publisher)
rviz (rviz/rviz)

ROS_MASTER_URI=http://localhost:11311
process[rviz-1]: started with pid [31991]
process[robot_state_publisher-2]: started with pid [31992]
Traceback (most recent call last):
  File "/home/andypark/catkin_ws/src/relaxed_ik/src/collision_viewer.py", line 44, in
    joint_ordering, init_state=starting_config, collision_file=collision_file_name, pre_config=True)
  File "/home/andypark/catkin_ws/src/relaxed_ik/src/RelaxedIK/GROOVE_RelaxedIK/relaxedIK_vars.py", line 95, in init
    self.robot = Robot(self.arms, full_joint_lists, joint_order)
  File "/home/andypark/catkin_ws/src/relaxed_ik/src/RelaxedIK/Spacetime/robot.py", line 17, in init
    self.__initialize_bounds()
  File "/home/andypark/catkin_ws/src/relaxed_ik/src/RelaxedIK/Spacetime/robot.py", line 49, in __initialize_bounds
    idx = self.subchain_indices[i][j]
IndexError: list index out of range
[robot_state_publisher-2] killing on exit
[rviz-1] killing on exit
[collision_viewer-1] process has died [pid 31940, exit code 1, cmd
/home/andypark/catkin_ws/src/relaxed_ik/src/collision_viewer.py __name:=collision_viewer
__log:=/home/andypark/.ros/log/9b6421f2-11ae-11e9-a8f2-18568023b643/collision_viewer-1.log].
log file: /home/andypark/.ros/log/9b6421f2-11ae-11e9-a8f2-18568023b643/collision_viewer-1.log

Goal poses in absolute mode

Hi rakita,

Sorry for the many questions. I'm trying to use this library in absolute mode, sending both position and rotation goals.

The current solve function (https://github.com/uwgraphics/relaxed_ik/blob/master/src/RelaxedIK/relaxedIK.py#L52) takes goal_positions and goal_quats as arguments for goal pose of IK.

Considering the robot that has the links like below:
base_link -> ... -> end_effector_base_frame -> (end_effector_fixed_joint) -> end_effector_frame.

I initially assumed the goal position/rotation is the goal of the end-effector frame (e.g., end_effector_frame) relative to the fixed frame (e.g., base_link), but that appears to be wrong.
Which frame should the goal poses be specified relative to?

FYI:
I debugged by changing relative to absolute here:
https://github.com/uwgraphics/relaxed_ik/blob/master/src/RelaxedIK/GROOVE_RelaxedIK/relaxedIK_vars.py#L28-L29

-                  rotation_mode = 'relative',  # could be 'absolute' or 'relative'                                                     
+                  rotation_mode = 'absolute',  # could be 'absolute' or 'relative'                                                     
-                  position_mode = 'relative',                                                                                          
+                  position_mode = 'absolute',                                                                                          

and the weights for the objectives so that only the position and rotation of the end effector are affected:
https://github.com/uwgraphics/relaxed_ik/blob/master/src/RelaxedIK/GROOVE_RelaxedIK/relaxedIK_vars.py#L32

-                 weight_priors=(50.0,10.0,0.3,0.3,0.3,1.0,2.0),                                                                     
+                 weight_priors=(50.0,50.0,0.0,0.0,0.0,0.0,0.0),                                                                       

System performance and speed

Hi -
I am working on a 2-arm robot and running through the process. All seems to be working, but I am having speed issues. I am running on a 12-core Intel Xeon at 3.6 GHz with all cores at about 40% load, yet I am only getting update rates around 5-10 Hz. I notice that when I turn down the max iterations on the solve function in relaxed_ik.py, I get a performance boost to ~50 Hz (the solution suffers a bit). I see the same issue running sample.py or my custom config using the relaxed_ik node with pub/subscribe.

On your default config with the UR5, I get about 40 Hz (which is in line with your posted numbers) on /relaxed_ik/ee_pose_goals; dropping the max iterations to 2 gives 110 Hz. How did you select 11 iterations?

I am not seeing the speed results you published in your paper for a 2-arm config. Are there other things I could check here?

Why would meshes have the same number of vertices and triangles?

In collision_utils.py there is this condition:

        if not len(self.params['verts']) == len(self.params['tris']):
            raise TypeError(bc.FAIL + 'ERROR: number of tris must equal the number of verts in collision mesh.' + bc.ENDC)

As far as I know, unless you reuse some vertices, you are going to end up with three times as many vertices as triangles. I doubt that all of them are going to be reused.

In case I am wrong, how can I convert a mesh to have the same number of vertices and triangles?
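For what it's worth, the two counts can legitimately coincide for some closed meshes. For a closed genus-0 triangle mesh, Euler's formula V - E + F = 2 together with E = 3F/2 gives F = 2V - 4, so V == F exactly when V = 4, i.e., a tetrahedron. A quick illustrative check (not repo code):

```python
# A tetrahedron has 4 vertices and 4 triangular faces, so its counts match.
tetra_verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tetra_tris = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
assert len(tetra_verts) == len(tetra_tris)  # 4 == 4

# An octahedron already breaks the equality: V = 6, so F = 2*6 - 4 = 8.
V = 6
F = 2 * V - 4
assert V != F
```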

mimic joints in urdf

Hi again -

Working with a URDF that has mimic joints in the chain. I have found I need to explicitly call these out in the joint names and ordering lists to get everything to work. The URDF viewer can recognize the mimic joints, but relaxed_IK processes them as independent joints and does not use the mimic constraint in the URDF. Any thoughts on how to modify relaxed_IK to account for mimic joints?
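For context, a URDF mimic joint's value is fully determined by its parent joint as value = multiplier * parent_value + offset, so one possible workaround is to optimize only the independent joints and fill in the mimic joints afterwards. A minimal sketch of that idea (illustrative only; expand_mimic and its spec format are made up here, not part of relaxed_ik):

```python
# Illustrative sketch (not repo code): reconstruct a full joint vector
# from the independent joints plus URDF mimic specs.
def expand_mimic(q_independent, mimic_specs):
    """mimic_specs maps output index -> (parent index, multiplier, offset)."""
    q_full = list(q_independent)
    for idx, (parent, mult, offset) in sorted(mimic_specs.items()):
        # URDF mimic semantics: value = multiplier * parent_value + offset
        q_full.insert(idx, mult * q_full[parent] + offset)
    return q_full

# e.g., a gripper finger at index 2 mirroring joint 1 with multiplier -1:
expand_mimic([0.1, 0.2], {2: (1, -1.0, 0.0)})  # -> [0.1, 0.2, -0.2]
```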

Thanks.

Python3 migration?

Hi,

Now that Python 2 is officially deprecated and ROS Noetic targets Python 3 exclusively, are there any plans to migrate relaxed_ik, or any development toward migrating it?

Dependency problem in Kinetic

Hello,

I wanted to inform the authors that the version of urdfdom-py available for Kinetic has problems with default values in URDF files.
For example, if the URDF has <origin xyz="0 0 0">, the rpy value of that joint will be set to None instead of [0, 0, 0] as newer versions would do. I think this situation can be easily managed in the urdf_load function.
This is the only problem I have actually encountered in testing the package on ros kinetic so far.
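A defensive guard along the lines suggested above could look like this (illustrative sketch; safe_rpy is a made-up helper name, and the actual fix would live in the repo's urdf_load function):

```python
# Hedged sketch of the workaround described above: older urdfdom-py on
# Kinetic may return None for an omitted rpy attribute, so substitute
# the URDF default [0, 0, 0] before using it.
def safe_rpy(origin_rpy):
    return [0.0, 0.0, 0.0] if origin_rpy is None else origin_rpy

assert safe_rpy(None) == [0.0, 0.0, 0.0]
assert safe_rpy([0.1, 0.2, 0.3]) == [0.1, 0.2, 0.3]
```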

Set mesh as environment collision object

Hi, I would like to add in the collision file a collision object described with a mesh. Reading the code, this seems possible. Unfortunately, I don't understand how to provide all the necessary information into the collision file. Could you help me with an example?
Thank you for your time.
Best,
Lucia

UR5's links and collision obstacles not aligned

Hi,
I am running start_here.py in a ROS Melodic environment using Python 2.7 to generate all the files necessary to use relaxed_ik_core. I am using a modified UR5 URDF (my_ur5.txt) because I need a specific end-effector link position; in any case, the problem presented below was the same with your ur5.urdf.
All seems fine until step 7. When I display the robot platform with collision objects, they don't match the robot's links at all, as seen in the screenshot below. How can I fix this?

(screenshot attached)

I assume that when the algorithm calculates distances between links, it actually calculates distances between their approximations (the green cylinders). So if they don't match the robot's links, the whole process is useless. Is that correct?
Thank you for your time in advance.

Issue with File Writing on Dev - Step 5c

Hi -

OK, working through issues on this one. Running the NN works fine, but the file write crashes on the last step. I can't seem to see the issue here:

IOError: [Errno 21] Is a directory: '/home/initial/myws/src/relaxed_ik/src/RelaxedIK/Config/collision_nn_python/'

Ideas?

start_here file config params

urdf_file_name = 'ur5.urdf'
fixed_frame = 'base_link'
joint_names = [ ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint', 'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint'] ]
joint_ordering = ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint', 'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']
ee_fixed_joints = ['ee_fixed_joint']
starting_config = [ 3.12769839, -0.03987385, -2.07729916, -1.03981438, -1.58652782, -1.5710159 ]

# UR5
from sensor_msgs.msg import JointState
def joint_state_define(x):
    js = JointState()
    js.name = joint_ordering
    js.position = tuple(x)
    return js

# default file looks good for UR5
collision_file_name = 'collision_ur5.yaml'

Error:
....Working fine....
[0.48887457]
0.7862703352498891

[0.5682789]
0.5072929357101621

[0.01663378]
2.3967835712600306e-09

[0.39437549]
0.005419666218923598

Traceback (most recent call last):
  File "/home/initial/myws/src/relaxed_ik/src/preprocessing.py", line 48, in
    vars = RelaxedIK_vars('relaxedIK', os.path.dirname(file) + '/RelaxedIK/urdfs/' + urdf_file_name, joint_names, ee_fixed_joints, joint_ordering, init_state=starting_config, collision_file=collision_file_name, config_override=True, path_to_src=path_to_src)
  File "/home/initial/myws/src/relaxed_ik/src/RelaxedIK/GROOVE_RelaxedIK/relaxedIK_vars.py", line 189, in init
    self.ce = Config_Engine(self.collision_graph, self, path_to_src, collision_nn_file, config_fn=config_file_name, override=config_override)
  File "/home/initial/myws/src/relaxed_ik/src/RelaxedIK/Utils/config_engine.py", line 23, in init
    self.joint_order, self.urdf_path, self.collision_file = self.generate_config_file()
  File "/home/initial/myws/src/relaxed_ik/src/RelaxedIK/Utils/config_engine.py", line 63, in generate_config_file
    joblib.dump(collision_nn, self.path + self.nn_file_name)
  File "/home/initial/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 501, in dump
    with open(filename, 'wb') as f:
IOError: [Errno 21] Is a directory: '/home/initial/myws/src/relaxed_ik/src/RelaxedIK/Config/collision_nn_python/'
[preprocessing-1] process has died [pid 11500, exit code 1, cmd /home/initial/myws/src/relaxed_ik/src/preprocessing.py __name:=preprocessing __log:=/home/initial/.ros/log/a7764458-1892-11e9-82f4-10e7c6332630/preprocessing-1.log].
log file: /home/initial/.ros/log/a7764458-1892-11e9-82f4-10e7c6332630/preprocessing-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete

Two robotic arms used at the same time, collision cylinders cannot be aligned

When using jaco7_and_ur5.urdf, after running rosrun relaxed_ik urdf_viewer_with_collision_info.py, the collision cylinders generated for the two robotic arms overlap in position, which is not what I want.

(screenshot: 2021-12-27 16-10-12)

Is there something wrong with the settings in my start_here.py file

(screenshot: 2021-12-27 16-12-45)

or are there other possible problems?

I sincerely hope I can get your help. Thank you!

Relaxed_IK moveit

Hello,

I am trying to include RelaxedIK in my MoveIt planner for ROS1 Noetic. I followed the setup instructions for the Franka Emika Panda robot. I can see that IK solvers like TRAC-IK have a plugin. Is there anything additional I need to do in order to use RelaxedIK in MoveIt?

Kinova Mico URDF and collision obstacles not aligned

I am running the start_here.py demo on Ubuntu 16.04 ROS kinetic with an m1n6s300 Kinova mico arm.

Attached are my URDF, start_here.py file, and config.yaml. All seems fine until step 5b. RViz seems to show the joint angles are offset from the collision objects, as seen in the screenshot below.

screenshot from 2018-08-24 18-12-09

kinova_ik_error.tar.gz

I am sure it's something to do with my configuration/URDF etc., but I can't seem to figure out what. Any hints are welcome. :)

relaxed_ik_rust fails

Hi all,

I am trying the dev branch rust version relaxed_ik_rust on ROS Melodic with UR5 robot. I get the following fail:
...a bunch of what I assume are Rust warnings, then
warning: 118 warnings emitted
Finished dev [optimized + debuginfo] target(s) in 0.07s
Running target/debug/relaxed_ik_node
solver initialized!
thread 'main' panicked at 'attempted to leave type linked_hash_map::Node<yaml::Yaml, yaml::Yaml> uninitialized, which is invalid', /rustc/cb75ad5db02783e8b0222fee363c5f63f7e2cf5b/library/core/src/mem/mod.rs:659:9
[relaxed_ik_node-1] process has finished cleanly
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete

The python version works as intended, I haven't tested the Julia version yet. Any advice?

Additional dependencies

When installing, I also had to get numba and ad with pip install numba and pip install ad. Should these be listed as additional dependencies to install, or are they perhaps normally installed along with one of the other dependencies? They were needed at the collision viewer step.

Knet 1.2.5 version error

Knet version 1.2.5 appears to cause an error during preprocessing in the Julia version of RelaxedIK. I will look into permanently addressing this issue by updating RelaxedIK to be compatible with this new version or potentially switching to Flux. In the meantime, please revert to Knet version 1.2.4.

I have a working python3.8 / ros noetic version

Hi there,

I have updated the code to run on Ubuntu 20 / ros-noetic / python3.8, the updated version is saved here: https://github.com/jstmn/relaxed_ik-ubuntu20/tree/dev-python38-noetic

There were a number of fixes I needed to make, including updating the python version to 3.8, fixing some julia bugs with Knet, and updating the ros joint_state_publisher, among others.

I'm happy to help add my version to this repo, let me know if that is something y'all (the maintainers) are interested in

  • Jeremy

the ur5.urdf

Hi, Daniel Rakita, djrakita:
I'm a student pursuing a PhD degree.
I want to use the ur5.urdf to test the project, but on my computer there is an error:

[ERROR] [1606108638.445549825]: Could not load model 'package://ur_description/meshes/ur5/visual/upperarm.dae' ...

After checking, I think the ur5.urdf contains a tag like

<geometry>
  <mesh filename="package://ur_description/meshes/ur5/visual/upperarm.dae"/>
</geometry>

But I don't have the mesh source.
What can I do? I'm sorry the question seems naive to you.

Gaussian model parameters

Hi!

I found that the model parameters in the paper are n = 1, s = 0, c = 0.2, r = 0.5, which are different from those in the code snippet.

(1) Did I miss some details about model parameter adjustment?

  (2) Also, I was wondering what the value of the objective function generally is at convergence.

Thanks!
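For readers comparing the paper to the code: the loss shape used by the Python objectives takes parameters named t, d, c, f, g, while the paper uses the notation n, s, c, r. A sketch of that function is below; the default values here follow the code snippet in the "lookat" issue later on this page and are not necessarily the repo's tuned defaults.

```python
import math

# Sketch of the "Groove"-style loss: a narrow negative Gaussian well near
# the target t (depth -1, spread c) plus a polynomial term f * (x - t)**g
# that keeps the gradient informative far from the target.
def groove_loss(x_val, t=0.0, d=2.0, c=0.1, f=10.0, g=2.0):
    return (-math.e ** ((-(x_val - t) ** d) / (2.0 * c ** 2))) + f * (x_val - t) ** g

# The loss bottoms out at -1 when the objective term exactly hits its target:
assert abs(groove_loss(0.0) - (-1.0)) < 1e-9
```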

Link to Python Repo Does Not Work

Hi,

I'm super excited to try the Python bindings described in the README update, but the link appears to be broken. Can you point me to the working Python repo?

Thanks so much!

Cannot add box collision collision object

Hi Daniel,

I cannot visualize the box collision object with the collision_viewer, so I am not sure if it has been added correctly. Whenever I attempt to add a box, a cylinder is inserted into one of the robot-link collision objects.
(screenshot attached)

Do you know how I may be able to fix this? Thank you!

Here is my collision configuration file

robot_link_radius: 0.05
sample_states: [[-0.85657, -0.18703, 0.18963, 1.77668, 0.84847, 1.0535], [1.01675, 0.40612, 0.06659, 0.93393, -1.17729, -0.62849], [-0.17209, 0.25656, 0.08641, 0.5847, -0.1266, 1.39353], [0.09171, -0.42917, 0.0411, -0.13787, 0.01052, 0.34674], [-0.69866, 0.67231, 0.00311, -1.37154, 0.87735, -0.39271], [-0.54174, 0.54556, 0.05997, -0.65263, -0.58567, -0.35845], [-1.52783, 0.39143, 0.02623, 1.81542, 1.03184, 0.00192], [-1.03257, -0.35803, 0.15493, -1.37735, 0.28299, 0.49498], [-0.58533, -0.71106, 0.12337, -0.49909, -1.22449, -0.57998], [0.07072, 0.59088, 0.1052, -1.69021, -0.04991, 0.45641], [0.76369, 0.34258, 0.1018, -0.42078, 0.90131, -1.51509], [-0.77521, -0.64356, 0.11783, -1.27076, -1.39859, -1.45367], [-1.13214, -0.54064, 0.13576, 0.58941, -0.36877, 1.44977]]

boxes:
  - name: box1
    parameters: [10,10,0.5] # r
    coordinate_frame: 0
    rotation: [0,3.14,0]
    # translation: [0.250, 0.000, 0]
    translation: [0, 0.000, -0.5]

# spheres:
#   - name: sphere1
#     parameters: .2
#     coordinate_frame: 0
#     rotation: [0,0,0]
#     translation: [0.250, 0.000, 0]
# ellipsoids:
#   - name: ellipse1
#     parameters: [0.1,0.2,0.3]
#     coordinate_frame: 0
#     rotation: [0,0,0]
#     translation: [0, 0.4, 0.2]
# capsules:
#   - name: capsule1
#     parameters: [0.1,0.5]
#     coordinate_frame: 0
#     rotation: [0,0,0]
#     translation: [0.2, 0.2, 0]
# cylinders:
#   - name: cylinder1
#     parameters: [0.1,0.5]
#     coordinate_frame: 0
#     rotation: [0,0,0]
#     translation: [0, 0.2, 0]

"lookat" objective function

Hi Daniel!

I am an undergraduate student working to complete an undergraduate research paper. I would like to apply relaxed_ik to a camera manipulator robot, but I am having trouble formulating the objective function to point the camera at the target. I've been following your work in Remote Telemanipulation with Adapting Viewpoints in Visually Complex Environments, and I am trying to recreate the "lookat" objective function from that paper. I have posted my attempt below.

The position goal I input to the solver is the position of the target, point_to_line_dist is a function that returns the perpendicular distance from the visual target to the viewpoint vector, and I have set the weights to give priority to this objective: weight_priors=(50.0,5.0,0.3,0.3,0.3,1.0,2.0). Currently this function does not work at all; the solution seems to configure a viewing direction that is almost opposite to the desired one. Do you have any pointers on what I may be doing wrong, or on how to implement new objective functions efficiently?

class Position_MultiEE_Obj(Objective):
    # Relies on the surrounding context of relaxed_ik's objectives file:
    # Objective, objectives_ext, np, math, Tf, and Quaternion are assumed imported there.
    def __init__(self, *args): pass
    def isVelObj(self): return False
    def name(self): return 'Position_MultiEE'

    def __call__(self, x, vars):
        if vars.c_boost:
            x_val = objectives_ext.position_multiEE_obj(vars.frames, vars.goal_positions, [1.0, 1.0])
        else:
            x_val_sum = 0.0

            for i, f in enumerate(vars.frames):
                positions = f[0]
                eePos = positions[-1]              # end-effector position
                goal_pos = vars.goal_positions[i]  # visual target position

                z_axis = np.array([0, 0, 1])       # assumed camera viewing axis in the EE frame

                eeMat = f[1][-1]                   # end-effector rotation matrix

                # embed the 3x3 rotation in a 4x4 homogeneous matrix for quaternion_from_matrix
                new_mat = np.zeros((4, 4))
                new_mat[0:3, 0:3] = eeMat
                new_mat[3, 3] = 1

                ee_quat = Tf.quaternion_from_matrix(new_mat)
                ee_rot = Quaternion(ee_quat[3], ee_quat[0], ee_quat[1], ee_quat[2])

                # viewpoint ray: the rotated viewing axis, normalized, extended from the EE
                ee_orientation_vect = ee_rot.rotate(z_axis)
                ee_orientation_vect = ee_orientation_vect / np.linalg.norm(ee_orientation_vect)
                end = 10 * np.array(ee_orientation_vect) + np.array(eePos)

                # perpendicular distance from the target to the viewpoint line
                x_val = Tf.point_to_line_dist(goal_pos, eePos, end)
                x_val_sum += x_val

            x_val = x_val_sum

        # groove-loss parameters
        t = 0.0
        d = 2.0
        c = .1
        f = 10
        g = 2

        if vars.c_boost:
            return objectives_ext.nloss(x_val, t, d, c, f, g)
        else:
            return (-math.e ** ((-(x_val - t) ** d) / (2.0 * c ** 2))) + f * (x_val - t) ** g
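For what it's worth, one common source of an "almost opposite direction" symptom is the assumed camera axis: if the camera actually looks along -z in the end-effector frame rather than +z, the ray points backwards, and a point-to-*line* distance cannot tell the difference, since the perpendicular distance to an infinite line is identical for both directions. A minimal standalone check of the ray geometry (the function name and signature here are hypothetical illustrations, not relaxed_ik API):

```python
import numpy as np

def point_to_ray_dist(p, origin, direction):
    """Perpendicular distance from point p to the ray origin + s*direction, s >= 0.
    Unlike a point-to-line distance, this penalizes targets BEHIND the camera."""
    p, origin = np.asarray(p, float), np.asarray(origin, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    s = np.dot(p - origin, d)           # projection along the viewing direction
    if s < 0.0:                         # target behind the camera: distance to origin
        return float(np.linalg.norm(p - origin))
    return float(np.linalg.norm(p - (origin + s * d)))

# target directly ahead on the ray -> distance 0
print(point_to_ray_dist([0, 0, 5], [0, 0, 0], [0, 0, 1]))   # 0.0
# same target with the REVERSED axis: a line metric would still report 0,
# but the ray metric reports the full 5.0
print(point_to_ray_dist([0, 0, 5], [0, 0, 0], [0, 0, -1]))  # 5.0
```

Swapping a line distance for a ray distance like this is one way to break the front/back symmetry; the other quick experiment is simply flipping the sign of z_axis.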

Additional Dependencies (again?)

It looks like there is a closed issue regarding the "numba" dependency, but I'm still having issues with it.

When I try running the collision viewer with roslaunch relaxed_ik collision_viewer.launch, it says I need numba.
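Numba is installed per Python interpreter, so a frequent cause of this symptom (an educated guess, not a confirmed diagnosis for this repo) is that roslaunch runs a different interpreter than the one numba was pip-installed into. A quick sketch to check whether numba is visible to a given interpreter:

```python
import importlib.util

def has_numba():
    """True if numba is importable in THIS interpreter."""
    return importlib.util.find_spec("numba") is not None

# Run this with the same interpreter roslaunch uses; if it prints False,
# install numba for that interpreter (e.g. pip2 for a Python 2 ROS setup).
print(has_numba())
```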

Inaccurate in following trajectory

My robot needs to follow a spline trajectory. Somehow, there is quite a large offset between the target and gripper_link; at the beginning, the offset is > 2 cm. The first image is at the start of the spline and the second is at the end. Is this error caused by a low position weight factor?
(images: inaccurate tracking at the start of the spline; accurate tracking at the end)

Confusion about the Step 5a.

Hi,
About Step 5a: what exactly is the purpose of selecting the "sample states"? Do the sample states define the collision boundary? Could you please explain it in detail? Thank you!
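My reading of the setup documentation (treat this as a sketch of the idea, not the project's exact procedure) is that the sample states are representative collision-free configurations used as data when the collision cost is calibrated, so they should cover the part of configuration space the robot will actually use. Conceptually, gathering such states amounts to sampling within joint limits; the limits below are made-up placeholders:

```python
import random

# hypothetical joint limits (radians) for a 6-DoF arm
lower = [-3.14, -2.0, -2.5, -3.14, -2.0, -3.14]
upper = [ 3.14,  2.0,  2.5,  3.14,  2.0,  3.14]

def sample_state(rng=random):
    """One random configuration drawn uniformly within the joint limits."""
    return [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]

# in practice each sample would also be screened for self-collision
samples = [sample_state() for _ in range(100)]
```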

I cannot control my two UR5 robots

Hi,
I used RelaxedIK to control just one UR5 and got excellent results.
Now I'd like to use RelaxedIK with a bimanual robot (two UR5s, as in the figure below).

(image: 2021-11-29, initial pose)

As with the single UR5, I ran start_here.py in a ROS Melodic environment using Python 2.7 to generate all the files needed by relaxed_ik_core. Then I transferred the files into a ROS Noetic environment and tried to use them with relaxed_ik_ros1.
Unfortunately, something doesn't work. When I publish a delta equal to zero on the /relaxed_ik/ee_pose_goals topic, the robot should remain in its initial pose, but it moves (as you can see in the following figure).

(image: 2021-11-29, the problem)

I tried to fix the problem, but I failed.

These are the files I used: collision_2ur5.yaml, 2ur5.urdf, start_here.py

I hope you can help me figure out what the problem is and how to solve it.
Thank you in advance for your time.
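One classic ROS pitfall that produces exactly this kind of drift (a guess worth ruling out): a default-constructed geometry_msgs/Pose has quaternion (0, 0, 0, 0), which is not a valid rotation. A true "zero delta" needs the identity quaternion with w = 1, and a bimanual setup needs one such pose per end effector in the goal message. A sketch of what the per-arm goals should contain, using plain dicts standing in for ROS messages:

```python
import numpy as np

def identity_delta_pose():
    """Zero translation plus the IDENTITY rotation (w = 1), i.e. 'stay put'."""
    return {"position": [0.0, 0.0, 0.0],
            "orientation": [0.0, 0.0, 0.0, 1.0]}  # x, y, z, w

# a bimanual robot needs one pose goal per end effector
goals = [identity_delta_pose() for _ in range(2)]
for g in goals:
    # every published orientation must be a unit quaternion
    assert abs(np.linalg.norm(g["orientation"]) - 1.0) < 1e-9
```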

Adjusting static weights for each objective

For a manipulator/task that I am working on, I am trying to decrease the position/orientation error along a target trajectory and relax some of the other constraints (e.g., joint limits, collisions). The paper mentions static weights that the user can change to set the priority of the objectives for different tasks. Where can the weights be set or changed? I found weight_funcs and weight_priors being initialized here, but changing the values or creating a custom weight function seems to result in erratic behavior. I also noticed that in all of the objective functions, g is set to 2 (see here), whereas the paper's Equation 3 shows the power should be 4. I tried changing the value in the hope that it might be related, but it did not seem to help.

How are the static weights set? Is higher better or lower? What scale are the weights on? The default weights for position and orientation are 50 and 40 respectively, yet the other weights are around 1.
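For reference, the per-objective loss used throughout the objective code above (the "groove" loss) is -exp(-(x-t)^d / (2 c^2)) + f (x-t)^g, and the total cost is the weight_priors-weighted sum of these terms, so a higher weight means higher priority for that objective. A small sketch with the parameter values copied from the objective code (total_cost is an illustrative helper, not relaxed_ik API):

```python
import math

def groove_loss(x, t=0.0, d=2.0, c=0.1, f=10.0, g=2.0):
    """Narrow 'groove' near x = t (value -1 at the bottom) plus a
    polynomial term that keeps pulling when x is far from t."""
    return -math.exp(-((x - t) ** d) / (2.0 * c ** 2)) + f * (x - t) ** g

def total_cost(errors, weights):
    """weight_priors-weighted sum over per-objective errors."""
    return sum(w * groove_loss(e) for e, w in zip(errors, weights))

print(groove_loss(0.0))   # -1.0 at the bottom of the groove
# raising one weight raises that objective's share of the total cost
print(total_cost([0.05, 0.3], [50.0, 5.0]))
```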
