
Comments (10)

seivazi avatar seivazi commented on May 25, 2024

Hello,

I guess you need to write your own Communicator, e.g. a ROScommunicator, and later your own task. Have a look at https://github.com/kindredresearch/SenseAct/tree/master/docs

Could you share your ROS/Gazebo code?

from senseact.

cambel avatar cambel commented on May 25, 2024

@Murtaza786-oss Hi, that is a cool video you have there.

I have been working with ROS/Gazebo for some time now too, mainly on this repo: https://github.com/cambel/ur3. I am using a UR3, but the changes needed to use a UR5 aren't that significant. I'm also starting to create some gym environments based on openai_ros, so feel free to check it out, and I'd be glad to help you with any questions/issues.


murtazabasu avatar murtazabasu commented on May 25, 2024

@cambel Thank you for replying and for your comment. I have started working on my task with RL using PPO. I looked at the repo you referred to; it seems that you are not using the MoveIt Python interface for your task and are defining the FK and IK manually, while I am using the MoveIt Python interface for motion planning. Right now, I have defined my environment (states, actions and rewards), and the agent (PPO) gives me an action, i.e. joint angles. But for some reason I am facing issues in publishing that joint angle (action) to the MoveIt interface, which should then simulate it in Gazebo. My question is: why didn't you prefer the MoveIt commander Python interface for the simulation, which would actually save a lot of coding?


cambel avatar cambel commented on May 25, 2024

@Murtaza786-oss Yeah, I don't use moveit-python because, on one hand, I found it was a bit slower compared to a direct control approach. On the other hand, since I want the robot to move one step at a time rather than along a predefined trajectory, a motion planner such as MoveIt is not necessary. Instead, I use two different approaches to control the robot, each with some pros and cons. The main controller I use is here: https://github.com/cambel/ur3/blob/master/ur_control/src/ur_control/arm.py (there is one for the gripper too).
It uses the standard ROS FollowJointTrajectory controller, which allows me to define a trajectory with multiple points. The other option is to directly publish to the robot's control node, which allows me to send and overwrite motion commands (e.g. if the robot is still moving but I have a new target goal, I can send a new command and the robot will update its trajectory without stopping).
In any case, I would have liked to save coding, but for me the MoveIt interface didn't quite work either.
I have been using this repo to train an agent with GPS, and recently I managed to make it work with SAC.
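The difference between the two control styles described above can be illustrated with a deliberately minimal, library-free sketch (a 1-DOF toy, not the actual ur_control code): a trajectory controller executes a fixed list of waypoints in order, while direct control re-reads the most recent commanded target every tick, so a new command overwrites the old goal mid-motion.

```python
def follow_trajectory(start, waypoints, step=0.1):
    """Trajectory-style control: move through every waypoint of a
    predefined plan, in order, before accepting anything new."""
    pos, visited = start, []
    for target in waypoints:
        while abs(target - pos) > 1e-9:
            # advance at most `step` per tick toward the current waypoint
            pos += max(-step, min(step, target - pos))
        visited.append(pos)
    return visited

def direct_control(start, targets_over_time, step=0.1):
    """Direct-style control: each tick, move one step toward whatever
    target was commanded most recently -- the latest command wins."""
    pos = start
    for target in targets_over_time:
        pos += max(-step, min(step, target - pos))
    return pos
```

With `direct_control`, sending a new target while the previous motion is still in progress simply redirects the next tick, which mirrors the overwrite behaviour described above.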


murtazabasu avatar murtazabasu commented on May 25, 2024

@cambel Yeah, that makes a lot of sense! Which versions of ROS and Python did you use for your project? I tried to run catkin_make after cloning your repo, but it throws this error:

CMake Error at /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
 Could NOT find PythonLibs: Found unsuitable version "2.7.15+", but required
 is at least "3" (found /usr/lib/x86_64-linux-gnu/libpython2.7.so)


cambel avatar cambel commented on May 25, 2024

@Murtaza786-oss I'm using Python 3 right now, but with a few minor changes it should work with Python 2 as well. Open an issue in my repo if you have any trouble; I'd be happy to help.


murtazabasu avatar murtazabasu commented on May 25, 2024

@cambel Hi, I cannot open a new issue in your repo because I don't see any option for it. By the way,
I have built my agent and it is learning in the environment; the only problem is collision checking. The UR5 arm collides with the surface and objects, which disrupts the entire simulation. Did you by chance develop any contact sensor for the UR5 that publishes a ROS message when the end effector is close to some object? That way I could tell the agent to RESET whenever the collision message reports something. Right now I am using the Z position of the end effector as a substitute: if Z is less than the Z of the table, then RESET. The problem is that the Z of the end effector is only available after the agent's action has been simulated, so if the agent chooses a bad action where the end effector goes and collides with the table, the entire simulation is already disrupted.
I have included a code snippet of the current collision check using the Z of the end effector:


self.reward_coll = 0
self.iterator += 1
coll_check = self._obs_pose1.position.z

if coll_check < 0.25:  # 0.22 is the height of the table
    self.reset_robot2.call()
    self.reward_coll = -0.5
    self.collided += 1
    rospy.sleep(1)

I am also sharing a video of the current training of one of the UR5s; it can be seen that the robot collides with itself and with the table because there is no contact sensor installed:

https://www.dropbox.com/s/b6bd5qjyj1bh9x3/Training_UR5_PPO.mp4?dl=0


cambel avatar cambel commented on May 25, 2024

@Murtaza786-oss I re-uploaded the repository so that now Issues can be created.

I do not have a collision detector for the UR, but I implemented a force/torque sensor at the end effector that you could use virtually to check whether a force is detected. From what I can see in the video, though, it seems that you are using direct joint commands as the robot's actions. From experience, I would recommend you avoid that; it tends not to work, as there is no continuity to the robot's motion. What I do instead of direct joint commands is step joint commands, i.e., instead of joint_command = action, use joint_command = current_joints + action. Then make the actions smaller, limited to something suitable for your task, such as a maximum of 5 cm per step or less. It then becomes easier to compute the forward kinematics of the expected new action and validate the Z position of the end effector before the robot even moves.
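A minimal sketch of the step-command idea, assuming a hypothetical fk_z forward-kinematics helper (MAX_STEP, TABLE_Z and fk_z are illustrative names, not part of the ur3 repo): the policy action is clipped to a small per-joint increment, added to the current joints, and vetoed if the predicted end-effector height would dip below the table.

```python
MAX_STEP = 0.05   # illustrative per-joint limit (rad per step); tune for your task
TABLE_Z = 0.25    # same height threshold as the snippet above

def step_joint_command(current_joints, action, fk_z, max_step=MAX_STEP):
    """Interpret the policy action as a clipped increment on the current
    joints; reject it if fk_z predicts the end effector would go below
    the table -- before the command is ever sent to the robot."""
    delta = [max(-max_step, min(max_step, a)) for a in action]
    candidate = [q + d for q, d in zip(current_joints, delta)]
    if fk_z(candidate) < TABLE_Z:    # validate before the robot moves
        return list(current_joints)  # veto the unsafe action
    return candidate
```

Because the increment is bounded and validated ahead of execution, a bad action is rejected before the simulation can be disrupted, rather than detected after the collision has already happened.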


ZZWang21 avatar ZZWang21 commented on May 25, 2024

@cambel Hi, have you finished the openai_ros setup for the ur3 robot? I am doing a project on a UR5e, and I am stuck coding the task_envs.


cambel avatar cambel commented on May 25, 2024

@ZZWang21 Hi, yes, I have been using it for a while now. You can find a working demo here: https://github.com/cambel/robot-learning-cl-dr

