# Vocal interactive learning of Clustered Probabilistic Movement Primitives
Python implementation of Probabilistic Movement Primitives (ProMP) with an optional ROS overlay.
## Joint-space primitives with a task-space constraint
The primitives are stored in joint space, but demonstrations are provided both in joint space and in task space, the latter acting as a context. Thanks to this context, task-space goals can be requested from these joint-space primitives. The benefit is that requesting a new task-space goal does not require calling an IK method, which would return demonstration-agnostic joint configurations.
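The underlying idea can be sketched as Gaussian conditioning: a joint Gaussian over (primitive weights, cartesian context) is estimated from demonstrations, and a task-space goal is handled by conditioning the weight distribution on the observed context. The sketch below is a generic illustration of that math with made-up toy numbers, not the package's actual code:

```python
import numpy as np

# Toy joint Gaussian over [weights w (2D), cartesian context c (1D)];
# in the real package these statistics come from the demonstrations.
mu = np.array([0.5, -0.2, 0.3])          # mean of [w1, w2, c]
sigma = np.array([[0.04, 0.01, 0.02],
                  [0.01, 0.05, 0.01],
                  [0.02, 0.01, 0.03]])   # full covariance

# Partition into weight block (w) and context block (c)
mu_w, mu_c = mu[:2], mu[2:]
s_ww = sigma[:2, :2]
s_wc = sigma[:2, 2:]
s_cc = sigma[2:, 2:]

def condition_on_context(c_goal):
    """Posterior over joint-space weights given a task-space goal:
    standard Gaussian conditioning w | c ~ N(mu_w + K (c - mu_c), s_ww - K s_wc^T)
    with gain K = s_wc s_cc^-1."""
    gain = s_wc @ np.linalg.inv(s_cc)
    mean = mu_w + gain @ (np.atleast_1d(c_goal) - mu_c)
    cov = s_ww - gain @ s_wc.T
    return mean, cov

mean, cov = condition_on_context(0.35)
```

Because the conditioned distribution stays inside the demonstrated weight space, the resulting joint trajectories remain demonstration-like, unlike a generic IK answer.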
## Vocal interactive learning and clustering
This work includes an interactive learning aspect that automatically clusters motor primitives based on the standard deviation of their demonstrations. A new primitive is created automatically if the provided demonstration lies outside 2 standard deviations of every existing primitive; otherwise the demonstration is distributed to an existing one.
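The clustering rule can be sketched as follows; the dict-based primitive representation and the seed standard deviation are illustrative assumptions, not the package's internal structures:

```python
import numpy as np

def assign_demo(demo, primitives, n_std=2.0):
    """Distribute a demonstration to an existing primitive or create a new one.

    `primitives` is a list of dicts holding a 'mean' and 'std' trajectory
    (an illustrative structure, not the package's internal representation).
    A demo matches a primitive when it stays within n_std standard
    deviations of that primitive's mean everywhere along the trajectory.
    Returns the index of the cluster the demo was assigned to.
    """
    demo = np.asarray(demo)
    for i, p in enumerate(primitives):
        if np.all(np.abs(demo - p['mean']) <= n_std * p['std']):
            return i  # matched: distribute the demo to cluster i
    # Out of 2 std of all primitives: create a new one seeded by this demo
    # (0.1 is an arbitrary placeholder for the initial std)
    primitives.append({'mean': demo.copy(), 'std': np.full_like(demo, 0.1)})
    return len(primitives) - 1
```

For example, with one primitive of mean 0 and std 0.2, a demonstration hovering at 0.1 is absorbed by it, while one at 1.0 spawns a second cluster.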
## Classes
- `ProMP` is a probabilistic movement primitive, the basic implementation for a single joint
- `NDProMP` is the same thing dealing with N joints
- `QCartProMP` is a "Q + Cartesian" probabilistic movement primitive, i.e. the mother class representing a joint-space primitive on N joints including a cartesian context (position + orientation)
- `InteractiveQCartProMP` owns several `QCartProMP`, each corresponding to a cluster
- `ReplayableInteractiveQCartProMP` has the same interface as `InteractiveQCartProMP`, except that it also dumps all actions (adding a new demo or setting a new goal) into files for further perfect replays of the sequence of actions

Each of these classes has inputs and outputs in the form of Python lists, but also has a ROS overlay providing the same methods using ROS messages instead (mainly `JointTrajectory` and `JointState`).
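To illustrate what the basic single-joint `ProMP` does conceptually, here is a self-contained sketch: each demonstration is projected onto radial basis function weights by least squares, and the primitive is the distribution over those weights. The class and method names below are invented for the sketch and do not mirror the package's API:

```python
import numpy as np

def rbf_basis(n_basis, n_samples, width=0.05):
    """Normalized radial basis functions over the phase z in [0, 1]."""
    z = np.linspace(0, 1, n_samples)
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(z[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)   # shape (n_samples, n_basis)

class SingleJointProMP:
    """Minimal single-joint ProMP sketch: each demo is compressed into
    basis weights; the primitive is the set of weight vectors, whose
    mean (and covariance) summarize the demonstrated motion."""
    def __init__(self, n_basis=10):
        self.n_basis = n_basis
        self.weights = []                          # one weight vector per demo

    def add_demonstration(self, trajectory):
        phi = rbf_basis(self.n_basis, len(trajectory))
        w, *_ = np.linalg.lstsq(phi, np.asarray(trajectory), rcond=None)
        self.weights.append(w)

    def mean_trajectory(self, n_samples=100):
        phi = rbf_basis(self.n_basis, n_samples)
        return phi @ np.mean(self.weights, axis=0)
```

`NDProMP` extends this idea to N joints, and `QCartProMP` additionally attaches the cartesian context described above.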
In general it is a good idea to always use the highest level (`ReplayableInteractiveProMP`) to be able to reproduce and compare results, but `InteractiveProMP` has the same behaviour and API without file persistence: recorded primitives die at program closure.
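The dump-and-replay mechanism can be sketched as an append-only action log: every action (adding a demo, setting a goal) is written to a file as it happens, and a replay re-applies the same sequence. The JSON-lines format and names below are assumptions for illustration, not the package's actual file format:

```python
import json
import os
import tempfile

class ActionLog:
    """Append-only log of actions for perfect replay of a session.
    Illustrative sketch; the real package's file format may differ."""
    def __init__(self, path):
        self.path = path

    def record(self, action, payload):
        # One JSON object per line, appended as the session progresses
        with open(self.path, 'a') as f:
            f.write(json.dumps({'action': action, 'payload': payload}) + '\n')

    def replay(self, handlers):
        """Re-apply each recorded action via handlers[action](payload)."""
        with open(self.path) as f:
            for line in f:
                event = json.loads(line)
                handlers[event['action']](event['payload'])

# Usage: record two actions, then replay them into a list
fd, path = tempfile.mkstemp(suffix='.jsonl')
os.close(fd)
log = ActionLog(path)
log.record('add_demo', {'joints': [0.1, 0.2]})
log.record('set_goal', {'pose': [0.5, 0.0, 0.2]})
replayed = []
log.replay({'add_demo': replayed.append, 'set_goal': replayed.append})
```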
Note: `QCartProMP` and `NDProMP` share the same concept, except that the former allows setting goals in cartesian space and the latter in joint space. Depending on your usage, you must select the primitive you need: `QCartProMP` handles most casual picking situations where the object pose is given in cartesian space; `NDProMP` is much more precise but requires calling an IK first since it only accepts joint-space targets.
## Usage
### Direct usage with Baxter
This package works out-of-the-box on ROS + Baxter with the following dependencies installed:
- numpy, scipy (`pip install numpy scipy`)
- ROS package baxter_commander
- ROS package baxter_pykdl
- C# and ROS packages Kinect 2 server and Python client (only used for vocal interaction, with the server running on an extra Windows 10 machine)
### Use a different robot
Using a different robot only requires replacing the forward kinematics (and optionally inverse kinematics) class(es) to produce answers matching your robot. The only constraint is that the constructors, `get(...)`, and `joints` exist and keep the same signature.
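A replacement FK class could look like the skeleton below, here for a toy planar 2-link arm. The exact constructor and `get(...)` signatures expected by the package are described in its code; the ones used here are assumptions for illustration:

```python
import numpy as np

class MyRobotFK:
    """Forward kinematics stand-in for a custom robot (planar 2-link arm).

    The package expects a constructor, a `get(...)` method and a `joints`
    attribute; the signatures below are illustrative assumptions, so match
    them against the formats described in the package's code.
    """
    def __init__(self, link_lengths=(0.4, 0.3)):
        self.l1, self.l2 = link_lengths
        self.joints = ['shoulder', 'elbow']        # joint names, in order

    def get(self, joint_values):
        """Return the end-effector [x, y] for the given joint angles."""
        q1, q2 = joint_values
        x = self.l1 * np.cos(q1) + self.l2 * np.cos(q1 + q2)
        y = self.l1 * np.sin(q1) + self.l2 * np.sin(q1 + q2)
        return np.array([x, y])
```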
Since this package is both ROS and non-ROS compatible, standard ROS messages for FK and IK are not used, so you must replace or overload these classes while respecting the format of the inputs/outputs described in the code.
The provided IK is optimization-based; you may keep this implementation and override only the FK, or replace both.
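The optimization-based approach amounts to minimizing the cartesian error of the FK over the joint angles. A minimal sketch of the idea, using a toy planar 2-link FK as a stand-in (not the package's own solver):

```python
import numpy as np
from scipy.optimize import minimize

def forward_kinematics(q, l1=0.4, l2=0.3):
    """Planar 2-link FK used as a toy stand-in for the robot's FK."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def inverse_kinematics(target, q0=np.zeros(2)):
    """Optimization-based IK: minimize the squared cartesian error of the FK.
    Any FK callable can be plugged in, which is why only the FK needs to be
    robot-specific."""
    cost = lambda q: np.sum((forward_kinematics(q) - target) ** 2)
    result = minimize(cost, q0, method='L-BFGS-B')
    return result.x

q = inverse_kinematics(np.array([0.5, 0.2]))
```

Joint limits could be added as bounds on the optimizer, which is one reason an optimization-based IK is convenient to keep when swapping robots.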
### Execute
Demonstrations are recorded and clustered at the same time, through one of these methods:
- The IPython notebook test_replayable (`recording` section); or
- The script Vocal interactive learning, which has the vocal interaction aspect, by saying *record a motion*
Movements can be computed and played through one of these methods:
- The IPython notebook test_replayable (`reading` section); or
- The script Vocal interactive learning, which has the vocal interaction aspect, by saying *set a goal*; or
- The script Replay, which replays the exact same sequence of demonstrations and goal-reaching attempts