The UR5e robotic chess player is an autonomous system that uses camera-based visual feedback to drive an industrial collaborative robotic arm in a game of physical chess against a human.
Robotic arms operating on a production line face challenges in detection, decision, and reaction. These challenges are amplified when building a robotic arm that interactively plays chess with a human. Developing this chess-playing system has yielded insights into applying robotic manipulators to manufacturing inspection.
- 3D visual perception based on 2D imaging feedback: able to detect the position and orientation of the chessboard in the workspace on the fly.
  - Limitation: the entire chessboard needs to be inside the camera's view at the standby position, preferably without any pieces on the board. In other words, the robot needs to see the chessboard fully; otherwise the pose estimation may fail.
- Visual situational awareness: able to detect the chessboard state every time a player makes a move.
  - Limitation: the accuracy of the built-in neural network is 97%, so it is not immune to errors, and the detection result at the beginning of the game needs a human check. If the majority of a chess piece sits outside its containing square, the detection will fail.
- Physical manipulation intelligence: able to move every chess piece from a start square to an end square at each step, and to compete against a human player with world-class chess analysis.
  - Limitation: the system does not detect the exact position of a chess piece inside a square; all pieces are gripped in the same way.
- Universal Robot UR5e collaborative robot
- Robotiq Hand-e gripper
- Allied Vision Mako camera (mounted on the robotic arm)
The current software system was developed and tested on Ubuntu 18.04 with ROS Melodic. The software works out of the box with the hardware listed above, but can be modified to support different or additional hardware in future development.
- UR5e controller: driver for the UR5e manipulator.
- Gripper controller: driver for the Robotiq HandE controller.
- AVT camera controller: driver for the AVT camera.
- Manipulation node: motion planning for manipulation tasks.
- Board state detection: running CNN model for piece classification.
- Task planning: high-level task planning node.
- GUI: graphical user interface node.
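To illustrate the kind of geometry the manipulation and task planning nodes have to handle, the sketch below maps a chess square name to planar offsets on the board. This is purely illustrative and not code from the repository: the function name and the board-frame convention (origin at the a1 corner) are assumptions; the 0.043 m square edge matches the chessboard described in the calibration notes later in this README.

```python
# Illustrative sketch, not the repository's code: convert a square name
# such as "e2" into (x, y) offsets of the square's center, measured in
# meters from the a1 corner of the board.

SQUARE_EDGE = 0.043  # meters; edge length of one chessboard square

def square_to_offset(square: str) -> tuple:
    """Map files a-h to x and ranks 1-8 to y, returning the square center."""
    file_idx = ord(square[0]) - ord('a')   # 'a' -> 0 ... 'h' -> 7
    rank_idx = int(square[1]) - 1          # '1' -> 0 ... '8' -> 7
    x = (file_idx + 0.5) * SQUARE_EDGE
    y = (rank_idx + 0.5) * SQUARE_EDGE
    return (x, y)
```

In the real system these offsets would still have to be transformed into the robot base frame using the chessboard pose estimated from the camera.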
Follow the ROS melodic installation guide to install ROS.
Follow Installing and Configuring Your ROS Environment to create a ROS workspace using catkin.
Go to the official website, download Vimba_v5.0_Linux, and extract it in the Home directory.
Inside the folder named VimbaGigETL, run the Install.sh script from the command line. After installation, log off once to activate the driver.
Please use VimbaViewer to test whether your camera has been properly configured and is discoverable. VimbaViewer can be found in the extracted download folder under Tools/Viewer/Bin/x86_64bit. If you cannot open the camera and grab images using VimbaViewer, this ROS wrapper will fail, too.
More information is available in this separate package.
Python 3.6 or newer is required for pytorch and pyqt5.
# check which python3 version is in the system. If you already have python3 >= 3.6, you are good to go.
$ python3 --version
# If the above command shows an error, or you have python3 version < 3.6, install/reinstall python3
$ sudo apt-get install software-properties-common
$ sudo apt-get update
$ sudo apt-get install python3.6
- Python library:
$ python -m pip install --user numpy scipy pytransform3d opencv-python==4.2.0.32
$ python3 -m pip install PyQt5 opencv-python-headless rospkg numpy chess
$ sudo apt-get install ros-melodic-cv-bridge ros-melodic-image-transport ros-melodic-ros-controllers ros-melodic-moveit
- PyTorch:
Follow the PyTorch GET STARTED page to install PyTorch.
- Install dependencies
$ sudo apt-get install python3-pip python-catkin-tools python3-dev python3-numpy
$ sudo pip3 install rospkg catkin_pkg
- Create workspace and clone the package
$ mkdir -p ~/cvbridge_build_ws/src
$ cd ~/cvbridge_build_ws/src
$ git clone -b melodic https://github.com/ros-perception/vision_opencv.git
- Compilation
$ cd ~/cvbridge_build_ws
$ catkin config -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so
$ catkin config --install
$ catkin build cv_bridge
$ cd ~/catkin_ws/src
# clone `avt_camera` package
$ git clone https://github.com/macs-lab/avt_camera.git
# clone `robotiq_hande_ros_driver` package
$ git clone https://github.com/macs-lab/robotiq_hande_ros_driver.git
# clone `robotic_chess_player` package
$ git clone https://github.com/macs-lab/robotic_chess_player.git
# build
$ cd ~/catkin_ws
$ catkin_make
Make sure the cam_IP parameter in the camera launch file (robotic_chess_player/launch/camera_bringup_freerun.launch) matches the actual camera IP address.
To test if the camera driver is working properly, do the following:
- launch camera in free run mode.
$ roslaunch robotic_chess_player camera_bringup_freerun.launch
- use rqt_image_view to see the image output.
$ rosrun rqt_image_view rqt_image_view
Go to gripper_bringup.launch and modify the robot_ip parameter to the actual IP address of the UR robot.
To test if the gripper controller is working properly, launch gripper_bringup.launch and run test.py inside the robotiq_hande_ros_driver package:
# launch gripper driver
$ roslaunch robotiq_hande_ros_driver gripper_bringup.launch
# open a new bash, run
$ rosrun robotiq_hande_ros_driver test.py
# you should see the gripper moving.
This step directly affects the pose estimation accuracy! Redo the calibration whenever you switch cameras or change the camera's focal length.
Use camera_hand_eye_calibration to obtain the camera_hand_eye_calibration.yaml file.
This file contains the camera calibration results as well as the hand-eye pose.
Copy the file into the /robotic_chess_player/config folder.
- Each square of the currently used chessboard is 0.043 meters wide.
- The pose estimation result has some error in the z-direction, so a compensation value must be added in that direction. The current compensation value is -0.156 meters. A more negative value raises the gripper further away from the chessboard surface, and vice versa.
- The square length and the compensation value are manually entered and saved in the camera_hand_eye_calibration.yaml file, in the format shown below.
parameter:
# chessboard square's length
edge: 0.043
# compensation of pose estimation error in z direction
height: -0.156
Please keep this format and adjust the values if a new chessboard is used. Remember, the units are meters.
- The process of obtaining a proper compensation value is elaborated in the Usage section.
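As a sanity check, the two values can be read back with a few lines of Python. This is an illustrative, dependency-free sketch mirroring the format above, not the project's actual loader:

```python
# Illustrative sketch, not the repository's code: read back the `edge`
# and `height` parameters from a fragment formatted like the config file.

CALIB_TEXT = """\
parameter:
# chessboard square's length
edge: 0.043
# compensation of pose estimation error in z direction
height: -0.156
"""

def read_parameters(text: str) -> dict:
    """Parse simple 'key: value' lines, skipping comments and bare keys."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or line.endswith(':'):
            continue
        key, value = line.split(':', 1)
        params[key.strip()] = float(value)
    return params

params = read_parameters(CALIB_TEXT)
```

In practice the project presumably loads this file through its ROS/Python tooling; the point here is only to show which two numbers must be kept consistent with the physical board.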
- Power on the robot, with the robot's base mounted on the workspace surface. Load the installation program, right_arm_ros_upright.installation, on the robot's polyscope. For more detailed information about how to load or create a new installation file, please check the Universal Robot e-Series User Manual, section 21, file manager.
- Open a terminal and initiate the driver by following the manual: Running Universal_Robots_ROS_Driver on a separate machine.
- Run the external control program (external_control.urp) on the robot. Back in the terminal, you should see a message similar to robot is ready to receive control command. If the polyscope does not have external_control.urp in its folder, follow this instruction: Installing a URCap on an e-Series robot.
Open a new terminal, run:
$ source ~/cvbridge_build_ws/install/setup.bash --extend
$ rosrun robotic_chess_player chessboard_state_detection.py
Open a new terminal, run:
$ roslaunch robotic_chess_player entire_system_bringup.launch
A GUI window will show up after finishing step 3; it looks like the image below.
- Before placing the chess pieces on the board, first click Locate Chessboard to estimate the chessboard pose relative to the robotic arm's base.
- Place the chess pieces on the board. The human player can now make a move. Then, click Detect Chessboard to detect the chessboard state. The result will be shown in the left dialog box.
- Enter each falsely detected square and the correct piece type in the dialog box on the right and click Correct Chessboard. This is needed only when an error occurs. The format of the command is: the square in lowercase, then a space, then the correct piece type. Each type of chess piece uses one letter; a capital letter means white and a lowercase letter means black. Piece types: k (king), q (queen), r (rook), n (knight), b (bishop), p (pawn).
- Click Confirm to make the system search for the best chess move and perform it by driving the robotic arm.
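The correction entry format described above is easy to validate before submitting it. The following is a hedged sketch of such a check; the function name and error messages are illustrative, not taken from the repository:

```python
# Illustrative sketch, not the repository's code: validate a correction
# entry of the form "<lowercase square> <piece letter>", e.g. "e4 Q".

PIECE_LETTERS = set('kqrnbp')  # king, queen, rook, knight, bishop, pawn

def parse_correction(entry: str):
    """Return (square, piece) if the entry is well formed, else raise."""
    parts = entry.strip().split()
    if len(parts) != 2:
        raise ValueError("expected '<square> <piece>', e.g. 'e4 Q'")
    square, piece = parts
    if not (len(square) == 2 and square[0] in 'abcdefgh'
            and square[1] in '12345678'):
        raise ValueError("square must be a1..h8 in lowercase")
    if len(piece) != 1 or piece.lower() not in PIECE_LETTERS:
        raise ValueError("piece must be one of k q r n b p (case = color)")
    return square, piece
```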
To close the system, go to every terminal opened in the steps above and hit Ctrl+C.
We built a feature that automatically images the chessboard and chess pieces to collect data for training the neural network. To detect the chessboard state, the system positions the camera vertically, directly above the center of the chessboard at a particular height. The automatic image collection function takes images at the same position.
- Bring up the UR5e robot driver as shown before.
- Roslaunch partial system:
$ roslaunch robotic_chess_player partial_system_bringup.launch
- Bring up rqt:
$ rosrun rqt_service_caller rqt_service_caller
- Change the service caller topic to robot_service, and type in locate chessboard in the expression.
- Place chess pieces on the board. Place only one type and one color of chess piece on the board each time. Every piece type has two pieces per color except the king, so place the two pieces at squares h1 and h2. When collecting data for the king, just put it at square h1.
- Type in auto; followed by the corresponding chess piece type letter in the expression. For example, auto;Q collects the white queen's image data.
As mentioned earlier in Section 5 of the Setting Up section, the pose estimation of the chessboard may require compensation. To find a proper value:
- Follow steps 1 to 4 of the Automatically collecting image data instructions.
- Type in to: followed by the corresponding chessboard square in the expression, for example, to:h1. Before hitting call, slow down the robotic arm's operation speed using the polyscope so that the arm does not damage the chessboard surface if the current compensation value is improper.
- The robotic arm will move and try to touch the surface of the given square. Based on your observation, change the value in the camera_hand_eye_calibration.yaml file.
- Shut down the system and repeat the steps above until a feasible value is found.
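The service expressions used in this and the previous section ('locate chessboard', 'auto;<letter>', 'to:<square>') follow a simple text pattern. The dispatcher below is a hypothetical sketch of parsing that pattern, purely for illustration; the function name and return strings are assumptions, not the actual robot_service implementation:

```python
# Illustrative sketch, not the repository's code: parse the three
# expression formats accepted through rqt_service_caller.

def dispatch(expression: str) -> str:
    expr = expression.strip()
    if expr == 'locate chessboard':
        return 'locate'                 # estimate the chessboard pose
    if expr.startswith('auto;'):
        piece = expr.split(';', 1)[1]
        return 'collect:' + piece       # collect images of that piece type
    if expr.startswith('to:'):
        square = expr.split(':', 1)[1]
        return 'touch:' + square        # touch-test the given square
    raise ValueError('unknown expression: ' + expr)
```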
- neural network model: we use the transfer learning technique to finetune the Pytorch pre-trained Resnet-18 with softmax fully connected layer. The neural network's parameter is saved in the
neural_net
folder. - chess analysis engine: we use the award-winning open-source Stockfish chess engine. The engine is saved in the
chess__ai
folder. - Multiple motivations were drawn from the Gambit project: Matuszek, Cynthia, et al. "Gambit: An autonomous chess-playing robotic system."ย 2011 IEEE International Conference on Robotics and Automation, 2011. The Gambit project used 2D and depth cameras and a robot different from the UR5e used in this project. For detecting where the chessboard is in the image, it first detected the points and then found four chessboard corner points using the depth information and trained the RANSAC algorithm. For detecting the pieces on the chessboard, Gambit first used point clouds and depth information to find which square had points above it, and then it cropped the image and used more than one trained SVM algorithm to classify one image about which type and color this piece was. Our project used one 2D camera without the depth information throughout the process, and used one neural network over point clouds, depth information, and SVM. Another difference is the reserach on different lighting that is elaborated in Mingyu Wang's MS thesis.