Pose estimation does not run correctly: I get an error saying the model weights and the input are not on the same device.
When I change the device assignment to
device = torch.device("cpu")
it works fine.
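Forcing the CPU works because it puts the input and the weights on the same device, but it gives up the GPU. The usual fix for this class of error is to pick one device and move both the model and the input tensor to it. Below is a minimal sketch of that pattern; the `nn.Conv2d` module is a hypothetical stand-in for the repo's VGG backbone, not the project's actual code:

```python
import torch
import torch.nn as nn

# Choose one device for everything. The traceback's error occurs when the
# input is a torch.cuda.FloatTensor but the weights are CPU FloatTensors.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in model for the VGG backbone used by setup_and_run_model.py.
model = nn.Conv2d(3, 8, kernel_size=3)
model = model.to(device)  # moves all weights to the chosen device

# Move the input batch to the SAME device before calling the model.
image = torch.rand(1, 3, 224, 224).to(device)
output = model(image)

# Input and weights now agree, so F.conv2d no longer raises.
assert output.device.type == device.type
```

In the project's case, the script moves the input to CUDA while the loaded weights stay on the CPU; adding a `model.to(device)` after loading (or keeping everything on the CPU, as above) resolves the mismatch.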
I used the demo Unity project, so I did not complete every step in the 4 READMEs.
[ERROR] [1640807467.034139]: Error processing request: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/impl/tcpros_service.py", line 633, in _handle_request
    response = convert_return_to_response(self.handler(request), self.response_class)
  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/scripts/pose_estimation_script.py", line 96, in pose_estimation_main
    est_position, est_rotation = _run_model(image_path)
  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/scripts/pose_estimation_script.py", line 52, in _run_model
    output = run_model_main(image_path, MODEL_PATH)
  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/src/ur3_moveit/setup_and_run_model.py", line 138, in run_model_main
    output_translation, output_orientation = model(torch.stack(image).reshape(-1, 3, 224, 224))
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/src/ur3_moveit/setup_and_run_model.py", line 54, in forward
    x = self.model_backbone(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torchvision/models/vgg.py", line 43, in forward
    x = self.features(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 423, in forward
    return self._conv_forward(input, self.weight)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 419, in _conv_forward
    return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
Expected behavior: a working pose estimation run.