This ROS package demonstrates a method of controlling two robot manipulators by mirroring human arm poses in real time.
Manually steering a robot off a pre-programmed path is often arduous or imprecise in practice. Existing human-in-the-loop solutions for industrial robots involve sending jogging commands to a robot driver from a handheld controller or other hardware input device. Precision is lost when the human operator translates their movement intention into joystick or button inputs. The operator also takes on significant mental load, having to extrapolate every action into 3D space from a third-person perspective, which can lead to slower-than-desired operation or human error.
The most precise and instinctive way we humans interact with the physical world around us is by manipulating the limbs of our own bodies. Current master-slave implementations in industry rely on devices that are highly specific to a single robot, non-portable, expensive, and generally unscalable. A cost-free, universal method of control that lets an untrained human operator move the limbs of a robot as if they were an extension of their own body could open new doors in the robotics industry toward novel applications that extend beyond process automation.
- Telerobotics
- Kinesthetic teaching
- Supervised learning
- Ubuntu 18.04 LTS
- ROS Melodic Morenia
- Python >= 3.6
- PyTorch >= 1.6
- NumPy >= 1.17
- OpenCV >= 4.0
- CUDA-capable GPU
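For reference, the Python dependencies above correspond roughly to a pip requirements file along these lines (illustrative only; the authoritative list is pose_estimation/requirements.txt, installed during the build steps below):

torch>=1.6
numpy>=1.17
opencv-python>=4.0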
Step 1: Create a Catkin workspace
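If you do not already have a workspace, one can be created along these lines (a sketch, assuming ROS Melodic is installed and using the catkin_tools workflow that the catkin build command below relies on):

sudo apt-get install python-catkin-tools
mkdir -p catkin_ws/src
cd catkin_ws
catkin init
cd ..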
Step 2: Install ROS-Industrial
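The src/motoman/... path used below assumes the ROS-Industrial Motoman packages are present in your workspace source folder. One way to obtain them (a sketch, assuming the ros-industrial/motoman repository, which provides the SDA10F support and MoveIt config packages) is to clone it alongside this package; rosdep in the build step below pulls the remaining ROS-Industrial dependencies:

git clone https://github.com/ros-industrial/motoman.git catkin_ws/src/motoman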
Build from source into your Catkin workspace:
cd catkin_ws
git clone https://github.com/jasongedev/handsfree-teleop/ src/handsfree_teleop
pip3 install -r src/handsfree_teleop/pose_estimation/requirements.txt
rosdep install -r --from-paths src --ignore-src -y
catkin build -j7
source devel/setup.bash
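As a quick sanity check (illustrative), the packages referenced by the launch commands below should now resolve on your package path:

rospack find motoman_sda10f_moveit_config
rospack find handsfree_teleop_launch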
Optional: to obtain a 100 Hz trajectory update rate as shown in the examples:
echo "$(awk '/robot_interface_simulator.launch/ { print; print "    <param name=\"pub_rate\" value=\"100\" />"; next}1' src/motoman/motoman_sda10f_moveit_config/launch/moveit_planning_execution.launch)" > src/motoman/motoman_sda10f_moveit_config/launch/moveit_planning_execution.launch
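That command inserts a pub_rate parameter line immediately after the line referencing robot_interface_simulator.launch. To confirm the edit took effect, something like:

grep -A1 robot_interface_simulator.launch src/motoman/motoman_sda10f_moveit_config/launch/moveit_planning_execution.launch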
From three separate terminals within your Catkin workspace:
$ roslaunch motoman_sda10f_moveit_config moveit_planning_execution.launch sim:=true
$ roslaunch handsfree_teleop_launch teleop.launch
To stream video input from a webcam:
$ python3 src/handsfree_teleop/pose_estimation/main.py --video /dev/video0
OR
To stream a pre-recorded video:
$ python3 src/handsfree_teleop/pose_estimation/main.py --video {VIDEO_FILEPATH.mp4}
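If /dev/video0 is not the correct device, the available capture devices can be listed with v4l-utils (assuming it is installed or installable via apt):

$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-devices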