Human-Planned Robotic Grasp Ranges: Capture and Validation
Published in AAAI Fall Symposium Series, 2016
Leveraging human grasping skills to teach a robot to perform a manipulation task is appealing, but this approach has several limitations: time-inefficient data capture procedures, limited generalization of the data to other grasps and objects, and an inability to use that data to learn more about how humans perform and evaluate grasps. This paper presents a data capture protocol that partially addresses these deficiencies by asking participants to specify the ranges over which a grasp is valid. The protocol is verified both qualitatively, through online survey questions in which within-range grasps are correctly matched to the nearest extreme grasp of the range, and quantitatively, by showing that grasp ranges from different participants vary little as measured by joint angles and position. We demonstrate that these grasp ranges are valid through testing on a physical robot (93.75% of grasps interpolated from the grasp ranges are successful).
Keywords: grasping, learning from demonstration, grasp range
Authors: Brendan John, Jackson Carter, Javier Ruiz, Sai Krishna Allani, Saurabh Dixit, Cindy Grimm, and Ravi Balasubramanian