Published in Robotics: Science and Systems, New Benchmarks, Metrics and Competitions for Robot Learning Workshop, 2018
To date, the design of grasping metrics has largely focused on finding (and calculating) specific features that are (potentially) relevant to grouping or characterizing grasps, and particularly on metrics that might predict success (or failure) of a grasp. These metrics leverage human knowledge of physical interaction and are typically relatively quick to compute. One drawback, however, is that they are heterogeneous (e.g., some combination of number of contact points, force vectors, and positional or joint data), often specific to the robotic hand employed (e.g., joint angles), and rarely take the full object shape into account (often reducing the shape geometry via PCA to a simple 3-vector). From a machine-learning perspective this makes it challenging to combine and learn over mixed data sets. A more subtle challenge is that the metrics (particularly contact points) are unstable, in that very small movements of the geometry can result in big changes in the number and location of contacts.

In this paper we explore an alternative metric which is hand and object agnostic, and very stable with respect to small changes in the geometry of the hand or object. Although computationally more expensive than existing, specialized approaches (and also higher dimensional), we propose that it may be more suited to machine-learning analysis. At heart, this metric simply captures the ways in which the object is free to "twist" or move out of the hand.

Keywords: grasping metric
Authors: Ammar Kothari, Yi Ong, John Morrow, Ravi Balasubramanian, and Cindy Grimm
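To make the closing idea concrete, the sketch below shows one way a "free twist" computation could look in the idealized case of frictionless point contacts: it assembles the standard 6 x k grasp map from contact points and normals, then extracts a basis for the object twists that the contacts leave unresisted. This is a minimal illustration under simplifying assumptions, not the paper's actual metric; the contact data, function names, and the frictionless-contact model are all hypothetical.

```python
import numpy as np
from scipy.linalg import null_space

def grasp_map(points, normals):
    """6 x k grasp map whose columns are unit contact wrenches [n; p x n]
    for frictionless point contacts at positions p with inward normals n."""
    cols = []
    for p, n in zip(points, normals):
        p = np.asarray(p, dtype=float)
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        cols.append(np.concatenate([n, np.cross(p, n)]))
    return np.column_stack(cols)

def free_twist_basis(points, normals):
    """Orthonormal basis for object twists xi = [v; omega] with G^T xi = 0,
    i.e. rigid motions producing zero normal velocity at every contact --
    directions in which the object can translate or "twist" without
    pushing against the hand."""
    G = grasp_map(points, normals)
    return null_space(G.T)  # 6 x d matrix, d = dimension of free motion

if __name__ == "__main__":
    # Two antipodal fingertip contacts on opposite faces of a cube
    # (hypothetical data for illustration):
    pts = [(0.5, 0.0, 0.0), (-0.5, 0.0, 0.0)]
    nrm = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
    basis = free_twist_basis(pts, nrm)
    # Prints (6, 5): only translation along x is resisted; the object is
    # free in the remaining five twist directions.
    print(basis.shape)
```

Note that this nullspace view is deliberately coarse; a hand-and-object-agnostic metric of the kind the abstract describes would characterize escape motions over the full hand/object geometry, not just the instantaneous contact set.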