A Predictive Model for Tactile Force Estimation using Audio-Tactile Data

Abstract

Robust in-hand manipulation of objects with movable content requires estimating and predicting the contents' motion with enough anticipation to allow time to compensate for the resulting internal torques. Quickly estimating the contents' dynamics is challenging when their properties (e.g., type, amount, dynamics) cannot be observed visually due to occlusions by the robot or the opacity of the container. This is further complicated by the limited computational capabilities of the onboard hardware available for real-time processing and control in robotics. In this work, we develop a simple learning framework that uses echo state networks to predict the torques experienced by the robotic hand with enough anticipation to allow adaptive control, and with sufficient efficiency for real-time prediction without GPU processing. We demonstrate the efficacy of this formulation for tactile force prediction on the Allegro robotic hand equipped with a Tekscan tactile skin, using both material-specific and material-agnostic learned models. While both are effective, the material-specific models achieve higher accuracy owing to the differences in inertial properties between materials. We also develop a prediction model that uses audio feedback to augment the tactile predictions. Adding auditory feedback reduces the prediction error, though it significantly increases the computational cost of the model. We validate this formulation for online prediction on the robotic hand manipulating materials in real time and adapting its grip based on slip detection.
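
The abstract names echo state networks as the predictor but does not give the reservoir size, readout, or feature pipeline. The sketch below is a minimal, CPU-only illustration of the general technique under common assumptions (a fixed random reservoir, leaky-integrator state updates, a ridge-regression readout), with tactile (and optionally audio) feature vectors as inputs and future torque values as targets; the class name, hyperparameters, and interface are hypothetical and not taken from the paper.

```python
import numpy as np

class EchoStateNetwork:
    """Minimal leaky-integrator ESN with a ridge-regression readout (illustrative only)."""

    def __init__(self, n_inputs, n_outputs, n_reservoir=300,
                 spectral_radius=0.9, leak_rate=0.3, ridge=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random input and recurrent weights; only the readout is trained.
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Rescale so the spectral radius controls the reservoir's memory.
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.leak = leak_rate
        self.ridge = ridge
        self.n_reservoir = n_reservoir
        self.W_out = None

    def _collect_states(self, U):
        """Run the reservoir over an input sequence U of shape (T, n_inputs)."""
        x = np.zeros(self.n_reservoir)
        states = np.zeros((len(U), self.n_reservoir))
        for t, u in enumerate(U):
            pre = np.tanh(self.W_in @ u + self.W @ x)
            x = (1.0 - self.leak) * x + self.leak * pre
            states[t] = x
        return states

    def fit(self, U, Y):
        """Train the linear readout: U (T, n_inputs) features, Y (T, n_outputs) torque targets."""
        X = self._collect_states(U)
        A = X.T @ X + self.ridge * np.eye(self.n_reservoir)
        self.W_out = np.linalg.solve(A, X.T @ Y).T
        return self

    def predict(self, U):
        """Predict torque targets for a new feature sequence."""
        X = self._collect_states(U)
        return X @ self.W_out.T
```

Because only the linear readout is trained and inference requires just a few matrix-vector products per time step, a model of this form is consistent with the real-time, GPU-free prediction described in the abstract.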

Publication
Robotics and Automation Letters