RESPRECT: Speeding-up Multi-fingered Grasping with Residual Reinforcement Learning
Abstract
Deep Reinforcement Learning (DRL) has proven effective in learning control policies using robotic grippers, but much less practical for solving the problem of grasping with dexterous hands -- especially on real robotic platforms -- due to the high dimensionality of the problem. In this work, we focus on the multi-fingered grasping task with the anthropomorphic hand of the iCub humanoid. We propose the RESidual learning with PREtrained CriTics (RESPRECT) method that, starting from a policy pre-trained on a large set of objects, can learn a residual policy to grasp a novel object in a fraction (~5 times faster) of the timesteps required to train a policy from scratch, without requiring any task demonstration. To our knowledge, this is the first Residual Reinforcement Learning (RRL) approach that learns a residual policy on top of another policy pre-trained with DRL. We exploit some components of the pre-trained policy during residual learning that further speed up the training. We benchmark our results in the iCub simulated environment, and we show that RESPRECT can be effectively used to learn a multi-fingered grasping policy on the real iCub robot. The code to reproduce the experiments is released together with the paper under an open-source license.
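The core idea of residual reinforcement learning described above is that the executed action is the sum of a frozen pre-trained policy's output and a small learned correction. The following is a minimal illustrative sketch of that composition, not the paper's implementation; the function names, the zero-initialized residual, and the `scale` parameter are assumptions for illustration only.

```python
import numpy as np

def residual_action(base_policy, residual_policy, obs, scale=1.0):
    """Compose the final action as: frozen pre-trained action + learned residual.

    base_policy: frozen policy pre-trained on many objects (not updated).
    residual_policy: small policy trained on the novel object; typically
    initialized to output zeros so training starts from the base behavior.
    """
    return base_policy(obs) + scale * residual_policy(obs)

# Toy stand-ins (hypothetical, for illustration):
base = lambda obs: np.tanh(obs)            # frozen pre-trained policy
residual = lambda obs: np.zeros_like(obs)  # residual starts at zero

obs = np.array([0.5, -0.2])
action = residual_action(base, residual, obs)
# At initialization the composed action equals the base policy's action.
```

Because the residual starts at zero, the agent initially behaves exactly like the pre-trained policy and only has to learn the object-specific correction, which is what makes training on a novel object faster than learning from scratch.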