RLDG: Robotic Generalist Policy Distillation via Reinforcement Learning
Abstract
Recent advances in robotic foundation models have enabled the development of generalist policies that can adapt to diverse tasks. While these models show impressive flexibility, their performance depends heavily on the quality of their training data. In this work, we propose Reinforcement Learning Distilled Generalists (RLDG), a method that leverages reinforcement learning to generate high-quality training data for finetuning generalist policies. Through extensive real-world experiments on precise manipulation tasks such as connector insertion and assembly, we demonstrate that generalist policies trained with RL-generated data consistently outperform those trained on human demonstrations, achieving up to 40% higher success rates while generalizing better to new tasks. We also provide a detailed analysis revealing that this performance gain stems from both optimized action distributions and improved state coverage. Our results suggest that combining task-specific RL with generalist policy distillation offers a promising approach for developing robotic manipulation systems that maintain the flexibility of foundation models while achieving the performance of specialized controllers. Videos and code can be found on our project website: https://generalist-distillation.github.io
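The core loop the abstract describes — run a task-specific RL policy to collect data, then distill it into a generalist via supervised finetuning (behavior cloning) — can be sketched minimally. The sketch below is illustrative only and is not the paper's implementation: the "RL policy" is a hard-coded stand-in, every rollout is treated as successful, and the "generalist" is a linear model fit by least squares rather than a foundation model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a task-specific RL policy (state -> action).
# In RLDG this would be trained with online RL on the real task.
def rl_policy(states):
    return 2.0 * states + 0.1  # pretend this is near-optimal

def rollout(policy, horizon=10):
    """Collect one trajectory of (state, action) pairs."""
    states = rng.normal(size=(horizon, 1))
    return states, policy(states)

# 1) Generate a distillation dataset with the RL policy.
#    (RLDG keeps successful episodes; here all rollouts "succeed".)
all_s, all_a = [], []
for _ in range(50):
    s, a = rollout(rl_policy)
    all_s.append(s)
    all_a.append(a)
S = np.concatenate(all_s)   # (N, 1) states
A = np.concatenate(all_a)   # (N, 1) actions

# 2) "Finetune" the generalist by behavior cloning on the RL data:
#    least-squares fit of action = w * state + b.
X = np.hstack([S, np.ones((len(S), 1))])
W, *_ = np.linalg.lstsq(X, A, rcond=None)

def generalist(state):
    return W[0, 0] * state + W[1, 0]

print(generalist(1.0))  # close to rl_policy's 2.1 on this task
```

The same two-phase structure applies when the generalist is a vision-language-action model: the RL policy's rollouts simply replace human demonstrations in the finetuning dataset.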
Community
Finetune robot generalist models with data generated by RL policies.
The following papers were recommended by the Semantic Scholar API
- SkillMimicGen: Automated Demonstration Generation for Efficient Skill Learning and Deployment (2024)
- π0: A Vision-Language-Action Flow Model for General Robot Control (2024)
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation (2024)
- CAGE: Causal Attention Enables Data-Efficient Generalizable Robotic Manipulation (2024)
- Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning (2024)
- Quantization-Aware Imitation-Learning for Resource-Efficient Robotic Control (2024)
- RT-Affordance: Affordances are Versatile Intermediate Representations for Robot Manipulation (2024)