bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf | project_page |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=Bq4XOaU4sV | @inproceedings{
he2024bridging,
title={Bridging the Sim-to-Real Gap from the Information Bottleneck Perspective},
author={Haoran He and Peilin Wu and Chenjia Bai and Hang Lai and Lingxiao Wang and Ling Pan and Xiaolin Hu and Weinan Zhang},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=Bq4XOaU4sV}
} | Reinforcement Learning (RL) has recently achieved remarkable success in robotic control. However, most works in RL operate in simulated environments where privileged knowledge (e.g., dynamics, surroundings, terrains) is readily available. Conversely, in real-world scenarios, robot agents usually rely solely on local states (e.g., proprioceptive feedback of robot joints) to select actions, leading to a significant sim-to-real gap. Existing methods address this gap by either gradually reducing the reliance on privileged knowledge or performing a two-stage policy imitation. However, we argue that these methods are limited in their ability to fully leverage the available privileged knowledge, resulting in suboptimal performance. In this paper, we formulate the sim-to-real gap as an information bottleneck problem and therefore propose a novel privileged knowledge distillation method called the Historical Information Bottleneck (HIB). In particular, HIB learns a privileged knowledge representation from historical trajectories by capturing the underlying changeable dynamic information. Theoretical analysis shows that the learned privileged knowledge representation helps reduce the value discrepancy between the oracle and learned policies. Empirical experiments on both simulated and real-world tasks demonstrate that HIB yields improved generalizability compared to previous methods. | Bridging the Sim-to-Real Gap from the Information Bottleneck Perspective | [
"Haoran He",
"Peilin Wu",
"Chenjia Bai",
"Hang Lai",
"Lingxiao Wang",
"Ling Pan",
"Xiaolin Hu",
"Weinan Zhang"
] | Conference | Oral | 2305.18464 | [
"https://github.com/tinnerhrhe/HIB_Policy"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://sites.google.com/view/history-ib |
|
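The HIB entry above frames privileged-knowledge distillation as an information bottleneck. As a rough illustration only (my own sketch, not the authors' released code; the layer sizes, history window, and loss weight `beta` are assumptions), the pattern can be written as a stochastic history encoder whose latent is regressed onto a privileged embedding while a KL term limits how much history information it retains:

```python
# Minimal sketch of privileged distillation with an information-bottleneck-style regularizer,
# loosely in the spirit of the HIB abstract above. All sizes and weights are illustrative.
import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    """Encodes a window of past local states into a stochastic latent."""
    def __init__(self, hist_dim, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hist_dim, 128), nn.ReLU(), nn.Linear(128, 2 * latent_dim))

    def forward(self, history):
        mu, log_std = self.net(history).chunk(2, dim=-1)
        std = log_std.clamp(-5, 2).exp()
        z = mu + std * torch.randn_like(std)   # reparameterized sample
        return z, mu, std

def hib_style_loss(z, mu, std, priv_embedding, beta=1e-3):
    # Distill the privileged embedding into the history latent, while the KL term
    # bottlenecks how much historical information z is allowed to keep.
    distill = (z - priv_embedding.detach()).pow(2).mean()
    kl = (-std.log() + 0.5 * (std.pow(2) + mu.pow(2)) - 0.5).sum(-1).mean()
    return distill + beta * kl

# Usage with random tensors standing in for a batch of trajectories.
enc = HistoryEncoder(hist_dim=10 * 48, latent_dim=32)
priv_proj = nn.Linear(64, 32)                  # hypothetical privileged-state projector
history = torch.randn(256, 10 * 48)            # 10 steps of 48-D proprioception
priv = priv_proj(torch.randn(256, 64))         # privileged info, available only in simulation
z, mu, std = enc(history)
loss = hib_style_loss(z, mu, std, priv)
loss.backward()
```

In a setup like this, the history latent would condition the deployed policy, which never sees the privileged state at test time.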
null | https://openreview.net/forum?id=BmvUg1FIWC | @inproceedings{
wi2024neural,
title={Neural Inverse Source Problem},
author={Youngsun Wi and Miquel Oller and Jayjun Lee and Nima Fazeli},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=BmvUg1FIWC}
} | Reconstructing unknown external source functions is an important perception capability for a large range of robotics domains including manipulation, aerial, and underwater robotics. In this work, we propose a Physics-Informed Neural Network (PINN) based approach for solving the inverse source problems in robotics, jointly identifying unknown source functions and the complete state of a system given partial and noisy observations. Our approach demonstrates several advantages over prior works (Finite Element Methods (FEM) and data-driven approaches): it offers flexibility in integrating diverse constraints and boundary conditions; eliminates the need for complex discretizations (e.g., meshing); easily accommodates gradients from real measurements; and does not limit performance based on the diversity and quality of training data. We validate our method across three simulation and real-world scenarios involving up to 4th order partial differential equations (PDEs), constraints such as Signorini and Dirichlet, and various regression losses including Chamfer distance and L2 norm. | Neural Inverse Source Problem | [
"Youngsun Wi",
"Jayjun Lee",
"Miquel Oller",
"Nima Fazeli"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
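The Neural Inverse Source Problem abstract above describes jointly recovering a source function and the system state with a PINN. Below is a minimal 1-D sketch of that pattern (my own toy example with an assumed Poisson-type PDE -u'' = f, synthetic noisy observations, and Dirichlet boundaries; the paper handles up to 4th-order PDEs and richer constraints):

```python
# Minimal PINN sketch for a 1-D inverse source problem: recover f in -u''(x) = f(x)
# from sparse, noisy observations of u, by jointly fitting networks for u and f.
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
f_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(u_net.parameters()) + list(f_net.parameters()), lr=1e-3)

# Synthetic "measurements": u(x) = sin(pi x) observed at a few noisy points.
x_obs = torch.rand(20, 1)
u_obs = torch.sin(torch.pi * x_obs) + 0.01 * torch.randn_like(x_obs)

for step in range(2000):
    opt.zero_grad()
    # PDE residual on collocation points via automatic differentiation.
    x_col = torch.rand(128, 1, requires_grad=True)
    u = u_net(x_col)
    du = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x_col, create_graph=True)[0]
    pde_loss = (-d2u - f_net(x_col)).pow(2).mean()
    # Data fit on the partial, noisy observations.
    data_loss = (u_net(x_obs) - u_obs).pow(2).mean()
    # Dirichlet boundary conditions u(0) = u(1) = 0.
    x_bc = torch.tensor([[0.0], [1.0]])
    bc_loss = u_net(x_bc).pow(2).mean()
    (pde_loss + data_loss + bc_loss).backward()
    opt.step()
```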
null | https://openreview.net/forum?id=B7Lf6xEv7l | @inproceedings{
huang2024diffusionseeder,
title={DiffusionSeeder: Seeding Motion Optimization with Diffusion for Rapid Motion Planning},
author={Huang Huang and Balakumar Sundaralingam and Arsalan Mousavian and Adithyavairavan Murali and Ken Goldberg and Dieter Fox},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=B7Lf6xEv7l}
} | Running optimization across many parallel seeds by leveraging GPU compute [2] has relaxed the need for a good initialization, but this can fail if the problem is highly non-convex, as all seeds can get stuck in local minima. One such setting is collision-free motion optimization for robot manipulation, where optimization converges quickly on easy problems but struggles in obstacle-dense environments (e.g., a cluttered cabinet or table). In these situations, graph-based planning algorithms are called to obtain seeds, resulting in significant slowdowns. We propose DiffusionSeeder, a diffusion-based approach that generates trajectories to seed motion optimization for rapid robot motion planning. DiffusionSeeder takes the initial depth image observation of the scene and generates high-quality, multi-modal trajectories that are then fine-tuned with a few iterations of motion optimization. We integrate DiffusionSeeder with cuRobo, a GPU-accelerated motion optimization method, to generate the seed trajectories, which results in a 12x speedup on average, and a 36x speedup for more complicated problems, while achieving a 10% higher success rate in partially observed simulation environments. Our results demonstrate the effectiveness of using diverse solutions from a learned diffusion model. Physical experiments on a Franka robot demonstrate the sim2real transfer of DiffusionSeeder to the real robot, with an average success rate of 86% and a planning time of 26 ms, improving on cuRobo with a 51% higher success rate and a 2.5x speedup. The code and the model weights will be available after publication. | DiffusionSeeder: Seeding Motion Optimization with Diffusion for Rapid Motion Planning | [
"Huang Huang",
"Balakumar Sundaralingam",
"Arsalan Mousavian",
"Adithyavairavan Murali",
"Ken Goldberg",
"Dieter Fox"
] | Conference | Poster | 2410.16727 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://diffusion-seeder.github.io/ |
|
null | https://openreview.net/forum?id=B45HRM4Wb4 | @inproceedings{
naughton2024respilot,
title={ResPilot: Teleoperated Finger Gaiting via Gaussian Process Residual Learning},
author={Patrick Naughton and Jinda Cui and Karankumar Patel and Soshi Iba},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=B45HRM4Wb4}
} | Dexterous robot hand teleoperation allows for long-range transfer of human manipulation expertise, and could simultaneously provide a way for humans to teach these skills to robots. However, current methods struggle to reproduce the functional workspace of the human hand, often limiting them to simple grasping tasks. We present a novel method for finger-gaited manipulation with multi-fingered robot hands. Our method provides the operator enhanced flexibility in making contacts by expanding the reachable workspace of the robot hand through residual Gaussian Process learning. We also assist the operator in maintaining stable contacts with the object by allowing them to constrain fingertips of the hand to move in concert. Extensive quantitative evaluations show that our method significantly increases the reachable workspace of the robot hand and enables the completion of novel dexterous finger gaiting tasks. | ResPilot: Teleoperated Finger Gaiting via Gaussian Process Residual Learning | [
"Patrick Naughton",
"Jinda Cui",
"Karankumar Patel",
"Soshi Iba"
] | Conference | Poster | 2409.09140 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://respilot-hri.github.io/ |
|
null | https://openreview.net/forum?id=B2X57y37kC | @inproceedings{
dass2024dual,
title={Dual Policy Manipulation with Information-Seeking Behavior},
author={Shivin Dass and Jiaheng Hu and Ben Abbatematteo and Peter Stone and Roberto Mart{\'\i}n-Mart{\'\i}n},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=B2X57y37kC}
} | Many robot manipulation tasks require active or interactive exploration behavior in order to be performed successfully. Such tasks are ubiquitous in embodied domains, where agents must actively search for the information necessary for each stage of a task, e.g., moving the head of the robot to find information relevant to manipulation, or in multi-robot domains, where one scout robot may search for the information that another robot needs to make informed decisions. We identify these tasks with a new type of problem, factorized Contextual Markov Decision Processes, and propose DISaM, a dual-policy solution composed of an information-seeking policy that explores the environment to find the relevant contextual information and an information-receiving policy that exploits the context to achieve the manipulation goal. This factorization allows us to train both policies separately, using the information-receiving one to provide reward to train the information-seeking policy. At test time, the dual agent balances exploration and exploitation based on the uncertainty the manipulation policy has on what the next best action is. We demonstrate the capabilities of our dual policy solution in five manipulation tasks that require information-seeking behaviors, both in simulation and in the real-world, where DISaM significantly outperforms existing methods. More information at https://robin-lab.cs.utexas.edu/learning2look/. | Learning to Look: Seeking Information for Decision Making via Policy Factorization | [
"Shivin Dass",
"Jiaheng Hu",
"Ben Abbatematteo",
"Peter Stone",
"Roberto Martín-Martín"
] | Conference | Poster | 2410.18964 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://robin-lab.cs.utexas.edu/learning2look/ |
|
null | https://openreview.net/forum?id=AzP6kSEffm | @inproceedings{
xu2024dynamicsguided,
title={Dynamics-Guided Diffusion Model for Sensor-less Robot Manipulator Design},
author={Xiaomeng Xu and Huy Ha and Shuran Song},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=AzP6kSEffm}
} | We present Dynamics-Guided Diffusion Model (DGDM), a data-driven framework for generating task-specific manipulator designs without task-specific training. Given object shapes and task specifications, DGDM generates sensor-less manipulator designs that can blindly manipulate objects towards desired motions and poses using an open-loop parallel motion. This framework 1) flexibly represents manipulation tasks as interaction profiles, 2) represents the design space using a geometric diffusion model, and 3) efficiently searches this design space using the gradients provided by a dynamics network trained without any task information. We evaluate DGDM on various manipulation tasks ranging from shifting/rotating objects to converging objects to a specific pose. Our generated designs outperform optimization-based and unguided diffusion baselines by a relative 31.5\% and 45.3\% in average success rate. With the ability to generate a new design within 0.8s, DGDM facilitates rapid design iteration and enhances the adoption of data-driven approaches for robot mechanism design. Qualitative results are best viewed on our project website https://dgdmcorl.github.io. | Dynamics-Guided Diffusion Model for Sensor-less Robot Manipulator Design | [
"Xiaomeng Xu",
"Huy Ha",
"Shuran Song"
] | Conference | Poster | [
"https://github.com/real-stanford/dgdm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://dgdm-robot.github.io/ |
||
null | https://openreview.net/forum?id=AuJnXGq3AL | @inproceedings{
doshi2024scaling,
title={Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation},
author={Ria Doshi and Homer Rich Walke and Oier Mees and Sudeep Dasari and Sergey Levine},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=AuJnXGq3AL}
} | Modern machine learning systems rely on large datasets to attain broad generalization, and this often poses a challenge in robotic learning, where each robotic platform and task might have only a small dataset. By training a single policy across many different kinds of robots, a robotic learning method can leverage much broader and more diverse datasets, which in turn can lead to better generalization and robustness. However, training a single policy on multi-robot data is challenging because robots can have widely varying sensors, actuators, and control frequencies. We propose CrossFormer, a scalable and flexible transformer-based policy that can consume data from any embodiment. We train CrossFormer on the largest and most diverse dataset to date, 900K trajectories across 20 different robot embodiments. We demonstrate that the same network weights can control vastly different robots, including single and dual arm manipulation systems, wheeled robots, quadcopters, and quadrupeds. Unlike prior work, our model does not require manual alignment of the observation or action spaces. Extensive experiments in the real world show that our method matches the performance of specialist policies tailored for each embodiment, while also significantly outperforming the prior state of the art in cross-embodiment learning. | Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation | [
"Ria Doshi",
"Homer Rich Walke",
"Oier Mees",
"Sudeep Dasari",
"Sergey Levine"
] | Conference | Oral | 2408.11812 | [
"https://github.com/rail-berkeley/crossformer"
] | https://huggingface.co/papers/2408.11812 | 0 | 4 | 2 | 5 | [
"rail-berkeley/crossformer"
] | [] | [] | [
"rail-berkeley/crossformer"
] | [] | [] | 1 | https://crossformer-model.github.io/ |
null | https://openreview.net/forum?id=AsbyZRdqPv | @inproceedings{
skand2024simple,
title={Simple Masked Training Strategies Yield Control Policies That Are Robust to Sensor Failure},
author={Skand Skand and Bikram Pandit and Chanho Kim and Li Fuxin and Stefan Lee},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=AsbyZRdqPv}
} | Sensor failure is common when robots are deployed in the real world, as sensors naturally wear out over time. Such failures can lead to catastrophic outcomes, including damage to the robot from unexpected robot behaviors such as falling during walking. Previous work has tried to address this problem by recovering missing sensor values from the history of states or by adapting learned control policies to handle corrupted sensors through fine-tuning during deployment.
In this work, we propose training reinforcement learning (RL) policies that are robust to sensory failures. We use a multimodal encoder designed to account for these failures and a training strategy that randomly drops a subset of sensor modalities, similar to missing observations caused by failed sensors. We conduct evaluations across multiple tasks (bipedal locomotion and robotic manipulation) with varying robot embodiments in both simulation and the real world to demonstrate the effectiveness of our approach. Our results show that the proposed method produces robust RL policies that handle failures in both low-dimensional proprioceptive and high-dimensional visual modalities without a significant increase in training time or decrease in sample efficiency, making it a promising solution for learning RL policies robust to sensory failures. | Simple Masked Training Strategies Yield Control Policies That Are Robust to Sensor Failure | [
"Skand Skand",
"Bikram Pandit",
"Chanho Kim",
"Li Fuxin",
"Stefan Lee"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://pvskand.github.io/projects/RME |
||
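The masked-training entry above randomly drops sensor modalities during training so the policy tolerates failed sensors at deployment. A minimal sketch of that idea (assumed shapes, drop rate, and a learned "missing" token; not the authors' implementation):

```python
# Minimal sketch of modality masking for sensor-failure robustness.
import torch
import torch.nn as nn

class MaskedMultimodalEncoder(nn.Module):
    def __init__(self, modality_dims, embed_dim=128, drop_prob=0.2):
        super().__init__()
        self.encoders = nn.ModuleList([nn.Linear(d, embed_dim) for d in modality_dims])
        self.missing_tokens = nn.Parameter(torch.zeros(len(modality_dims), embed_dim))
        self.drop_prob = drop_prob

    def forward(self, modalities, failed=None):
        # modalities: list of tensors, one per sensor, each (batch, dim).
        # failed: optional list of bools marking sensors known to have failed at deployment.
        batch = modalities[0].shape[0]
        feats = []
        for i, (enc, x) in enumerate(zip(self.encoders, modalities)):
            token = self.missing_tokens[i].expand(batch, -1)
            if failed is not None and failed[i]:
                f = token                      # sensor failed at deployment: use the token
            else:
                f = enc(x)
                if self.training:              # randomly simulate failures during training
                    keep = (torch.rand(batch, 1) > self.drop_prob).float()
                    f = keep * f + (1 - keep) * token
            feats.append(f)
        return torch.cat(feats, dim=-1)        # fused observation for the policy head

encoder = MaskedMultimodalEncoder(modality_dims=[48, 512])  # e.g. proprioception + vision features
obs = encoder([torch.randn(8, 48), torch.randn(8, 512)])
```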
null | https://openreview.net/forum?id=AhEE5wrcLU | @inproceedings{
triest2024velociraptor,
title={Velociraptor: Leveraging Visual Foundation Models for Label-Free, Risk-Aware Off-Road Navigation},
author={Samuel Triest and Matthew Sivaprakasam and Shubhra Aich and David Fan and Wenshan Wang and Sebastian Scherer},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=AhEE5wrcLU}
} | Traversability analysis in off-road regimes is a challenging task that requires understanding of multi-modal inputs such as camera and LiDAR. These measurements are often sparse, noisy, and difficult to interpret, particularly in the off-road setting. Existing systems are very engineering-intensive, often requiring hand-tuning of traversability rules and manual annotation of semantic labels. Furthermore, existing methods for analyzing traversability risk and uncertainty are computationally expensive or not well-calibrated. We propose Velociraptor, a traversability analysis system that performs [veloci]ty-informed, [r]isk-[a]ware [p]erception and [t]raversability for [o]ff-[r]oad driving without any human annotations. We achieve this via the use of visual foundation models (VFMs) and geometric mapping to produce a rich visual-geometric representation of the robot's local environment. We then leverage this representation to produce costmaps, speedmaps, and uncertainty maps using state-of-the-art fully self-supervised techniques. Our approach enables intelligent high-speed off-road navigation with zero human annotation, and with about forty minutes of expert data, outperforms several geometric and semantic traversability baselines, both in offline and real-world robot trials across multiple challenging off-road sites. | Velociraptor: Leveraging Visual Foundation Models for Label-Free, Risk-Aware Off-Road Navigation | [
"Samuel Triest",
"Matthew Sivaprakasam",
"Shubhra Aich",
"David Fan",
"Wenshan Wang",
"Sebastian Scherer"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
null | https://openreview.net/forum?id=AGG1zlrrMw | @inproceedings{
wang2024neural,
title={Neural Attention Field: Emerging Point Relevance in 3D Scenes for One-Shot Dexterous Grasping},
author={Qianxu Wang and Congyue Deng and Tyler Ga Wei Lum and Yuanpei Chen and Yaodong Yang and Jeannette Bohg and Yixin Zhu and Leonidas Guibas},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=AGG1zlrrMw}
} | One-shot transfer of dexterous grasps to novel scenes with object and context variations has been a challenging problem. While distilled feature fields from large vision models have enabled semantic correspondences across 3D scenes, their features are point-based and restricted to object surfaces, limiting their capability of modeling complex semantic feature distributions for hand-object interactions. In this work, we propose the *neural attention field* for representing semantic-aware dense feature fields in the 3D space by modeling inter-point relevance instead of individual point features. Core to it is a transformer decoder that computes the cross-attention between any 3D query point with all the scene points, and provides the query point feature with an attention-based aggregation. We further propose a self-supervised framework for training the transformer decoder from only a few 3D pointclouds without hand demonstrations. Post-training, the attention field can be applied to novel scenes for semantics-aware dexterous grasping from one-shot demonstration. Experiments show that our method provides better optimization landscapes by encouraging the end-effector to focus on task-relevant scene regions, resulting in significant improvements in success rates on real robots compared with the feature-field-based methods. | Neural Attention Field: Emerging Point Relevance in 3D Scenes for One-Shot Dexterous Grasping | [
"Qianxu Wang",
"Congyue Deng",
"Tyler Ga Wei Lum",
"Yuanpei Chen",
"Yaodong Yang",
"Jeannette Bohg",
"Yixin Zhu",
"Leonidas Guibas"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
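The Neural Attention Field abstract above centers on cross-attention between a 3D query point and all scene points. A minimal sketch of that building block (assumed feature dimension and positional embedding; the paper's full decoder and self-supervised training are not reproduced here):

```python
# Minimal sketch: a query 3D point cross-attends to all scene points and receives
# an attention-weighted feature, giving per-query relevance over the scene.
import torch
import torch.nn as nn

class PointCrossAttention(nn.Module):
    def __init__(self, feat_dim=64, n_heads=4):
        super().__init__()
        self.pos_embed = nn.Linear(3, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)

    def forward(self, query_xyz, scene_xyz, scene_feats):
        # query_xyz: (B, Q, 3); scene_xyz: (B, N, 3); scene_feats: (B, N, F)
        q = self.pos_embed(query_xyz)
        kv = scene_feats + self.pos_embed(scene_xyz)
        out, attn_weights = self.attn(q, kv, kv)   # attention weights = point relevance
        return out, attn_weights

field = PointCrossAttention()
feat, w = field(torch.randn(1, 16, 3), torch.randn(1, 1024, 3), torch.randn(1, 1024, 64))
```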
null | https://openreview.net/forum?id=AEq0onGrN2 | @inproceedings{
abou-chakra2024physically,
title={Physically Embodied Gaussian Splatting: A Realtime Correctable World Model for Robotics},
author={Jad Abou-Chakra and Krishan Rana and Feras Dayoub and Niko Suenderhauf},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=AEq0onGrN2}
} | For robots to robustly understand and interact with the physical world, it is highly beneficial to have a comprehensive representation -- modelling geometry, physics, and visual observations -- that informs perception, planning, and control algorithms. We propose a novel dual "Gaussian-Particle" representation that models the physical world while (i) enabling predictive simulation of future states and (ii) allowing online correction from visual observations in a dynamic world. Our representation comprises particles that capture the geometrical aspect of objects in the world and can be used alongside a particle-based physics system to anticipate physically plausible future states. Attached to these particles are 3D Gaussians that render images from any viewpoint through a splatting process thus capturing the visual state. By comparing the predicted and observed images, our approach generates "visual forces" that correct the particle positions while respecting known physical constraints. By integrating predictive physical modeling with continuous visually-derived corrections, our unified representation reasons about the present and future while synchronizing with reality. We validate our approach on 2D and 3D tracking tasks as well as photometric reconstruction quality. Videos are found at https://embodied-gaussians.github.io/ | Physically Embodied Gaussian Splatting: A Visually Learnt and Physically Grounded 3D Representation for Robotics | [
"Jad Abou-Chakra",
"Krishan Rana",
"Feras Dayoub",
"Niko Suenderhauf"
] | Conference | Oral | [
""
] | https://huggingface.co/papers/2406.10788 | 0 | 1 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | https://embodied-gaussians.github.io/ |
|
null | https://openreview.net/forum?id=A6ikGJRaKL | @inproceedings{
chen2024korol,
title={{KOROL}: Learning Visualizable Object Feature with Koopman Operator Rollout for Manipulation},
author={Hongyi Chen and ABULIKEMU ABUDUWEILI and Aviral Agrawal and Yunhai Han and Harish Ravichandar and Changliu Liu and Jeffrey Ichnowski},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=A6ikGJRaKL}
} | Learning dexterous manipulation skills presents significant challenges due to complex nonlinear dynamics that underlie the interactions between objects and multi-fingered hands. Koopman operators have emerged as a robust method for modeling such nonlinear dynamics within a linear framework.
However, current methods rely on runtime access to ground-truth (GT) object states, making them unsuitable for vision-based practical applications.
Unlike image-to-action policies that implicitly learn visual features for control, we use a dynamics model, specifically the Koopman operator, to learn visually interpretable object features critical for robotic manipulation within a scene.
We construct a Koopman operator using object features predicted by a feature extractor and utilize it to auto-regressively advance system states. We train the feature extractor to embed scene information into object features, thereby enabling the accurate propagation of robot trajectories.
We evaluate our approach on simulated and real-world robot tasks, with results showing that it outperformed the model-based imitation learning NDP by 1.08$\times$ and the image-to-action Diffusion Policy by 1.16$\times$. The results suggest that our method maintains task success rates with learned features and extends applicability to real-world manipulation without GT object states. Project video and code are available at: https://github.com/hychen-naza/KOROL. | KOROL: Learning Visualizable Object Feature with Koopman Operator Rollout for Manipulation | [
"Hongyi Chen",
"ABULIKEMU ABUDUWEILI",
"Aviral Agrawal",
"Yunhai Han",
"Harish Ravichandar",
"Changliu Liu",
"Jeffrey Ichnowski"
] | Conference | Poster | 2407.00548 | [
"https://github.com/hychen-naza/KOROL"
] | https://huggingface.co/papers/2407.00548 | 0 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | |
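The KOROL abstract above advances system states auto-regressively with a Koopman operator over learned object features. A minimal numpy sketch of the generic Koopman recipe (a hand-chosen lifting and synthetic data stand in for the learned feature extractor):

```python
# Minimal Koopman sketch: fit a linear operator K on lifted features z_t so that
# z_{t+1} ~= K z_t, then roll the dynamics out auto-regressively.
import numpy as np

def lift(x):
    # Hand-chosen lifting; in KOROL the features come from a learned extractor instead.
    return np.concatenate([x, np.sin(x), np.cos(x)], axis=-1)

# Synthetic trajectory data: (T, state_dim)
rng = np.random.default_rng(0)
states = np.cumsum(0.1 * rng.standard_normal((500, 4)), axis=0)

Z_now, Z_next = lift(states[:-1]), lift(states[1:])
# Least-squares Koopman operator: solve Z_now @ K^T = Z_next.
K_T, *_ = np.linalg.lstsq(Z_now, Z_next, rcond=None)
K = K_T.T

def rollout(x0, steps):
    z = lift(x0)
    traj = [z]
    for _ in range(steps):              # advance the lifted state linearly
        z = K @ z
        traj.append(z)
    return np.stack(traj)[:, :4]        # first block of the lifting is the raw state

pred = rollout(states[0], steps=50)
```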
null | https://openreview.net/forum?id=A1hpY5RNiH | @inproceedings{
burns2024what,
title={What Makes Pre-Trained Visual Representations Successful for Robust Manipulation?},
author={Kaylee Burns and Zach Witzel and Jubayer Ibn Hamid and Tianhe Yu and Chelsea Finn and Karol Hausman},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=A1hpY5RNiH}
} | Inspired by the success of transfer learning in computer vision, roboticists have investigated visual pre-training as a means to improve the learning efficiency and generalization ability of policies learned from pixels. To that end, past work has favored large object interaction datasets, such as first-person videos of humans completing diverse tasks, in pursuit of manipulation-relevant features. Although this approach improves the efficiency of policy learning, it remains unclear how reliable these representations are in the presence of distribution shifts that arise commonly in robotic applications. Surprisingly, we find that visual representations designed for control tasks do not necessarily generalize under subtle changes in lighting and scene texture or the introduction of distractor objects. To understand what properties _do_ lead to robust representations, we compare the performance of 15 pre-trained vision models under different visual appearances. We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models. The rank order induced by this metric is more predictive than metrics that have previously guided generalization research within computer vision and machine learning, such as downstream ImageNet accuracy, in-domain accuracy, or shape-bias as evaluated by cue-conflict performance. We test this finding extensively on a suite of distribution shifts in ten tasks across two simulated manipulation environments. On the ALOHA setup, segmentation score predicts real-world performance after offline training with 50 demonstrations. | What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? | [
"Kaylee Burns",
"Zach Witzel",
"Jubayer Ibn Hamid",
"Tianhe Yu",
"Chelsea Finn",
"Karol Hausman"
] | Conference | Poster | 2312.12444 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://kayburns.github.io/segmentingfeatures/ |
|
null | https://openreview.net/forum?id=9jJP2J1oBP | @inproceedings{
nguyen2024leveraging,
title={Leveraging Mutual Information for Asymmetric Learning under Partial Observability},
author={Hai Huu Nguyen and Long Dinh Van The and Christopher Amato and Robert Platt},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=9jJP2J1oBP}
} | Even though partial observability is prevalent in robotics, most reinforcement learning studies avoid it due to the difficulty of learning a policy that can efficiently memorize past events and seek information. Fortunately, in many cases, learning can be done in an asymmetric setting where states are available during training but not during execution. Prior studies often leverage the state to indirectly influence the training of a history-based actor (actor-critic methods) or a history-based critic (value-based methods). Instead, we propose using state-observation and state-history mutual information to improve the agent's architecture and ability to seek information and memorize efficiently through intrinsic rewards and an auxiliary task. Our method outperforms strong baselines through extensive experiments and achieves successful sim-to-real transfers to a real robot. | Leveraging Mutual Information for Asymmetric Learning under Partial Observability | [
"Hai Huu Nguyen",
"Long Dinh Van The",
"Christopher Amato",
"Robert Platt"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://sites.google.com/view/mi-asym-pomdp |
||
null | https://openreview.net/forum?id=9iG3SEbMnL | @inproceedings{
huang2024rekep,
title={ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation},
author={Wenlong Huang and Chen Wang and Yunzhu Li and Ruohan Zhang and Li Fei-Fei},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=9iG3SEbMnL}
} | Representing robotic manipulation tasks as constraints that associate the robot and the environment is a promising way to encode desired robot behaviors. However, it remains unclear how to formulate the constraints such that they are 1) versatile to diverse tasks, 2) free of manual labeling, and 3) optimizable by off-the-shelf solvers to produce robot actions in real-time. In this work, we introduce Relational Keypoint Constraints (ReKep), a visually-grounded representation for constraints in robotic manipulation. Specifically, ReKep are expressed as Python functions mapping a set of 3D keypoints in the environment to a numerical cost. We demonstrate that by representing a manipulation task as a sequence of Relational Keypoint Constraints, we can employ a hierarchical optimization procedure to solve for robot actions (represented by a sequence of end-effector poses in SE(3)) with a perception-action loop at a real-time frequency. Furthermore, in order to circumvent the need for manual specification of ReKep for each new task, we devise an automated procedure that leverages large vision models and vision-language models to produce ReKep from free-form language instructions and RGB-D observation. We present system implementations on a mobile single-arm platform and a stationary dual-arm platform that can perform a large variety of manipulation tasks, featuring multi-stage, in-the-wild, bimanual, and reactive behaviors, all without task-specific data or environment models. | ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation | [
"Wenlong Huang",
"Chen Wang",
"Yunzhu Li",
"Ruohan Zhang",
"Li Fei-Fei"
] | Conference | Poster | 2409.01652 | [
"https://github.com/huangwl18/ReKep"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | rekep-robot.github.io |
|
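The ReKep abstract above states that constraints are expressed as Python functions mapping 3D keypoints to a numerical cost, which a solver then minimizes to produce end-effector poses. A small illustrative example of that interface (the constraint, keypoint indices, and toy forward model are hypothetical; the paper uses a hierarchical solver over SE(3) pose sequences rather than a single Nelder-Mead call):

```python
# Illustrative ReKep-style constraint: a Python function from tracked 3D keypoints to a cost,
# minimized over a candidate end-effector position with an off-the-shelf optimizer.
import numpy as np
from scipy.optimize import minimize

def constraint_cup_above_saucer(keypoints):
    """keypoints[0] = cup rim, keypoints[1] = saucer center (hypothetical indices)."""
    cup, saucer = keypoints[0], keypoints[1]
    horizontal = np.linalg.norm(cup[:2] - saucer[:2])      # align in the x-y plane
    height = max(0.0, 0.05 - (cup[2] - saucer[2]))          # keep the cup at least 5 cm above
    return horizontal + height

def rollout_keypoints(ee_pos, keypoints):
    # Toy forward model: the grasped cup keypoint rigidly follows the end-effector.
    moved = keypoints.copy()
    moved[0] = ee_pos
    return moved

keypoints = np.array([[0.30, 0.10, 0.20],    # cup rim
                      [0.45, 0.00, 0.10]])   # saucer center

cost = lambda ee: constraint_cup_above_saucer(rollout_keypoints(ee, keypoints))
result = minimize(cost, x0=keypoints[0], method="Nelder-Mead")
print("target end-effector position:", result.x)
```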
null | https://openreview.net/forum?id=9dsBQhoqVr | @inproceedings{
akcin2024fleet,
title={Fleet Supervisor Allocation: A Submodular Maximization Approach},
author={Oguzhan Akcin and Ahmet Ege Tanriverdi and Kaan Kale and Sandeep P. Chinchali},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=9dsBQhoqVr}
} | In real-world scenarios, the data collected by robots in diverse and unpredictable environments is crucial for enhancing their perception and decision-making models. This data is predominantly collected under human supervision, particularly through imitation learning (IL), where robots learn complex tasks by observing human supervisors. However, the deployment of multiple robots and supervisors to accelerate the learning process often leads to data redundancy and inefficiencies, especially as the scale of robot fleets increases. Moreover, the reliance on teleoperation for supervision introduces additional challenges due to potential network connectivity issues.
To address these issues in data collection, we introduce an Adaptive Submodular Allocation policy, ASA, designed for efficient human supervision allocation within multi-robot systems under uncertain connectivity conditions. Our approach reduces data redundancy by balancing the informativeness and diversity of data collection, and is capable of accommodating connectivity variances. We evaluate the effectiveness of ASA in simulations with 100 robots across four different environments and various network settings, including a real-world teleoperation scenario over a 5G network. We train and test our policy, ASA, and state-of-the-art policies utilizing NVIDIA's Isaac Gym. Our results show that ASA enhances the return on human effort by up to $3.37\times$, outperforming current baselines in all simulated scenarios and providing robustness against connectivity disruptions. | Fleet Supervisor Allocation: A Submodular Maximization Approach | [
"Oguzhan Akcin",
"Ahmet Ege Tanriverdi",
"Kaan Kale",
"Sandeep P. Chinchali"
] | Conference | Poster | [
"https://github.com/UTAustin-SwarmLab/Fleet-Supervisor-Allocation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
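The fleet-allocation abstract above casts supervisor allocation as (adaptive) submodular maximization. The classic greedy routine behind such formulations looks like the sketch below (a toy facility-location objective and random embeddings are assumptions; ASA additionally handles adaptivity and connectivity uncertainty):

```python
# Greedy submodular maximization sketch: repeatedly supervise the robot whose addition
# gives the largest marginal gain of a monotone submodular coverage objective.
import numpy as np

def coverage(selected, similarity):
    # Facility-location objective: how well supervised robots "cover" the whole fleet.
    if not selected:
        return 0.0
    return similarity[:, selected].max(axis=1).sum()

def greedy_allocation(similarity, budget):
    selected = []
    for _ in range(budget):
        candidates = [i for i in range(similarity.shape[0]) if i not in selected]
        gains = [coverage(selected + [i], similarity) - coverage(selected, similarity)
                 for i in candidates]
        selected.append(candidates[int(np.argmax(gains))])
    return selected

rng = np.random.default_rng(0)
emb = rng.standard_normal((100, 16))        # per-robot observation embeddings
sim = emb @ emb.T
sim = sim - sim.min()                       # keep similarities non-negative (submodularity)
print(greedy_allocation(sim, budget=5))     # indices of robots to supervise
```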
null | https://openreview.net/forum?id=9aZ4ehSTRc | @inproceedings{
sleiman2024guided,
title={Guided Reinforcement Learning for Robust Multi-Contact Loco-Manipulation},
author={Jean Pierre Sleiman and Mayank Mittal and Marco Hutter},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=9aZ4ehSTRc}
} | Reinforcement learning (RL) has shown remarkable proficiency in developing robust control policies for contact-rich applications. However, it typically requires meticulous Markov Decision Process (MDP) design tailored to each task and robotic platform. This work addresses this challenge by creating a systematic approach to behavior synthesis and control for multi-contact loco-manipulation.
We define a task-independent MDP formulation to learn robust RL policies using a single demonstration (per task) generated from a fast model-based trajectory optimization method. Our framework is validated on diverse real-world tasks, such as navigating spring-loaded doors and manipulating heavy dishwashers. The learned behaviors can handle dynamic uncertainties and external disturbances, showcasing recovery maneuvers, such as re-grasping objects during execution. Finally, we successfully transfer the policies to a real robot, demonstrating the approach's practical viability. | Guided Reinforcement Learning for Robust Multi-Contact Loco-Manipulation | [
"Jean Pierre Sleiman",
"Mayank Mittal",
"Marco Hutter"
] | Conference | Oral | 2410.13817 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://leggedrobotics.github.io/guided-rl-locoma/ |
|
null | https://openreview.net/forum?id=9XV3dBqcfe | @inproceedings{
yang2024generalized,
title={Generalized Animal Imitator: Agile Locomotion with Versatile Motion Prior},
author={Ruihan Yang and Zhuoqun Chen and Jianhan Ma and Chongyi Zheng and Yiyu Chen and Quan Nguyen and Xiaolong Wang},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=9XV3dBqcfe}
} | The agility of animals, particularly in complex activities such as running, turning, jumping, and backflipping, stands as an exemplar for robotic system design. Transferring this suite of behaviors to legged robotic systems introduces essential inquiries: How can a robot be trained to learn multiple locomotion behaviors simultaneously? How can the robot execute these tasks with a smooth transition? How to integrate these skills for wide-range applications? This paper introduces the Versatile Instructable Motion prior (VIM) – a Reinforcement Learning framework designed to incorporate a range of agile locomotion tasks suitable for advanced robotic applications. Our framework enables legged robots to learn diverse agile low-level skills by imitating animal motions and manually designed motions. Our Functionality reward guides the robot's ability to adopt varied skills, and our Stylization reward ensures that robot motions align with reference motions. Our evaluations of the VIM framework span both simulation environments and real-world deployment. To the best of our knowledge, this is the first work that allows a robot to concurrently learn diverse agile locomotion skills using a single learning-based controller in the real world. | Generalized Animal Imitator: Agile Locomotion with Versatile Motion Prior | [
"Ruihan Yang",
"Zhuoqun Chen",
"Jianhan Ma",
"Chongyi Zheng",
"Yiyu Chen",
"Quan Nguyen",
"Xiaolong Wang"
] | Conference | Poster | 2310.01408 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://rchalyang.github.io/VIM/ |
|
null | https://openreview.net/forum?id=9HkElMlPbU | @inproceedings{
ma2024contrastive,
title={Contrastive Imitation Learning for Language-guided Multi-Task Robotic Manipulation},
author={Teli Ma and Jiaming Zhou and Zifan Wang and Ronghe Qiu and Junwei Liang},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=9HkElMlPbU}
} | Developing robots capable of executing various manipulation tasks, guided by natural language instructions and visual observations of intricate real-world environments, remains a significant challenge in robotics. Such robot agents need to understand linguistic commands and distinguish between the requirements of different tasks. In this work, we present $\mathtt{\Sigma\mbox{-}agent}$, an end-to-end imitation learning agent for multi-task robotic manipulation. $\mathtt{\Sigma\mbox{-}agent}$ incorporates contrastive Imitation Learning (contrastive IL) modules to strengthen vision-language and current-future representations. An effective and efficient multi-view querying Transformer (MVQ-Former) for aggregating representative semantic information is introduced. $\mathtt{\Sigma\mbox{-}agent}$ shows substantial improvement over state-of-the-art methods under diverse settings in 18 RLBench tasks, surpassing RVT by an average of 5.2% and 5.9% in 10 and 100 demonstration training, respectively. $\mathtt{\Sigma\mbox{-}agent}$ also achieves 62% success rate with a single policy in 5 real-world manipulation tasks. The code will be released upon acceptance. | Contrastive Imitation Learning for Language-guided Multi-Task Robotic Manipulation | [
"Teli Ma",
"Jiaming Zhou",
"Zifan Wang",
"Ronghe Qiu",
"Junwei Liang"
] | Conference | Poster | 2406.09738 | [
""
] | https://huggingface.co/papers/2406.09738 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | https://teleema.github.io/projects/Sigma_Agent/index.html |
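The Σ-agent abstract above strengthens vision-language representations with contrastive imitation learning. The generic symmetric InfoNCE objective such contrastive modules typically build on can be sketched as follows (embedding size and temperature are assumptions, not the paper's values):

```python
# Generic symmetric InfoNCE loss between vision and language embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(vision_emb, lang_emb, temperature=0.07):
    v = F.normalize(vision_emb, dim=-1)
    l = F.normalize(lang_emb, dim=-1)
    logits = v @ l.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(v.shape[0])          # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = contrastive_loss(torch.randn(32, 256), torch.randn(32, 256))
```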
null | https://openreview.net/forum?id=97QXO0uBEO | @inproceedings{
g{\"u}nster2024handling,
title={Handling Long-Term Safety and Uncertainty in Safe Reinforcement Learning},
author={Jonas G{\"u}nster and Puze Liu and Jan Peters and Davide Tateo},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=97QXO0uBEO}
} | Safety is one of the key issues preventing the deployment of reinforcement learning techniques in real-world robots. While most approaches in the Safe Reinforcement Learning area do not require prior knowledge of constraints and robot kinematics and rely solely on data, it is often difficult to deploy them in complex real-world settings. Instead, model-based approaches that incorporate prior knowledge of the constraints and dynamics into the learning framework have proven capable of deploying the learning algorithm directly on the real robot.
Unfortunately, while an approximated model of the robot dynamics is often available, the safety constraints are task-specific and hard to obtain: they may be too complicated to encode analytically, too expensive to compute, or it may be difficult to envision a priori the long-term safety requirements. In this paper, we bridge this gap by extending the safe exploration method, ATACOM, with learnable constraints, with a particular focus on ensuring long-term safety and handling of uncertainty. Our approach is competitive or superior to state-of-the-art methods in final performance while maintaining safer behavior during training. | Handling Long-Term Safety and Uncertainty in Safe Reinforcement Learning | [
"Jonas Günster",
"Puze Liu",
"Jan Peters",
"Davide Tateo"
] | Conference | Poster | 2409.12045 | [
"https://github.com/cube1324/d-atacom"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | ||
null | https://openreview.net/forum?id=928V4Umlys | @inproceedings{
tian2024drivevlm,
title={Drive{VLM}: The Convergence of Autonomous Driving and Large Vision-Language Models},
author={Xiaoyu Tian and Junru Gu and Bailin Li and Yicheng Liu and Yang Wang and Zhiyong Zhao and Kun Zhan and Peng Jia and XianPeng Lang and Hang Zhao},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=928V4Umlys}
} | A primary hurdle of autonomous driving in urban environments is understanding complex and long-tail scenarios, such as challenging road conditions and delicate human behaviors. We introduce DriveVLM, an autonomous driving system leveraging Vision-Language Models (VLMs) for enhanced scene understanding and planning capabilities. DriveVLM integrates a unique combination of reasoning modules for scene description, scene analysis, and hierarchical planning. Furthermore, recognizing the limitations of VLMs in spatial reasoning and heavy computational requirements, we propose DriveVLM-Dual, a hybrid system that synergizes the strengths of DriveVLM with the traditional autonomous driving pipeline. Experiments on both the nuScenes dataset and our SUP-AD dataset demonstrate the efficacy of DriveVLM and DriveVLM-Dual in handling complex and unpredictable driving conditions. Finally, we deploy the DriveVLM-Dual on a production vehicle, verifying it is effective in real-world autonomous driving environments. | DriveVLM: The Convergence of Autonomous Driving and Large Vision-Language Models | [
"Xiaoyu Tian",
"Junru Gu",
"Bailin Li",
"Yicheng Liu",
"Yang Wang",
"Zhiyong Zhao",
"Kun Zhan",
"Peng Jia",
"XianPeng Lang",
"Hang Zhao"
] | Conference | Poster | 2402.12289 | [
""
] | https://huggingface.co/papers/2402.12289 | 0 | 0 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 | https://tsinghua-mars-lab.github.io/DriveVLM/ |
null | https://openreview.net/forum?id=8Yu0TNJNGK | @inproceedings{
yang2024anyrotate,
title={AnyRotate: Gravity-Invariant In-Hand Object Rotation with Sim-to-Real Touch},
author={Max Yang and chenghua lu and Alex Church and Yijiong Lin and Christopher J. Ford and Haoran Li and Efi Psomopoulou and David A.W. Barton and Nathan F. Lepora},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=8Yu0TNJNGK}
} | Human hands are capable of in-hand manipulation in the presence of different hand motions. For a robot hand, harnessing rich tactile information to achieve this level of dexterity still remains a significant challenge. In this paper, we present AnyRotate, a system for gravity-invariant multi-axis in-hand object rotation using dense featured sim-to-real touch. We tackle this problem by training a dense tactile policy in simulation and present a sim-to-real method for rich tactile sensing to achieve zero-shot policy transfer. Our formulation allows the training of a unified policy to rotate unseen objects about arbitrary rotation axes in any hand direction. In our experiments, we highlight the benefit of capturing detailed contact information when handling objects of varying properties. Interestingly, we found rich multi-fingered tactile sensing can detect unstable grasps and provide a reactive behavior that improves the robustness of the policy. | AnyRotate: Gravity-Invariant In-Hand Object Rotation with Sim-to-Real Touch | [
"Max Yang",
"chenghua lu",
"Alex Church",
"Yijiong Lin",
"Christopher J. Ford",
"Haoran Li",
"Efi Psomopoulou",
"David A.W. Barton",
"Nathan F. Lepora"
] | Conference | Poster | 2405.07391 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://maxyang27896.github.io/anyrotate/ |
|
null | https://openreview.net/forum?id=8XFT1PatHy | @inproceedings{
shorinwa2024splatmover,
title={Splat-{MOVER}: Multi-Stage, Open-Vocabulary Robotic Manipulation via Editable Gaussian Splatting},
author={Olaolu Shorinwa and Johnathan Tucker and Aliyah Smith and Aiden Swann and Timothy Chen and Roya Firoozi and Monroe David Kennedy and Mac Schwager},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=8XFT1PatHy}
} | We present Splat-MOVER, a modular robotics stack for open-vocabulary robotic manipulation, which leverages the editability of Gaussian Splatting (GSplat) scene representations to enable multi-stage manipulation tasks. Splat-MOVER consists of: (i) ASK-Splat, a GSplat representation that distills semantic and grasp affordance features into the 3D scene. ASK-Splat enables geometric, semantic, and affordance understanding of 3D scenes, which is critical for many robotics tasks; (ii) SEE-Splat, a real-time scene-editing module using 3D semantic masking and infilling to visualize the motions of objects that result from robot interactions in the real world. SEE-Splat creates a “digital twin” of the evolving environment throughout the manipulation task; and (iii) Grasp-Splat, a grasp generation module that uses ASK-Splat and SEE-Splat to propose affordance-aligned candidate grasps for open-world objects. ASK-Splat is trained in real-time from RGB images in a brief scanning phase prior to operation, while SEE-Splat and Grasp-Splat run in real-time during operation. We demonstrate the superior performance of Splat-MOVER in hardware experiments on a Kinova robot compared to two recent baselines in four single-stage, open-vocabulary manipulation tasks. In addition, we demonstrate Splat-MOVER in four multi-stage manipulation tasks, using the edited scene to reflect changes due to prior manipulation stages, which is not possible with existing baselines. Video demonstrations and the code for the project are available at https://splatmover.github.io. | Splat-MOVER: Multi-Stage, Open-Vocabulary Robotic Manipulation via Editable Gaussian Splatting | [
"Olaolu Shorinwa",
"Johnathan Tucker",
"Aliyah Smith",
"Aiden Swann",
"Timothy Chen",
"Roya Firoozi",
"Monroe David Kennedy",
"Mac Schwager"
] | Conference | Poster | 2405.04378 | [
"https://splatmover.github.io"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://splatmover.github.io |
|
null | https://openreview.net/forum?id=8PcRynpd1m | @inproceedings{
wei2024safe,
title={Safe Bayesian Optimization for the Control of High-Dimensional Embodied Systems},
author={Yunyue Wei and Zeji Yi and Hongda Li and Saraswati Soedarmadji and Yanan Sui},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=8PcRynpd1m}
} | Learning to move is a primary goal for animals and robots, where ensuring safety is often important when optimizing control policies on embodied systems. For complex tasks such as human or humanoid control, the high-dimensional parameter space adds complexity to the safe optimization effort. Current safe exploration algorithms exhibit inefficiency and may even become infeasible with large high-dimensional input spaces. Furthermore, existing high-dimensional constrained optimization methods neglect safety in the search process. In this paper, we propose High-dimensional Safe Bayesian Optimization with local optimistic exploration (HdSafeBO), a novel approach designed to handle high-dimensional sampling problems under probabilistic safety constraints. We introduce a local optimistic strategy to efficiently and safely optimize the objective function, providing a probabilistic safety guarantee and a cumulative safety violation bound. Through the use of isometric embedding, HdSafeBO addresses problems ranging from a few hundred to several thousand dimensions while maintaining safety guarantees. To our knowledge, HdSafeBO is the first algorithm capable of optimizing the control of high-dimensional musculoskeletal systems with high safety probability. We also demonstrate the real-world applicability of HdSafeBO through its use in the safe online optimization of neural stimulation-induced human motion control. | Safe Bayesian Optimization for the Control of High-Dimensional Embodied Systems | [
"Yunyue Wei",
"Zeji Yi",
"Hongda Li",
"Saraswati Soedarmadji",
"Yanan Sui"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
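The HdSafeBO abstract above optimizes under probabilistic safety constraints. A minimal sketch of the generic safe-BO loop it extends (toy 2-D objective and safety functions; HdSafeBO's local optimistic exploration and isometric embedding for thousands of dimensions are not shown):

```python
# Generic safe Bayesian optimization sketch: only evaluate candidates whose GP lower
# confidence bound on the safety measure stays above the threshold, then pick the most
# promising safe candidate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):       # toy reward, e.g. walking speed
    return -np.sum((x - 0.6) ** 2, axis=-1)

def safety(x):          # toy safety measure, must stay >= 0 (e.g. a torque margin)
    return 0.5 - np.abs(x - 0.4).sum(axis=-1)

rng = np.random.default_rng(0)
X = rng.uniform(0.3, 0.5, size=(5, 2))          # start from a known-safe seed set
y_obj, y_safe = objective(X), safety(X)

for it in range(20):
    gp_obj = GaussianProcessRegressor(RBF(0.2)).fit(X, y_obj)
    gp_safe = GaussianProcessRegressor(RBF(0.2)).fit(X, y_safe)
    cand = rng.uniform(0.0, 1.0, size=(512, 2))
    mu_s, std_s = gp_safe.predict(cand, return_std=True)
    safe = mu_s - 2.0 * std_s >= 0.0            # probabilistic safety filter
    if not safe.any():
        continue
    mu_o, std_o = gp_obj.predict(cand[safe], return_std=True)
    x_next = cand[safe][np.argmax(mu_o + 2.0 * std_o)]   # optimistic pick among safe points
    X = np.vstack([X, x_next])
    y_obj = np.append(y_obj, objective(x_next))
    y_safe = np.append(y_safe, safety(x_next))
```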
null | https://openreview.net/forum?id=8LPXeGhhbH | @inproceedings{
kuang2024ram,
title={{RAM}: Retrieval-Based Affordance Transfer for Generalizable Zero-Shot Robotic Manipulation},
author={Yuxuan Kuang and Junjie Ye and Haoran Geng and Jiageng Mao and Congyue Deng and Leonidas Guibas and He Wang and Yue Wang},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=8LPXeGhhbH}
} | This work proposes a retrieve-and-transfer framework for zero-shot robotic manipulation, dubbed RAM, featuring generalizability across various objects, environments, and embodiments. Unlike existing approaches that learn manipulation from expensive in-domain demonstrations, RAM capitalizes on a retrieval-based affordance transfer paradigm to acquire versatile manipulation capabilities from abundant out-of-domain data. RAM first extracts unified affordance at scale from diverse sources of demonstrations including robotic data, human-object interaction (HOI) data, and custom data to construct a comprehensive affordance memory. Then given a language instruction, RAM hierarchically retrieves the most similar demonstration from the affordance memory and transfers such out-of-domain 2D affordance to in-domain 3D actionable affordance in a zero-shot and embodiment-agnostic manner. Extensive simulation and real-world evaluations demonstrate that our RAM consistently outperforms existing works in diverse daily tasks. Additionally, RAM shows significant potential for downstream applications such as automatic and efficient data collection, one-shot visual imitation, and LLM/VLM-integrated long-horizon manipulation. | RAM: Retrieval-Based Affordance Transfer for Generalizable Zero-Shot Robotic Manipulation | [
"Yuxuan Kuang",
"Junjie Ye",
"Haoran Geng",
"Jiageng Mao",
"Congyue Deng",
"Leonidas Guibas",
"He Wang",
"Yue Wang"
] | Conference | Oral | 2407.04689 | [
"https://github.com/yxKryptonite/RAM_code"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://yuxuank.com/RAM/ |
|
null | https://openreview.net/forum?id=8JLmTZsxGh | @inproceedings{
manda2024learning,
title={Learning Performance-oriented Control Barrier Functions Under Complex Safety Constraints and Limited Actuation},
author={Lakshmideepakreddy Manda and Shaoru Chen and Mahyar Fazlyab},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=8JLmTZsxGh}
} | Control Barrier Functions (CBFs) offer an elegant framework for constraining nonlinear control system dynamics to an invariant subset of a pre-specified safe set. However, finding a CBF that simultaneously promotes performance by maximizing the resulting control invariant set while accommodating complex safety constraints, especially in high relative degree systems with actuation constraints, remains a significant challenge. In this work, we propose a novel self-supervised learning framework that holistically addresses these hurdles. Given a Boolean composition of multiple state constraints defining the safe set, our approach begins by constructing a smooth function whose zero superlevel set provides an inner approximation of the safe set. This function is then used with a smooth neural network to parameterize the CBF candidate. Finally, we design a physics-informed training loss function based on a Hamilton-Jacobi Partial Differential Equation (PDE) to train the PINN-CBF and enlarge the volume of the induced control invariant set. We demonstrate the effectiveness of our approach on a 2D double integrator (DI) system and a 7D fixed-wing aircraft system (F16). | Learning Performance-oriented Control Barrier Functions Under Complex Safety Constraints and Limited Actuation | [
"Lakshmideepakreddy Manda",
"Shaoru Chen",
"Mahyar Fazlyab"
] | Conference | Poster | 2401.05629 | [
"https://github.com/o4lc/PINN-CBF"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | ||
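The CBF abstract above learns a barrier function for systems with complex constraints and actuation limits. For context, here is a small worked example of the object being learned: a hand-designed CBF for a 1-D double integrator and the QP-style filter that enforces it (the paper instead learns h with a physics-informed network for Boolean compositions of constraints):

```python
# Control barrier function h(x) for a 1-D double integrator (position p, velocity v) with
# constraint p <= p_max and |u| <= u_lim, plus the filter that keeps h >= 0.
import numpy as np
from scipy.optimize import minimize

p_max, alpha, u_lim = 1.0, 2.0, 3.0

def h(x):
    p, v = x
    # Margin left after braking at full deceleration; h >= 0 means the robot can still stop in time.
    return p_max - p - 0.5 * v * abs(v) / u_lim

def h_dot(x, u):
    p, v = x
    dh_dp, dh_dv = -1.0, -abs(v) / u_lim
    return dh_dp * v + dh_dv * u               # dynamics: p' = v, v' = u

def safety_filter(x, u_nominal):
    # min (u - u_nominal)^2  s.t.  h_dot(x, u) + alpha * h(x) >= 0,  |u| <= u_lim
    cons = [{"type": "ineq", "fun": lambda u: h_dot(x, u[0]) + alpha * h(x)}]
    res = minimize(lambda u: (u[0] - u_nominal) ** 2, x0=[u_nominal],
                   bounds=[(-u_lim, u_lim)], constraints=cons)
    return float(res.x[0])

x = np.array([0.5, 1.0])                        # moving toward the position limit
print(safety_filter(x, u_nominal=2.0))          # filtered command brakes instead of accelerating
```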
null | https://openreview.net/forum?id=8Ar8b00GJC | @inproceedings{
zhou2024autonomous,
title={Autonomous Improvement of Instruction Following Skills via Foundation Models},
author={Zhiyuan Zhou and Pranav Atreya and Abraham Lee and Homer Rich Walke and Oier Mees and Sergey Levine},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=8Ar8b00GJC}
} | Intelligent robots capable of improving from autonomously collected experience have the potential to transform robot learning: instead of collecting costly teleoperated demonstration data, large-scale deployment of fleets of robots can quickly collect larger quantities of autonomous data useful for training better robot policies. However, autonomous improvement requires solving two key problems: (i) fully automating a scalable data collection procedure that can collect diverse and semantically meaningful robot data and (ii) learning from non-optimal, autonomous data with no human annotations. To this end, we propose a novel approach that addresses these challenges, allowing instruction following policies to improve from autonomously collected data without human supervision. Our framework leverages vision-language models to collect and evaluate semantically meaningful experiences in new environments, and then utilizes a decomposition of instruction following tasks into (semantic) language-conditioned image generation and (non-semantic) goal reaching, which makes it significantly more practical to improve from this autonomously collected data without any human annotations. We carry out extensive experiments in the real world to demonstrate the effectiveness of our approach, and find that in a suite of unseen environments, the robot policy can be improved significantly with autonomously collected data. We open-source the code for our semantic autonomous improvement pipeline, as well as our autonomous dataset of 25K trajectories collected across five tabletop environments: https://soar-autonomous-improvement.github.io | Autonomous Improvement of Instruction Following Skills via Foundation Models | [
"Zhiyuan Zhou",
"Pranav Atreya",
"Abraham Lee",
"Homer Rich Walke",
"Oier Mees",
"Sergey Levine"
] | Conference | Poster | 2407.20635 | [
"https://github.com/rail-berkeley/soar"
] | https://huggingface.co/papers/2407.20635 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | https://auto-improvement.github.io/ |
null | https://openreview.net/forum?id=82bpTugrMt | @inproceedings{
bhattacharya2024monocular,
title={Monocular Event-Based Vision for Dodging Static Obstacles with a Quadrotor},
author={Anish Bhattacharya and Marco Cannici and Nishanth Rao and Yuezhan Tao and Vijay Kumar and Nikolai Matni and Davide Scaramuzza},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=82bpTugrMt}
} | We present the first static-obstacle avoidance method for quadrotors using just an onboard, monocular event camera. Quadrotors are capable of fast and agile flight in cluttered environments when piloted manually, but vision-based autonomous flight in unknown environments is difficult in part due to the sensor limitations of traditional onboard cameras. Event cameras, however, promise nearly zero motion blur and high dynamic range, but produce a very large volume of events under significant ego-motion and further lack a continuous-time sensor model in simulation, making direct sim-to-real transfer not possible. By leveraging depth prediction as a pretext task in our learning framework, we can pre-train a reactive obstacle avoidance events-to-control policy with approximated, simulated events and then fine-tune the perception component with limited events-and-depth real-world data to achieve obstacle avoidance in indoor and outdoor settings. We demonstrate this across two quadrotor-event camera platforms in multiple settings and find, contrary to traditional vision-based works, that low speeds (1m/s) make the task harder and more prone to collisions, while high speeds (5m/s) result in better event-based depth estimation and avoidance. We also find that success rates in outdoor scenes can be significantly higher than in certain indoor scenes. | Monocular Event-Based Vision for Obstacle Avoidance with a Quadrotor | [
"Anish Bhattacharya",
"Marco Cannici",
"Nishanth Rao",
"Yuezhan Tao",
"Vijay Kumar",
"Nikolai Matni",
"Davide Scaramuzza"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://www.anishbhattacharya.com/research/evfly |
||
null | https://openreview.net/forum?id=7yMZAUkXa4 | @inproceedings{
yu2024mimictouch,
title={MimicTouch: Leveraging Multi-modal Human Tactile Demonstrations for Contact-rich Manipulation},
author={Kelin Yu and Yunhai Han and Qixian Wang and Vaibhav Saxena and Danfei Xu and Ye Zhao},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=7yMZAUkXa4}
} | Tactile sensing is critical to fine-grained, contact-rich manipulation tasks, such as insertion and assembly. Prior research has shown the possibility of learning a tactile-guided policy from teleoperated demonstration data. However, to provide the demonstration, human users often rely on visual feedback to control the robot. This creates a gap between the sensing modality used for controlling the robot (visual) and the modality of interest (tactile). To bridge this gap, we introduce "MimicTouch", a novel framework for learning policies directly from demonstrations provided by human users with their hands. The key innovations are i) a human tactile data collection system which collects a multi-modal tactile dataset for learning the human's tactile-guided control strategy, ii) an imitation learning-based framework for learning the human's tactile-guided control strategy through such data, and iii) an online residual RL framework to bridge the embodiment gap between the human hand and the robot gripper. Through comprehensive experiments, we highlight the efficacy of utilizing the human's tactile-guided control strategy to resolve contact-rich manipulation tasks. The project website is at https://sites.google.com/view/MimicTouch. | MimicTouch: Leveraging Multi-modal Human Tactile Demonstrations for Contact-rich Manipulation | [
"Kelin Yu",
"Yunhai Han",
"Qixian Wang",
"Vaibhav Saxena",
"Danfei Xu",
"Ye Zhao"
] | Conference | Poster | 2310.16917 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://sites.google.com/view/MimicTouch |
|
null | https://openreview.net/forum?id=7wMlwhCvjS | @inproceedings{
wang2024gendp,
title={Gen{DP}: 3D Semantic Fields for Category-Level Generalizable Diffusion Policy},
author={Yixuan Wang and Guang Yin and Binghao Huang and Tarik Kelestemur and Jiuguang Wang and Yunzhu Li},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=7wMlwhCvjS}
} | Diffusion-based policies have shown remarkable capability in executing complex robotic manipulation tasks but lack explicit characterization of geometry and semantics, which often limits their ability to generalize to unseen objects and layouts. To enhance the generalization capabilities of Diffusion Policy, we introduce a novel framework that incorporates explicit spatial and semantic information via 3D semantic fields. We generate 3D descriptor fields from multi-view RGBD observations with large foundational vision models, then compare these descriptor fields against reference descriptors to obtain semantic fields. The proposed method explicitly considers geometry and semantics, enabling strong generalization capabilities in tasks requiring category-level generalization, resolving geometric ambiguities, and attention to subtle geometric details. We evaluate our method across eight tasks involving articulated objects and instances with varying shapes and textures from multiple object categories. Our method demonstrates its effectiveness by increasing Diffusion Policy's average success rate on *unseen* instances from 20% to 93%. Additionally, we provide a detailed analysis and visualization to interpret the sources of performance gain and explain how our method can generalize to novel instances. Project page: https://robopil.github.io/GenDP/ | GenDP: 3D Semantic Fields for Category-Level Generalizable Diffusion Policy | [
"Yixuan Wang",
"Guang Yin",
"Binghao Huang",
"Tarik Kelestemur",
"Jiuguang Wang",
"Yunzhu Li"
] | Conference | Poster | 2410.17488 | [
"https://github.com/WangYixuan12/gendp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://robopil.github.io/GenDP/ |
|
null | https://openreview.net/forum?id=7vzDBvviRO | @inproceedings{
lin2024ubsoft,
title={{UBS}oft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments},
author={Chunru Lin and Jugang Fan and Yian Wang and Zeyuan Yang and Zhehuan Chen and Lixing Fang and Tsun-Hsuan Wang and Zhou Xian and Chuang Gan},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=7vzDBvviRO}
} | It is desired to equip robots with the capability of interacting with various soft materials as they are ubiquitous in the real world. While physics simulations are one of the predominant methods for data collection and robot training, simulating soft materials presents considerable challenges. Specifically, it is significantly more costly than simulating rigid objects in terms of simulation speed and storage requirements. These limitations typically restrict the scope of studies on soft materials to small and bounded areas, thereby hindering the learning of skills in broader spaces. To address this issue, we introduce UBSoft, a new simulation platform designed to support unbounded soft environments for robot skill acquisition. Our platform utilizes spatially adaptive resolution scales, where simulation resolution dynamically adjusts based on proximity to active robotic agents. Our framework markedly reduces the demand for extensive storage space and computation costs required for large-scale scenarios involving soft materials. We also establish a set of benchmark tasks in our platform, including both locomotion and manipulation tasks, and conduct experiments to evaluate the efficacy of various reinforcement learning algorithms and trajectory optimization techniques, both gradient-based and sampling-based. Preliminary results indicate that sampling-based trajectory optimization generally achieves better results for obtaining one trajectory to solve the task. Additionally, we conduct experiments in real-world environments to demonstrate that advancements made in our UBSoft simulator could translate to improved robot interactions with large-scale soft material. More videos can be found at https://ubsoft24.github.io. | UBSoft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments | [
"Chunru Lin",
"Jugang Fan",
"Yian Wang",
"Zeyuan Yang",
"Zhehuan Chen",
"Lixing Fang",
"Tsun-Hsuan Wang",
"Zhou Xian",
"Chuang Gan"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://vis-www.cs.umass.edu/ubsoft/ |
||
null | https://openreview.net/forum?id=7ddT4eklmQ | @inproceedings{
yang2024ace,
title={{ACE}: A Cross-platform and visual-Exoskeletons System for Low-Cost Dexterous Teleoperation},
author={Shiqi Yang and Yuzhe Qin and Runyu Ding and Xuxin Cheng and Minghuan Liu and Ruihan Yang and Jialong Li and Sha Yi and Xiaolong Wang},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=7ddT4eklmQ}
} | Bimanual robotic manipulation with dexterous hands offers great potential capability and a wide workspace, as it follows the most natural human workflow.
Learning from human demonstrations has proven highly effective for learning a dexterous manipulation policy, and teleoperation serves as a straightforward and efficient way to collect such data.
However, a cost-effective and easy-to-use teleoperation system is lacking for anthropomorphic robot hands.
To fill this gap, we developed ACE, a cross-platform visual-exoskeleton system for low-cost dexterous teleoperation.
Our system employs a hand-facing camera to capture 3D hand poses and an exoskeleton mounted on a base that can be easily carried on users' backs. ACE captures both the hand root end-effector and hand pose in real-time and enables cross-platform operations.
We evaluate the key system parameters against previous teleoperation systems and show clear advantages of ACE.
We then showcase the desktop and mobile versions of our system on six different robot platforms (including humanoid-hands, arm-hands, arm-gripper, and quadruped-gripper systems), and demonstrate the effectiveness of learning three difficult real-world tasks through the collected demonstration on two of them. | ACE: A Cross-platform and visual-Exoskeletons System for Low-Cost Dexterous Teleoperation | [
"Shiqi Yang",
"Minghuan Liu",
"Yuzhe Qin",
"Runyu Ding",
"Jialong Li",
"Xuxin Cheng",
"Ruihan Yang",
"Sha Yi",
"Xiaolong Wang"
] | Conference | Poster | [
"https://github.com/ACETeleop/ACETeleop"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://ace-teleop.github.io/ |
||
null | https://openreview.net/forum?id=7c5rAY8oU3 | @inproceedings{
dai2024acdc,
title={{ACDC}: Automated Creation of Digital Cousins for Robust Policy Learning},
author={Tianyuan Dai and Josiah Wong and Yunfan Jiang and Chen Wang and Cem Gokmen and Ruohan Zhang and Jiajun Wu and Li Fei-Fei},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=7c5rAY8oU3}
} | Training robot policies in the real world can be unsafe, costly, and difficult to scale. Simulation serves as an inexpensive and potentially limitless source of training data, but suffers from the semantics and physics disparity between simulated and real-world environments. These discrepancies can be minimized by training in *digital twins*, which serve as virtual replicas of a real scene but are expensive to generate and cannot produce cross-domain generalization. To address these limitations, we propose the concept of ***digital cousins***, a virtual asset or scene that, unlike a *digital twin*, does not explicitly model a real-world counterpart but still exhibits similar geometric and semantic affordances. As a result, *digital cousins* simultaneously reduce the cost of generating an analogous virtual environment while also facilitating better robustness during sim-to-real domain transfer by providing a distribution of similar training scenes. Leveraging digital cousins, we introduce a novel method for their automated creation, and propose a fully automated real-to-sim-to-real pipeline for generating fully interactive scenes and training robot policies that can be deployed zero-shot in the original scene. We find that digital cousin scenes that preserve geometric and semantic affordances can be produced automatically, and can be used to train policies that outperform policies trained on digital twins, achieving 90\% vs. 25\% success rates under zero-shot sim-to-real transfer. Additional details are available at https://digital-cousins.github.io/. | Automated Creation of Digital Cousins for Robust Policy Learning | [
"Tianyuan Dai",
"Josiah Wong",
"Yunfan Jiang",
"Chen Wang",
"Cem Gokmen",
"Ruohan Zhang",
"Jiajun Wu",
"Li Fei-Fei"
] | Conference | Poster | 2410.07408 | [
"https://github.com/cremebrule/digital-cousins"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://digital-cousins.github.io/ |
|
null | https://openreview.net/forum?id=7E3JAys1xO | @inproceedings{
wei2024droma,
title={D3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation},
author={Songlin Wei and Haoran Geng and Jiayi Chen and Congyue Deng and Cui Wenbo and Chengyang Zhao and Xiaomeng Fang and Leonidas Guibas and He Wang},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=7E3JAys1xO}
} | Depth sensing is an important problem for 3D vision-based robotics. Yet, a real-world active stereo or ToF depth camera often produces noisy and incomplete depth which bottlenecks robot performance. In this work, we propose D3RoMa, a learning-based depth estimation framework on stereo image pairs that predicts clean and accurate depth in diverse indoor scenes, even in the most challenging scenarios with translucent or specular surfaces where classical depth sensing completely fails. Key to our method is that we unify depth estimation and restoration into an image-to-image translation problem by predicting the disparity map with a denoising diffusion probabilistic model. At inference time, we further incorporate a left-right consistency constraint as classifier guidance to the diffusion process. Our framework combines recently advanced learning-based approaches and geometric constraints from traditional stereo vision. For model training, we create a large scene-level synthetic dataset with diverse transparent and specular objects to compensate for existing tabletop datasets. The trained model can be directly applied to real-world in-the-wild scenes and achieve state-of-the-art performance in multiple public depth estimation benchmarks. Further experiments in both simulated and real environments show that accurate depth prediction significantly improves robotic manipulation in various scenarios. | D^3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation | [
"Songlin Wei",
"Haoran Geng",
"Jiayi Chen",
"Congyue Deng",
"Cui Wenbo",
"Chengyang Zhao",
"Xiaomeng Fang",
"Leonidas Guibas",
"He Wang"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://pku-epic.github.io/D3RoMa/ |
||
null | https://openreview.net/forum?id=6oESa4g05O | @inproceedings{
chen2024distribution,
title={Distribution Discrepancy and Feature Heterogeneity for Active 3D Object Detection},
author={Huang-Yu Chen and Jia-Fong Yeh and Jiawei and Pin-Hsuan Peng and Winston H. Hsu},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=6oESa4g05O}
} | LiDAR-based 3D object detection is a critical technology for the development of autonomous driving and robotics. However, the high cost of data annotation limits its advancement. We propose a novel and effective active learning (AL) method called Distribution Discrepancy and Feature Heterogeneity (DDFH), which simultaneously considers geometric features and model embeddings, assessing information from both the instance-level and frame-level perspectives. Distribution Discrepancy evaluates the difference and novelty of instances within the unlabeled and labeled distributions, enabling the model to learn efficiently with limited data. Feature Heterogeneity ensures the heterogeneity of intra-frame instance features, maintaining feature diversity while avoiding redundant or similar instances, thus minimizing annotation costs. Finally, multiple indicators are efficiently aggregated using Quantile Transform, providing a unified measure of informativeness. Extensive experiments demonstrate that DDFH outperforms the current state-of-the-art (SOTA) methods on the KITTI and Waymo datasets, effectively reducing the bounding box annotation cost by 56.3% and showing robustness when working with both one-stage and two-stage models. | Distribution Discrepancy and Feature Heterogeneity for Active 3D Object Detection | [
"Huang-Yu Chen",
"Jia-Fong Yeh",
"Jiawei",
"Pin-Hsuan Peng",
"Winston H. Hsu"
] | Conference | Poster | 2409.05425 | [
"https://github.com/Coolshanlan/DDFH-active-3Ddet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | ||
null | https://openreview.net/forum?id=6X3ybeVpDi | @inproceedings{
chen2024online,
title={Online Transfer and Adaptation of Tactile Skill: A Teleoperation Framework},
author={Xiao Chen and Tianle Ni and K{\"u}bra Karacan and Hamid Sadeghian and Sami Haddadin},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=6X3ybeVpDi}
} | This paper presents a teleoperation framework designed for online learning and adaptation of tactile skills, which provides an intuitive interface without the need for physical access to the execution robot. The proposed tele-teaching approach utilizes periodic Dynamical Movement Primitives (DMPs) and Recursive Least Squares (RLS) for generating tactile skills. An autonomy allocation strategy, guided by the learning confidence and operator intention, ensures a smooth transition from human demonstration to autonomous robot operation. Our experimental results with two 7-Degree-of-Freedom (DoF) Franka Panda robots demonstrate that the tele-teaching framework facilitates online motion and force learning and adaptation within a few iterations. | Online Transfer and Adaptation of Tactile Skill: A Teleoperation Framework | [
"Xiao Chen",
"Tianle Ni",
"Kübra Karacan",
"Hamid Sadeghian",
"Sami Haddadin"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
null | https://openreview.net/forum?id=6FGlpzC9Po | @inproceedings{
nakamoto2024steering,
title={Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance},
author={Mitsuhiko Nakamoto and Oier Mees and Aviral Kumar and Sergey Levine},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=6FGlpzC9Po}
} | Large, general-purpose robotic policies trained on diverse demonstration datasets have been shown to be remarkably effective both for controlling a variety of robots in a range of different scenes, and for acquiring broad repertoires of manipulation skills. However, the data that such policies are trained on is generally of mixed quality -- not only are human-collected demonstrations unlikely to perform the task perfectly, but the larger the dataset is, the harder it is to curate only the highest quality examples. It also remains unclear how optimal data from one embodiment is for training on another embodiment. In this paper, we present a general and broadly applicable approach that enhances the performance of such generalist robot policies at deployment time by re-ranking their actions according to a value function learned via offline RL. This approach, which we call Value-Guided Policy Steering (V-GPS), is compatible with a wide range of different generalist policies, without needing to fine-tune or even access the weights of the policy. We show that the same value function can improve the performance of five different state-of-the-art policies with different architectures, even though they were trained on distinct datasets, attaining consistent performance improvement on multiple robotic platforms across a total of 12 tasks. Code and videos can be found at: https://nakamotoo.github.io/V-GPS | Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance | [
"Mitsuhiko Nakamoto",
"Oier Mees",
"Aviral Kumar",
"Sergey Levine"
] | Conference | Poster | 2410.13816 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://nakamotoo.github.io/V-GPS |
|
null | https://openreview.net/forum?id=67tTQeO4HQ | @inproceedings{
maurer2024inflight,
title={In-Flight Attitude Control of a Quadruped using Deep Reinforcement Learning},
author={Finn Gross Maurer and Tarek El-Agroudi and J{\o}rgen Anker Olsen and Kostas Alexis},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=67tTQeO4HQ}
} | We present the development and real world demonstration of an in-flight attitude control law for a small low-cost quadruped with a five-bar-linkage leg design using only its legs as reaction masses. The control law is trained using deep reinforcement learning (DRL) and specifically through Proximal Policy Optimization (PPO) in the NVIDIA Omniverse Isaac Sim simulator with a GPU-accelerated DRL pipeline. To demonstrate the policy, a small quadruped is designed, constructed, and evaluated both on a rotating pole test setup and in free fall. During a free fall of 0.7 seconds, the quadruped follows commanded attitude steps of 45 degrees in all principal axes, and achieves an average base angular velocity of 110 degrees per second during large attitude reference steps. | In-Flight Attitude Control of a Quadruped using Deep Reinforcement Learning | [
"Tarek El-Agroudi",
"Finn Gross Maurer",
"Jørgen Anker Olsen",
"Kostas Alexis"
] | Conference | Poster | [
"https://github.com/ntnu-arl/Eurepus-RL and https://github.com/ntnu-arl/Eurepus-design"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://finnfi.github.io/ |
||
null | https://openreview.net/forum?id=5u9l6U61S7 | @inproceedings{
hua2024gensim,
title={GenSim2: Realistic Robot Task Generation with {LLM}},
author={Pu Hua and Minghuan Liu and Annabella Macaluso and Lirui Wang and Yunfeng Lin and Weinan Zhang and Huazhe Xu and Xiaolong Wang},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=5u9l6U61S7}
} | Robotic simulation today remains challenging to scale up due to the human efforts required to create diverse simulation tasks and scenes. Simulation-trained policies also face scalability issues as many sim-to-real methods focus on a single task. To address these challenges, this work proposes GenSim2, a scalable framework that leverages coding LLMs with multi-modal and reasoning capabilities for complex and realistic simulation task creation, including long-horizon tasks with articulated objects. To automatically generate demonstration data for these tasks at scale, we propose planning and RL solvers that generalize within object categories. The pipeline can generate data for up to 100 articulated tasks with 200 objects and reduce the required human efforts. To utilize such data, we propose an effective multi-task language-conditioned policy architecture, dubbed proprioceptive point-cloud transformer (PPT), that learns from the generated demonstrations and exhibits strong sim-to-real zero-shot transfer. Combining the proposed pipeline and policy architecture, we show a promising use of GenSim2: the generated data can be used for zero-shot transfer or co-trained with real-world collected data, which enhances policy performance by 20% compared with training exclusively on limited real data. | GenSim2: Scaling Robot Data Generation with Multi-modal and Reasoning LLMs | [
"Pu Hua",
"Minghuan Liu",
"Annabella Macaluso",
"Yunfeng Lin",
"Weinan Zhang",
"Huazhe Xu",
"Lirui Wang"
] | Conference | Poster | 2410.03645 | [
"https://github.com/GenSim2/GenSim2"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://gensim2.github.io/ |
|
null | https://openreview.net/forum?id=5lSkn5v4LK | @inproceedings{
lim2024equigraspflow,
title={EquiGraspFlow: {SE}(3)-Equivariant 6-DoF Grasp Pose Generative Flows},
author={Byeongdo Lim and Jongmin Kim and Jihwan Kim and Yonghyeon Lee and Frank C. Park},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=5lSkn5v4LK}
} | Traditional methods for synthesizing 6-DoF grasp poses from 3D observations often rely on geometric heuristics, resulting in poor generalizability, limited grasp options, and higher failure rates. Recently, data-driven methods have been proposed that use generative models to learn the distribution of grasp poses and generate diverse candidate poses. The main drawback of these methods is that they fail to achieve SE(3)-equivariance, meaning that the generated grasp poses do not transform correctly with object rotations and translations. In this paper, we propose *EquiGraspFlow*, a flow-based SE(3)-equivariant 6-DoF grasp pose generative model that can learn complex conditional distributions on the SE(3) manifold while guaranteeing SE(3)-equivariance. Our model achieves the equivariance without relying on data augmentation, by using network architectures that guarantee the equivariance by construction. Extensive experiments show that *EquiGraspFlow* accurately learns grasp pose distribution, achieves the SE(3)-equivariance, and significantly outperforms existing grasp pose generative models. Code is available at https://github.com/bdlim99/EquiGraspFlow. | EquiGraspFlow: SE(3)-Equivariant 6-DoF Grasp Pose Generative Flows | [
"Byeongdo Lim",
"Jongmin Kim",
"Jihwan Kim",
"Yonghyeon Lee",
"Frank C. Park"
] | Conference | Poster | [
"https://github.com/bdlim99/EquiGraspFlow"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
null | https://openreview.net/forum?id=5iXG6EgByK | @inproceedings{
tan2024promptable,
title={Promptable Closed-loop Traffic Simulation},
author={Shuhan Tan and Boris Ivanovic and Yuxiao Chen and Boyi Li and Xinshuo Weng and Yulong Cao and Philipp Kraehenbuehl and Marco Pavone},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=5iXG6EgByK}
} | Simulation stands as a cornerstone for safe and efficient autonomous driving development. At its core a simulation system ought to produce realistic, reactive, and controllable traffic patterns. In this paper, we propose ProSim, a multimodal promptable closed-loop traffic simulation framework. ProSim allows the user to give a complex set of numerical, categorical or textual prompts to instruct each agent’s behavior and intention. ProSim then rolls out a traffic scenario in a closed-loop manner, modeling each agent’s interaction with other traffic participants. Our experiments show that ProSim achieves high prompt controllability given different user prompts, while reaching competitive performance on the Waymo Sim Agents Challenge when no prompt is given. To support research on promptable traffic simulation, we create ProSim-Instruct-520k, a multimodal prompt-scenario paired driving dataset with over 10M text prompts for over 520k real-world driving scenarios. We will release data, benchmark, and labeling tools of ProSim-Instruct-520k upon publication. | Promptable Closed-loop Traffic Simulation | [
"Shuhan Tan",
"Boris Ivanovic",
"Yuxiao Chen",
"Boyi Li",
"Xinshuo Weng",
"Yulong Cao",
"Philipp Kraehenbuehl",
"Marco Pavone"
] | Conference | Poster | 2409.05863 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://ariostgx.github.io/ProSim/ |
|
null | https://openreview.net/forum?id=5W0iZR9J7h | @inproceedings{
zhang2024dexgraspnet,
title={DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-scale Synthetic Cluttered Scenes},
author={Jialiang Zhang and Haoran Liu and Danshi Li and XinQiang Yu and Haoran Geng and Yufei Ding and Jiayi Chen and He Wang},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=5W0iZR9J7h}
} | Grasping in cluttered scenes remains highly challenging for dexterous hands due to the scarcity of data. To address this problem, we present a large-scale synthetic dataset, encompassing 1319 objects, 8270 scenes, and 426 million grasps. Beyond benchmarking, we also explore data-efficient learning strategies from grasping data. We reveal that the combination of a conditional generative model that focuses on local geometry and a grasp dataset that emphasizes complex scene variations is key to achieving effective generalization. Our proposed generative method outperforms all baselines in simulation experiments. Furthermore, it demonstrates zero-shot sim-to-real transfer through test-time depth restoration, attaining 91% real-world success rate, showcasing the robust potential of utilizing fully synthetic training data. | DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-scale Synthetic Cluttered Scenes | [
"Jialiang Zhang",
"Haoran Liu",
"Danshi Li",
"XinQiang Yu",
"Haoran Geng",
"Yufei Ding",
"Jiayi Chen",
"He Wang"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
null | https://openreview.net/forum?id=5Awumz1VKU | @inproceedings{
chen2024learning,
title={Learning Differentiable Tensegrity Dynamics using Graph Neural Networks},
author={Nelson Chen and Kun Wang and William R. Johson III and Rebecca Kramer-Bottiglio and Kostas Bekris and Mridul Aanjaneya},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=5Awumz1VKU}
} | Tensegrity robots are composed of rigid struts and flexible cables. They constitute an emerging class of hybrid rigid-soft robotic systems and are promising systems for a wide array of applications, ranging from locomotion to assembly. They are difficult to control and model accurately, however, due to their compliance and high number of degrees of freedom. To address this issue, prior work has introduced a differentiable physics engine designed for tensegrity robots based on first principles. In contrast, this work proposes the use of graph neural networks to model contact dynamics over a graph representation of tensegrity robots, which leverages their natural graph-like cable connectivity between end caps of rigid rods. This learned simulator can accurately model 3-bar and 6-bar tensegrity robot dynamics in simulation-to-simulation experiments where MuJoCo is used as the ground truth. It can also achieve higher accuracy than the previous differentiable engine for a real 3-bar tensegrity robot, for which the robot state is only partially observable. When compared against direct applications of recent mesh-based graph neural network simulators, the proposed approach is computationally more efficient, both for training and inference, while achieving higher accuracy. Code and data are available at https://github.com/nchen9191/tensegrity_gnn_simulator_public | Learning Differentiable Tensegrity Dynamics using Graph Neural Networks | [
"Nelson Chen",
"Kun Wang",
"William R. Johson III",
"Rebecca Kramer-Bottiglio",
"Kostas Bekris",
"Mridul Aanjaneya"
] | Conference | Poster | 2410.12216 | [
"https://github.com/nchen9191/tensegrity_gnn_simulator_public"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | ||
null | https://openreview.net/forum?id=56IzghzjfZ | @inproceedings{
huang2024imagination,
title={{IMAGINATION} {POLICY}: Using Generative Point Cloud Models for Learning Manipulation Policies},
author={Haojie Huang and Karl Schmeckpeper and Dian Wang and Ondrej Biza and Yaoyao Qian and Haotian Liu and Mingxi Jia and Robert Platt and Robin Walters},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=56IzghzjfZ}
} | Humans can imagine goal states during planning and perform actions to match those goals. In this work, we propose IMAGINATION POLICY, a novel multi-task key-frame policy network for solving high-precision pick and place tasks. Instead of learning actions directly, IMAGINATION POLICY generates point clouds to imagine desired states which are then translated to actions using rigid action estimation. This transforms action inference into a local generative task. We leverage pick and place symmetries underlying the tasks in the generation process and achieve extremely high sample efficiency and generalizability to unseen configurations. Finally, we demonstrate state-of-the-art performance across various tasks on the RLbench benchmark compared with several strong baselines and validate our approach on a real robot. | IMAGINATION POLICY: Using Generative Point Cloud Models for Learning Manipulation Policies | [
"Haojie Huang",
"Karl Schmeckpeper",
"Dian Wang",
"Ondrej Biza",
"Yaoyao Qian",
"Haotian Liu",
"Mingxi Jia",
"Robert Platt",
"Robin Walters"
] | Conference | Poster | 2406.11740 | [
""
] | https://huggingface.co/papers/2406.11740 | 1 | 1 | 0 | 9 | [] | [] | [] | [] | [] | [] | 1 | https://haojhuang.github.io/imagine_page/ |
null | https://openreview.net/forum?id=55tYfHvanf | @inproceedings{
shaw2024bimanual,
title={Bimanual Dexterity for Complex Tasks},
author={Kenneth Shaw and Yulong Li and Jiahui Yang and Mohan Kumar Srirama and Ray Liu and Haoyu Xiong and Russell Mendonca and Deepak Pathak},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=55tYfHvanf}
} | To train generalist robot policies, machine learning methods often require a substantial amount of expert human teleoperation data. An ideal robot for humans collecting data is one that closely mimics them: bimanual arms and dexterous hands. However, creating such a bimanual teleoperation system with over 50 DoF is a significant challenge. To address this, we introduce Bidex, an extremely dexterous, low-cost, low-latency and portable bimanual dexterous teleoperation system which relies on motion capture gloves and teacher arms. We compare Bidex to a Vision Pro teleoperation system and a SteamVR system and find Bidex to produce better quality data for more complex tasks at a faster rate. Additionally, we show Bidex operating a mobile bimanual robot for in-the-wild tasks. Please refer to https://bidex-teleop.github.io for video results and instructions to recreate Bidex. The robot hands (5k USD) and teleoperation system (7k USD) are readily reproducible and can be used on many robot arms, including two xArms (16k USD). | Bimanual Dexterity for Complex Tasks | [
"Kenneth Shaw",
"Yulong Li",
"Jiahui Yang",
"Mohan Kumar Srirama",
"Ray Liu",
"Haoyu Xiong",
"Russell Mendonca",
"Deepak Pathak"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://bidex-teleop.github.io/ |
||
null | https://openreview.net/forum?id=4Of4UWyBXE | @inproceedings{
chen2024rpm,
title={{RP}1M: A Large-Scale Motion Dataset for Piano Playing with Bi-Manual Dexterous Robot Hands},
author={Le Chen and Yi Zhao and Jan Schneider and Quankai Gao and Juho Kannala and Bernhard Sch{\"o}lkopf and Joni Pajarinen and Dieter B{\"u}chler},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=4Of4UWyBXE}
} | Endowing robot hands with human-level dexterity is a long-standing research objective. Bi-manual robot piano playing constitutes a task that combines challenges from dynamic tasks, such as generating fast yet precise motions, with slower but contact-rich manipulation problems. Although reinforcement learning-based approaches have shown promising results in single-task performance, these methods struggle in a multi-song setting. Our work aims to close this gap and, thereby, enable imitation learning approaches for robot piano playing at scale. To this end, we introduce the Robot Piano 1 Million (RP1M) dataset, containing bi-manual robot piano playing motion data of more than one million trajectories. We formulate finger placements as an optimal transport problem, thus enabling automatic annotation of vast amounts of unlabeled songs. Benchmarking existing imitation learning approaches shows that such approaches reach state-of-the-art robot piano playing performance by leveraging RP1M. | RP1M: A Large-Scale Motion Dataset for Piano Playing with Bi-Manual Dexterous Robot Hands | [
"Yi Zhao",
"Le Chen",
"Jan Schneider",
"Quankai Gao",
"Juho Kannala",
"Bernhard Schölkopf",
"Joni Pajarinen",
"Dieter Büchler"
] | Conference | Poster | 2408.11048 | [
""
] | https://huggingface.co/papers/2408.11048 | 0 | 3 | 2 | 8 | [] | [] | [] | [] | [] | [] | 1 | https://rp1m.github.io/ |
null | https://openreview.net/forum?id=46SluHKoE9 | @inproceedings{
mendonca2024continuously,
title={Continuously Improving Mobile Manipulation with Autonomous Real-World {RL}},
author={Russell Mendonca and Emmanuel Panov and Bernadette Bucher and Jiuguang Wang and Deepak Pathak},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=46SluHKoE9}
} | We present a fully autonomous real-world RL framework for mobile manipulation that can learn policies without extensive instrumentation or human supervision. This is enabled by 1) task-relevant autonomy, which guides exploration towards object interactions and prevents stagnation near goal states, 2) efficient policy learning by leveraging basic task knowledge in behavior priors, and 3) formulating generic rewards that combine human-interpretable semantic information with low-level, fine-grained observations. We demonstrate that our approach allows Spot robots to continually improve their performance on a set of four challenging mobile manipulation tasks, obtaining an average success rate of 80% across tasks, a 3-4 times improvement over existing approaches. Videos can be found at https://continual-mobile-manip.github.io/. | Continuously Improving Mobile Manipulation with Autonomous Real-World RL | [
"Russell Mendonca",
"Emmanuel Panov",
"Bernadette Bucher",
"Jiuguang Wang",
"Deepak Pathak"
] | Conference | Poster | 2409.20568 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://continual-mobile-manip.github.io/ |
|
null | https://openreview.net/forum?id=3wBqoPfoeJ | @inproceedings{
lin2024twisting,
title={Twisting Lids (Off) with Two Hands},
author={Toru Lin and Zhao-Heng Yin and Haozhi Qi and Pieter Abbeel and Jitendra Malik},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=3wBqoPfoeJ}
} | Manipulating objects with two multi-fingered hands has been a long-standing challenge in robotics, due to the contact-rich nature of many manipulation tasks and the complexity inherent in coordinating a high-dimensional bimanual system. In this work, we share novel insights into physical modeling, real-time perception, and reward design that enable policies trained in simulation using deep reinforcement learning (RL) to be effectively and efficiently transferred to the real world. Specifically, we consider the problem of twisting lids of various bottle-like objects with two hands, demonstrating policies with generalization capabilities across a diverse set of unseen objects as well as dynamic and dexterous behaviors. To the best of our knowledge, this is the first sim-to-real RL system that enables such capabilities on bimanual multi-fingered hands. | Twisting Lids Off with Two Hands | [
"Toru Lin",
"Zhao-Heng Yin",
"Haozhi Qi",
"Pieter Abbeel",
"Jitendra Malik"
] | Conference | Poster | 2403.02338 | [
""
] | https://huggingface.co/papers/2403.02338 | 2 | 5 | 1 | 5 | [] | [] | [] | [] | [] | [] | 1 | https://toruowo.github.io/bimanual-twist/ |
null | https://openreview.net/forum?id=3jNEz3kUSl | @inproceedings{
gyenes2024pointpatchrl,
title={PointPatch{RL} - Masked Reconstruction Improves Reinforcement Learning on Point Clouds},
author={Balazs Gyenes and Nikolai Franke and Philipp Becker and Gerhard Neumann},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=3jNEz3kUSl}
} | Perceiving the environment via cameras is crucial for Reinforcement Learning (RL) in robotics. While images are a convenient form of representation, they often complicate extracting important geometric details, especially with varying geometries or deformable objects. In contrast, point clouds naturally represent this geometry and easily integrate color and positional data from multiple camera views. However, while point-cloud processing with deep learning has seen many recent successes, RL on point clouds is under-researched, with only the simplest encoder architecture considered in the literature. We introduce PointPatchRL (PPRL), a method for RL on point clouds that builds on the common paradigm of dividing point clouds into overlapping patches, tokenizing them, and processing
the tokens with transformers. PPRL provides significant improvements compared with other point-cloud processing architectures previously used for RL. We then complement PPRL with masked reconstruction for representation learning and show that our method outperforms strong model-free and model-based baselines on image observations in complex manipulation tasks containing deformable objects and variations in target object geometry. | PointPatchRL - Masked Reconstruction Improves Reinforcement Learning on Point Clouds | [
"Balazs Gyenes",
"Nikolai Franke",
"Philipp Becker",
"Gerhard Neumann"
] | Conference | Poster | [
"https://github.com/balazsgyenes/pprl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://alrhub.github.io/pprl-website |
||
null | https://openreview.net/forum?id=3i7j8ZPnbm | @inproceedings{
ha2024umionlegs,
title={{UMI}-on-Legs: Making Manipulation Policies Mobile with a Manipulation-Centric Whole-body Controller},
author={Huy Ha and Yihuai Gao and Zipeng Fu and Jie Tan and Shuran Song},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=3i7j8ZPnbm}
} | We introduce UMI-on-Legs, a new framework that combines real-world and simulation data for quadruped manipulation systems. We scale task-centric data collection in the real world using a handheld gripper (UMI), providing a cheap way to demonstrate task-relevant manipulation skills without a robot. Simultaneously, we scale robot-centric data in simulation by training a whole-body controller. The interface between these two policies are end-effector trajectories in the task-frame, which are inferred by the manipulation policy and passed to the whole-body controller for tracking. We evaluate UMI-on-Legs on prehensile, non-prehensile, and dynamic manipulation tasks, and report over 70% success rate for all tasks. Lastly, we also demonstrate the zero-shot cross-embodiment deployment of a pre-trained manipulation policy checkpoint from a prior work, originally intended for a fixed-base robot arm, on our quadruped system. We believe this framework provides a scalable path towards learning expressive manipulation skills on dynamic robot embodiments. | UMI-on-Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers | [
"Huy Ha",
"Yihuai Gao",
"Zipeng Fu",
"Jie Tan",
"Shuran Song"
] | Conference | Poster | [
"https://github.com/real-stanford/umi-on-legs"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://umi-on-legs.github.io/ |
||
null | https://openreview.net/forum?id=3bcujpPikC | @inproceedings{
chen2024frea,
title={{FREA}: Feasibility-Guided Generation of Safety-Critical Scenarios with Reasonable Adversariality},
author={Keyu Chen and Yuheng Lei and Hao Cheng and Haoran Wu and Wenchao Sun and Sifa Zheng},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=3bcujpPikC}
} | Generating safety-critical scenarios, which are essential yet difficult to collect at scale, offers an effective method to evaluate the robustness of autonomous vehicles (AVs). Existing methods focus on optimizing adversariality while preserving the naturalness of scenarios, aiming to achieve a balance through data-driven approaches. However, without an appropriate upper bound for adversariality, the scenarios might exhibit excessive adversariality, potentially leading to unavoidable collisions. In this paper, we introduce FREA, a novel safety-critical scenarios generation method that incorporates the Largest Feasible Region (LFR) of AV as guidance to ensure the reasonableness of the adversarial scenarios. Concretely, FREA initially pre-calculates the LFR of AV from offline datasets. Subsequently, it learns a reasonable adversarial policy that controls critical background vehicles (CBVs) in the scene to generate adversarial yet AV-feasible scenarios by maximizing a novel feasibility-dependent objective function. Extensive experiments illustrate that FREA can effectively generate safety-critical scenarios, yielding considerable near-miss events while ensuring AV's feasibility. Generalization analysis also confirms the robustness of FREA in AV testing across various surrogate AV methods and traffic environments. | FREA: Feasibility-Guided Generation of Safety-Critical Scenarios with Reasonable Adversariality | [
"Keyu Chen",
"Yuheng Lei",
"Hao Cheng",
"Haoran Wu",
"Wenchao Sun",
"Sifa Zheng"
] | Conference | Oral | 2406.02983 | [
"https://github.com/CurryChen77/FREA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://currychen77.github.io/FREA/ |
|
null | https://openreview.net/forum?id=3ZAgXBRvla | @inproceedings{
li2024flowbothd,
title={FlowBot{HD}: History-Aware Diffuser Handling Ambiguities in Articulated Objects Manipulation},
author={Yishu Li and Wen Hui Leng and Yiming Fang and Ben Eisner and David Held},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=3ZAgXBRvla}
} | We introduce a novel approach to manipulate articulated objects with ambiguities, such as opening a door, in which multi-modality and occlusions create ambiguities about the opening side and direction. Multi-modality occurs when the method to open a fully closed door (push, pull, slide) is uncertain, or the side from which it should be opened is uncertain. Occlusions further obscure the door’s shape from certain angles, creating further ambiguities during the occlusion. To tackle these challenges, we propose a history-aware diffusion network that models the multi-modal distribution of the articulated object and uses history to disambiguate actions and make stable predictions under occlusions. Experiments and analysis demonstrate the state-of-the-art performance of our method, and in particular improvements on ambiguity-caused failure modes. Our project website is available at https://flowbothd.github.io/. | FlowBotHD: History-Aware Diffuser Handling Ambiguities in Articulated Objects Manipulation | [
"Yishu Li",
"Wen Hui Leng",
"Yiming Fang",
"Ben Eisner",
"David Held"
] | Conference | Poster | 2410.07078 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://flowbothd.github.io/ |
|
null | https://openreview.net/forum?id=3NI5SxsJqf | @inproceedings{
zhao2024accelerating,
title={Accelerating Visual Sparse-Reward Learning with Latent Nearest-Demonstration-Guided Explorations},
author={Ruihan Zhao and ufuk topcu and Sandeep P. Chinchali and Mariano Phielipp},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=3NI5SxsJqf}
} | Recent progress in deep reinforcement learning (RL) and computer vision enables artificial agents to solve complex tasks, including locomotion, manipulation, and video games from high-dimensional pixel observations. However, RL usually relies on domain-specific reward functions for sufficient learning signals, requiring expert knowledge. While vision-based agents could learn skills from only sparse rewards, exploration challenges arise. We present Latent Nearest-demonstration-guided Exploration (LaNE), a novel and efficient method to solve sparse-reward robot manipulation tasks from image observations and a few demonstrations. First, LaNE builds on the pre-trained DINOv2 feature extractor to learn an embedding space for forward prediction. Next, it rewards the agent for exploring near the demos, quantified by quadratic control costs in the embedding space. Finally, LaNE optimizes the policy for the augmented rewards with RL. Experiments demonstrate that our method achieves state-of-the-art sample efficiency in Robosuite simulation and enables under-an-hour RL training from scratch on a Franka Panda robot, using only a few demonstrations. | Accelerating Visual Sparse-Reward Learning with Latent Nearest-Demonstration-Guided Explorations | [
"Ruihan Zhao",
"ufuk topcu",
"Sandeep P. Chinchali",
"Mariano Phielipp"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
null | https://openreview.net/forum?id=2sg4PY1W9d | @inproceedings{
baimukashev2024learning,
title={Learning Interpretable Reward Models via Unsupervised Feature Selection},
author={Daulet Baimukashev and Gokhan Alcan and Ville Kyrki and Kevin Sebastian Luck},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=2sg4PY1W9d}
} | In complex real-world tasks such as robotic manipulation and autonomous driving, collecting expert demonstrations is often more straightforward than specifying precise learning objectives and task descriptions. Learning from expert data can be achieved through behavioral cloning or by learning a reward function, i.e., inverse reinforcement learning. The latter allows for training with additional data outside the training distribution, guided by the inferred reward function. We propose a novel approach to construct compact and interpretable reward models from automatically selected state features. These inferred rewards have an explicit form and enable the learning of policies that closely match expert behavior by training standard reinforcement learning algorithms from scratch. We validate our method's performance in various robotic environments with continuous and high-dimensional state spaces. | Learning Transparent Reward Models via Unsupervised Feature Selection | [
"Daulet Baimukashev",
"Gokhan Alcan",
"Kevin Sebastian Luck",
"Ville Kyrki"
] | Conference | Poster | 2410.18608 | [
"https://github.com/baimukashev/reward-learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://sites.google.com/view/transparent-reward |
|
null | https://openreview.net/forum?id=2SYFDG4WRA | @inproceedings{
duan2024manipulateanything,
title={Manipulate-Anything: Automating Real-World Robots using Vision-Language Models},
author={Jiafei Duan and Wentao Yuan and Wilbert Pumacay and Yi Ru Wang and Kiana Ehsani and Dieter Fox and Ranjay Krishna},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=2SYFDG4WRA}
} | Large-scale endeavors like RT-1 and widespread community efforts such as Open-X-Embodiment have contributed to growing the scale of robot demonstration data. However, there is still an opportunity to improve the quality, quantity, and diversity of robot demonstration data. Although vision-language models have been shown to automatically generate demonstration data, their utility has been limited to environments with privileged state information, they require hand-designed skills, and are limited to interactions with few object instances. We propose Manipulate-Anything, a scalable automated generation method for real-world robotic manipulation.
Unlike prior work, our method can operate in real-world environments without any privileged state information, hand-designed skills, and can manipulate any static object. We evaluate our method using two setups. First, Manipulate-Anything successfully generates trajectories for all 5 real-world and 12 simulation tasks, significantly outperforming existing methods like VoxPoser.
Second, Manipulate-Anything's demonstrations can train more robust behavior cloning policies than training with human demonstrations, or from data generated by VoxPoser and Code-As-Policies.
We believe Manipulate-Anything can be a scalable method both for generating data for robotics and for solving novel tasks in a zero-shot setting. Project page: manipulate-anything.github.io. | Manipulate-Anything: Automating Real-World Robots using Vision-Language Models | [
"Jiafei Duan",
"Wentao Yuan",
"Wilbert Pumacay",
"Yi Ru Wang",
"Kiana Ehsani",
"Dieter Fox",
"Ranjay Krishna"
] | Conference | Poster | 2406.18915 | [
"https://github.com/Robot-MA/manipulate-anything/tree/main"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://robot-ma.github.io/ |
|
null | https://openreview.net/forum?id=2LLu3gavF1 | @inproceedings{
kerr2024robot,
title={Robot See Robot Do: Part-Centric Feature Fields for Visual Imitation of Articulated Objects},
author={Justin Kerr and Chung Min Kim and Mingxuan Wu and Brent Yi and Qianqian Wang and Angjoo Kanazawa and Ken Goldberg},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=2LLu3gavF1}
} | Humans can learn to manipulate new objects by simply watching others; providing robots with the ability to learn from such demonstrations would enable a natural interface specifying new behaviors. This work develops Robot See Robot Do (RSRD), a method for imitating articulated object manipulation from a single monocular RGB human demonstration given a single static multi-view object scan. We first propose 4D Differentiable Part Models (4D-DPM), a method for recovering 3D part motion from a monocular video with differentiable rendering. This analysis-by-synthesis approach uses part-centric feature fields in an iterative optimization which enables the use of geometric regularizers to recover 3D motions from only a single video. Given this 4D reconstruction, the robot replicates object trajectories by planning bimanual arm motions that induce the demonstrated object part motion. By representing demonstrations as part-centric trajectories, RSRD focuses on replicating the demonstration’s intended behavior while considering the robot’s own morphological limits, rather than attempting to reproduce the hand’s motion. We evaluate 4D-DPM’s 3D tracking accuracy on ground truth annotated 3D part trajectories and RSRD’s physical execution performance on 9 objects across 10 trials each on a bimanual YuMi robot. Each phase of RSRD achieves an average of 87% success rate, for a total end-to-end success rate of 60% across 90 trials. Notably, this is accomplished using only feature fields distilled from large pretrained vision models — without any task-specific training, fine-tuning, dataset collection, or annotation. Project page: https://robot-see-robot-do.github.io | Robot See Robot Do: Imitating Articulated Object Manipulation with Monocular 4D Reconstruction | [
"Justin Kerr",
"Chung Min Kim",
"Mingxuan Wu",
"Brent Yi",
"Qianqian Wang",
"Ken Goldberg",
"Angjoo Kanazawa"
] | Conference | Oral | 2409.18121 | [
"https://github.com/kerrj/rsrd"
] | https://huggingface.co/papers/2409.18121 | 5 | 7 | 2 | 7 | [] | [] | [] | [] | [] | [] | 1 | https://robot-see-robot-do.github.io/ |
null | https://openreview.net/forum?id=2CScZqkUPZ | @inproceedings{
song2024genetic,
title={Genetic Algorithm for Curriculum Design in Multi-Agent Reinforcement Learning},
author={Yeeho Song and Jeff Schneider},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=2CScZqkUPZ}
} | As the deployment of autonomous agents in real-world scenarios grows, so does the interest in their application to competitive environments with other robots. Self-play in Reinforcement Learning (RL) enables agents to develop competitive strategies. However, the complexity arising from multi-agent interactions and the tendency for RL agents to disrupt competitors' training introduce instability and a risk of overfitting. Traditional methods depend on costly Nash equilibrium approximations or random exploration for training-scenario optimization, which can be inefficient in the large search spaces often prevalent in multi-agent problems. However, related works in single-agent setups show that genetic algorithms perform better in large scenario spaces. Therefore, we propose using genetic algorithms to adaptively adjust environment parameters and opponent policies in a multi-agent context to find and synthesize coherent scenarios efficiently. We also introduce GenOpt Agent—a genetically optimized, open-loop agent executing scheduled actions. The open-loop aspect of GenOpt prevents RL agents from winning through adversarial perturbations, thereby fostering generalizable strategies. Also, GenOpt is genetically optimized without expert supervision, removing the need for costly expert input to provide meaningful opponents at the start of training. Our empirical studies indicate that this method surpasses several established baselines in two-player competitive settings with continuous action spaces, validating its effectiveness and stability in training. | Genetic Algorithm for Curriculum Design in Multi-Agent Reinforcement Learning | [
"Yeeho Song",
"Jeff Schneider"
] | Conference | Poster | [
"https://github.com/yeehos/GEnetic-Multiagent-Selfplay"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
null | https://openreview.net/forum?id=2AZfKk9tRI | @inproceedings{
fu2024multiagent,
title={Multi-agent Reinforcement Learning with Hybrid Action Space for Free Gait Motion Planning of Hexapod Robots},
author={Huiqiao Fu and Kaiqiang Tang and Peng Li and Guizhou Deng and Chunlin Chen},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=2AZfKk9tRI}
} | Legged robots are able to overcome challenging terrains through diverse gaits formed by contact sequences. However, environments characterized by discrete footholds present significant challenges. In this paper, we tackle the problem of free gait motion planning for hexapod robots walking in randomly generated plum blossom pile environments. Specifically, we first address the complexity of multi-leg coordination in discrete environments by treating each leg of the hexapod robot as an individual agent. Then, we propose the Hybrid action space Multi-Agent Soft Actor Critic (Hybrid-MASAC) algorithm capable of handling both discrete and continuous actions. Finally, we present an integrated free gait motion planning method based on Hybrid-MASAC, streamlining gait, Center of Mass (COM), and foothold sequences planning into a single model. Comparative and ablation experiments in both of the simulated and real plum blossom pile environments demonstrate the feasibility and efficiency of our method. | Multi-agent Reinforcement Learning with Hybrid Action Space for Free Gait Motion Planning of Hexapod Robots | [
"Huiqiao Fu",
"Kaiqiang Tang",
"Peng Li",
"Guizhou Deng",
"Chunlin Chen"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
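The Hybrid-MASAC entry above hinges on a policy that emits discrete and continuous actions jointly (e.g., a per-leg contact decision plus a continuous foothold adjustment). The sketch below shows one common way to build such a hybrid action head; the layer sizes and action semantics are assumptions for illustration, not the paper's architecture.

```python
# Illustrative policy head with a hybrid action space: one categorical
# choice plus a continuous vector, in the spirit of the record above.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class HybridPolicyHead(nn.Module):
    def __init__(self, obs_dim, n_discrete, cont_dim, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.logits = nn.Linear(hidden, n_discrete)          # e.g., gait-phase choice
        self.mu = nn.Linear(hidden, cont_dim)                # e.g., foothold offset
        self.log_std = nn.Parameter(torch.zeros(cont_dim))

    def forward(self, obs):
        h = self.trunk(obs)
        disc = Categorical(logits=self.logits(h))
        cont = Normal(self.mu(h), self.log_std.exp())
        a_d = disc.sample()
        a_c = cont.sample()
        # Joint log-prob: discrete term plus summed continuous terms.
        logp = disc.log_prob(a_d) + cont.log_prob(a_c).sum(-1)
        return a_d, a_c, logp

head = HybridPolicyHead(obs_dim=10, n_discrete=3, cont_dim=2)
a_d, a_c, logp = head(torch.randn(4, 10))
print(a_d.shape, a_c.shape, logp.shape)  # [4], [4, 2], [4]
```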
null | https://openreview.net/forum?id=1tCteNSbFH | @inproceedings{
yang2024trajectory,
title={Trajectory Improvement and Reward Learning from Comparative Language Feedback},
author={Zhaojing Yang and Miru Jun and Jeremy Tien and Stuart Russell and Anca Dragan and Erdem Biyik},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=1tCteNSbFH}
} | Learning from human feedback has gained traction in fields like robotics and natural language processing in recent years. While prior works mostly rely on human feedback in the form of comparisons, language is a preferable modality that provides more informative insights into user preferences. In this work, we aim to incorporate comparative language feedback to iteratively improve robot trajectories and to learn reward functions that encode human preferences. To achieve this goal, we learn a shared latent space that integrates trajectory data and language feedback, and subsequently leverage the learned latent space to improve trajectories and learn human preferences. To the best of our knowledge, we are the first to incorporate comparative language feedback into reward learning. Our simulation experiments demonstrate the effectiveness of the learned latent space and the success of our learning algorithms. We also conduct human subject studies that show our reward learning algorithm achieves a 23.9% higher subjective score on average and is 11.3% more time-efficient compared to preference-based reward learning, underscoring the superior performance of our method. Our website is at https://liralab.usc.edu/comparative-language-feedback/. | Trajectory Improvement and Reward Learning from Comparative Language Feedback | [
"Zhaojing Yang",
"Miru Jun",
"Jeremy Tien",
"Stuart Russell",
"Anca Dragan",
"Erdem Biyik"
] | Conference | Poster | 2410.06401 | [
"https://github.com/USC-Lira/language-preference-learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://liralab.usc.edu/comparative-language-feedback/ |
|
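The abstract above learns rewards from comparative language feedback via a shared trajectory-language latent space. The sketch below shows only the underlying pairwise (Bradley-Terry) preference objective, with the language comparison reduced to a binary "A preferred over B" label; the latent space itself is not reproduced, and all data and hyperparameters are synthetic assumptions.

```python
# Pairwise (Bradley-Terry) reward learning sketch. Comparative feedback is
# reduced to a binary preference label; everything here is synthetic.
import torch
import torch.nn as nn

traj_dim = 32
reward_net = nn.Sequential(nn.Linear(traj_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

# Synthetic data: feature vectors for trajectory pairs and preference labels.
traj_a = torch.randn(256, traj_dim)
traj_b = torch.randn(256, traj_dim)
pref_a = (traj_a.sum(-1) > traj_b.sum(-1)).float()  # 1 if A is "preferred"

for _ in range(200):
    r_a = reward_net(traj_a).squeeze(-1)
    r_b = reward_net(traj_b).squeeze(-1)
    # P(A preferred) = sigmoid(r_A - r_B); cross-entropy against the label.
    loss = nn.functional.binary_cross_entropy_with_logits(r_a - r_b, pref_a)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final preference loss:", float(loss))
```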
null | https://openreview.net/forum?id=1jc2zA5Z6J | @inproceedings{
lum2024get,
title={Get a Grip: Multi-Finger Grasp Evaluation at Scale Enables Robust Sim-to-Real Transfer},
author={Tyler Ga Wei Lum and Albert H. Li and Preston Culbertson and Krishnan Srinivasan and Aaron Ames and Mac Schwager and Jeannette Bohg},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=1jc2zA5Z6J}
} | This work explores conditions under which multi-finger grasping algorithms can attain robust sim-to-real transfer. While numerous large datasets facilitate learning *generative* models for multi-finger grasping at scale, reliable real-world dexterous grasping remains challenging, with most methods degrading when deployed on hardware. An alternate strategy is to use *discriminative* grasp evaluation models for grasp selection and refinement, conditioned on real-world sensor measurements. This paradigm has produced state-of-the-art results for vision-based parallel-jaw grasping, but remains unproven in the multi-finger setting. In this work, we find that existing datasets and methods have been insufficient for training discriminative models for multi-finger grasping. To train grasp evaluators at scale, datasets must provide on the order of millions of grasps, including both positive *and negative examples*, with corresponding visual data resembling measurements at inference time. To that end, we release a new, open-source dataset of 3.5M grasps on 4.3K objects annotated with RGB images, point clouds, and trained NeRFs. Leveraging this dataset, we train vision-based grasp evaluators that outperform both analytic and generative modeling-based baselines on extensive simulated and real-world trials across a diverse range of objects. We show via numerous ablations that the key factor for performance is indeed the evaluator, and that its quality degrades as the dataset shrinks, demonstrating the importance of our new dataset. Project website at: https://sites.google.com/view/get-a-grip-dataset. | Get a Grip: Multi-Finger Grasp Evaluation at Scale Enables Robust Sim-to-Real Transfer | [
"Tyler Ga Wei Lum",
"Albert H. Li",
"Preston Culbertson",
"Krishnan Srinivasan",
"Aaron Ames",
"Mac Schwager",
"Jeannette Bohg"
] | Conference | Poster | [
"https://github.com/tylerlum/get_a_grip"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | https://sites.google.com/view/get-a-grip-dataset |
||
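The core claim above is that a discriminative evaluator, trained on positive and negative grasps, can select reliable grasps at test time. The sketch below illustrates that evaluate-then-rank pattern with a toy feature vector standing in for the paper's visual inputs; the network size and the synthetic success rule are assumptions.

```python
# Discriminative grasp evaluation sketch: train a scorer on labelled grasps,
# then rank candidate grasps at "inference" and pick the best.
import torch
import torch.nn as nn

feat_dim = 16  # stand-in for [grasp pose features | object embedding]
evaluator = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                          nn.Linear(128, 1))  # logit of grasp success
opt = torch.optim.Adam(evaluator.parameters(), lr=1e-3)

# Synthetic labelled grasps (positives and negatives, as the dataset provides).
feats = torch.randn(1024, feat_dim)
labels = (feats[:, 0] > 0).float()  # toy success rule

for _ in range(300):
    logits = evaluator(feats).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Score sampled candidate grasps and execute the highest-scoring one.
candidates = torch.randn(64, feat_dim)
with torch.no_grad():
    scores = torch.sigmoid(evaluator(candidates).squeeze(-1))
best = candidates[scores.argmax()]
print("predicted success of chosen grasp:", float(scores.max()))
```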
null | https://openreview.net/forum?id=1TEZ1hiY5m | @inproceedings{
escontrela2024learning,
title={Learning Robotic Locomotion Affordances and Photorealistic Simulators from Human-Captured Data},
author={Alejandro Escontrela and Justin Kerr and Kyle Stachowicz and Pieter Abbeel},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=1TEZ1hiY5m}
} | Learning reliable affordance models which satisfy human preferences is often hindered by a lack of high-quality training data. Similarly, learning visuomotor policies in simulation can be challenging due to the high cost of photo-realistic rendering. We present PAWS: a comprehensive robot learning framework that uses a novel portable data capture rig and processing pipeline to collect long-horizon trajectories that include camera poses, foot poses, terrain meshes, and 3D radiance fields. We also contribute PAWS-Data: an extensive dataset gathered with PAWS containing over 10 hours of indoor and outdoor trajectories spanning a variety of scenes. With PAWS-Data we leverage radiance fields' photo-realistic rendering to generate tens of thousands of viewpoint-augmented images, then produce pixel affordance labels by identifying semantically similar regions to those traversed by the user. On this data we finetune a navigation affordance model from a pretrained backbone, and perform detailed ablations. Additionally, we open-source PAWS-Sim, a high-speed photo-realistic simulator which integrates PAWS-Data with IsaacSim, enabling research on visuomotor policy learning. We evaluate the utility of the affordance model on a quadrupedal robot, which plans through affordances to follow pathways and sidewalks, and avoid human collisions. Project resources are available on the [website](https://pawslocomotion.com). | Learning Robotic Locomotion Affordances and Photorealistic Simulators from Human-Captured Data | [
"Alejandro Escontrela",
"Justin Kerr",
"Kyle Stachowicz",
"Pieter Abbeel"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
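The labelling step described above marks pixels as traversable when their features resemble the regions the user actually walked over. The sketch below shows that similarity-based labelling with random stand-in features; in the paper the features come from rendered radiance-field views and a pretrained backbone, and the threshold here is an assumed value.

```python
# Similarity-based pixel affordance labelling sketch with stand-in features.
import numpy as np

rng = np.random.default_rng(0)
H, W, D = 64, 64, 8
features = rng.normal(size=(H, W, D))           # per-pixel feature map
traversed = np.zeros((H, W), dtype=bool)
traversed[40:50, 10:30] = True                   # pixels under the user's path

# Prototype = mean feature of traversed pixels; label by cosine similarity.
proto = features[traversed].mean(axis=0)
proto /= np.linalg.norm(proto)
norms = np.linalg.norm(features, axis=-1, keepdims=True)
cos_sim = (features / norms) @ proto
affordance_label = cos_sim > 0.5                 # assumed threshold

print("fraction of pixels labelled traversable:", affordance_label.mean())
```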
null | https://openreview.net/forum?id=1IzW0aniyg | @inproceedings{
chen2024escirl,
title={{ESCIRL}: Evolving Self-Contrastive {IRL} for Trajectory Prediction in Autonomous Driving},
author={Zhaorun Chen and Siyue Wang and Zhuokai Zhao and Chaoli Mao and Yiyang Zhou and Jiayu He and Albert Sibo Hu},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=1IzW0aniyg}
} | While deep neural networks (DNN) and inverse reinforcement learning (IRL) have both been commonly used in autonomous driving to predict trajectories through learning from expert demonstrations, DNN-based methods suffer from data scarcity, while IRL-based approaches often struggle with generalizability, making both hard to apply to new driving scenarios. To address these issues, we introduce EscIRL, a novel decoupled bi-level training framework that iteratively learns robust reward models from only a few mixed-scenario demonstrations. At the inner level, EscIRL introduces a self-contrastive IRL module that learns a spectrum of specialized reward functions by contrasting demonstrations across different scenarios. At the outer level, EscIRL employs an evolving loop that iteratively refines the contrastive sets, ensuring global convergence. Experiments on two multi-scenario datasets, CitySim and INTERACTION, demonstrate the effectiveness of EscIRL, outperforming state-of-the-art DNN and IRL-based methods by 41.3% on average. Notably, we show that EscIRL achieves superior generalizability compared to DNN-based approaches while requiring only a small fraction of the data, effectively addressing data scarcity constraints. All code and data are available at https://github.com/SiyueWang-CiDi/EscIRL. | EscIRL: Evolving Self-Contrastive IRL for Trajectory Prediction in Autonomous Driving | [
"Siyue Wang",
"Zhaorun Chen",
"Zhuokai Zhao",
"Chaoli Mao",
"Yiyang Zhou",
"Jiayu He",
"Albert Sibo Hu"
] | Conference | Poster | [
"https://github.com/SiyueWang-CiDi/EscIRL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
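The inner level described above contrasts demonstrations across scenarios to obtain scenario-specialized rewards. The sketch below shows a simple logistic contrastive objective of that flavour on synthetic features; the bi-level evolving loop that refines the contrastive sets is omitted, and the data is entirely made up.

```python
# Self-contrastive reward sketch: each scenario's reward is trained to score
# its own demonstrations above those of the other scenarios.
import torch
import torch.nn as nn

feat_dim, n_scenarios = 8, 3
demos = [torch.randn(100, feat_dim) + i for i in range(n_scenarios)]  # per-scenario features

reward_nets = [nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))
               for _ in range(n_scenarios)]

for i, net in enumerate(reward_nets):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    pos = demos[i]
    neg = torch.cat([demos[j] for j in range(n_scenarios) if j != i])
    for _ in range(100):
        r_pos = net(pos).squeeze(-1)
        r_neg = net(neg).squeeze(-1)
        # Push own-scenario demos above zero, contrasting demos below.
        loss = (nn.functional.softplus(-r_pos).mean()
                + nn.functional.softplus(r_neg).mean())
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"scenario {i}: own demos {r_pos.mean().item():.2f}, "
          f"others {r_neg.mean().item():.2f}")
```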
null | https://openreview.net/forum?id=0gDbaEtVrd | @inproceedings{
djeumou2024one,
title={One Model to Drift Them All: Physics-Informed Conditional Diffusion Model for Driving at the Limits},
author={Franck Djeumou and Thomas Jonathan Lew and NAN DING and Michael Thompson and Makoto Suminaka and Marcus Greiff and John Subosits},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=0gDbaEtVrd}
} | Enabling autonomous vehicles to reliably operate at the limits of handling—where tire forces are saturated—would improve their safety, particularly in scenarios like emergency obstacle avoidance or adverse weather conditions. However, unlocking this capability is challenging due to the task's dynamic nature and the high sensitivity to uncertain multimodal properties of the road, vehicle, and their dynamic interactions. Motivated by these challenges, we propose a framework to learn a conditional diffusion model for high-performance vehicle control using an unlabelled multimodal trajectory dataset. We design the diffusion model to capture the distribution of parameters of a physics-informed data-driven dynamics model. By conditioning the generation process on online measurements, we integrate the diffusion model into a real-time model predictive control framework for driving at the limits, and show that it can adapt on the fly to a given vehicle and environment. Extensive experiments on a Toyota Supra and a Lexus LC 500 show that a single diffusion model enables reliable autonomous drifting on both vehicles when operating with different tires in varying road conditions. The model matches the performance of task-specific expert models while outperforming them in generalization to unseen conditions, paving the way towards a general, reliable method for autonomous driving at the limits of handling. | One Model to Drift Them All: Physics-Informed Conditional Diffusion Model for Driving at the Limits | [
"Franck Djeumou",
"Thomas Jonathan Lew",
"NAN DING",
"Michael Thompson",
"Makoto Suminaka",
"Marcus Greiff",
"John Subosits"
] | Conference | Oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | |||
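The control loop implied by the abstract is: condition a generative model on recent measurements, sample parameters of a physics-informed dynamics model, and plan with those parameters in real time. The sketch below keeps only that loop structure, replacing the conditional diffusion model with a Gaussian stub and the vehicle model with a toy linear system; both substitutions are assumptions purely for illustration.

```python
# Sample-dynamics-then-plan loop sketch: a stub sampler stands in for the
# conditional diffusion model, and a toy 2-state system stands in for the
# physics-informed vehicle model.
import numpy as np

rng = np.random.default_rng(0)
dt, horizon, n_rollouts = 0.05, 20, 256

def sample_params(history):
    """Stub for the conditional diffusion model: returns dynamics params."""
    return 1.0 + 0.1 * rng.standard_normal(2)    # e.g., stiffness- and drag-like terms

def step(x, u, theta):
    # Toy 2-state dynamics standing in for the physics-informed model.
    return x + dt * np.array([x[1], -theta[0] * x[0] - theta[1] * x[1] + u])

def mpc_action(x0, theta, target):
    """Random-shooting MPC under the sampled parameters."""
    u_seqs = rng.uniform(-1, 1, size=(n_rollouts, horizon))
    costs = np.zeros(n_rollouts)
    for i, u_seq in enumerate(u_seqs):
        x = x0.copy()
        for u in u_seq:
            x = step(x, u, theta)
            costs[i] += (x[0] - target) ** 2
    return u_seqs[costs.argmin(), 0]             # apply first action only

x, history = np.array([1.0, 0.0]), []
for t in range(50):
    theta = sample_params(history)               # re-condition every step
    u = mpc_action(x, theta, target=0.0)
    x = step(x, u, theta)
    history.append((x.copy(), u))
print("final state:", np.round(x, 3))
```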
null | https://openreview.net/forum?id=0M7JiV1GFN | @inproceedings{
gao2024provably,
title={Provably Safe Online Multi-Agent Navigation in Unknown Environments},
author={Zhan Gao and Guang Yang and Jasmine Bayrooti and Amanda Prorok},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=0M7JiV1GFN}
} | Control Barrier Functions (CBFs) provide safety guarantees for multi-agent navigation. However, traditional approaches require full knowledge of the environment (e.g., obstacle positions and shapes) to formulate CBFs and hence, are not applicable in unknown environments. This paper overcomes this issue by proposing an Online Exploration-based Control Lyapunov Barrier Function (OE-CLBF) controller. It estimates the unknown environment by learning its corresponding CBF with a Support Vector Machine (SVM) in an online manner, using local neighborhood information, and leverages the latter to generate actions for safe navigation. To reduce the computation incurred by the online SVM training, we use an Imitation Learning (IL) framework to predict the importance of neighboring agents with Graph Attention Networks (GATs), and train the SVM only with information received from neighbors of high `value'. The OE-CLBF allows for decentralized deployment, and importantly, provides provable safety guarantees that we derive in this paper. Experiments corroborate theoretical findings and demonstrate superior performance w.r.t. state-of-the-art baselines in a variety of unknown environments. | Provably Safe Online Multi-Agent Navigation in Unknown Environments | [
"Zhan Gao",
"Guang Yang",
"Jasmine Bayrooti",
"Amanda Prorok"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
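The method above fits an SVM online so that its decision function plays the role of a barrier for the unknown environment. The sketch below fits such an SVM to locally sensed points and uses it as a simple safety filter on a go-to-goal action; the paper's CLBF controller, GAT-based neighbor selection, and formal guarantees are not reproduced, and the obstacle, kernel settings, and thresholds are toy assumptions.

```python
# Learned-barrier safety filter sketch: the SVM decision function is treated
# as h(x), with h(x) > 0 meaning free space.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Local sensing: points near a circular obstacle (unsafe) and free points.
obstacle_pts = rng.normal([2.0, 0.0], 0.3, size=(40, 2))
free_pts = rng.uniform(-1, 4, size=(120, 2))
free_pts = free_pts[np.linalg.norm(free_pts - [2.0, 0.0], axis=1) > 0.8]
X = np.vstack([free_pts, obstacle_pts])
y = np.hstack([np.ones(len(free_pts)), -np.ones(len(obstacle_pts))])

svm = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y)
h = lambda p: svm.decision_function(p.reshape(1, -1))[0]   # learned barrier

def safe_step(pos, goal, step=0.15, n_candidates=32, margin=0.0):
    """Move toward the goal, but only along directions the barrier allows."""
    nominal = goal - pos
    nominal = step * nominal / (np.linalg.norm(nominal) + 1e-8)
    angles = rng.uniform(-np.pi, np.pi, size=n_candidates)
    candidates = [nominal] + [step * np.array([np.cos(a), np.sin(a)]) for a in angles]
    # Among safe candidates, pick the one closest to the nominal action.
    safe = [c for c in candidates if h(pos + c) > margin]
    return min(safe, key=lambda c: np.linalg.norm(c - nominal)) if safe else np.zeros(2)

pos, goal = np.array([0.0, 0.0]), np.array([4.0, 0.0])
for _ in range(60):
    pos = pos + safe_step(pos, goal)
print("final position:", np.round(pos, 2), "barrier value:", round(h(pos), 2))
```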