arxiv:2406.19741

ROS-LLM: A ROS framework for embodied AI with task feedback and structured reasoning

Published on Jun 28 · Submitted by hba123 on Jul 2 · #2 Paper of the day
Authors:

Abstract

We present a framework for intuitive robot programming by non-experts, leveraging natural language prompts and contextual information from the Robot Operating System (ROS). Our system integrates large language models (LLMs), enabling non-experts to articulate task requirements to the system through a chat interface. Key features of the framework include: integration of ROS with an AI agent connected to a plethora of open-source and commercial LLMs, automatic extraction of a behavior from the LLM output and execution of ROS actions/services, support for three behavior modes (sequence, behavior tree, state machine), imitation learning for adding new robot actions to the library of possible actions, and LLM reflection via human and environment feedback. Extensive experiments validate the framework, showcasing robustness, scalability, and versatility in diverse scenarios, including long-horizon tasks, tabletop rearrangements, and remote supervisory control. To facilitate the adoption of our framework and support the reproduction of our results, we have made our code open-source. You can access it at: https://github.com/huawei-noah/HEBO/tree/master/ROSLLM.
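As a rough, hypothetical sketch of the pipeline the abstract describes (not the authors' implementation; see the linked repository for that), the loop below asks an LLM for a behavior in the simplest of the three modes, a plain action sequence, and executes it through ROS. The action names, prompt, and model identifier are all placeholder assumptions.

```python
# Hypothetical sketch of the loop the abstract describes, not the authors'
# actual API (see the linked repository for that). Assumes ROS 1 (rospy)
# and any OpenAI-compatible chat endpoint, which can serve open-source LLMs.
import json

import rospy
from openai import OpenAI

# Placeholder action library: behavior names mapped to callables that would
# trigger ROS actions/services. In ROS-LLM, new skills can be added to such
# a library via imitation learning.
ACTION_LIBRARY = {
    "pick": lambda obj: rospy.loginfo("pick(%s)", obj),
    "place": lambda obj: rospy.loginfo("place(%s)", obj),
}

SYSTEM_PROMPT = (
    "You control a robot. Available actions: pick(object), place(object). "
    'Reply ONLY with JSON: {"sequence": [{"action": "...", "arg": "..."}]}'
)


def plan_and_execute(task: str) -> None:
    """Ask the LLM for a behavior (here: a plain action sequence) and
    execute it step by step via the action library."""
    client = OpenAI()  # point base_url at a local server for open-source LLMs
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    behavior = json.loads(reply.choices[0].message.content)
    for step in behavior["sequence"]:
        ACTION_LIBRARY[step["action"]](step["arg"])


if __name__ == "__main__":
    rospy.init_node("rosllm_sketch")
    plan_and_execute("Put the red cube on the tray.")
```

The other two behavior modes would replace this flat sequence with a behavior tree or a state machine, letting a single extracted plan branch on runtime conditions.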

Community

Paper author · Paper submitter

Grounding your LLM in the real world via ROS-LLM allows you to do the following (all our results are based on OPEN-SOURCE models):

  1. Instruct your robot via natural language using any OPEN-SOURCE LLM.
  2. Solve long-horizon tasks simply from natural-language commands, e.g., "make me coffee" or "make me noodles".
  3. Introduce new skills via tele-operation and imitation learning.
  4. Provide feedback, either from the environment or via natural language, which the model uses as reflection to re-plan (see the sketch after this list).
  5. We have also OPEN-SOURCED our code: https://github.com/huawei-noah/HEBO/tree/rosllm/ROSLLM
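
As a minimal sketch of point 4 above, the loop below feeds execution feedback back into the conversation so the model can reflect and re-plan. All names are illustrative placeholders, not the repository's API; in particular, `execute_plan` is a hypothetical stand-in where a human plays the environment.

```python
# Hedged sketch of the reflection loop in point 4: human or environment
# feedback is appended to the conversation and the LLM is asked to re-plan.
# All names here are illustrative placeholders, not the repo's API.
from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint, incl. open-source servers


def execute_plan(plan: str):
    """Hypothetical stand-in: run the plan on the robot and return feedback
    text on failure, or None on success. Here a human plays the environment."""
    fb = input(f"Proposed plan:\n{plan}\nFeedback (empty = accept): ").strip()
    return fb or None


def reflect_and_replan(task: str, max_rounds: int = 3) -> None:
    messages = [{"role": "user", "content": f"Plan robot actions for: {task}"}]
    for _ in range(max_rounds):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        )
        plan = reply.choices[0].message.content
        feedback = execute_plan(plan)
        if feedback is None:  # success: nothing to correct
            return
        # Reflection: keep the failed plan in context and ask for a revision.
        messages += [
            {"role": "assistant", "content": plan},
            {"role": "user",
             "content": f"Execution feedback: {feedback}. Revise the plan."},
        ]


if __name__ == "__main__":
    reflect_and_replan("Make me coffee.")
```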
