Papers
arxiv:2402.01030

Executable Code Actions Elicit Better LLM Agents

Published on Feb 1

Abstract

Large Language Model (LLM) agents, capable of performing a broad range of actions, such as invoking tools and controlling robots, show great potential in tackling real-world challenges. LLM agents are typically prompted to produce actions by generating JSON or text in a pre-defined format, which is usually limited by constrained action space (e.g., the scope of pre-defined tools) and restricted flexibility (e.g., inability to compose multiple tools). This work proposes to use executable Python code to consolidate LLM agents' actions into a unified action space (CodeAct). Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations through multi-turn interactions. Our extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark shows that CodeAct outperforms widely used alternatives (up to 20% higher success rate). The encouraging performance of CodeAct motivates us to build an open-source LLM agent that interacts with environments by executing interpretable code and collaborates with users using natural language. To this end, we collect an instruction-tuning dataset CodeActInstruct that consists of 7k multi-turn interactions using CodeAct. We show that it can be used with existing data to improve models in agent-oriented tasks without compromising their general capability. CodeActAgent, finetuned from Llama2 and Mistral, is integrated with Python interpreter and uniquely tailored to perform sophisticated tasks (e.g., model training) using existing libraries and autonomously self-debug.
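The multi-turn mechanism the abstract describes (emit a code action, execute it, feed the result or error back as the next observation) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the `llm` callable is a stand-in for a real model API.

```python
# Minimal sketch of a CodeAct-style loop: the model emits Python code,
# an interpreter executes it, and the output (or error) becomes the
# next observation, enabling self-debugging across turns.
import io
import traceback
from contextlib import redirect_stdout


def execute(code: str, env: dict) -> str:
    """Run one code action; return its stdout, or the traceback on failure."""
    buf = io.StringIO()
    try:
        with redirect_stdout(buf):
            exec(code, env)
        return buf.getvalue()
    except Exception:
        return traceback.format_exc()


def agent_loop(llm, task: str, max_turns: int = 5) -> str:
    env: dict = {}  # state persists across turns, so later actions can revise earlier ones
    observation = task
    for _ in range(max_turns):
        code = llm(observation)           # model proposes a code action
        observation = execute(code, env)  # interpreter result fed back as observation
        if "Traceback" not in observation:
            break                         # no error, nothing left to self-debug
    return observation
```

With a stub model that always answers `print(1 + 1)`, `agent_loop(lambda obs: "print(1 + 1)", "compute 1+1")` returns `"2\n"`.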

Community


Thanks to the authors for a great read!

๐— ๐˜† ๐˜€๐˜‚๐—บ๐—บ๐—ฎ๐—ฟ๐˜†:

  • CodeAct lets agents write actions as code (instead of the standard JSON dictionary).
  • Code is simply a better way to express actions than JSON. The authors provide an excellent example:

[Figure from the paper: a JSON tool call compared with the equivalent code action]

This highlights several advantages of using code:

  • Code actions are much more concise than JSON.
    • Need to run 4 parallel streams of 5 consecutive actions? In JSON you would have to generate 20 JSON blobs, each in its own step; in code, it is a single step.
    • The paper shows that code actions require on average 30% fewer steps than JSON, which amounts to an equivalent reduction in generated tokens. Since LLM calls are often the dominant cost of agent systems, your agent runs become roughly 30% cheaper.
  • Code lets the agent re-use tools from common libraries.
  • Better benchmark performance (up to 20% higher success rate), for two reasons:
    • Code is a more intuitive way to express actions.
    • LLMs are extensively exposed to code during training.
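The composition point above can be made concrete with a short sketch. The `search` and `summarize` tools here are hypothetical stand-ins, not tools from the paper; the aim is only to show how one code action replaces a sequence of JSON blobs.

```python
# Hypothetical tools the agent can call; names and behavior are illustrative only.
def search(query: str) -> str:
    return f"results for {query!r}"

def summarize(text: str) -> str:
    return text.upper()

# A JSON-style agent emits ONE blob per step, e.g.:
#   {"tool": "search", "args": {"query": "q1"}}
# so 4 searches followed by 4 summaries cost 8 separate LLM calls.

# A code action composes the same work in a single step:
queries = ["q1", "q2", "q3", "q4"]
summaries = [summarize(search(q)) for q in queries]
print(summaries)
```

The loop and the function composition are exactly the constructs a JSON action format cannot express, which is why the step count (and token cost) drops.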

