---
pipeline_tag: text-generation
library_name: transformers
tags:
- internlm
- agent
- tool
- custom_code
---
# Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models

This page holds the Llama2-7b model trained on the Agent-FLAN dataset. It is fine-tuned on AgentInstruct and ToolBench using the data generation pipeline proposed in the Agent-FLAN paper, and shows strong abilities on various agent tasks and tool utilization.

## ✨ Introduction

[[🤗 HuggingFace](https://huggingface.co/lovesnowbest/Agent-FLAN-7b)]
[[📃 Paper](https://arxiv.org/abs/2403.12881)]
[[🌐 Project Page](https://internlm.github.io/Agent-FLAN/)]

> Open-sourced Large Language Models (LLMs) have achieved great success in various NLP tasks, however, they are still far inferior to API-based models when acting as agents. How to integrate agent ability into general LLMs becomes a crucial and urgent problem. This paper first delivers three key observations: (1) the current agent training corpus is entangled with both formats following and agent reasoning, which significantly shifts from the distribution of its pre-training data; (2) LLMs exhibit different learning speeds on the capabilities required by agent tasks; and (3) current approaches have side-effects when improving agent abilities by introducing hallucinations. Based on the above findings, we propose Agent-FLAN to effectively Fine-tune LANguage models for Agents. Through careful decomposition and redesign of the training corpus, Agent-FLAN enables Llama2-7B to outperform prior best works by 3.5% across various agent evaluation datasets. With comprehensively constructed negative samples, Agent-FLAN greatly alleviates the hallucination issues based on our established evaluation benchmark. Besides, it consistently improves the agent capability of LLMs when scaling model sizes while slightly enhancing the general capability of LLMs.

## 🤗 Agent-FLAN Model

Agent-FLAN is produced by mixed training on the AgentInstruct, ToolBench, and ShareGPT datasets, starting from the Llama2-chat series; a short sketch for pulling the released data follows below.
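
The Agent-FLAN dataset itself is hosted on the Hugging Face Hub. The snippet below is a minimal sketch, assuming the public repo id `lovesnowbest/Agent-FLAN` (the link used by this project); split and column names are deliberately not assumed and are printed instead:

```python
# Minimal sketch: pull the Agent-FLAN data from the Hugging Face Hub.
# Assumption: the public dataset repo id is "lovesnowbest/Agent-FLAN";
# if the repo stores raw JSON files, you may need to pass data_files=
# explicitly to load_dataset.
from datasets import load_dataset

ds = load_dataset("lovesnowbest/Agent-FLAN")
print(ds)  # print the DatasetDict so splits and columns can be inspected
```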
 
The models follow the conversation format of Llama-2-chat, with the template protocol as:

```
dict(role='system', begin='<|Human|>െ', end='\n '),
dict(role='assistant', begin='<|Assistant|>െ', end='ി\n '),
```
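
As a usage illustration, here is a minimal, unofficial sketch of loading the checkpoint with 🤗 Transformers and composing a prompt with the template tokens above. The example query, the decoding settings, and the use of the `<|Human|>` wrapper for the user turn are illustrative assumptions, not the authors' official inference recipe:

```python
# Unofficial usage sketch for lovesnowbest/Agent-FLAN-7b.
# Assumptions: standard AutoClasses can load this checkpoint
# (trust_remote_code=True because the repo ships custom code);
# decoding settings below are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lovesnowbest/Agent-FLAN-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Wrap the user turn with the begin/end markers from the template above
# ('<|Human|>െ' ... '\n ') and open an assistant turn ('<|Assistant|>െ').
prompt = "<|Human|>െPlease look up the current weather in Paris with the available tools.\n <|Assistant|>െ"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```

In practice one would also register the assistant end marker `ി\n ` as a stop sequence so generation halts cleanly at the end of the turn, rather than relying on `max_new_tokens` alone.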

## ❤️ Acknowledgements

  Agent-FLAN is built with [Lagent](https://github.com/InternLM/lagent) and [T-Eval](https://github.com/open-compass/t-eval). Thanks for their awesome work!

## 🖊️ Citation

If you find this project useful in your research, please consider citing:

  ```
@article{chen2024agent,
  title={Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models},
  author={Chen, Zehui and Liu, Kuikun and Wang, Qiuchen and Liu, Jiangning and Zhang, Wenwei and Lin, Dahua and Chen, Kai and Zhao, Feng},
  journal={arXiv preprint arXiv:2403.12881},
  year={2024}
}
  ```
 
  ## 💳 License
 