---
license: apache-2.0
---

# Auto-RAG: Autonomous Retrieval-Augmented Generation for Large Language Models

> [Tian Yu](https://tianyu0313.github.io/), [Shaolei Zhang](https://zhangshaolei1998.github.io/), and [Yang Feng](https://people.ucas.edu.cn/~yangfeng?language=en)*

## Model Details

<!-- Provide a longer summary of what this model is. -->

- **Description:** These are the LoRA weights obtained by training with synthesized iterative retrieval instruction data; details can be found in our paper. A minimal loading sketch follows this list.
- **Developed by:** ICTNLP Group. Authors: Tian Yu, Shaolei Zhang, and Yang Feng.
- **GitHub Repository:** https://github.com/ictnlp/Auto-RAG
- **Finetuned from model:** Meta-Llama3-8B-Instruct
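
If you prefer to experiment with the adapter without merging it into the base model, a minimal loading sketch along these lines should work, assuming a standard `transformers`/`peft` installation; `PATH_TO_META_LLAMA3_8B_INSTRUCT` and `PATH_TO_ADAPTER` are the same placeholder paths used in the merge snippet under Uses.

```python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, LlamaForCausalLM

# Placeholder paths: the local base model and this adapter repository.
base = LlamaForCausalLM.from_pretrained(
    PATH_TO_META_LLAMA3_8B_INSTRUCT,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, PATH_TO_ADAPTER)
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_META_LLAMA3_8B_INSTRUCT)
```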

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

To use the model, first merge the Meta-Llama3-8B-Instruct weights with the adapter weights:

```python
from peft import PeftModel
from transformers import AutoTokenizer, LlamaForCausalLM

# PATH_TO_META_LLAMA3_8B_INSTRUCT, PATH_TO_ADAPTER, and SAVE_PATH are placeholder paths.
# Load the base model on CPU and attach the Auto-RAG LoRA adapter.
model = LlamaForCausalLM.from_pretrained(
    PATH_TO_META_LLAMA3_8B_INSTRUCT,
    device_map="cpu",
)
model = PeftModel.from_pretrained(model, PATH_TO_ADAPTER)

tokenizer = AutoTokenizer.from_pretrained(PATH_TO_META_LLAMA3_8B_INSTRUCT)

# Fold the adapter into the base weights and save the merged checkpoint.
model = model.merge_and_unload()
model.save_pretrained(SAVE_PATH)
tokenizer.save_pretrained(SAVE_PATH)
```

Subsequently, you can deploy the merged model with frameworks such as vLLM.
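
As a sketch of that deployment path (assuming an offline vLLM setup; `SAVE_PATH` is the merged checkpoint from the snippet above, and the user message is only an illustrative placeholder, not the full Auto-RAG iterative-retrieval prompt):

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# SAVE_PATH is the merged checkpoint saved in the previous step.
tokenizer = AutoTokenizer.from_pretrained(SAVE_PATH)
llm = LLM(model=SAVE_PATH)

# Build a Llama-3-style chat prompt; the user message here is only a placeholder.
messages = [{"role": "user", "content": "Who proposed the theory of relativity?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = llm.generate([prompt], SamplingParams(temperature=0.0, max_tokens=256))
print(outputs[0].outputs[0].text)
```

The same merged checkpoint can likewise be served through vLLM's OpenAI-compatible API server; see the GitHub repository for the prompts and retrieval loop Auto-RAG uses at inference time.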

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]