Upload 2 files
- README.md +13 -2
- README_en.md +12 -2
README.md
CHANGED
@@ -11,10 +11,14 @@ pipeline_tag: text-generation
---
**Read this in other languages: [English](README_en.md), [中文](README.md).**

-* Update 2023.12.16: Released the [paper (Chinese)](https://cloud.tsinghua.edu.cn/d/5894ec4442e54a6aac96/) and the [paper (English)](https://
+* Update 2023.12.16: Released the [paper (Chinese)](https://cloud.tsinghua.edu.cn/d/5894ec4442e54a6aac96/) and the [paper (English)](https://arxiv.org/abs/2312.11193)
* Update 2023.12.14: Released the fine-tuned Qwen-14b-chat-yarn-32k. The fine-tuned model handles Chinese and English question answering over contexts of up to 32k (roughly 40,000 Chinese characters) and, compared with the earlier 32k model obtained through position interpolation, almost completely resolves the low recall in multi-document QA (the "lost in the middle" phenomenon).
<br>
+
+# Qwen-14b-chat model with 32k context window
+
<br>
+
# LongBench evaluation results
### Results for passage_retrieval_zh in LongBench
| Model | Score (acc) |
@@ -73,9 +77,16 @@ print(response)
<br>

+# Limitations
+1. The instruction fine-tuning data contains no samples in which none of the reference documents hold the relevant information. As a result, when the reference documents lack the relevant information, the model may hallucinate and fabricate an answer instead of replying that no information was found. The data may be improved later.
+2. The instruction fine-tuning data is still not diverse enough. Although it covers several long-text scenarios, the model may struggle with new long-text scenarios such as agent tasks or comparing two long documents. Data diversity may be increased later.
+3. Due to limited time and compute, the model has been evaluated on only a few datasets, so a comprehensive capability evaluation is still missing.
+4. Due to limited time and compute, I trained and tested only at a length of 32k. Whether the same method can adapt the model to an even longer context window (e.g., 100k) remains to be studied.
+
+
# Q&A examples
* The model supports Chinese and English, and handles tasks such as long-text summarization, multi-document QA, long-text QA, and multi-turn dialogue.
-*
+* After instruction fine-tuning, the model first quotes the original text and then answers when given a multi-document QA question. This answer format significantly reduces internal hallucination.

<details>
<summary>Multi-Doc QA</summary>
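The hunk header above (`@@ -73,9 +77,16 @@ print(response)`) points at the README's unchanged Qwen usage snippet, which this diff does not show. For orientation only, a minimal usage sketch for the 32k model described in the added lines might look like the following; the repository path, the documents, and the question are placeholders, and the `chat()` call assumes the standard Qwen remote-code interface that `print(response)` comes from.

```python
# Minimal sketch (assumptions: placeholder repository path, documents, and question;
# relies on the chat() helper that Qwen chat checkpoints expose via trust_remote_code).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "path/to/Qwen-14b-chat-yarn-32k"  # placeholder: local path or Hub id

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",
    trust_remote_code=True,
).eval()

# Multi-document QA: concatenate several reference documents (up to ~32k tokens in
# total) and put the question at the end, so the answer may sit "in the middle".
documents = ["Document 1: ...", "Document 2: ...", "Document 3: ..."]
question = "According to the documents above, ...?"
prompt = "\n\n".join(documents) + "\n\n" + question

response, history = model.chat(tokenizer, prompt, history=None)
print(response)
```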
README_en.md
CHANGED
@@ -11,9 +11,12 @@ pipeline_tag: text-generation
---
**Read this in other languages: [English](README_en.md), [中文](README.md).**

-* Updated on December 16, 2023: Release [Paper](https://
+* Updated on December 16, 2023: Released the [paper](https://arxiv.org/abs/2312.11193).
* Updated on December 14, 2023: We have released the Qwen-14b-chat-yarn-32k model, fine-tuned to handle Chinese and English question answering at lengths of up to 32k (approximately 40,000 Chinese characters). It resolves the low-recall issue in multi-document question-answering tasks (the "lost in the middle" phenomenon) that affected the previous 32k model obtained through position interpolation. <br>
<br>
+
+# Qwen-14b-chat model with 32k context window
+
# Evaluation results in LongBench
### Evaluation results for passage_retrieval_zh in LongBench
@@ -72,9 +75,16 @@ During training, use_dynamic_ntk was set to True.
<br>

+# Limitations
+1. The instruction fine-tuning data does not include samples in which none of the reference documents contain the relevant information. This may cause the model to hallucinate and fabricate answers when the reference documents hold no relevant information. The data may be improved in the future.
+2. The instruction fine-tuning data is still not diverse enough, which may make it hard for the model to adapt to new long-text scenarios, such as agent tasks or comparing two long documents. Data diversity may be increased in the future.
+3. Due to limited time and computing resources, the model has been evaluated on only a few datasets, so a comprehensive capability evaluation is still lacking.
+4. Due to limited time and computing resources, I only trained and tested the model at a length of 32k. Whether the same approach can adapt the model to a longer context window, such as 100k, remains to be studied.
+
+
# Model Q&A examples
* The model supports both Chinese and English, and handles tasks such as long-text summarization, multi-document Q&A, long-text Q&A, and multi-turn dialogue.
-* The model will first provide the original text before answering multi document Q&A questions
+* The model first provides the original text before answering multi-document Q&A questions. This answer format significantly reduces internal hallucination.

<details>
<summary>Multi-Doc QA</summary>
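The second hunk's context line notes that `use_dynamic_ntk` was set to True during training. As a quick sketch (not part of the README), that flag can be read back from the checkpoint's config after download; the repository path is again a placeholder, and the attribute names assume the Qwen remote-code config exposes them the way the upstream Qwen-14b-chat config does.

```python
# Sketch: inspect the NTK-scaling flag mentioned in the hunk context.
# Assumptions: placeholder repository path; field names as in the upstream Qwen config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("path/to/Qwen-14b-chat-yarn-32k", trust_remote_code=True)

# getattr with a default avoids an AttributeError if a field is named differently.
print("use_dynamic_ntk:", getattr(config, "use_dynamic_ntk", None))
print("seq_length:", getattr(config, "seq_length", None))
```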