Update README.md
README.md CHANGED

@@ -1,3 +1,17 @@
+---
+license: apache-2.0
+language:
+- en
+- zh
+tags:
+- chat
+- 128k
+base_model:
+- rgtjf/Qwen2-UtK-72B-128K
+---
+
+
+
 # Untie-the-Knots: An Efficient Data Augmentation Strategy for Long-Context Pre-Training in Language Models
 
 <div align="center">

@@ -75,6 +89,11 @@ For deployment, we recommend using **vLLM**:
 
 
 
+## License
+
+The content of this project itself is licensed under [LICENSE](LICENSE).
+
+
 ## Citation
 
 If you find this repo helpful, please cite our paper as follows: