---
license: mit
---

# CLEX: Continuous Length Extrapolation for Large Language Models
This repo stores the checkpoint of CLEX-Mixtral-8x7B-Chat-32K.

## Features and Highlights of CLEX
![CLEX_diagram](https://github.com/DAMO-NLP-SG/CLEX/assets/18526640/063ffe34-0116-4759-92bf-e22fc7264cdf)

- **Simple and Clear**: _MINIMAL_ code and architecture changes. Only one up-and-down projection layer is introduced; _NO_ recurrent memory caching or sparse attention is required.
- **Train Short, Test Long**: _NO_ performance drop on sequences _4x-8x longer_ than the training ones (see [here](https://github.com/DAMO-NLP-SG/CLEX#language-modelling)).
- **Continuous Length Extrapolation**: Explicitly models the continuous dynamics of the context window size during length extrapolation (see the sketch below).
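
A minimal, unofficial sketch of this idea: instead of fixing a single RoPE frequency-scaling factor, treat the scaling as a continuous function of the target length. The `FreqDynamics` module and the Euler-style update below are hypothetical stand-ins for the learned ODE dynamics described in the paper, not the official implementation:

```python
import torch
import torch.nn as nn

class FreqDynamics(nn.Module):
    """Toy up-and-down projection that predicts how RoPE inverse
    frequencies should shift for a given length-scaling factor t."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.up = nn.Linear(dim, hidden)
        self.down = nn.Linear(hidden, dim)

    def forward(self, inv_freq: torch.Tensor, t: float) -> torch.Tensor:
        # A single Euler-style step standing in for a continuous ODE solve:
        # shift the frequencies in proportion to how far t is from 1.
        delta = self.down(torch.tanh(self.up(inv_freq)))
        return inv_freq + (t - 1.0) * delta

head_dim = 128  # per-head dimension in Mixtral-8x7B
inv_freq = 1.0 / (10000.0 ** (torch.arange(0, head_dim, 2).float() / head_dim))
dynamics = FreqDynamics(dim=head_dim // 2)
scaled_inv_freq = dynamics(inv_freq, t=4.0)  # e.g. stretch a 32K context toward 128K
```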

If you have any questions, feel free to contact us. (Emails: guanzzh.chen@gmail.com, lixin4ever@gmail.com)

## Model Zoo
<div align="center">

| Model Name | Model Type | Starting Point | Train Data | Train Length | Max Test Length | HF Repo |
|:-----|:-----|:-----------|:-----------|:-----------|:-----------|:------:|
| CLEX-LLaMA-2-7B-16K | base | LLaMA-2-7B | [RedPajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-16K) |
| CLEX-LLaMA-2-7B-Chat-16K | chat | CLEX-7B-16K | [UltraChat](https://github.com/thunlp/UltraChat) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-Chat-16K) |
| CLEX-LLaMA-2-7B-64K | base | LLaMA-2-7B | [RedPajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 64K | 256K | Pending Upload |
| CLEX-Phi-2-7B-32K | base | Phi-2-2.7B | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32K | 128K | Pending Upload |
| CLEX-Mixtral-8x7B-32K | base | Mixtral-8x7B-v0.1 | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32K | >128K | Pending Upload |
| CLEX-Mixtral-8x7B-Chat-32K | chat | CLEX-Mixtral-8x7B-32K | [UltraChat 200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) | 32K | >128K | Pending Upload |

</div>

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # CLEX ships custom modeling code
)
inputs = tokenizer("What is CLEX?", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
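
Since this checkpoint is a chat model, prompts will likely work better when wrapped with the tokenizer's chat template. The snippet below is a sketch that assumes the tokenizer ships such a template; it reuses `tokenizer` and `model` from above:

```python
# Format the question as a chat turn before generating (assumes the
# tokenizer provides a chat template).
messages = [{"role": "user", "content": "What is CLEX?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```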

## Evaluation

### InfiniteBench
We evaluate CLEX-Mixtral-8x7B-Chat-32K on [InfiniteBench](https://github.com/OpenBMB/InfiniteBench), a 128K-length benchmark covering various tasks. We compare CLEX-Mixtral-8x7B-Chat-32K with GPT-4, YaRN-Mistral-7B, Kimi-Chat, Claude 2, and the vanilla Mixtral-8x7B-Instruct-v0.1.

| Task Name | GPT-4 | YaRN-Mistral-7B | Kimi-Chat | Claude 2 | CLEX-Mixtral-8x7B-Chat-32K | Mixtral-8x7B-Instruct-v0.1 |
| ------------------- | ------ | --------------- | --------- | -------- | -------------------------- | -------------------------- |
| Retrieve.PassKey | 100% | 92.71% | 98.14% | 97.80% | 99.72% | 96.78% |
| Retrieve.Number | 100% | 56.61% | 95.42% | 98.14% | 76.10% | 76.61% |
| Retrieve.KV | 89.00% | < 5% | 53.60% | 65.40% | < 5% | < 5% |
| En.Sum | 14.73% | 9.09% | 17.93% | 14.45% | 15.48% | 14.30% |
| En.QA | 22.22% | 9.55% | 16.52% | 11.97% | 15.52% | 16.81% |
| En.MC | 67.25% | 27.95% | 72.49% | 62.88% | 58.96% | 56.77% |
| En.Dia | 8.50% | 7.50% | 11.50% | 46.50% | 9.00% | < 5% |
| Code.Debug | 39.59% | < 5% | 18.02% | < 5% | 21.32% | < 5% |
| Code.Run | 23.25% | < 5% | < 5% | < 5% | < 5% | < 5% |
| Math.Calc | < 5% | < 5% | < 5% | < 5% | < 5% | < 5% |
| Math.Find | 60.00% | 17.14% | 12.57% | 32.29% | 28.00% | 26.57% |
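
As a rough illustration of what the Retrieve.PassKey task measures, a passkey-style probe can be scripted in a few lines. This is an informal sketch, not the InfiniteBench harness; it reuses `tokenizer` and `model` from the Usage section, and `num_filler_lines` is an arbitrary knob for the context length:

```python
import random

def build_passkey_prompt(num_filler_lines: int = 1500):
    """Hide a random 5-digit key inside a long filler context."""
    passkey = str(random.randint(10000, 99999))
    filler = "The grass is green. The sky is blue.\n" * num_filler_lines
    prompt = (
        f"{filler}\nThe pass key is {passkey}. Remember it.\n{filler}\n"
        "What is the pass key? The pass key is"
    )
    return prompt, passkey

prompt, passkey = build_passkey_prompt()
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=8)
completion = tokenizer.decode(output[0][inputs.input_ids.shape[-1]:])
print("correct" if passkey in completion else "wrong", completion)
```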

## Citation
If you find our project useful, we hope you will star our repo and cite our paper as follows:
```bibtex
@article{damonlpsg2023clex,
  author = {Chen, Guanzheng and Li, Xin and Meng, Zaiqiao and Liang, Shangsong and Bing, Lidong},
  title = {CLEX: Continuous Length Extrapolation for Large Language Models},
  year = 2023,
  journal = {arXiv preprint arXiv:2310.16450},
  url = {https://arxiv.org/abs/2310.16450}
}
```