---
license: mit
---
# CLEX: Continuous Length Extrapolation for Large Language Models

This repo stores the checkpoint of CLEX-Mixtral-8x7B-32K.

## Features and Highlights of CLEX

![CLEX_diagram](https://github.com/DAMO-NLP-SG/CLEX/assets/18526640/063ffe34-0116-4759-92bf-e22fc7264cdf)

- **Simple and Clear**: _MINIMAL_ code and architecture changes. Only one up-and-down projection layer is introduced; _NO_ recurrent memory caching or sparse attention is required.
- **Train Short, Test Long**: _NO_ performance drop on sequences _4x~8x longer_ than the training ones (see [here](https://github.com/DAMO-NLP-SG/CLEX#language-modelling)).
- **Continuous Length Extrapolation**: Explicitly models the continuous dynamics of the context window size during length extrapolation.

If you have any questions, feel free to contact us (emails: guanzzh.chen@gmail.com, lixin4ever@gmail.com).
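To give a feel for the "continuous" idea in the last bullet, here is an illustrative sketch of RoPE frequency scaling with a continuous factor `t`. This is _not_ the actual CLEX implementation (which, per the paper, learns the scaling dynamics with the up-and-down projection layer); the function names and numbers below are hypothetical, for intuition only.

```python
import numpy as np

def rope_inv_freq(dim: int, base: float = 10000.0) -> np.ndarray:
    """Standard RoPE inverse frequencies: base^(-2i/dim) for i = 0..dim/2-1."""
    return base ** (-np.arange(0, dim, 2) / dim)

def scaled_inv_freq(dim: int, t: float, base: float = 10000.0) -> np.ndarray:
    """Position-interpolation-style scaling by a continuous factor t >= 1.

    Fixed-factor methods pick one discrete t; CLEX instead models the
    scaling as continuous dynamics over t, so any intermediate length
    (and hence any intermediate t) is well-defined.
    """
    return rope_inv_freq(dim, base) / t

train_len, test_len = 32_768, 131_072
t = test_len / train_len            # continuous scaling factor, here 4.0
freqs = scaled_inv_freq(64, t)
# Every frequency component is slowed down by the factor t:
print(freqs[0])                     # 1.0 / 4.0 = 0.25
```

The point of the sketch: once scaling is expressed as a function of a continuous `t` rather than a fixed constant, extrapolating to unseen lengths amounts to evaluating that function at a new `t`.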
## Model Zoo

<div align="center">

| Model Name | Model Type | Starting Point | Train Data | Train Length | Max Test Length | HF Repo |
|:-----|:-----|:-----------|:-----------|:-----------|:-----------|:------:|
| CLEX-LLaMA-2-7B-16K | base | LLaMA-2-7B | [RedPajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-16K) |
| CLEX-LLaMA-2-7B-Chat-16K | chat | CLEX-7B-16K | [UltraChat](https://github.com/thunlp/UltraChat) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-Chat-16K) |
| CLEX-LLaMA-2-7B-64K | base | LLaMA-2-7B | [RedPajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 64K | 256K | Pending upload |
| CLEX-Phi-2-7B-32K | base | Phi-2-2.7B | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32K | 128K | Pending upload |
| CLEX-Mixtral-8x7B-32K | base | Mixtral-8x7B-v0.1 | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32K | >128K | Pending upload |
| CLEX-Mixtral-8x7B-Chat-32K | chat | CLEX-Mixtral-8x7B-32K | [UltraChat-200K](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) | 32K | >128K | Pending upload |

</div>

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/CLEX-Mixtral-8x7B-32K", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("DAMO-NLP-SG/CLEX-Mixtral-8x7B-32K", torch_dtype=torch.bfloat16, trust_remote_code=True)
inputs = tokenizer("What is CLEX?", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
44
+
45
+
46
+
47
+
48
+ ## Evaluation
49
+ ### Language Modelling
50
+
51
+
52
+
53
+
54
+ The CLEX-Phi-2-2.7B and CLEX-Mixtral-8x7B are trained on [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B), where the eval results on test set are listed below.
55
+
56
+ | | Train Length | Eval.(32k) | Eval.(64k) | Eval.(128k) | Eval.(256k) |
57
+ | ----------------- | ------------ | ---------- | ---------- | ----------- | ----------- |
58
+ | Mixtral-8x7B | 32k | 2.78 | 3.44 | 5.88 | 14.20 |
59
+ | CLEX-Mixtral-8x7B | 32k | 2.56 | 2.53 | 2.57 | 3.78 |
60
+
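Assuming, as is standard for language-modelling evaluation, that the scores above are perplexities (lower is better), the metric is just the exponentiated mean per-token negative log-likelihood. A minimal sketch, using a hypothetical uniform stand-in for a model rather than any CLEX checkpoint:

```python
import math

def perplexity(nlls, n_tokens):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(nlls) / n_tokens)

# Toy stand-in for a model: assigns every token probability 1/V.
# A uniform model over V = 32000 tokens gives PPL = 32000; the scores
# in the table above show how much better a trained model does.
V = 32000
tokens = list(range(100))
nlls = [-math.log(1.0 / V) for _ in tokens]
print(perplexity(nlls, len(tokens)))
```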

## Citation
If you find our project useful, please consider starring our repo and citing our paper:
```bibtex
@article{damonlpsg2023clex,
  author = {Chen, Guanzheng and Li, Xin and Meng, Zaiqiao and Liang, Shangsong and Bing, Lidong},
  title = {CLEX: Continuous Length Extrapolation for Large Language Models},
  year = {2023},
  journal = {arXiv preprint arXiv:2310.16450},
  url = {https://arxiv.org/abs/2310.16450}
}
```