---
license: mit
datasets:
- DAMO-NLP-SG/LongCorpus-2.5B
model-index:
- name: CLEX-Mixtral-8x7B-Chat-32K
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 66.38
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.48
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 70.12
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 56.47
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.56
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 50.49
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
      name: Open LLM Leaderboard
---

# CLEX: Continuous Length Extrapolation for Large Language Models

This repo stores the checkpoint of CLEX-Mixtral-8x7B-Chat-32K.

## Features and Highlights of CLEX

![CLEX_diagram](https://github.com/DAMO-NLP-SG/CLEX/assets/18526640/063ffe34-0116-4759-92bf-e22fc7264cdf)

- **Simple and Clear**: _MINIMAL_ code and architecture changes. Only one up-and-down projection layer is introduced (sketched below); _NO_ recurrent memory caching or sparse attention is required.
- **Train Short, Test Long**: _NO_ performance drop on sequences _4x~8x longer_ than the training ones (see [here](https://github.com/DAMO-NLP-SG/CLEX#language-modelling)).
- **Continuous Length Extrapolation**: Explicitly models the continuous dynamics of the context window size during length extrapolation.

If you have any questions, feel free to contact us (emails: guanzzh.chen@gmail.com, lixin4ever@gmail.com).
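To make the bullets above concrete, here is a toy sketch of the idea behind CLEX, assuming PyTorch: a single up-and-down projection parameterizes the continuous rate of change of the RoPE frequency basis, and integrating those dynamics carries the basis from the training-length regime to a longer target length. The module names, shapes, and the plain Euler solver below are simplifications for illustration only, not the released implementation; see the [CLEX repository](https://github.com/DAMO-NLP-SG/CLEX) and the paper for the actual formulation.

```python
import torch
import torch.nn as nn

class ToyUpDownDynamics(nn.Module):
    """Illustrative only: an up-and-down projection that models how the
    RoPE frequency basis should evolve as the length-scaling factor grows."""
    def __init__(self, dim: int, expand: int = 4):
        super().__init__()
        self.up = nn.Linear(dim, dim * expand)
        self.down = nn.Linear(dim * expand, dim)
        self.act = nn.SiLU()

    def forward(self, freqs: torch.Tensor) -> torch.Tensor:
        # Returns d(freqs)/dt, i.e. the rate of change of the frequency basis.
        return self.down(self.act(self.up(freqs)))

def extrapolate(freqs: torch.Tensor, dynamics: nn.Module,
                t_start: float = 1.0, t_end: float = 4.0, steps: int = 16) -> torch.Tensor:
    """Toy Euler integration of the continuous dynamics from the training
    scaling factor (t_start) to a longer target factor (t_end)."""
    dt = (t_end - t_start) / steps
    for _ in range(steps):
        freqs = freqs + dt * dynamics(freqs)
    return freqs

# Example: evolve a 32-dim RoPE frequency basis from 1x to 4x the training length.
base_freqs = 1.0 / (10000 ** (torch.arange(0, 64, 2).float() / 64))
scaled_freqs = extrapolate(base_freqs, ToyUpDownDynamics(dim=32))
```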
## Model Zoo

| Model Name | Model Type | Starting Point | Train Data | Train Length | MAX Test Length | HF Repo |
|:-----|:-----|:-----------|:-----------|:-----------|:-----------|:------:|
| CLEX-LLaMA-2-7B-16K | base | LLaMA-2-7B | [Redpajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-16K) |
| CLEX-LLaMA-2-7B-Chat-16K | chat | CLEX-7B-16K | [UltraChat](https://github.com/thunlp/UltraChat) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-Chat-16K) |
| CLEX-LLaMA-2-7B-64K | base | LLaMA-2-7B | [Redpajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 64K | 256K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-LLaMA-2-7B-64K) |
| CLEX-Phi-2-32K | base | Phi-2-2.7B | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32K | 128K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-Phi-2-32K) |
| CLEX-Mixtral-8x7B-32K | base | Mixtral-8x7B-v0.1 | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32K | >128K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-Mixtral-8x7B-32K) |
| **CLEX-Mixtral-8x7B-Chat-32K** (this checkpoint) | chat | CLEX-Mixtral-8x7B-32K | [UltraChat 200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) | 32K | >128K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K) |
## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K", torch_dtype=torch.bfloat16, trust_remote_code=True)

inputs = tokenizer("What is CLEX?", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```

## Evaluation

### InfiniteBench

We also evaluate CLEX-Mixtral-8x7B-Chat-32K on [InfiniteBench](https://github.com/OpenBMB/InfiniteBench), a 128k-length benchmark covering various tasks. We compare CLEX-Mixtral-8x7B-Chat-32K with GPT-4, Claude, KimiChat, and the vanilla Mixtral-8x7B.

| Task Name | GPT-4 | YaRN-Mistral-7B | Kimi-Chat | Claude 2 | CLEX-Mixtral-8x7B-Chat-32K | Mixtral-8x7B-Instruct-v0.1 |
| ------------------- | ------ | --------------- | --------- | -------- | -------------------------- | -------------------------- |
| Retrieve.PassKey | 100% | 92.71% | 98.14% | 97.80% | 99.72% | 96.78% |
| **Retrieve.Number** | 100% | 56.61% | 95.42% | 98.14% | 76.10% | 76.61% |
| **Retrieve.KV** | 89.00% | < 5% | 53.60% | 65.40% | < 5% | < 5% |
| En.Sum | 14.73% | 9.09% | 17.93% | 14.45% | 15.48% | 14.3% |
| En.QA | 22.22% | 9.55% | 16.52% | 11.97% | 15.52% | 16.81% |
| En.MC | 67.25% | 27.95% | 72.49% | 62.88% | 58.96% | 56.77% |
| En.Dia | 8.50% | 7.50% | 11.50% | 46.50% | 9% | < 5% |
| Code.Debug | 39.59% | < 5% | 18.02% | < 5% | 21.32% | < 5% |
| Code.Run | 23.25% | < 5% | < 5% | < 5% | < 5% | < 5% |
| Math.Calc | < 5% | < 5% | < 5% | < 5% | < 5% | < 5% |
| Math.Find | 60.00% | 17.14% | 12.57% | 32.29% | 28% | 26.57% |

## Citation

If you find our project useful, we hope you will star our repo and cite our paper as follows:

```bibtex
@article{damonlpsg2023clex,
  author = {Chen, Guanzheng and Li, Xin and Meng, Zaiqiao and Liang, Shangsong and Bing, Lidong},
  title = {CLEX: Continuous Length Extrapolation for Large Language Models},
  year = 2023,
  journal = {arXiv preprint arXiv:2310.16450},
  url = {https://arxiv.org/abs/2310.16450}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DAMO-NLP-SG__CLEX-Mixtral-8x7B-Chat-32K).

| Metric | Value |
|---------------------------------|----:|
| Avg. | 68.75 |
| AI2 Reasoning Challenge (25-Shot) | 66.38 |
| HellaSwag (10-Shot) | 86.48 |
| MMLU (5-Shot) | 70.12 |
| TruthfulQA (0-shot) | 56.47 |
| Winogrande (5-shot) | 82.56 |
| GSM8k (5-shot) | 50.49 |
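For readers who want to reproduce leaderboard-style numbers locally, the Open LLM Leaderboard is built on top of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The snippet below is a rough, assumption-laden sketch of scoring this checkpoint on one task (ARC-Challenge, 25-shot) with the v0.4-style harness API; the harness version, task configuration, and prompting used by the leaderboard may differ, so local numbers may not match the table above exactly.

```python
# Rough sketch only: lm-evaluation-harness v0.4-style API. Argument names and
# model-type strings can vary across harness versions.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K,"
        "trust_remote_code=True,dtype=bfloat16"
    ),
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size="auto",
)
# Per-task metrics (e.g. acc_norm) for the evaluated checkpoint.
print(results["results"]["arc_challenge"])
```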