---
license: mit
datasets:
  - DAMO-NLP-SG/LongCorpus-2.5B
model-index:
  - name: CLEX-Mixtral-8x7B-Chat-32K
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 66.38
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 86.48
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 70.12
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 56.47
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 82.56
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 50.49
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K
          name: Open LLM Leaderboard
---

CLEX: Continuous Length Extrapolation for Large Language Models

This repo stores the checkpoint of CLEX-Mixtral-8x7B-Chat-32K.

Features and Highlights of CLEX

(Figure: CLEX diagram)

  • Simple and Clear: MINIMAL code and architecture changes. Only one up-and-down projection layer is introduced (a rough sketch of the idea follows this list); NO recurrent memory caching or sparse attention is required.
  • Train Short, Test Long: NO performance drop on sequences 4x~8x longer than the training ones (see here).
  • Continuous Length Extrapolation: explicitly models the continuous dynamics of the context window size during length extrapolation.
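
The "one up-and-down projection layer" is a small MLP that parameterizes the continuous dynamics of the RoPE frequency scaling as the context window grows. The snippet below is a minimal, illustrative sketch of that idea in PyTorch, not the implementation shipped with this checkpoint; the module name ContinuousRoPEScaling and the parameters hidden_mult and steps are our own.

import torch
import torch.nn as nn

class ContinuousRoPEScaling(nn.Module):
    """Illustrative sketch only: an up-and-down projection MLP models the
    time derivative of the (log) RoPE inverse frequencies, integrated with
    a fixed-step Euler solver from the training length to the target length."""

    def __init__(self, head_dim: int, hidden_mult: int = 4, steps: int = 8):
        super().__init__()
        half_dim = head_dim // 2
        # The single "up-and-down" projection: expand, apply a nonlinearity, contract.
        self.up = nn.Linear(half_dim, half_dim * hidden_mult)
        self.down = nn.Linear(half_dim * hidden_mult, half_dim)
        self.act = nn.SiLU()
        self.steps = steps
        # Standard RoPE inverse frequencies at the training length.
        inv_freq = 1.0 / (10000 ** (torch.arange(0, head_dim, 2).float() / head_dim))
        self.register_buffer("inv_freq", inv_freq)

    def ode_rhs(self, log_inv_freq: torch.Tensor) -> torch.Tensor:
        # d(log inv_freq)/dt, parameterized by the small MLP.
        return self.down(self.act(self.up(log_inv_freq)))

    def forward(self, scale: float) -> torch.Tensor:
        # Integrate from scale 1.0 (the training length) up to the requested scale,
        # so scale=1.0 leaves the original frequencies untouched.
        log_if = self.inv_freq.log()
        dt = (scale - 1.0) / self.steps
        for _ in range(self.steps):
            log_if = log_if + dt * self.ode_rhs(log_if)
        return log_if.exp()

Because the dynamics are continuous in the scale factor, the module can be queried at scales never seen during training, which is what "continuous length extrapolation" refers to.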

If you have any questions, feel free to contact us. (Emails: guanzzh.chen@gmail.com, lixin4ever@gmail.com)

Model Zoo

| Model Name | Model Type | Starting Point | Train Data | Train Length | Max Test Length | HF Repo |
|---|---|---|---|---|---|---|
| CLEX-LLaMA-2-7B-16K | base | LLaMA-2-7B | Redpajama-Book | 16K | 64K | link |
| CLEX-LLaMA-2-7B-Chat-16K | chat | CLEX-7B-16K | UltraChat | 16K | 64K | link |
| CLEX-LLaMA-2-7B-64K | base | LLaMA-2-7B | Redpajama-Book | 64K | 256K | link |
| CLEX-Phi-2-32K | base | Phi-2-2.7B | LongCorpus-2.5B | 32K | 128K | link |
| CLEX-Mixtral-8x7B-32K | base | Mixtral-8x7B-v0.1 | LongCorpus-2.5B | 32K | >128K | link |
| CLEX-Mixtral-8x7B-Chat-32K (this checkpoint) | chat | CLEX-Mixtral-8x7B-32K | UltraChat 200k | 32K | >128K | link |

Usage

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code is required because the checkpoint ships custom CLEX modeling code.
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K", torch_dtype=torch.bfloat16, trust_remote_code=True)

inputs = tokenizer("What is CLEX?", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
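
For chat-style prompts, you will usually want to wrap the user message in the conversation format the model was fine-tuned on. Below is a hedged sketch, assuming the bundled tokenizer defines a chat template (check tokenizer.chat_template before relying on it); it reuses the tokenizer and model loaded above.

# Assumes `tokenizer` and `model` from the snippet above are already loaded.
messages = [{"role": "user", "content": "Summarize the key idea of CLEX in two sentences."}]
if getattr(tokenizer, "chat_template", None) is not None:
    # Let the tokenizer format the conversation and append the assistant prefix.
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
else:
    # No template shipped with the checkpoint: fall back to a plain prompt.
    input_ids = tokenizer(messages[0]["content"], return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

Given the size of the Mixtral-8x7B backbone, loading with device_map="auto" (which requires the accelerate package) is usually more practical than the single-device load shown above.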

Evaluation

InfiniteBench

We also evaluate CLEX-Mixtral-8x7B-Chat-32K on InfiniteBench, a benchmark covering various tasks with context lengths of around 128K tokens. We compare CLEX-Mixtral-8x7B-Chat-32K with GPT-4, Claude 2, Kimi-Chat, YaRN-Mistral-7B, and the original Mixtral-8x7B-Instruct-v0.1.

| Task Name | GPT-4 | YaRN-Mistral-7B | Kimi-Chat | Claude 2 | CLEX-Mixtral-8x7B-Chat-32K | Mixtral-8x7B-Instruct-v0.1 |
|---|---|---|---|---|---|---|
| Retrieve.PassKey | 100% | 92.71% | 98.14% | 97.80% | 99.72% | 96.78% |
| Retrieve.Number | 100% | 56.61% | 95.42% | 98.14% | 76.10% | 76.61% |
| Retrieve.KV | 89.00% | < 5% | 53.60% | 65.40% | < 5% | < 5% |
| En.Sum | 14.73% | 9.09% | 17.93% | 14.45% | 15.48% | 14.30% |
| En.QA | 22.22% | 9.55% | 16.52% | 11.97% | 15.52% | 16.81% |
| En.MC | 67.25% | 27.95% | 72.49% | 62.88% | 58.96% | 56.77% |
| En.Dia | 8.50% | 7.50% | 11.50% | 46.50% | 9.00% | < 5% |
| Code.Debug | 39.59% | < 5% | 18.02% | < 5% | 21.32% | < 5% |
| Code.Run | 23.25% | < 5% | < 5% | < 5% | < 5% | < 5% |
| Math.Calc | < 5% | < 5% | < 5% | < 5% | < 5% | < 5% |
| Math.Find | 60.00% | 17.14% | 12.57% | 32.29% | 28.00% | 26.57% |
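
The Retrieve.PassKey task, for instance, hides a short random key inside a very long distractor context and checks whether the model can repeat it back. The snippet below is a rough, self-contained sketch of that style of probe, reusing the tokenizer and model loaded in the Usage section; the prompt wording and lengths are illustrative, not the official InfiniteBench harness.

import random

def make_passkey_prompt(passkey: str, n_filler: int = 2000) -> str:
    # Build a long distractor context with the pass key buried at a random position.
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    chunks = [filler] * n_filler
    chunks.insert(random.randint(0, n_filler), f"The pass key is {passkey}. Remember it. ")
    return "".join(chunks) + "\nWhat is the pass key? The pass key is"

passkey = str(random.randint(10000, 99999))
inputs = tokenizer(make_passkey_prompt(passkey), return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16)
answer = tokenizer.decode(out[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)
print(f"expected {passkey}, got {answer.strip()!r}, correct: {passkey in answer}")

Increasing n_filler pushes the prompt toward the 128K-token regime probed by the benchmark; the official evaluation uses its own prompt templates and fixed context lengths.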

Citation

If you find our project useful, we hope you will star our repo and cite our paper as follows:

@article{damonlpsg2023clex,
  author = {Chen, Guanzheng and Li, Xin and Meng, Zaiqiao and Liang, Shangsong and Bing, Lidong},
  title = {CLEX: Continuous Length Extrapolation for Large Language Models},
  year = 2023,
  journal = {arXiv preprint arXiv:2310.16450},
  url = {https://arxiv.org/abs/2310.16450}
}

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 68.75 |
| AI2 Reasoning Challenge (25-Shot) | 66.38 |
| HellaSwag (10-Shot) | 86.48 |
| MMLU (5-Shot) | 70.12 |
| TruthfulQA (0-shot) | 56.47 |
| Winogrande (5-shot) | 82.56 |
| GSM8k (5-shot) | 50.49 |