---
license: cc-by-nc-4.0
tags:
  - merge
  - mergekit
  - lazymergekit
  - dpo
  - rlhf
base_model: mlabonne/Beagle14-7B
model-index:
  - name: NeuralBeagle14-7B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 72.95
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 88.34
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.55
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 69.93
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 82.4
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 70.28
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralBeagle14-7B
          name: Open LLM Leaderboard
---

# 🐢 NeuralBeagle14-7B

**Update 01/16/24**: NeuralBeagle14-7B is (probably) the best 7B model you can find! 🎉

NeuralBeagle14-7B is a DPO fine-tune of mlabonne/Beagle14-7B using the argilla/distilabel-intel-orca-dpo-pairs preference dataset and my DPO notebook from this article.
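For reference, a DPO stage like this can be sketched with the trl library. The snippet below is a minimal illustration, not the exact notebook used for this model: the tiny inline preference dataset stands in for argilla/distilabel-intel-orca-dpo-pairs (which would first need `prompt`/`chosen`/`rejected` columns), and the beta value, output directory, and batch size are placeholders.

```python
# Minimal DPO sketch with trl (illustrative only; not the exact recipe used for this model).
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mlabonne/Beagle14-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Toy preference data: each row pairs a prompt with a preferred and a rejected answer.
train_dataset = Dataset.from_dict({
    "prompt": ["What is a large language model?"],
    "chosen": ["A large language model is a neural network trained on large text corpora..."],
    "rejected": ["No idea."],
})

config = DPOConfig(
    output_dir="beagle-dpo",        # placeholder
    beta=0.1,                       # typical DPO temperature; the real run may differ
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,     # use `tokenizer=` on older trl releases
)
trainer.train()
```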

It is based on a merge built with LazyMergekit; see mlabonne/Beagle14-7B for the source models and merge configuration.

Thanks to Argilla for providing the dataset and the training recipe. 💪

You can try it out in this Space (GGUF Q4_K_M).

πŸ” Applications

This model uses a context window of 8k. It is compatible with different chat templates, such as ChatML and Llama's chat template.
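As an illustration, a ChatML-formatted prompt looks like the sketch below. In practice you can simply call `tokenizer.apply_chat_template`, as in the Usage section further down, which applies the template shipped with the tokenizer.

```python
# Illustrative ChatML layout (one of the templates the model is reported to work with).
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "What is a large language model?"))
```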

Compared to other 7B models, it displays good performance on instruction-following and reasoning tasks. It can also be used for role-play (RP) and storytelling.

## ⚡ Quantized models
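If you run a GGUF export of this model, llama-cpp-python is one option. The sketch below is an assumption-laden example: the file name and path are placeholders for whichever quant (e.g. Q4_K_M) you download.

```python
# Minimal sketch for running a GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./neuralbeagle14-7b.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=8192,            # matches the 8k context window mentioned above
    chat_format="chatml",  # one of the templates the model works with
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```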

πŸ† Evaluation

### Open LLM Leaderboard

NeuralBeagle14-7B ranks first on the Open LLM Leaderboard in the ~7B category.

It has the same average score as Beagle14-7B ("Show merges"), which might be due to an unlucky run. I think I might be overexploiting argilla/distilabel-intel-orca-dpo-pairs at this point, since this dataset or its original version appears in many of these models. I need to find more high-quality preference data for the next DPO fine-tune.

Note that some models, like udkai/Turdus and nfaheem/Marcoroni-7b-DPO-Merge, are unfortunately contaminated on purpose (see their very high Winogrande scores).

### Nous

The evaluation was performed using LLM AutoEval on the Nous suite. NeuralBeagle14-7B is the best 7B model on this benchmark to date.

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|---|
| mlabonne/NeuralBeagle14-7B 📄 | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 |
| mlabonne/Beagle14-7B 📄 | 59.4 | 44.38 | 76.53 | 69.44 | 47.25 |
| mlabonne/NeuralDaredevil-7B 📄 | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| argilla/distilabeled-Marcoro14-7B-slerp 📄 | 58.93 | 45.38 | 76.48 | 65.68 | 48.18 |
| mlabonne/NeuralMarcoro14-7B 📄 | 58.4 | 44.59 | 76.17 | 65.94 | 46.9 |
| openchat/openchat-3.5-0106 📄 | 53.71 | 44.17 | 73.72 | 52.53 | 44.4 |
| teknium/OpenHermes-2.5-Mistral-7B 📄 | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |

You can find the complete benchmark on YALL - Yet Another LLM Leaderboard.

πŸ’» Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralBeagle14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt using the chat template shipped with the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and spread it across the available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

Built with Distilabel