---
license: mit
tags:
  - mergekit
  - merge
  - code
base_model:
  - codellama/CodeLlama-70b-Instruct-hf
pipeline_tag: conversational
model-index:
  - name: BigCodeLlama-92b
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 54.78
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nisten/BigCodeLlama-92b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 77.84
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nisten/BigCodeLlama-92b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 55.4
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nisten/BigCodeLlama-92b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 51.34
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nisten/BigCodeLlama-92b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 73.09
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nisten/BigCodeLlama-92b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 44.96
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nisten/BigCodeLlama-92b
          name: Open LLM Leaderboard

---

# BigCodeLLama 92b LFG 🚀

An experimental 92B CodeLlama frankenmerge, built to see how it benchmarks.
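
A minimal loading sketch with 🤗 Transformers (not from the original card), assuming the merge keeps the standard Llama architecture and that `device_map="auto"` sharding across your GPUs is acceptable; the repo id is taken from the leaderboard links below:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nisten/BigCodeLlama-92b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype below
    device_map="auto",           # shard across available GPUs
)

# Assumes the tokenizer ships CodeLlama-70b-Instruct's chat template.
messages = [{"role": "user", "content": "Write a quicksort in Python."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```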

This model is a mergekit merge with base model codellama/CodeLlama-70b-Instruct-hf.

## Models Merged

The following models were included in the merge:

* ../CodeLlama-70b-Python-hf
* ../CodeLlama-70b-Instruct-hf

## Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 69]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
- sources:
  - layer_range: [42, 80]
    model:
      model:
        path: ../CodeLlama-70b-Python-hf
```
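
For intuition, a back-of-the-envelope check (not from the card) of where the 92B figure comes from, assuming mergekit's half-open `layer_range` convention and a ~70B-parameter, 80-layer base model:

```python
# Passthrough merging stacks the two slices, so layer counts simply add.
# Assumption: layer_range is half-open, i.e. [0, 69] -> layers 0..68.
instruct_layers = 69 - 0   # slice of CodeLlama-70b-Instruct-hf
python_layers = 80 - 42    # slice of CodeLlama-70b-Python-hf
total_layers = instruct_layers + python_layers  # 107 layers vs. 80 in the base

# Assumption: ~70e9 parameters spread over 80 layers (embeddings ignored),
# which puts the stacked model in the ~90B range, hence the "92b" name.
est_params = total_layers * 70e9 / 80
print(total_layers, f"~{est_params / 1e9:.0f}B")  # 107 ~94B
```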

GGUF quants are available here: https://huggingface.co/nisten/BigCodeLlama-92b-GGUF
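
A minimal inference sketch with llama-cpp-python (not from the card); the quant filename below is hypothetical, so check the GGUF repo above for the actual file names:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./BigCodeLlama-92b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers if built with CUDA/Metal support
)
out = llm("Write a C function that reverses a string in place.", max_tokens=256)
print(out["choices"][0]["text"])
```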

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nisten/BigCodeLlama-92b).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 59.57 |
| AI2 Reasoning Challenge (25-Shot) | 54.78 |
| HellaSwag (10-Shot)               | 77.84 |
| MMLU (5-Shot)                     | 55.40 |
| TruthfulQA (0-shot)               | 51.34 |
| Winogrande (5-shot)               | 73.09 |
| GSM8k (5-shot)                    | 44.96 |
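
The reported average is just the arithmetic mean of the six benchmark scores; a one-line check:

```python
scores = [54.78, 77.84, 55.40, 51.34, 73.09, 44.96]
print(round(sum(scores) / len(scores), 2))  # 59.57
```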