---
base_model: icefog72/WestIceLemonTeaRP-32k-7b
library_name: transformers
license: cc-by-nc-4.0
tags:
  - mergekit
  - merge
  - alpaca
  - mistral
  - not-for-all-audiences
  - nsfw
  - llama-cpp
  - gguf-my-repo
model-index:
  - name: WestIceLemonTeaRP-32k-7b
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 68.77
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 86.89
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.28
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 62.47
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 80.98
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.22
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
          name: Open LLM Leaderboard
---

# woofwolfy/WestIceLemonTeaRP-32k-7b-Q5_K_M-GGUF-Imatrix

This model was converted to GGUF format from [icefog72/WestIceLemonTeaRP-32k-7b](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the original model card for more details on the model.
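
For quick local inference, a minimal sketch using the llama-cpp-python bindings is shown below; the GGUF filename glob is an assumption, so check this repo's file list for the exact name:

```python
# Minimal sketch: load the Q5_K_M GGUF via llama-cpp-python and run an
# Alpaca-formatted prompt. Requires `pip install llama-cpp-python huggingface_hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="woofwolfy/WestIceLemonTeaRP-32k-7b-Q5_K_M-GGUF-Imatrix",
    filename="*q5_k_m*.gguf",  # assumed pattern; adjust to the actual file name
    n_ctx=32768,               # the merge advertises a 32k context window
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nIntroduce yourself in one sentence.\n\n### Response:\n"
)
out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```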

# WestIceLemonTeaRP-32k-7b


This is a merge of pre-trained language models created using mergekit.

## Merge Details

Prompt template: Alpaca; ChatML may also work.
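
For reference, the standard Alpaca template looks like this (the exact preamble wording can vary between frontends):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```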

- `measurement.json` for exl2 quantization is included.

Thanks to mradermacher and SilverFan for their GGUF quants.

### Merge Method

This model was merged using the SLERP merge method.
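
For intuition, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line, which preserves the magnitude of the blended weights better than plain averaging. A toy sketch (not mergekit's actual implementation), treating each tensor as a flattened vector:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    # Angle between the two weight vectors
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    sin_omega = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / sin_omega) * a \
         + (np.sin(t * omega) / sin_omega) * b
```

The `t` values in the configuration below schedule this blend per layer and per filter (`self_attn` vs. `mlp`), with `t=0` and `t=1` selecting the two endpoints and `t=0.5` an equal blend.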

### Models Merged

The following models were included in the merge:

- IceLemonTeaRP-32k-7b
- WestWizardIceLemonTeaRP

### Configuration

The following YAML configuration was used to produce this model:


```yaml
slices:
  - sources:
      - model: IceLemonTeaRP-32k-7b
        layer_range: [0, 32]
      - model: WestWizardIceLemonTeaRP
        layer_range: [0, 32]
merge_method: slerp
base_model: IceLemonTeaRP-32k-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
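
To reproduce the merge, this file can be saved as e.g. `config.yml` and run through mergekit's `mergekit-yaml` CLI (`mergekit-yaml config.yml ./output-model`), assuming both source models are available locally or on the Hub.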


## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 71.27 |
| AI2 Reasoning Challenge (25-Shot) | 68.77 |
| HellaSwag (10-Shot)               | 86.89 |
| MMLU (5-Shot)                     | 64.28 |
| TruthfulQA (0-shot)               | 62.47 |
| Winogrande (5-shot)               | 80.98 |
| GSM8k (5-shot)                    | 64.22 |