---
library_name: transformers
license: other
language:
  - en
tags:
  - gguf
  - quantized
  - roleplay
  - imatrix
  - mistral
  - merge
  - nsfw
inference: false
datasets:
  - ResplendentAI/Alpaca_NSFW_Shuffled
  - ResplendentAI/Luna_NSFW_Text
  - ResplendentAI/Synthetic_Soul_1k
  - ResplendentAI/Sissification_Hypno_1k
model-index:
  - name: Sinerva_7B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 70.14
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 85.59
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 61.77
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 59.93
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 82.56
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 62.32
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B
          name: Open LLM Leaderboard
---

This repository hosts GGUF-Imatrix quantizations for ResplendentAI/Sinerva_7B.
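Since only the quantized GGUF files are hosted here, a minimal sketch of running one of them locally with llama-cpp-python follows; the quant filename and sampling settings are assumptions for illustration, not something fixed by this repository.

    # Minimal sketch: load one of the GGUF quants with llama-cpp-python.
    # The filename below is an assumption -- substitute whichever quant you downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="Sinerva_7B-Q4_K_M-imat.gguf",  # assumed filename
        n_ctx=8192,        # context window; adjust to your memory budget
        n_gpu_layers=-1,   # offload all layers to GPU, or 0 for CPU-only
    )

    out = llm("Describe the sunset over a ruined temple.", max_tokens=128, temperature=0.8)
    print(out["choices"][0]["text"])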

Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)

To be uploaded:

    quantization_options = [
        "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
        "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
    ]

This is experimental.
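For context, the Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants) chain above is typically driven by the llama.cpp tooling. The sketch below shows how the listed quant types could be produced from a local llama.cpp checkout; the script and binary names (convert_hf_to_gguf.py, llama-quantize) vary between llama.cpp versions and, like the file names, are assumptions rather than the exact commands used for this upload.

    # Hedged sketch: convert the base model to F16 GGUF, then emit one imatrix
    # quant per target type. Paths, binary names, and filenames are assumptions.
    import subprocess

    quantization_options = [
        "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
        "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
    ]

    base_dir = "Sinerva_7B"            # local clone of ResplendentAI/Sinerva_7B (assumed)
    f16_gguf = "Sinerva_7B-F16.gguf"
    imatrix_file = "imatrix.dat"       # importance matrix; its generation is sketched below

    # Base -> GGUF(F16)
    subprocess.run(
        ["python", "llama.cpp/convert_hf_to_gguf.py", base_dir,
         "--outtype", "f16", "--outfile", f16_gguf],
        check=True,
    )

    # GGUF(F16) + imatrix -> GGUF(Imatrix-Quants)
    for quant in quantization_options:
        subprocess.run(
            ["llama.cpp/llama-quantize", "--imatrix", imatrix_file,
             f16_gguf, f"Sinerva_7B-{quant}-imat.gguf", quant],
            check=True,
        )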

For imatrix data generation, kalomaze's groups_merged.txt with added roleplay chats was used; you can find it here.
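A matching sketch of that imatrix step, assuming the merged calibration text is saved locally (the groups_merged_plus_rp.txt name is made up for illustration) and that llama.cpp's imatrix tool is built as llama-imatrix:

    # Hedged sketch of the Imatrix-Data(F16) step: compute an importance matrix
    # over the calibration text. Binary path and filenames are assumptions.
    import subprocess

    subprocess.run(
        ["llama.cpp/llama-imatrix",
         "-m", "Sinerva_7B-F16.gguf",        # F16 GGUF from the conversion step
         "-f", "groups_merged_plus_rp.txt",  # assumed name for the calibration text
         "-o", "imatrix.dat"],               # consumed by llama-quantize via --imatrix
        check=True,
    )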

Original card information:

Sinerva


Decadent and rich in sensual prose, but beware, she is designed to humiliate and degrade her user when necessary.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 70.38 |
| AI2 Reasoning Challenge (25-Shot) | 70.14 |
| HellaSwag (10-Shot) | 85.59 |
| MMLU (5-Shot) | 61.77 |
| TruthfulQA (0-shot) | 59.93 |
| Winogrande (5-shot) | 82.56 |
| GSM8k (5-shot) | 62.32 |
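The Avg. row is simply the arithmetic mean of the six benchmark scores:

    # The Avg. row is the plain mean of the six benchmark scores.
    scores = [70.14, 85.59, 61.77, 59.93, 82.56, 62.32]
    print(sum(scores) / len(scores))  # ~70.385, reported as 70.38 in the table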