---
license: cc-by-nc-nd-4.0
library_name: peft
tags:
  - generated_from_trainer
datasets:
  - noxneural/lilium_albanicum_eng_alb
base_model: rizla/rizla-17
model-index:
  - name: trrapi-16
    results: []
---

A QLoRA fine-tune of the frankensteined rizla/rizla-17 model.

The original Rizla models, which already displayed promising multilingual capabilities, underwent multiple rounds of customization and optimization to further enhance their versatility across languages. The process involved not only fine-tuning adjustments for better language comprehension but also strategic modifications to the underlying framework itself.

This continual refinement in response to specific requirements exemplifies a dynamic approach to natural language understanding tasks, where adaptability and flexibility are key factors in performance improvements. In essence, these iterative advancements aim to bridge the gap between generalized pre-trained models and highly specialized applications.

To run a local server at 127.0.0.1:8080 with llama.cpp:


```sh
# download the quantized model
wget https://huggingface.co/rizla/trrapi-16b/resolve/main/trrapi-q5km.gguf

# build llama.cpp
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j

# serve on port 8080: 2000-token context, continuous batching, 8 threads, 80 GPU layers
./server -m ../trrapi-q5km.gguf --port 8080 -c 2000 -cb -t 8 -ngl 80
```
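Once the server is up, it can be queried over HTTP. A minimal Python sketch, assuming the llama.cpp server's `/completion` endpoint with its standard JSON fields (`prompt`, `n_predict`, `temperature`; the response carries the generated text in `content`):

```python
import json
import urllib.request

def build_completion_request(prompt: str, n_predict: int = 128,
                             temperature: float = 0.7) -> dict:
    """Build a JSON payload for llama.cpp's /completion endpoint."""
    return {
        "prompt": prompt,
        "n_predict": n_predict,      # maximum tokens to generate
        "temperature": temperature,  # sampling temperature
    }

def complete(prompt: str, url: str = "http://127.0.0.1:8080/completion") -> str:
    """POST the prompt to the running server and return the generated text."""
    data = json.dumps(build_completion_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]

if __name__ == "__main__":
    # Requires the server started as shown above
    print(complete("Translate to Albanian: Good morning!"))
```

Adjust the field names if your llama.cpp build uses a different server API version.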

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 74.48 |
| AI2 Reasoning Challenge (25-Shot) | 72.10 |
| HellaSwag (10-Shot)               | 88.88 |
| MMLU (5-Shot)                     | 64.26 |
| TruthfulQA (0-shot)               | 74.13 |
| Winogrande (5-shot)               | 86.35 |
| GSM8k (5-shot)                    | 61.18 |
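The Avg. row is the arithmetic mean of the six benchmark scores, which can be checked directly:

```python
# Benchmark scores from the leaderboard table above
scores = {
    "ARC (25-shot)":        72.10,
    "HellaSwag (10-shot)":  88.88,
    "MMLU (5-shot)":        64.26,
    "TruthfulQA (0-shot)":  74.13,
    "Winogrande (5-shot)":  86.35,
    "GSM8k (5-shot)":       61.18,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # → 74.48, matching the Avg. row
```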