|
--- |
|
language: |
|
- sr |
|
license: apache-2.0 |
|
task_categories: |
|
- question-answering |
|
dataset_info: |
|
features: |
|
- name: index |
|
dtype: int64 |
|
- name: questions |
|
dtype: string |
|
- name: options |
|
sequence: string |
|
- name: answer |
|
dtype: string |
|
- name: answer_index |
|
dtype: int64 |
|
splits: |
|
- name: test |
|
num_bytes: 200846 |
|
num_examples: 1003 |
|
download_size: 139630 |
|
dataset_size: 200846 |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: test |
|
path: data/oz-eval-* |
|
--- |
|
|
|
# OZ Eval |
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d01cf580c9d4ceb20254bb/Ft9w9DNRN_1bzuQC9XNSG.png) |
|
|
|
## Dataset Description |
|
The OZ Eval (_sr._ Opšte Znanje Evaluacija) dataset was created to evaluate the general knowledge of LLMs in the Serbian language.

The data consists of 1,003 high-quality questions and answers that were used as part of entrance exams at the Faculty of Philosophy and the Faculty of Organizational Sciences, University of Belgrade.

The exams test students' general knowledge and were administered during enrollment periods from 2003 to 2024.
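
The dataset can be loaded directly from the Hugging Face Hub; a minimal sketch (the field names follow the schema above):

```python
from datasets import load_dataset

# Load the single test split of OZ Eval.
ds = load_dataset("DjMel/oz-eval", split="test")

example = ds[0]
print(example["questions"])     # question text in Serbian
print(example["options"])       # list of five answer options
print(example["answer"])        # correct answer as a string
print(example["answer_index"])  # position of the correct answer in `options`
```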
|
|
|
This is joint work with [@Stopwolf](https://huggingface.co/Stopwolf)!
|
|
|
|
|
## Evaluation process |
|
Models are evaluated with Hugging Face's `lighteval` library, following the procedure below. We supply the model with the following prompt template:
|
```
Pitanje: {question}

Ponuđeni odgovori:
A. {option_a}
B. {option_b}
C. {option_c}
D. {option_d}
E. {option_e}

Krajnji odgovor:
```
|
We then compare the likelihoods of each letter (`A`, `B`, `C`, `D`, `E`) and calculate the final accuracy. All evaluations are run in a 0-shot manner using a **chat template**.
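
Since each target is a single letter, comparing continuation likelihoods reduces to comparing next-token scores. A minimal sketch with `transformers` (the model name is illustrative and `predict_letter` is a hypothetical helper; the exact task code is linked below):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.3"  # any chat model under evaluation
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

def predict_letter(prompt: str) -> str:
    # Apply the model's chat template, as in the 0-shot evaluation setup.
    text = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        tokenize=False,
    )
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token
    # Compare the score of each candidate letter and pick the largest.
    # Note: letter token ids are tokenizer-dependent (leading-space variants differ).
    letter_ids = [tokenizer.encode(l, add_special_tokens=False)[0] for l in "ABCDE"]
    return "ABCDE"[int(torch.argmax(logits[letter_ids]))]
```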
|
|
|
GPT-like models were evaluated by taking the top-20 probabilities of the first output token, filtering them for the letters `A` to `E`, and taking the letter with the highest probability as the final answer.
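
A minimal sketch of that procedure with the OpenAI Python SDK (the model name and the `predict_letter_gpt` helper are illustrative):

```python
from openai import OpenAI

client = OpenAI()
LETTERS = {"A", "B", "C", "D", "E"}

def predict_letter_gpt(prompt: str) -> str:
    # Request the top-20 log-probabilities of the first output token.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,
        logprobs=True,
        top_logprobs=20,
    )
    top = response.choices[0].logprobs.content[0].top_logprobs
    # Keep only candidates that are answer letters, then take the most
    # probable one as the final answer.
    candidates = [t for t in top if t.token.strip() in LETTERS]
    return max(candidates, key=lambda t: t.logprob).token.strip()
```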
|
|
|
The exact code for the task can be found [here](https://github.com/Stopwolf/lighteval/blob/oz-eval/community_tasks/oz_evals.py).
|
|
|
Run the evaluation with the following command (do not forget to add `--use_chat_template`):
|
```
accelerate launch lighteval/run_evals_accelerate.py \
    --model_args "pretrained={MODEL_NAME},trust_remote_code=True" \
    --use_chat_template \
    --tasks "community|serbian_evals:oz_task|0|0" \
    --custom_tasks "/content/lighteval/community_tasks/oz_evals.py" \
    --output_dir "./evals" \
    --override_batch_size 32
```
|
|
|
## Evaluation results |
|
| Model | Size | Accuracy | | Stderr |
|-------|---:|-------:|--|-----:|
|GPT-4-0125-preview|_???_|0.9199|±|0.0020|
|GPT-4o-2024-05-13|_???_|0.9196|±|0.0017|
|GPT-3.5-turbo-0125|_???_|0.8245|±|0.0016|
|[Llama3.1-70B-Instruct \[4bit\]](https://huggingface.co/unsloth/Meta-Llama-3.1-70B-bnb-4bit)|70B|0.8185|±|0.0122|
|GPT-4o-mini-2024-07-18|_???_|0.7971|±|0.0005|
|[Mustra-7B-Instruct-v0.2](https://huggingface.co/Stopwolf/Mustra-7B-Instruct-v0.2)|7B|0.7388|±|0.0098|
|[Tito-7B-slerp](https://huggingface.co/Stopwolf/Tito-7B-slerp)|7B|0.7099|±|0.0101|
|[Yugo55A-GPT](https://huggingface.co/datatab/Yugo55A-GPT)|7B|0.6889|±|0.0103|
|[Zamfir-7B-slerp](https://huggingface.co/Stopwolf/Zamfir-7B-slerp)|7B|0.6849|±|0.0104|
|[Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|12.2B|0.6839|±|0.0104|
|[Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct)|8B|0.6790|±|0.0147|
|[Qwen2-7B-instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)|7B|0.6730|±|0.0105|
|[Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)|8B|0.6610|±|0.0106|
|[Yugo60-GPT](https://huggingface.co/datatab/Yugo60-GPT)|7B|0.6411|±|0.0107|
|[DeepSeek-V2-Lite-Chat](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat)|15.7B|0.6047|±|0.0109|
|[Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)|8B|0.5972|±|0.0155|
|[Llama3-70B-Instruct \[4bit\]](https://huggingface.co/unsloth/llama-3-70b-Instruct-bnb-4bit)|70B|0.5942|±|0.0110|
|[Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)|8B|0.5932|±|0.0155|
|[Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)|8B|0.5852|±|0.0110|
|[Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)|7B|0.5753|±|0.0110|
|[openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)|8B|0.5513|±|0.0111|
|[Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|8B|0.5274|±|0.0111|
|[Starling-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)|7B|0.5244|±|0.0112|
|[Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)|7B|0.5145|±|0.0112|
|[Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct)|1.5B|0.4506|±|0.0111|
|[Perucac-7B-slerp](https://huggingface.co/Stopwolf/Perucac-7B-slerp)|7B|0.4247|±|0.0110|
|[Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)|3.8B|0.3719|±|0.0108|
|[SambaLingo-Serbian-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Serbian-Chat)|7B|0.2802|±|0.0100|
|[Gemma-2-9B-it](https://huggingface.co/google/gemma-2-9b-it)|9B|0.2193|±|0.0092|
|[Gemma-2-2B-it](https://huggingface.co/google/gemma-2-2b-it)|2.6B|0.1715|±|0.0084|
|
|
|
|
|
## Citation
|
```
@misc{oz-eval,
    author = "Stanivuk, Siniša and Đorđević, Milena",
    title = "OZ Eval: Measuring General Knowledge Skill at University Level of LLMs in Serbian Language",
    year = "2024",
    howpublished = {\url{https://huggingface.co/datasets/DjMel/oz-eval}},
}
```