datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card |
---|---|---|---|---|---|---|---|---|
Ayush-Singh/skywork-sample | Ayush-Singh | "2024-12-01T17:11:15Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T17:11:12Z" | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
- name: reward_chosen
dtype: float64
- name: reward_rejected
dtype: float64
splits:
- name: train
num_bytes: 107247
num_examples: 10
download_size: 57710
dataset_size: 107247
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MKJ-TOE/magpie-reasoning-llm-jp-13b-20k | MKJ-TOE | "2024-12-01T17:12:08Z" | 3 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T17:11:52Z" | ---
license: apache-2.0
---
|
taesiri/PhotoshopRequest-DailyDump-December-2024 | taesiri | "2024-12-02T17:53:46Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T17:22:48Z" | ---
dataset_info:
features:
- name: post_id
dtype: string
- name: title
dtype: string
- name: source_image
dtype: image
- name: comment_id
dtype: string
- name: edited_image
dtype: image
- name: json_data
dtype: string
- name: permalink
dtype: string
- name: created_date
dtype: timestamp[ns]
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 12727655623.946
num_examples: 1902
download_size: 7611055971
dataset_size: 12727655623.946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ckcl/train_2 | ckcl | "2024-12-01T17:28:07Z" | 3 | 0 | [
"license:mit",
"region:us"
] | null | "2024-12-01T17:28:07Z" | ---
license: mit
---
|
anthracite-org/pixmo-cap-qa-images | anthracite-org | "2024-12-01T18:46:33Z" | 3 | 0 | [
"task_categories:visual-question-answering",
"license:odc-by",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"visual-question-answering"
] | "2024-12-01T17:58:09Z" | ---
license: odc-by
task_categories:
- visual-question-answering
dataset_info:
features:
- name: image
dtype: image
- name: image_url
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 121905039183.176
num_examples: 268816
download_size: 87966670514
dataset_size: 121905039183.176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Big thanks to Ai2 for releasing the original [PixMo-CapQA](https://huggingface.co/datasets/allenai/pixmo-cap-qa) dataset. To preserve the images and simplify usage of the dataset, we are releasing this version, which includes downloaded images.
# PixMo-CapQA
PixMo-CapQA is a synthetic dataset of question/answer pairs about images. The data was generated using the
[Claude](https://www.anthropic.com/claude) large language model to build Q/A pairs from [dense captions of images](https://huggingface.co/datasets/allenai/pixmo-cap) (the model did not see the actual images).
PixMo-CapQA is part of the [PixMo dataset collection](https://huggingface.co/collections/allenai/pixmo-674746ea613028006285687b) and was used to train the [Molmo family of models](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
Quick links:
- 📃 [Paper](https://molmo.allenai.org/paper.pdf)
- 🎥 [Blog with Videos](https://molmo.allenai.org/blog)
## Loading
```python
import datasets

data = datasets.load_dataset("anthracite-org/pixmo-cap-qa-images", split="train")
```
## Data Format
Unlike the original release, images are included in the dataset itself.
The `question` and `answer` fields contain the Q/A pairs.
Images may appear more than once, since many images have multiple Q/A pairs.
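As a minimal sketch of reading these fields (not part of the original card; it assumes the `datasets` and `Pillow` libraries are installed):
```python
from datasets import load_dataset

data = load_dataset("anthracite-org/pixmo-cap-qa-images", split="train")

example = data[0]              # each row is one Q/A pair
print(example["question"])     # question text
print(example["answer"])       # answer text
print(example["image_url"])    # source URL of the image
print(example["image"].size)   # decoded PIL image, e.g. (width, height)
```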
## License
This dataset is licensed under ODC-BY-1.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use).
This dataset includes data generated with Claude, which is subject to Anthropic's [terms of service](https://www.anthropic.com/legal/commercial-terms) and [usage policy](https://www.anthropic.com/legal/aup). |
Kariander1/flux_10k_captions | Kariander1 | "2024-12-01T18:12:49Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T18:12:46Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: image_path
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 3703339
num_examples: 10000
download_size: 1766621
dataset_size: 3703339
---
# Dataset Card for "flux_10k_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
k4d3/akatan | k4d3 | "2024-12-01T18:16:22Z" | 3 | 1 | [
"license:wtfpl",
"region:us"
] | null | "2024-12-01T18:16:22Z" | ---
license: wtfpl
---
|
sebgrima/britishhland | sebgrima | "2024-12-01T18:28:20Z" | 3 | 0 | [
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"created-with-pdfs-to-page-images-converter",
"pdf-to-image"
] | null | "2024-12-01T18:26:58Z" | ---
size_categories:
- n<1K
tags:
- created-with-pdfs-to-page-images-converter
- pdf-to-image
---
# Dataset Card for sebgrima/britishhland
## Dataset Description
This dataset contains images converted from PDFs using the PDFs to Page Images Converter Space.
- **Number of images:** 626
- **Number of PDFs processed:** 4
- **Sample size per PDF:** 100
- **Created on:** 2024-12-01 19:28:20
## Dataset Creation
### Source Data
The images in this dataset were generated from user-uploaded PDF files.
### Processing Steps
1. PDF files were uploaded to the PDFs to Page Images Converter.
2. Each PDF was processed, converting selected pages to images.
3. The resulting images were saved and uploaded to this dataset (a minimal sketch of the conversion step is shown below).
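The converter Space's own code is not included in this card; the following is only a rough sketch of the conversion step, assuming the `pdf2image` library (which requires poppler) and a hypothetical input file:
```python
from pathlib import Path

from pdf2image import convert_from_path  # pip install pdf2image; needs poppler installed

def pdf_to_page_images(pdf_path: str, out_dir: str, max_pages: int = 100) -> None:
    """Render up to `max_pages` pages of one PDF as JPEG files."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    pages = convert_from_path(pdf_path)  # one PIL image per page
    for i, page in enumerate(pages[:max_pages], start=1):
        page.save(out / f"{Path(pdf_path).stem}_page_{i:03d}.jpg", "JPEG")

# Hypothetical usage; this dataset processed 4 PDFs with up to 100 pages each.
pdf_to_page_images("example.pdf", "images")
```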
## Dataset Structure
The dataset consists of JPEG images, each representing a single page from the source PDFs.
### Data Fields
- `images/`: A folder containing all the converted images.
### Data Splits
This dataset does not have specific splits.
## Additional Information
- **Contributions:** Thanks to the PDFs to Page Images Converter for creating this dataset.
|
amang1802/lmsys_synthetic_instruction_preferences | amang1802 | "2024-12-01T19:09:57Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T19:07:10Z" | ---
dataset_info:
features:
- name: conversation_id
dtype: string
- name: model
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: turn
dtype: int64
- name: language
dtype: string
- name: openai_moderation
list:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: flagged
dtype: bool
- name: redacted
dtype: bool
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: user_input
dtype: string
- name: system_prompt
dtype: string
splits:
- name: train
num_bytes: 3729227223.0
num_examples: 950000
- name: test
num_bytes: 196275117.0
num_examples: 50000
download_size: 2193670140
dataset_size: 3925502340.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
simonycl/Meta-Llama-3-8B-Instruct_ultrafeedback-annotate-judge-mtbench_cot_truth | simonycl | "2024-12-01T19:11:54Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T19:11:52Z" | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: all_generated_responses
sequence: string
- name: scores
sequence: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 84286
num_examples: 6
download_size: 74798
dataset_size: 84286
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
StephanAkkerman/wikipron-words-ipa | StephanAkkerman | "2024-12-01T19:25:53Z" | 3 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-01T19:21:46Z" | ---
license: apache-2.0
---
# Wikipron Words IPA
This dataset is a copy of the scraped pronunciation data from https://github.com/CUNY-CL/wikipron/tree/master/data/scrape.
## Description
* Languages: 306
* Broad transcription files: 309
* Narrow transcription files: 174
* Dialects: 17
* Broad transcription files: 25
* Narrow transcription files: 22
* Scripts: 42
* Pronunciations: 3,958,916
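Each file in the table below is a tab-separated list of word/pronunciation pairs. As a minimal loading sketch (not part of the original repository; it assumes the TSVs are headerless, two-column word/IPA files):
```python
import pandas as pd

# Any path from the table below works; this one is the US English broad file.
df = pd.read_csv(
    "tsv/eng_latn_us_broad.tsv",
    sep="\t",
    header=None,
    names=["word", "ipa"],
)
print(len(df), "pronunciations")
print(df.head())
```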
| Link | ISO 639-3 Code | ISO 639 Language Name | Wiktionary Language Name | Script | Dialect | Filtered | Narrow/Broad | # of entries |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | ----: |
| [TSV](tsv/aar_latn_broad.tsv) | aar | Afar | Afar | Latin | | False | Broad | 1,584 |
| [TSV](tsv/aar_latn_narrow.tsv) | aar | Afar | Afar | Latin | | False | Narrow | 1,548 |
| [TSV](tsv/abk_cyrl_broad.tsv) | abk | Abkhazian | Abkhaz | Cyrillic | | False | Broad | 198 |
| [TSV](tsv/abk_cyrl_narrow.tsv) | abk | Abkhazian | Abkhaz | Cyrillic | | False | Narrow | 841 |
| [TSV](tsv/acw_arab_broad.tsv) | acw | Hijazi Arabic | Hijazi Arabic | Arabic | | False | Broad | 2,252 |
| [TSV](tsv/acw_arab_narrow.tsv) | acw | Hijazi Arabic | Hijazi Arabic | Arabic | | False | Narrow | 711 |
| [TSV](tsv/ady_cyrl_narrow.tsv) | ady | Adyghe | Adyghe | Cyrillic | | False | Narrow | 5,121 |
| [TSV](tsv/ady_cyrl_narrow_filtered.tsv) | ady | Adyghe | Adyghe | Cyrillic | | True | Narrow | 4,893 |
| [TSV](tsv/afb_arab_broad.tsv) | afb | Gulf Arabic | Gulf Arabic | Arabic | | False | Broad | 719 |
| [TSV](tsv/afr_latn_broad.tsv) | afr | Afrikaans | Afrikaans | Latin | | False | Broad | 2,022 |
| [TSV](tsv/afr_latn_broad_filtered.tsv) | afr | Afrikaans | Afrikaans | Latin | | True | Broad | 1,982 |
| [TSV](tsv/afr_latn_narrow.tsv) | afr | Afrikaans | Afrikaans | Latin | | False | Narrow | 134 |
| [TSV](tsv/aii_syrc_narrow.tsv) | aii | Assyrian Neo-Aramaic | Assyrian Neo-Aramaic | Syriac | | False | Narrow | 4,543 |
| [TSV](tsv/ajp_arab_broad.tsv) | ajp | South Levantine Arabic | South Levantine Arabic | Arabic | | False | Broad | 3,124 |
| [TSV](tsv/ajp_arab_narrow.tsv) | ajp | South Levantine Arabic | South Levantine Arabic | Arabic | | False | Narrow | 3,149 |
| [TSV](tsv/akk_latn_broad.tsv) | akk | Akkadian | Akkadian | Latin | | False | Broad | 603 |
| [TSV](tsv/ale_latn_broad.tsv) | ale | Aleut | Aleut | Latin | | False | Broad | 119 |
| [TSV](tsv/amh_ethi_broad.tsv) | amh | Amharic | Amharic | Ethiopic | | False | Broad | 378 |
| [TSV](tsv/ang_latn_broad.tsv) | ang | Old English (ca. 450-1100) | Old English | Latin | | False | Broad | 22,124 |
| [TSV](tsv/ang_latn_narrow.tsv) | ang | Old English (ca. 450-1100) | Old English | Latin | | False | Narrow | 11,243 |
| [TSV](tsv/aot_latn_broad.tsv) | aot | Atong (India) | Atong (India) | Latin | | False | Broad | 181 |
| [TSV](tsv/apw_latn_narrow.tsv) | apw | Western Apache | Western Apache | Latin | | False | Narrow | 147 |
| [TSV](tsv/ara_arab_broad.tsv) | ara | Arabic | Arabic | Arabic | | False | Broad | 13,339 |
| [TSV](tsv/ara_arab_narrow.tsv) | ara | Arabic | Arabic | Arabic | | False | Narrow | 104 |
| [TSV](tsv/arc_hebr_broad.tsv) | arc | Official Aramaic (700-300 BCE) | Aramaic | Hebrew | | False | Broad | 1,167 |
| [TSV](tsv/arg_latn_broad.tsv) | arg | Aragonese | Aragonese | Latin | | False | Broad | 298 |
| [TSV](tsv/ary_arab_broad.tsv) | ary | Moroccan Arabic | Moroccan Arabic | Arabic | | False | Broad | 2,043 |
| [TSV](tsv/arz_arab_broad.tsv) | arz | Egyptian Arabic | Egyptian Arabic | Arabic | | False | Broad | 200 |
| [TSV](tsv/asm_beng_broad.tsv) | asm | Assamese | Assamese | Bengali | | False | Broad | 2,925 |
| [TSV](tsv/ast_latn_broad.tsv) | ast | Asturian | Asturian | Latin | | False | Broad | 1,018 |
| [TSV](tsv/ast_latn_narrow.tsv) | ast | Asturian | Asturian | Latin | | False | Narrow | 986 |
| [TSV](tsv/ayl_arab_broad.tsv) | ayl | Libyan Arabic | Libyan Arabic | Arabic | | False | Broad | 163 |
| [TSV](tsv/aze_latn_broad.tsv) | aze | Azerbaijani | Azerbaijani | Latin | | False | Broad | 383 |
| [TSV](tsv/aze_latn_narrow.tsv) | aze | Azerbaijani | Azerbaijani | Latin | | False | Narrow | 4,226 |
| [TSV](tsv/aze_latn_narrow_filtered.tsv) | aze | Azerbaijani | Azerbaijani | Latin | | True | Narrow | 4,011 |
| [TSV](tsv/bak_cyrl_broad.tsv) | bak | Bashkir | Bashkir | Cyrillic | | False | Broad | 179 |
| [TSV](tsv/bak_cyrl_narrow.tsv) | bak | Bashkir | Bashkir | Cyrillic | | False | Narrow | 2,184 |
| [TSV](tsv/ban_bali_broad.tsv) | ban | Balinese | Balinese | Balinese | | False | Broad | 410 |
| [TSV](tsv/bar_latn_broad.tsv) | bar | Bavarian | Bavarian | Latin | | False | Broad | 1,542 |
| [TSV](tsv/bbl_geor_broad.tsv) | bbl | Bats | Bats | Georgian | | False | Broad | 167 |
| [TSV](tsv/bbn_latn_broad.tsv) | bbn | Uneapa | Uneapa | Latin | | False | Broad | 192 |
| [TSV](tsv/bcl_latn_broad.tsv) | bcl | Central Bikol | Bikol Central | Latin | | False | Broad | 4,928 |
| [TSV](tsv/bcl_latn_narrow.tsv) | bcl | Central Bikol | Bikol Central | Latin | | False | Narrow | 4,936 |
| [TSV](tsv/bdq_latn_broad.tsv) | bdq | Bahnar | Bahnar | Latin | | False | Broad | 193 |
| [TSV](tsv/bel_cyrl_narrow.tsv) | bel | Belarusian | Belarusian | Cyrillic | | False | Narrow | 5,516 |
| [TSV](tsv/ben_beng_broad.tsv) | ben | Bengali | Bengali | Bengali | | False | Broad | 6,666 |
| [TSV](tsv/ben_beng_dhaka_broad.tsv) | ben | Bengali | Bengali | Bengali | Dhaka | False | Broad | 7,998 |
| [TSV](tsv/ben_beng_dhaka_broad_filtered.tsv) | ben | Bengali | Bengali | Bengali | Dhaka | True | Broad | 6,496 |
| [TSV](tsv/ben_beng_dhaka_narrow.tsv) | ben | Bengali | Bengali | Bengali | Dhaka | False | Narrow | 7,821 |
| [TSV](tsv/ben_beng_narrow.tsv) | ben | Bengali | Bengali | Bengali | | False | Narrow | 5,933 |
| [TSV](tsv/ben_beng_rarh_broad.tsv) | ben | Bengali | Bengali | Bengali | Rarh, Standard Bengali | False | Broad | 4,980 |
| [TSV](tsv/ben_beng_rarh_broad_filtered.tsv) | ben | Bengali | Bengali | Bengali | Rarh, Standard Bengali | True | Broad | 4,123 |
| [TSV](tsv/ben_beng_rarh_narrow.tsv) | ben | Bengali | Bengali | Bengali | Rarh, Standard Bengali | False | Narrow | 6,474 |
| [TSV](tsv/bjb_latn_broad.tsv) | bjb | Banggarla | Barngarla | Latin | | False | Broad | 136 |
| [TSV](tsv/blt_tavt_narrow.tsv) | blt | Tai Dam | Tai Dam | Tai Viet | | False | Narrow | 239 |
| [TSV](tsv/bod_tibt_broad.tsv) | bod | Tibetan | Tibetan | Tibetan | | False | Broad | 2,699 |
| [TSV](tsv/bre_latn_broad.tsv) | bre | Breton | Breton | Latin | | False | Broad | 770 |
| [TSV](tsv/bua_cyrl_broad.tsv) | bua | Buriat | Buryat | Cyrillic | | False | Broad | 125 |
| [TSV](tsv/bua_cyrl_narrow.tsv) | bua | Buriat | Buryat | Cyrillic | | False | Narrow | 140 |
| [TSV](tsv/bul_cyrl_narrow.tsv) | bul | Bulgarian | Bulgarian | Cyrillic | | False | Narrow | 42,309 |
| [TSV](tsv/cat_latn_broad.tsv) | cat | Catalan | Catalan | Latin | | False | Broad | 176 |
| [TSV](tsv/cat_latn_narrow.tsv) | cat | Catalan | Catalan | Latin | | False | Narrow | 92,225 |
| [TSV](tsv/cbn_thai_broad.tsv) | cbn | Nyahkur | Nyah Kur | Thai | | False | Broad | 151 |
| [TSV](tsv/ceb_latn_broad.tsv) | ceb | Cebuano | Cebuano | Latin | | False | Broad | 2,953 |
| [TSV](tsv/ceb_latn_narrow.tsv) | ceb | Cebuano | Cebuano | Latin | | False | Narrow | 2,822 |
| [TSV](tsv/ces_latn_narrow.tsv) | ces | Czech | Czech | Latin | | False | Narrow | 43,717 |
| [TSV](tsv/chb_latn_broad.tsv) | chb | Chibcha | Chibcha | Latin | | False | Broad | 122 |
| [TSV](tsv/che_cyrl_broad.tsv) | che | Chechen | Chechen | Cyrillic | | False | Broad | 172 |
| [TSV](tsv/cho_latn_broad.tsv) | cho | Choctaw | Choctaw | Latin | | False | Broad | 124 |
| [TSV](tsv/chr_cher_broad.tsv) | chr | Cherokee | Cherokee | Cherokee | | False | Broad | 103 |
| [TSV](tsv/cic_latn_broad.tsv) | cic | Chickasaw | Chickasaw | Latin | | False | Broad | 286 |
| [TSV](tsv/ckb_arab_broad.tsv) | ckb | Central Kurdish | Central Kurdish | Arabic | | False | Broad | 288 |
| [TSV](tsv/cnk_latn_broad.tsv) | cnk | Khumi Chin | Khumi Chin | Latin | | False | Broad | 350 |
| [TSV](tsv/cop_copt_broad.tsv) | cop | Coptic | Coptic | Coptic | | False | Broad | 820 |
| [TSV](tsv/cor_latn_broad.tsv) | cor | Cornish | Cornish | Latin | | False | Broad | 174 |
| [TSV](tsv/cor_latn_narrow.tsv) | cor | Cornish | Cornish | Latin | | False | Narrow | 706 |
| [TSV](tsv/cos_latn_broad.tsv) | cos | Corsican | Corsican | Latin | | False | Broad | 476 |
| [TSV](tsv/crk_latn_broad.tsv) | crk | Plains Cree | Plains Cree | Latin | | False | Broad | 108 |
| [TSV](tsv/crk_latn_narrow.tsv) | crk | Plains Cree | Plains Cree | Latin | | False | Narrow | 144 |
| [TSV](tsv/crx_cans_broad.tsv) | crx | Carrier | Carrier | Canadian Aboriginal | | False | Broad | 175 |
| [TSV](tsv/csb_latn_broad.tsv) | csb | Kashubian | Kashubian | Latin | | False | Broad | 818 |
| [TSV](tsv/cym_latn_nw_broad.tsv) | cym | Welsh | Welsh | Latin | North Wales | False | Broad | 10,320 |
| [TSV](tsv/cym_latn_nw_broad_filtered.tsv) | cym | Welsh | Welsh | Latin | North Wales | True | Broad | 10,248 |
| [TSV](tsv/cym_latn_nw_narrow.tsv) | cym | Welsh | Welsh | Latin | North Wales | False | Narrow | 1,006 |
| [TSV](tsv/cym_latn_sw_broad.tsv) | cym | Welsh | Welsh | Latin | South Wales | False | Broad | 16,060 |
| [TSV](tsv/cym_latn_sw_broad_filtered.tsv) | cym | Welsh | Welsh | Latin | South Wales | True | Broad | 15,880 |
| [TSV](tsv/cym_latn_sw_narrow.tsv) | cym | Welsh | Welsh | Latin | South Wales | False | Narrow | 1,049 |
| [TSV](tsv/dan_latn_broad.tsv) | dan | Danish | Danish | Latin | | False | Broad | 4,657 |
| [TSV](tsv/dan_latn_narrow.tsv) | dan | Danish | Danish | Latin | | False | Narrow | 8,380 |
| [TSV](tsv/deu_latn_broad.tsv) | deu | German | German | Latin | | False | Broad | 49,829 |
| [TSV](tsv/deu_latn_broad_filtered.tsv) | deu | German | German | Latin | | True | Broad | 47,779 |
| [TSV](tsv/deu_latn_narrow.tsv) | deu | German | German | Latin | | False | Narrow | 18,430 |
| [TSV](tsv/div_thaa_broad.tsv) | div | Dhivehi | Dhivehi | Thaana | | False | Broad | 1,524 |
| [TSV](tsv/div_thaa_narrow.tsv) | div | Dhivehi | Dhivehi | Thaana | | False | Narrow | 1,608 |
| [TSV](tsv/dlm_latn_broad.tsv) | dlm | Dalmatian | Dalmatian | Latin | | False | Broad | 176 |
| [TSV](tsv/dng_cyrl_broad.tsv) | dng | Dungan | Dungan | Cyrillic | | False | Broad | 255 |
| [TSV](tsv/dsb_latn_broad.tsv) | dsb | Lower Sorbian | Lower Sorbian | Latin | | False | Broad | 2,258 |
| [TSV](tsv/dsb_latn_narrow.tsv) | dsb | Lower Sorbian | Lower Sorbian | Latin | | False | Narrow | 1,428 |
| [TSV](tsv/dum_latn_broad.tsv) | dum | Middle Dutch (ca. 1050-1350) | Middle Dutch | Latin | | False | Broad | 215 |
| [TSV](tsv/dzo_tibt_broad.tsv) | dzo | Dzongkha | Dzongkha | Tibetan | | False | Broad | 212 |
| [TSV](tsv/egy_latn_broad.tsv) | egy | Egyptian (Ancient) | Egyptian | Latin | | False | Broad | 4,046 |
| [TSV](tsv/ell_grek_broad.tsv) | ell | Modern Greek (1453-) | Greek | Greek | | False | Broad | 15,241 |
| [TSV](tsv/ell_grek_broad_filtered.tsv) | ell | Modern Greek (1453-) | Greek | Greek | | True | Broad | 14,825 |
| [TSV](tsv/ell_grek_narrow.tsv) | ell | Modern Greek (1453-) | Greek | Greek | | False | Narrow | 342 |
| [TSV](tsv/eng_latn_uk_broad.tsv) | eng | English | English | Latin | UK, Received Pronunciation | False | Broad | 79,409 |
| [TSV](tsv/eng_latn_uk_broad_filtered.tsv) | eng | English | English | Latin | UK, Received Pronunciation | True | Broad | 78,752 |
| [TSV](tsv/eng_latn_uk_narrow.tsv) | eng | English | English | Latin | UK, Received Pronunciation | False | Narrow | 1,787 |
| [TSV](tsv/eng_latn_us_broad.tsv) | eng | English | English | Latin | US, General American | False | Broad | 78,117 |
| [TSV](tsv/eng_latn_us_broad_filtered.tsv) | eng | English | English | Latin | US, General American | True | Broad | 77,566 |
| [TSV](tsv/eng_latn_us_narrow.tsv) | eng | English | English | Latin | US, General American | False | Narrow | 2,563 |
| [TSV](tsv/enm_latn_broad.tsv) | enm | Middle English (1100-1500) | Middle English | Latin | | False | Broad | 10,525 |
| [TSV](tsv/epo_latn_broad.tsv) | epo | Esperanto | Esperanto | Latin | | False | Broad | 3,999 |
| [TSV](tsv/epo_latn_narrow.tsv) | epo | Esperanto | Esperanto | Latin | | False | Narrow | 17,209 |
| [TSV](tsv/est_latn_broad.tsv) | est | Estonian | Estonian | Latin | | False | Broad | 1,789 |
| [TSV](tsv/est_latn_narrow.tsv) | est | Estonian | Estonian | Latin | | False | Narrow | 1,127 |
| [TSV](tsv/ett_ital_broad.tsv) | ett | Etruscan | Etruscan | Old Italic | | False | Broad | 207 |
| [TSV](tsv/eus_latn_broad.tsv) | eus | Basque | Basque | Latin | | False | Broad | 8,033 |
| [TSV](tsv/eus_latn_narrow.tsv) | eus | Basque | Basque | Latin | | False | Narrow | 8,010 |
| [TSV](tsv/evn_cyrl_broad.tsv) | evn | Evenki | Evenki | Cyrillic | | False | Broad | 126 |
| [TSV](tsv/ewe_latn_broad.tsv) | ewe | Ewe | Ewe | Latin | | False | Broad | 136 |
| [TSV](tsv/fao_latn_broad.tsv) | fao | Faroese | Faroese | Latin | | False | Broad | 1,947 |
| [TSV](tsv/fao_latn_narrow.tsv) | fao | Faroese | Faroese | Latin | | False | Narrow | 1,175 |
| [TSV](tsv/fas_arab_broad.tsv) | fas | Persian | Persian | Arabic | | False | Broad | 554 |
| [TSV](tsv/fas_arab_narrow.tsv) | fas | Persian | Persian | Arabic | | False | Narrow | 34,033 |
| [TSV](tsv/fax_latn_broad.tsv) | fax | Fala | Fala | Latin | | False | Broad | 538 |
| [TSV](tsv/fin_latn_broad.tsv) | fin | Finnish | Finnish | Latin | | False | Broad | 158,880 |
| [TSV](tsv/fin_latn_narrow.tsv) | fin | Finnish | Finnish | Latin | | False | Narrow | 158,871 |
| [TSV](tsv/fra_latn_broad.tsv) | fra | French | French | Latin | | False | Broad | 80,943 |
| [TSV](tsv/fra_latn_broad_filtered.tsv) | fra | French | French | Latin | | True | Broad | 80,690 |
| [TSV](tsv/fra_latn_narrow.tsv) | fra | French | French | Latin | | False | Narrow | 254 |
| [TSV](tsv/fro_latn_broad.tsv) | fro | Old French (842-ca. 1400) | Old French | Latin | | False | Broad | 929 |
| [TSV](tsv/frr_latn_broad.tsv) | frr | Northern Frisian | North Frisian | Latin | | False | Broad | 167 |
| [TSV](tsv/fry_latn_broad.tsv) | fry | Western Frisian | West Frisian | Latin | | False | Broad | 1,061 |
| [TSV](tsv/gla_latn_broad.tsv) | gla | Scottish Gaelic | Scottish Gaelic | Latin | | False | Broad | 3,131 |
| [TSV](tsv/gla_latn_narrow.tsv) | gla | Scottish Gaelic | Scottish Gaelic | Latin | | False | Narrow | 162 |
| [TSV](tsv/gle_latn_broad.tsv) | gle | Irish | Irish | Latin | | False | Broad | 14,379 |
| [TSV](tsv/gle_latn_narrow.tsv) | gle | Irish | Irish | Latin | | False | Narrow | 1,570 |
| [TSV](tsv/glg_latn_broad.tsv) | glg | Galician | Galician | Latin | | False | Broad | 5,076 |
| [TSV](tsv/glg_latn_narrow.tsv) | glg | Galician | Galician | Latin | | False | Narrow | 4,248 |
| [TSV](tsv/glv_latn_broad.tsv) | glv | Manx | Manx | Latin | | False | Broad | 208 |
| [TSV](tsv/glv_latn_narrow.tsv) | glv | Manx | Manx | Latin | | False | Narrow | 131 |
| [TSV](tsv/gml_latn_broad.tsv) | gml | Middle Low German | Middle Low German | Latin | | False | Broad | 171 |
| [TSV](tsv/goh_latn_broad.tsv) | goh | Old High German (ca. 750-1050) | Old High German | Latin | | False | Broad | 141 |
| [TSV](tsv/got_goth_broad.tsv) | got | Gothic | Gothic | Gothic | | False | Broad | 1,785 |
| [TSV](tsv/got_goth_narrow.tsv) | got | Gothic | Gothic | Gothic | | False | Narrow | 382 |
| [TSV](tsv/grc_grek_broad.tsv) | grc | Ancient Greek (to 1453) | Ancient Greek | Greek | | False | Broad | 120,580 |
| [TSV](tsv/grn_latn_broad.tsv) | grn | Guarani | Guaraní | Latin | | False | Broad | 213 |
| [TSV](tsv/gsw_latn_broad.tsv) | gsw | Swiss German | Alemannic German | Latin | | False | Broad | 468 |
| [TSV](tsv/guj_gujr_broad.tsv) | guj | Gujarati | Gujarati | Gujarati | | False | Broad | 2,058 |
| [TSV](tsv/gur_latn_broad.tsv) | gur | Farefare | Farefare | Latin | | False | Broad | 104 |
| [TSV](tsv/guw_latn_broad.tsv) | guw | Gun | Gun | Latin | | False | Broad | 682 |
| [TSV](tsv/hat_latn_broad.tsv) | hat | Haitian | Haitian Creole | Latin | | False | Broad | 1,456 |
| [TSV](tsv/hau_latn_broad.tsv) | hau | Hausa | Hausa | Latin | | False | Broad | 1,937 |
| [TSV](tsv/hau_latn_narrow.tsv) | hau | Hausa | Hausa | Latin | | False | Narrow | 1,912 |
| [TSV](tsv/haw_latn_broad.tsv) | haw | Hawaiian | Hawaiian | Latin | | False | Broad | 938 |
| [TSV](tsv/haw_latn_narrow.tsv) | haw | Hawaiian | Hawaiian | Latin | | False | Narrow | 878 |
| [TSV](tsv/hbs_cyrl_broad.tsv) | hbs | Serbo-Croatian | Serbo-Croatian | Cyrillic | | False | Broad | 23,019 |
| [TSV](tsv/hbs_cyrl_broad_filtered.tsv) | hbs | Serbo-Croatian | Serbo-Croatian | Cyrillic | | True | Broad | 22,849 |
| [TSV](tsv/hbs_latn_broad.tsv) | hbs | Serbo-Croatian | Serbo-Croatian | Latin | | False | Broad | 24,462 |
| [TSV](tsv/hbs_latn_broad_filtered.tsv) | hbs | Serbo-Croatian | Serbo-Croatian | Latin | | True | Broad | 24,142 |
| [TSV](tsv/heb_hebr_broad.tsv) | heb | Hebrew | Hebrew | Hebrew | | False | Broad | 1,957 |
| [TSV](tsv/heb_hebr_narrow.tsv) | heb | Hebrew | Hebrew | Hebrew | | False | Narrow | 212 |
| [TSV](tsv/hil_latn_broad.tsv) | hil | Hiligaynon | Hiligaynon | Latin | | False | Broad | 331 |
| [TSV](tsv/hil_latn_narrow.tsv) | hil | Hiligaynon | Hiligaynon | Latin | | False | Narrow | 314 |
| [TSV](tsv/hin_deva_broad.tsv) | hin | Hindi | Hindi | Devanagari | | False | Broad | 25,269 |
| [TSV](tsv/hin_deva_broad_filtered.tsv) | hin | Hindi | Hindi | Devanagari | | True | Broad | 24,640 |
| [TSV](tsv/hin_deva_narrow.tsv) | hin | Hindi | Hindi | Devanagari | | False | Narrow | 22,296 |
| [TSV](tsv/hrx_latn_broad.tsv) | hrx | Hunsrik | Hunsrik | Latin | | False | Broad | 1,713 |
| [TSV](tsv/hsb_latn_broad.tsv) | hsb | Upper Sorbian | Upper Sorbian | Latin | | False | Broad | 357 |
| [TSV](tsv/hsb_latn_narrow.tsv) | hsb | Upper Sorbian | Upper Sorbian | Latin | | False | Narrow | 150 |
| [TSV](tsv/hts_latn_broad.tsv) | hts | Hadza | Hadza | Latin | | False | Broad | 335 |
| [TSV](tsv/hun_latn_narrow.tsv) | hun | Hungarian | Hungarian | Latin | | False | Narrow | 62,497 |
| [TSV](tsv/hun_latn_narrow_filtered.tsv) | hun | Hungarian | Hungarian | Latin | | True | Narrow | 62,429 |
| [TSV](tsv/huu_latn_narrow.tsv) | huu | Murui Huitoto | Murui Huitoto | Latin | | False | Narrow | 314 |
| [TSV](tsv/hye_armn_e_broad.tsv) | hye | Armenian | Armenian | Armenian | Eastern Armenian | False | Broad | 16,826 |
| [TSV](tsv/hye_armn_e_narrow.tsv) | hye | Armenian | Armenian | Armenian | Eastern Armenian | False | Narrow | 17,056 |
| [TSV](tsv/hye_armn_e_narrow_filtered.tsv) | hye | Armenian | Armenian | Armenian | Eastern Armenian | True | Narrow | 16,979 |
| [TSV](tsv/hye_armn_w_broad.tsv) | hye | Armenian | Armenian | Armenian | Western Armenian | False | Broad | 16,364 |
| [TSV](tsv/hye_armn_w_narrow.tsv) | hye | Armenian | Armenian | Armenian | Western Armenian | False | Narrow | 16,556 |
| [TSV](tsv/hye_armn_w_narrow_filtered.tsv) | hye | Armenian | Armenian | Armenian | Western Armenian | True | Narrow | 16,488 |
| [TSV](tsv/iba_latn_broad.tsv) | iba | Iban | Iban | Latin | | False | Broad | 519 |
| [TSV](tsv/iba_latn_narrow.tsv) | iba | Iban | Iban | Latin | | False | Narrow | 176 |
| [TSV](tsv/ido_latn_broad.tsv) | ido | Ido | Ido | Latin | | False | Broad | 8,012 |
| [TSV](tsv/ilo_latn_broad.tsv) | ilo | Iloko | Ilocano | Latin | | False | Broad | 805 |
| [TSV](tsv/ilo_latn_narrow.tsv) | ilo | Iloko | Ilocano | Latin | | False | Narrow | 750 |
| [TSV](tsv/ina_latn_broad.tsv) | ina | Interlingua (International Auxiliary Language Association) | Interlingua | Latin | | False | Broad | 321 |
| [TSV](tsv/ind_latn_broad.tsv) | ind | Indonesian | Indonesian | Latin | | False | Broad | 4,952 |
| [TSV](tsv/ind_latn_narrow.tsv) | ind | Indonesian | Indonesian | Latin | | False | Narrow | 6,125 |
| [TSV](tsv/inh_cyrl_broad.tsv) | inh | Ingush | Ingush | Cyrillic | | False | Broad | 166 |
| [TSV](tsv/isl_latn_broad.tsv) | isl | Icelandic | Icelandic | Latin | | False | Broad | 9,866 |
| [TSV](tsv/isl_latn_broad_filtered.tsv) | isl | Icelandic | Icelandic | Latin | | True | Broad | 9,797 |
| [TSV](tsv/isl_latn_narrow.tsv) | isl | Icelandic | Icelandic | Latin | | False | Narrow | 376 |
| [TSV](tsv/ita_latn_broad.tsv) | ita | Italian | Italian | Latin | | False | Broad | 79,988 |
| [TSV](tsv/ita_latn_broad_filtered.tsv) | ita | Italian | Italian | Latin | | True | Broad | 79,865 |
| [TSV](tsv/izh_latn_broad.tsv) | izh | Ingrian | Ingrian | Latin | | False | Broad | 7,577 |
| [TSV](tsv/izh_latn_narrow.tsv) | izh | Ingrian | Ingrian | Latin | | False | Narrow | 12,334 |
| [TSV](tsv/jam_latn_broad.tsv) | jam | Jamaican Creole English | Jamaican Creole | Latin | | False | Broad | 207 |
| [TSV](tsv/jav_java_broad.tsv) | jav | Javanese | Javanese | Javanese | | False | Broad | 664 |
| [TSV](tsv/jje_hang_broad.tsv) | jje | Jejueo | Jeju | Hangul | | False | Broad | 739 |
| [TSV](tsv/jpn_hira_narrow.tsv) | jpn | Japanese | Japanese | Hiragana | | False | Narrow | 26,604 |
| [TSV](tsv/jpn_hira_narrow_filtered.tsv) | jpn | Japanese | Japanese | Hiragana | | True | Narrow | 26,460 |
| [TSV](tsv/jpn_kana_narrow.tsv) | jpn | Japanese | Japanese | Katakana | | False | Narrow | 6,903 |
| [TSV](tsv/jpn_kana_narrow_filtered.tsv) | jpn | Japanese | Japanese | Katakana | | True | Narrow | 6,289 |
| [TSV](tsv/kal_latn_broad.tsv) | kal | Kalaallisut | Greenlandic | Latin | | False | Broad | 1,528 |
| [TSV](tsv/kal_latn_narrow.tsv) | kal | Kalaallisut | Greenlandic | Latin | | False | Narrow | 1,324 |
| [TSV](tsv/kan_knda_broad.tsv) | kan | Kannada | Kannada | Kannada | | False | Broad | 884 |
| [TSV](tsv/kas_arab_broad.tsv) | kas | Kashmiri | Kashmiri | Arabic | | False | Broad | 421 |
| [TSV](tsv/kas_arab_narrow.tsv) | kas | Kashmiri | Kashmiri | Arabic | | False | Narrow | 253 |
| [TSV](tsv/kas_deva_broad.tsv) | kas | Kashmiri | Kashmiri | Devanagari | | False | Broad | 113 |
| [TSV](tsv/kat_geor_broad.tsv) | kat | Georgian | Georgian | Georgian | | False | Broad | 17,212 |
| [TSV](tsv/kat_geor_broad_filtered.tsv) | kat | Georgian | Georgian | Georgian | | True | Broad | 17,192 |
| [TSV](tsv/kat_geor_narrow.tsv) | kat | Georgian | Georgian | Georgian | | False | Narrow | 13,940 |
| [TSV](tsv/kaw_latn_broad.tsv) | kaw | Kawi | Old Javanese | Latin | | False | Broad | 593 |
| [TSV](tsv/kaz_cyrl_broad.tsv) | kaz | Kazakh | Kazakh | Cyrillic | | False | Broad | 274 |
| [TSV](tsv/kaz_cyrl_narrow.tsv) | kaz | Kazakh | Kazakh | Cyrillic | | False | Narrow | 1,396 |
| [TSV](tsv/kbd_cyrl_narrow.tsv) | kbd | Kabardian | Kabardian | Cyrillic | | False | Narrow | 859 |
| [TSV](tsv/kgp_latn_broad.tsv) | kgp | Kaingang | Kaingang | Latin | | False | Broad | 107 |
| [TSV](tsv/khb_talu_broad.tsv) | khb | Lü | Lü | New Tai Lue | | False | Broad | 499 |
| [TSV](tsv/khm_khmr_broad.tsv) | khm | Khmer | Khmer | Khmer | | False | Broad | 6,302 |
| [TSV](tsv/khm_khmr_broad_filtered.tsv) | khm | Khmer | Khmer | Khmer | | True | Broad | 6,300 |
| [TSV](tsv/kik_latn_broad.tsv) | kik | Kikuyu | Kikuyu | Latin | | False | Broad | 1,158 |
| [TSV](tsv/kir_cyrl_broad.tsv) | kir | Kirghiz | Kyrgyz | Cyrillic | | False | Broad | 583 |
| [TSV](tsv/kir_cyrl_narrow.tsv) | kir | Kirghiz | Kyrgyz | Cyrillic | | False | Narrow | 147 |
| [TSV](tsv/kix_latn_broad.tsv) | kix | Khiamniungan Naga | Khiamniungan Naga | Latin | | False | Broad | 181 |
| [TSV](tsv/kld_latn_broad.tsv) | kld | Gamilaraay | Gamilaraay | Latin | | False | Broad | 515 |
| [TSV](tsv/klj_latn_narrow.tsv) | klj | Khalaj | Khalaj | Latin | | False | Narrow | 2,001 |
| [TSV](tsv/kmr_latn_broad.tsv) | kmr | Northern Kurdish | Northern Kurdish | Latin | | False | Broad | 2,140 |
| [TSV](tsv/koi_cyrl_broad.tsv) | koi | Komi-Permyak | Komi-Permyak | Cyrillic | | False | Broad | 182 |
| [TSV](tsv/koi_cyrl_narrow.tsv) | koi | Komi-Permyak | Komi-Permyak | Cyrillic | | False | Narrow | 180 |
| [TSV](tsv/kok_deva_broad.tsv) | kok | Konkani (macrolanguage) | Konkani | Devanagari | | False | Broad | 172 |
| [TSV](tsv/kok_deva_narrow.tsv) | kok | Konkani (macrolanguage) | Konkani | Devanagari | | False | Narrow | 537 |
| [TSV](tsv/kor_hang_narrow.tsv) | kor | Korean | Korean | Hangul | | False | Narrow | 25,800 |
| [TSV](tsv/kor_hang_narrow_filtered.tsv) | kor | Korean | Korean | Hangul | | True | Narrow | 22,072 |
| [TSV](tsv/kpv_cyrl_broad.tsv) | kpv | Komi-Zyrian | Komi-Zyrian | Cyrillic | | False | Broad | 834 |
| [TSV](tsv/kpv_cyrl_narrow.tsv) | kpv | Komi-Zyrian | Komi-Zyrian | Cyrillic | | False | Narrow | 794 |
| [TSV](tsv/krl_latn_broad.tsv) | krl | Karelian | Karelian | Latin | | False | Broad | 419 |
| [TSV](tsv/ksw_mymr_broad.tsv) | ksw | S'gaw Karen | S'gaw Karen | Myanmar | | False | Broad | 177 |
| [TSV](tsv/ktz_latn_broad.tsv) | ktz | Juǀʼhoan | Juǀ'hoan | Latin | | False | Broad | 132 |
| [TSV](tsv/kwk_latn_broad.tsv) | kwk | Kwakiutl | Kwak'wala | Latin | | False | Broad | 107 |
| [TSV](tsv/kxd_latn_broad.tsv) | kxd | Brunei | Brunei Malay | Latin | | False | Broad | 351 |
| [TSV](tsv/kyu_kali_broad.tsv) | kyu | Western Kayah | Western Kayah | Kayah Li | | False | Broad | 128 |
| [TSV](tsv/lad_latn_broad.tsv) | lad | Ladino | Ladino | Latin | | False | Broad | 120 |
| [TSV](tsv/lao_laoo_narrow.tsv) | lao | Lao | Lao | Lao | | False | Narrow | 4,180 |
| [TSV](tsv/lat_latn_clas_broad.tsv) | lat | Latin | Latin | Latin | Classical | False | Broad | 36,066 |
| [TSV](tsv/lat_latn_clas_broad_filtered.tsv) | lat | Latin | Latin | Latin | Classical | True | Broad | 35,200 |
| [TSV](tsv/lat_latn_clas_narrow.tsv) | lat | Latin | Latin | Latin | Classical | False | Narrow | 36,068 |
| [TSV](tsv/lat_latn_eccl_broad.tsv) | lat | Latin | Latin | Latin | Ecclesiastical | False | Broad | 34,974 |
| [TSV](tsv/lat_latn_eccl_narrow.tsv) | lat | Latin | Latin | Latin | Ecclesiastical | False | Narrow | 35,564 |
| [TSV](tsv/lav_latn_narrow.tsv) | lav | Latvian | Latvian | Latin | | False | Narrow | 1,355 |
| [TSV](tsv/lav_latn_narrow_filtered.tsv) | lav | Latvian | Latvian | Latin | | True | Narrow | 1,255 |
| [TSV](tsv/lif_limb_broad.tsv) | lif | Limbu | Limbu | Limbu | | False | Broad | 108 |
| [TSV](tsv/lij_latn_broad.tsv) | lij | Ligurian | Ligurian | Latin | | False | Broad | 816 |
| [TSV](tsv/lim_latn_broad.tsv) | lim | Limburgan | Limburgish | Latin | | False | Broad | 949 |
| [TSV](tsv/lim_latn_narrow.tsv) | lim | Limburgan | Limburgish | Latin | | False | Narrow | 230 |
| [TSV](tsv/lit_latn_broad.tsv) | lit | Lithuanian | Lithuanian | Latin | | False | Broad | 370 |
| [TSV](tsv/lit_latn_narrow.tsv) | lit | Lithuanian | Lithuanian | Latin | | False | Narrow | 12,831 |
| [TSV](tsv/liv_latn_broad.tsv) | liv | Liv | Livonian | Latin | | False | Broad | 393 |
| [TSV](tsv/lmo_latn_broad.tsv) | lmo | Lombard | Lombard | Latin | | False | Broad | 486 |
| [TSV](tsv/lmo_latn_narrow.tsv) | lmo | Lombard | Lombard | Latin | | False | Narrow | 375 |
| [TSV](tsv/lmy_latn_narrow.tsv) | lmy | Lamboya | Laboya | Latin | | False | Narrow | 129 |
| [TSV](tsv/lou_latn_broad.tsv) | lou | Louisiana Creole | Louisiana Creole | Latin | | False | Broad | 240 |
| [TSV](tsv/lsi_latn_broad.tsv) | lsi | Lashi | Lashi | Latin | | False | Broad | 324 |
| [TSV](tsv/ltg_latn_narrow.tsv) | ltg | Latgalian | Latgalian | Latin | | False | Narrow | 444 |
| [TSV](tsv/ltz_latn_broad.tsv) | ltz | Luxembourgish | Luxembourgish | Latin | | False | Broad | 4,090 |
| [TSV](tsv/ltz_latn_narrow.tsv) | ltz | Luxembourgish | Luxembourgish | Latin | | False | Narrow | 2,654 |
| [TSV](tsv/lut_latn_broad.tsv) | lut | Lushootseed | Lushootseed | Latin | | False | Broad | 121 |
| [TSV](tsv/lwl_thai_broad.tsv) | lwl | Eastern Lawa | Eastern Lawa | Thai | | False | Broad | 253 |
| [TSV](tsv/lzz_geor_broad.tsv) | lzz | Laz | Laz | Georgian | | False | Broad | 305 |
| [TSV](tsv/mah_latn_broad.tsv) | mah | Marshallese | Marshallese | Latin | | False | Broad | 943 |
| [TSV](tsv/mah_latn_narrow.tsv) | mah | Marshallese | Marshallese | Latin | | False | Narrow | 1,060 |
| [TSV](tsv/mai_deva_narrow.tsv) | mai | Maithili | Maithili | Devanagari | | False | Narrow | 164 |
| [TSV](tsv/mak_latn_narrow.tsv) | mak | Makasar | Makasar | Latin | | False | Narrow | 432 |
| [TSV](tsv/mal_mlym_broad.tsv) | mal | Malayalam | Malayalam | Malayalam | | False | Broad | 7,100 |
| [TSV](tsv/mal_mlym_narrow.tsv) | mal | Malayalam | Malayalam | Malayalam | | False | Narrow | 375 |
| [TSV](tsv/mar_deva_broad.tsv) | mar | Marathi | Marathi | Devanagari | | False | Broad | 2,681 |
| [TSV](tsv/mar_deva_narrow.tsv) | mar | Marathi | Marathi | Devanagari | | False | Narrow | 599 |
| [TSV](tsv/mdf_cyrl_broad.tsv) | mdf | Moksha | Moksha | Cyrillic | | False | Broad | 131 |
| [TSV](tsv/mfe_latn_broad.tsv) | mfe | Morisyen | Mauritian Creole | Latin | | False | Broad | 205 |
| [TSV](tsv/mfe_latn_narrow.tsv) | mfe | Morisyen | Mauritian Creole | Latin | | False | Narrow | 105 |
| [TSV](tsv/mga_latn_broad.tsv) | mga | Middle Irish (900-1200) | Middle Irish | Latin | | False | Broad | 317 |
| [TSV](tsv/mic_latn_broad.tsv) | mic | Mi'kmaq | Mi'kmaq | Latin | | False | Broad | 203 |
| [TSV](tsv/mic_latn_narrow.tsv) | mic | Mi'kmaq | Mi'kmaq | Latin | | False | Narrow | 201 |
| [TSV](tsv/mkd_cyrl_narrow.tsv) | mkd | Macedonian | Macedonian | Cyrillic | | False | Narrow | 62,277 |
| [TSV](tsv/mlg_latn_broad.tsv) | mlg | Malagasy | Malagasy | Latin | | False | Broad | 185 |
| [TSV](tsv/mlt_latn_broad.tsv) | mlt | Maltese | Maltese | Latin | | False | Broad | 18,391 |
| [TSV](tsv/mlt_latn_broad_filtered.tsv) | mlt | Maltese | Maltese | Latin | | True | Broad | 18,361 |
| [TSV](tsv/mnc_mong_narrow.tsv) | mnc | Manchu | Manchu | Mongolian | | False | Narrow | 1,467 |
| [TSV](tsv/mnw_mymr_broad.tsv) | mnw | Mon | Mon | Myanmar | | False | Broad | 1,079 |
| [TSV](tsv/mon_cyrl_broad.tsv) | mon | Mongolian | Mongolian | Cyrillic | | False | Broad | 3,477 |
| [TSV](tsv/mon_cyrl_narrow.tsv) | mon | Mongolian | Mongolian | Cyrillic | | False | Narrow | 806 |
| [TSV](tsv/mqs_latn_broad.tsv) | mqs | West Makian | West Makian | Latin | | False | Broad | 793 |
| [TSV](tsv/msa_arab_ara_broad.tsv) | msa | Malay (macrolanguage) | Malay | Arabic | | False | Broad | 628 |
| [TSV](tsv/msa_arab_ara_narrow.tsv) | msa | Malay (macrolanguage) | Malay | Arabic | | False | Narrow | 220 |
| [TSV](tsv/msa_arab_broad.tsv) | msa | Malay (macrolanguage) | Malay | Arabic | | False | Broad | 653 |
| [TSV](tsv/msa_arab_narrow.tsv) | msa | Malay (macrolanguage) | Malay | Arabic | | False | Narrow | 204 |
| [TSV](tsv/msa_latn_broad.tsv) | msa | Malay (macrolanguage) | Malay | Latin | | False | Broad | 3,504 |
| [TSV](tsv/msa_latn_narrow.tsv) | msa | Malay (macrolanguage) | Malay | Latin | | False | Narrow | 1,246 |
| [TSV](tsv/mtq_latn_broad.tsv) | mtq | Muong | Muong | Latin | | False | Broad | 144 |
| [TSV](tsv/mww_latn_broad.tsv) | mww | Hmong Daw | White Hmong | Latin | | False | Broad | 419 |
| [TSV](tsv/mya_mymr_broad.tsv) | mya | Burmese | Burmese | Myanmar | | False | Broad | 6,075 |
| [TSV](tsv/mya_mymr_broad_filtered.tsv) | mya | Burmese | Burmese | Myanmar | | True | Broad | 6,062 |
| [TSV](tsv/nap_latn_broad.tsv) | nap | Neapolitan | Neapolitan | Latin | | False | Broad | 201 |
| [TSV](tsv/nap_latn_narrow.tsv) | nap | Neapolitan | Neapolitan | Latin | | False | Narrow | 455 |
| [TSV](tsv/nav_latn_broad.tsv) | nav | Navajo | Navajo | Latin | | False | Broad | 329 |
| [TSV](tsv/nci_latn_broad.tsv) | nci | Classical Nahuatl | Classical Nahuatl | Latin | | False | Broad | 855 |
| [TSV](tsv/nci_latn_narrow.tsv) | nci | Classical Nahuatl | Classical Nahuatl | Latin | | False | Narrow | 1,435 |
| [TSV](tsv/nds_latn_broad.tsv) | nds | Low German | Low German | Latin | | False | Broad | 210 |
| [TSV](tsv/nep_deva_narrow.tsv) | nep | Nepali (macrolanguage) | Nepali | Devanagari | | False | Narrow | 1,926 |
| [TSV](tsv/new_deva_narrow.tsv) | new | Newari | Newar | Devanagari | | False | Narrow | 413 |
| [TSV](tsv/nhg_latn_narrow.tsv) | nhg | Tetelcingo Nahuatl | Tetelcingo Nahuatl | Latin | | False | Narrow | 305 |
| [TSV](tsv/nhn_latn_broad.tsv) | nhn | Central Nahuatl | Central Nahuatl | Latin | | False | Broad | 167 |
| [TSV](tsv/nhx_latn_broad.tsv) | nhx | Isthmus-Mecayapan Nahuatl | Mecayapan Nahuatl | Latin | | False | Broad | 146 |
| [TSV](tsv/niv_cyrl_broad.tsv) | niv | Gilyak | Nivkh | Cyrillic | | False | Broad | 131 |
| [TSV](tsv/nld_latn_broad.tsv) | nld | Dutch | Dutch | Latin | | False | Broad | 40,908 |
| [TSV](tsv/nld_latn_broad_filtered.tsv) | nld | Dutch | Dutch | Latin | | True | Broad | 40,831 |
| [TSV](tsv/nld_latn_narrow.tsv) | nld | Dutch | Dutch | Latin | | False | Narrow | 693 |
| [TSV](tsv/nmy_latn_narrow.tsv) | nmy | Namuyi | Namuyi | Latin | | False | Narrow | 356 |
| [TSV](tsv/nno_latn_broad.tsv) | nno | Norwegian Nynorsk | Norwegian Nynorsk | Latin | | False | Broad | 4,693 |
| [TSV](tsv/nno_latn_narrow.tsv) | nno | Norwegian Nynorsk | Norwegian Nynorsk | Latin | | False | Narrow | 940 |
| [TSV](tsv/nob_latn_broad.tsv) | nob | Norwegian Bokmål | Norwegian Bokmål | Latin | | False | Broad | 3,207 |
| [TSV](tsv/nob_latn_broad_filtered.tsv) | nob | Norwegian Bokmål | Norwegian Bokmål | Latin | | True | Broad | 2,703 |
| [TSV](tsv/nob_latn_narrow.tsv) | nob | Norwegian Bokmål | Norwegian Bokmål | Latin | | False | Narrow | 692 |
| [TSV](tsv/non_latn_broad.tsv) | non | Old Norse | Old Norse | Latin | | False | Broad | 240 |
| [TSV](tsv/nor_latn_broad.tsv) | nor | Norwegian | Norwegian | Latin | | False | Broad | 1,397 |
| [TSV](tsv/nrf_latn_broad.tsv) | nrf | Jèrriais | Norman | Latin | | False | Broad | 186 |
| [TSV](tsv/nup_latn_broad.tsv) | nup | Nupe-Nupe-Tako | Nupe | Latin | | False | Broad | 442 |
| [TSV](tsv/nya_latn_broad.tsv) | nya | Nyanja | Chichewa | Latin | | False | Broad | 830 |
| [TSV](tsv/oci_latn_broad.tsv) | oci | Occitan (post 1500) | Occitan | Latin | | False | Broad | 580 |
| [TSV](tsv/oci_latn_narrow.tsv) | oci | Occitan (post 1500) | Occitan | Latin | | False | Narrow | 349 |
| [TSV](tsv/ofs_latn_broad.tsv) | ofs | Old Frisian | Old Frisian | Latin | | False | Broad | 170 |
| [TSV](tsv/okm_hang_broad.tsv) | okm | Middle Korean (10th-16th cent.) | Middle Korean | Hangul | | False | Broad | 592 |
| [TSV](tsv/okm_hang_narrow.tsv) | okm | Middle Korean (10th-16th cent.) | Middle Korean | Hangul | | False | Narrow | 245 |
| [TSV](tsv/olo_latn_broad.tsv) | olo | Livvi | Livvi | Latin | | False | Broad | 263 |
| [TSV](tsv/orv_cyrl_broad.tsv) | orv | Old Russian | Old East Slavic | Cyrillic | | False | Broad | 1,064 |
| [TSV](tsv/osp_latn_broad.tsv) | osp | Old Spanish | Old Spanish | Latin | | False | Broad | 615 |
| [TSV](tsv/osx_latn_broad.tsv) | osx | Old Saxon | Old Saxon | Latin | | False | Broad | 249 |
| [TSV](tsv/ota_arab_broad.tsv) | ota | Ottoman Turkish (1500-1928) | Ottoman Turkish | Arabic | | False | Broad | 189 |
| [TSV](tsv/ota_arab_narrow.tsv) | ota | Ottoman Turkish (1500-1928) | Ottoman Turkish | Arabic | | False | Narrow | 176 |
| [TSV](tsv/pag_latn_broad.tsv) | pag | Pangasinan | Pangasinan | Latin | | False | Broad | 209 |
| [TSV](tsv/pag_latn_narrow.tsv) | pag | Pangasinan | Pangasinan | Latin | | False | Narrow | 205 |
| [TSV](tsv/pam_latn_broad.tsv) | pam | Pampanga | Kapampangan | Latin | | False | Broad | 553 |
| [TSV](tsv/pam_latn_narrow.tsv) | pam | Pampanga | Kapampangan | Latin | | False | Narrow | 555 |
| [TSV](tsv/pan_arab_broad.tsv) | pan | Panjabi | Punjabi | Arabic | | False | Broad | 537 |
| [TSV](tsv/pan_guru_broad.tsv) | pan | Panjabi | Punjabi | Gurmukhi | | False | Broad | 704 |
| [TSV](tsv/pan_guru_narrow.tsv) | pan | Panjabi | Punjabi | Gurmukhi | | False | Narrow | 154 |
| [TSV](tsv/pbv_latn_broad.tsv) | pbv | Pnar | Pnar | Latin | | False | Broad | 101 |
| [TSV](tsv/pcc_latn_broad.tsv) | pcc | Bouyei | Bouyei | Latin | | False | Broad | 143 |
| [TSV](tsv/pdc_latn_broad.tsv) | pdc | Pennsylvania German | Pennsylvania German | Latin | | False | Broad | 166 |
| [TSV](tsv/phl_latn_broad.tsv) | phl | Phalura | Phalura | Latin | | False | Broad | 2,144 |
| [TSV](tsv/pjt_latn_narrow.tsv) | pjt | Pitjantjatjara | Pitjantjatjara | Latin | | False | Narrow | 124 |
| [TSV](tsv/pms_latn_broad.tsv) | pms | Piemontese | Piedmontese | Latin | | False | Broad | 866 |
| [TSV](tsv/pol_latn_broad.tsv) | pol | Polish | Polish | Latin | | False | Broad | 132,558 |
| [TSV](tsv/por_latn_bz_broad.tsv) | por | Portuguese | Portuguese | Latin | Brazil | False | Broad | 139,198 |
| [TSV](tsv/por_latn_bz_broad_filtered.tsv) | por | Portuguese | Portuguese | Latin | Brazil | True | Broad | 139,160 |
| [TSV](tsv/por_latn_bz_narrow.tsv) | por | Portuguese | Portuguese | Latin | Brazil | False | Narrow | 72,663 |
| [TSV](tsv/por_latn_po_broad.tsv) | por | Portuguese | Portuguese | Latin | Portugal | False | Broad | 73,236 |
| [TSV](tsv/por_latn_po_broad_filtered.tsv) | por | Portuguese | Portuguese | Latin | Portugal | True | Broad | 49,149 |
| [TSV](tsv/por_latn_po_narrow.tsv) | por | Portuguese | Portuguese | Latin | Portugal | False | Narrow | 27,795 |
| [TSV](tsv/pox_latn_broad.tsv) | pox | Polabian | Polabian | Latin | | False | Broad | 307 |
| [TSV](tsv/ppl_latn_broad.tsv) | ppl | Pipil | Pipil | Latin | | False | Broad | 264 |
| [TSV](tsv/pqm_latn_broad.tsv) | pqm | Malecite-Passamaquoddy | Malecite-Passamaquoddy | Latin | | False | Broad | 151 |
| [TSV](tsv/pqm_latn_narrow.tsv) | pqm | Malecite-Passamaquoddy | Malecite-Passamaquoddy | Latin | | False | Narrow | 158 |
| [TSV](tsv/pus_arab_broad.tsv) | pus | Pushto | Pashto | Arabic | | False | Broad | 1,252 |
| [TSV](tsv/rgn_latn_broad.tsv) | rgn | Romagnol | Romagnol | Latin | | False | Broad | 266 |
| [TSV](tsv/rgn_latn_narrow.tsv) | rgn | Romagnol | Romagnol | Latin | | False | Narrow | 617 |
| [TSV](tsv/rom_latn_broad.tsv) | rom | Romany | Romani | Latin | | False | Broad | 187 |
| [TSV](tsv/ron_latn_broad.tsv) | ron | Romanian | Romanian | Latin | | False | Broad | 6,095 |
| [TSV](tsv/ron_latn_narrow.tsv) | ron | Romanian | Romanian | Latin | | False | Narrow | 6,127 |
| [TSV](tsv/ron_latn_narrow_filtered.tsv) | ron | Romanian | Romanian | Latin | | True | Narrow | 6,033 |
| [TSV](tsv/rup_latn_narrow.tsv) | rup | Macedo-Romanian | Aromanian | Latin | | False | Narrow | 175 |
| [TSV](tsv/rus_cyrl_narrow.tsv) | rus | Russian | Russian | Cyrillic | | False | Narrow | 411,651 |
| [TSV](tsv/sah_cyrl_broad.tsv) | sah | Yakut | Yakut | Cyrillic | | False | Broad | 213 |
| [TSV](tsv/san_deva_broad.tsv) | san | Sanskrit | Sanskrit | Devanagari | | False | Broad | 13,390 |
| [TSV](tsv/san_deva_narrow.tsv) | san | Sanskrit | Sanskrit | Devanagari | | False | Narrow | 1,226 |
| [TSV](tsv/sce_latn_broad.tsv) | sce | Dongxiang | Dongxiang | Latin | | False | Broad | 125 |
| [TSV](tsv/scn_latn_broad.tsv) | scn | Sicilian | Sicilian | Latin | | False | Broad | 1,168 |
| [TSV](tsv/scn_latn_narrow.tsv) | scn | Sicilian | Sicilian | Latin | | False | Narrow | 349 |
| [TSV](tsv/sco_latn_broad.tsv) | sco | Scots | Scots | Latin | | False | Broad | 1,145 |
| [TSV](tsv/sco_latn_narrow.tsv) | sco | Scots | Scots | Latin | | False | Narrow | 463 |
| [TSV](tsv/sdc_latn_broad.tsv) | sdc | Sassarese Sardinian | Sassarese | Latin | | False | Broad | 318 |
| [TSV](tsv/sga_latn_broad.tsv) | sga | Old Irish (to 900) | Old Irish | Latin | | False | Broad | 2,046 |
| [TSV](tsv/sga_latn_narrow.tsv) | sga | Old Irish (to 900) | Old Irish | Latin | | False | Narrow | 1,150 |
| [TSV](tsv/shn_mymr_broad.tsv) | shn | Shan | Shan | Myanmar | | False | Broad | 2,455 |
| [TSV](tsv/sia_cyrl_broad.tsv) | sia | Akkala Sami | Akkala Sami | Cyrillic | | False | Broad | 180 |
| [TSV](tsv/sid_latn_broad.tsv) | sid | Sidamo | Sidamo | Latin | | False | Broad | 296 |
| [TSV](tsv/sin_sinh_broad.tsv) | sin | Sinhala | Sinhalese | Sinhala | | False | Broad | 282 |
| [TSV](tsv/sin_sinh_narrow.tsv) | sin | Sinhala | Sinhalese | Sinhala | | False | Narrow | 262 |
| [TSV](tsv/sjd_cyrl_broad.tsv) | sjd | Kildin Sami | Kildin Sami | Cyrillic | | False | Broad | 328 |
| [TSV](tsv/skr_arab_broad.tsv) | skr | Saraiki | Saraiki | Arabic | | False | Broad | 213 |
| [TSV](tsv/slk_latn_broad.tsv) | slk | Slovak | Slovak | Latin | | False | Broad | 2,558 |
| [TSV](tsv/slk_latn_narrow.tsv) | slk | Slovak | Slovak | Latin | | False | Narrow | 3,904 |
| [TSV](tsv/slr_latn_broad.tsv) | slr | Salar | Salar | Latin | | False | Broad | 182 |
| [TSV](tsv/slr_latn_narrow.tsv) | slr | Salar | Salar | Latin | | False | Narrow | 888 |
| [TSV](tsv/slv_latn_broad.tsv) | slv | Slovenian | Slovene | Latin | | False | Broad | 4,936 |
| [TSV](tsv/slv_latn_broad_filtered.tsv) | slv | Slovenian | Slovene | Latin | | True | Broad | 4,861 |
| [TSV](tsv/slv_latn_narrow.tsv) | slv | Slovenian | Slovene | Latin | | False | Narrow | 131 |
| [TSV](tsv/sme_latn_broad.tsv) | sme | Northern Sami | Northern Sami | Latin | | False | Broad | 4,103 |
| [TSV](tsv/sms_latn_broad.tsv) | sms | Skolt Sami | Skolt Sami | Latin | | False | Broad | 113 |
| [TSV](tsv/snd_arab_broad.tsv) | snd | Sindhi | Sindhi | Arabic | | False | Broad | 121 |
| [TSV](tsv/spa_latn_ca_broad.tsv) | spa | Spanish | Spanish | Latin | Castilian, Spain | False | Broad | 99,056 |
| [TSV](tsv/spa_latn_ca_broad_filtered.tsv) | spa | Spanish | Spanish | Latin | Castilian, Spain | True | Broad | 99,043 |
| [TSV](tsv/spa_latn_ca_narrow.tsv) | spa | Spanish | Spanish | Latin | Castilian, Spain | False | Narrow | 99,002 |
| [TSV](tsv/spa_latn_la_broad.tsv) | spa | Spanish | Spanish | Latin | Latin America | False | Broad | 99,051 |
| [TSV](tsv/spa_latn_la_broad_filtered.tsv) | spa | Spanish | Spanish | Latin | Latin America | True | Broad | 99,038 |
| [TSV](tsv/spa_latn_la_narrow.tsv) | spa | Spanish | Spanish | Latin | Latin America | False | Narrow | 98,997 |
| [TSV](tsv/sqi_latn_broad.tsv) | sqi | Albanian | Albanian | Latin | | False | Broad | 1,997 |
| [TSV](tsv/sqi_latn_narrow.tsv) | sqi | Albanian | Albanian | Latin | | False | Narrow | 935 |
| [TSV](tsv/srd_latn_broad.tsv) | srd | Sardinian | Sardinian | Latin | | False | Broad | 690 |
| [TSV](tsv/srd_latn_narrow.tsv) | srd | Sardinian | Sardinian | Latin | | False | Narrow | 103 |
| [TSV](tsv/srn_latn_broad.tsv) | srn | Sranan Tongo | Sranan Tongo | Latin | | False | Broad | 196 |
| [TSV](tsv/srs_latn_broad.tsv) | srs | Sarsi | Tsuut'ina | Latin | | False | Broad | 137 |
| [TSV](tsv/stq_latn_broad.tsv) | stq | Saterfriesisch | Saterland Frisian | Latin | | False | Broad | 805 |
| [TSV](tsv/swa_latn_broad.tsv) | swa | Swahili (macrolanguage) | Swahili | Latin | | False | Broad | 110 |
| [TSV](tsv/swe_latn_broad.tsv) | swe | Swedish | Swedish | Latin | | False | Broad | 4,631 |
| [TSV](tsv/swe_latn_narrow.tsv) | swe | Swedish | Swedish | Latin | | False | Narrow | 478 |
| [TSV](tsv/syc_syrc_narrow.tsv) | syc | Classical Syriac | Classical Syriac | Syriac | | False | Narrow | 6,319 |
| [TSV](tsv/syl_sylo_broad.tsv) | syl | Sylheti | Sylheti | Syloti Nagri | | False | Broad | 292 |
| [TSV](tsv/szl_latn_broad.tsv) | szl | Silesian | Silesian | Latin | | False | Broad | 1,887 |
| [TSV](tsv/tam_taml_broad.tsv) | tam | Tamil | Tamil | Tamil | | False | Broad | 6,903 |
| [TSV](tsv/tam_taml_narrow.tsv) | tam | Tamil | Tamil | Tamil | | False | Narrow | 3,309 |
| [TSV](tsv/tby_latn_narrow.tsv) | tby | Tabaru | Tabaru | Latin | | False | Narrow | 100 |
| [TSV](tsv/tel_telu_broad.tsv) | tel | Telugu | Telugu | Telugu | | False | Broad | 3,295 |
| [TSV](tsv/tel_telu_narrow.tsv) | tel | Telugu | Telugu | Telugu | | False | Narrow | 1,146 |
| [TSV](tsv/tft_latn_broad.tsv) | tft | Ternate | Ternate | Latin | | False | Broad | 229 |
| [TSV](tsv/tft_latn_narrow.tsv) | tft | Ternate | Ternate | Latin | | False | Narrow | 1,017 |
| [TSV](tsv/tgk_cyrl_broad.tsv) | tgk | Tajik | Tajik | Cyrillic | | False | Broad | 702 |
| [TSV](tsv/tgk_cyrl_narrow.tsv) | tgk | Tajik | Tajik | Cyrillic | | False | Narrow | 652 |
| [TSV](tsv/tgl_latn_broad.tsv) | tgl | Tagalog | Tagalog | Latin | | False | Broad | 18,256 |
| [TSV](tsv/tgl_latn_narrow.tsv) | tgl | Tagalog | Tagalog | Latin | | False | Narrow | 19,824 |
| [TSV](tsv/tha_thai_broad.tsv) | tha | Thai | Thai | Thai | | False | Broad | 16,689 |
| [TSV](tsv/tkl_latn_narrow.tsv) | tkl | Tokelau | Tokelauan | Latin | | False | Narrow | 332 |
| [TSV](tsv/ton_latn_broad.tsv) | ton | Tonga (Tonga Islands) | Tongan | Latin | | False | Broad | 165 |
| [TSV](tsv/tpw_latn_broad.tsv) | tpw | Tupí | Old Tupi | Latin | | False | Broad | 356 |
| [TSV](tsv/tru_syrc_broad.tsv) | tru | Turoyo | Turoyo | Syriac | | False | Broad | 163 |
| [TSV](tsv/tuk_latn_broad.tsv) | tuk | Turkmen | Turkmen | Latin | | False | Broad | 133 |
| [TSV](tsv/tur_latn_broad.tsv) | tur | Turkish | Turkish | Latin | | False | Broad | 7,266 |
| [TSV](tsv/tur_latn_narrow.tsv) | tur | Turkish | Turkish | Latin | | False | Narrow | 2,188 |
| [TSV](tsv/tur_latn_narrow_filtered.tsv) | tur | Turkish | Turkish | Latin | | True | Narrow | 1,724 |
| [TSV](tsv/twf_latn_broad.tsv) | twf | Northern Tiwa | Taos | Latin | | False | Broad | 135 |
| [TSV](tsv/tyv_cyrl_broad.tsv) | tyv | Tuvinian | Tuvan | Cyrillic | | False | Broad | 493 |
| [TSV](tsv/tzm_tfng_broad.tsv) | tzm | Central Atlas Tamazight | Central Atlas Tamazight | Tifinagh | | False | Broad | 694 |
| [TSV](tsv/tzm_tfng_narrow.tsv) | tzm | Central Atlas Tamazight | Central Atlas Tamazight | Tifinagh | | False | Narrow | 728 |
| [TSV](tsv/uby_cyrl_narrow.tsv) | uby | Ubykh | Ubykh | Cyrillic | | False | Narrow | 1,315 |
| [TSV](tsv/uig_arab_ara_broad.tsv) | uig | Uighur | Uyghur | Arabic | | False | Broad | 260 |
| [TSV](tsv/uig_arab_broad.tsv) | uig | Uighur | Uyghur | Arabic | | False | Broad | 1,411 |
| [TSV](tsv/ukr_cyrl_narrow.tsv) | ukr | Ukrainian | Ukrainian | Cyrillic | | False | Narrow | 39,641 |
| [TSV](tsv/urd_arab_broad.tsv) | urd | Urdu | Urdu | Arabic | | False | Broad | 4,493 |
| [TSV](tsv/urd_arab_narrow.tsv) | urd | Urdu | Urdu | Arabic | | False | Narrow | 104 |
| [TSV](tsv/urk_thai_broad.tsv) | urk | Urak Lawoi' | Urak Lawoi' | Thai | | False | Broad | 565 |
| [TSV](tsv/urk_thai_narrow.tsv) | urk | Urak Lawoi' | Urak Lawoi' | Thai | | False | Narrow | 565 |
| [TSV](tsv/vie_latn_hanoi_narrow.tsv) | vie | Vietnamese | Vietnamese | Latin | Hà Nội | False | Narrow | 23,320 |
| [TSV](tsv/vie_latn_hanoi_narrow_filtered.tsv) | vie | Vietnamese | Vietnamese | Latin | Hà Nội | True | Narrow | 23,320 |
| [TSV](tsv/vie_latn_hue_narrow.tsv) | vie | Vietnamese | Vietnamese | Latin | Huế | False | Narrow | 26,372 |
| [TSV](tsv/vie_latn_hue_narrow_filtered.tsv) | vie | Vietnamese | Vietnamese | Latin | Huế | True | Narrow | 26,357 |
| [TSV](tsv/vie_latn_saigon_narrow.tsv) | vie | Vietnamese | Vietnamese | Latin | Saigon | False | Narrow | 27,232 |
| [TSV](tsv/vie_latn_saigon_narrow_filtered.tsv) | vie | Vietnamese | Vietnamese | Latin | Saigon | True | Narrow | 27,221 |
| [TSV](tsv/vol_latn_broad.tsv) | vol | Volapük | Volapük | Latin | | False | Broad | 388 |
| [TSV](tsv/vol_latn_narrow.tsv) | vol | Volapük | Volapük | Latin | | False | Narrow | 564 |
| [TSV](tsv/vot_latn_broad.tsv) | vot | Votic | Votic | Latin | | False | Broad | 2,118 |
| [TSV](tsv/vot_latn_narrow.tsv) | vot | Votic | Votic | Latin | | False | Narrow | 2,124 |
| [TSV](tsv/wau_latn_broad.tsv) | wau | Waurá | Wauja | Latin | | False | Broad | 151 |
| [TSV](tsv/wbk_latn_broad.tsv) | wbk | Waigali | Waigali | Latin | | False | Broad | 112 |
| [TSV](tsv/wiy_latn_broad.tsv) | wiy | Wiyot | Wiyot | Latin | | False | Broad | 152 |
| [TSV](tsv/wlm_latn_broad.tsv) | wlm | Middle Welsh | Middle Welsh | Latin | | False | Broad | 151 |
| [TSV](tsv/wln_latn_broad.tsv) | wln | Walloon | Walloon | Latin | | False | Broad | 2,545 |
| [TSV](tsv/xal_cyrl_broad.tsv) | xal | Kalmyk | Kalmyk | Cyrillic | | False | Broad | 328 |
| [TSV](tsv/xho_latn_narrow.tsv) | xho | Xhosa | Xhosa | Latin | | False | Narrow | 876 |
| [TSV](tsv/xsl_latn_narrow.tsv) | xsl | South Slavey | South Slavey | Latin | | False | Narrow | 137 |
| [TSV](tsv/ybi_deva_broad.tsv) | ybi | Yamphu | Yamphu | Devanagari | | False | Broad | 136 |
| [TSV](tsv/ycl_latn_narrow.tsv) | ycl | Lolopo | Lolopo | Latin | | False | Narrow | 110 |
| [TSV](tsv/yid_hebr_broad.tsv) | yid | Yiddish | Yiddish | Hebrew | | False | Broad | 3,572 |
| [TSV](tsv/yid_hebr_narrow.tsv) | yid | Yiddish | Yiddish | Hebrew | | False | Narrow | 346 |
| [TSV](tsv/yor_latn_broad.tsv) | yor | Yoruba | Yoruba | Latin | | False | Broad | 5,199 |
| [TSV](tsv/yrk_cyrl_narrow.tsv) | yrk | Nenets | Tundra Nenets | Cyrillic | | False | Narrow | 233 |
| [TSV](tsv/yue_hani_broad.tsv) | yue | Yue Chinese | Cantonese | Han | | False | Broad | 102,453 |
| [TSV](tsv/yue_latn_broad.tsv) | yue | Yue Chinese | Cantonese | Latin | | False | Broad | 432 |
| [TSV](tsv/yux_cyrl_narrow.tsv) | yux | Southern Yukaghir | Southern Yukaghir | Cyrillic | | False | Narrow | 200 |
| [TSV](tsv/zha_latn_broad.tsv) | zha | Zhuang | Zhuang | Latin | | False | Broad | 1,405 |
| [TSV](tsv/zho_hani_broad.tsv) | zho | Chinese | Chinese | Han | | False | Broad | 158,873 |
| [TSV](tsv/zho_latn_broad.tsv) | zho | Chinese | Chinese | Latin | | False | Broad | 174 |
| [TSV](tsv/zom_latn_broad.tsv) | zom | Zou | Zou | Latin | | False | Broad | 142 |
| [TSV](tsv/zul_latn_broad.tsv) | zul | Zulu | Zulu | Latin | | False | Broad | 1,743 |
| [TSV](tsv/zza_latn_narrow.tsv) | zza | Zaza | Zazaki | Latin | | False | Narrow | 199 | |
mlfoundations-dev/union-openhermes2.5-source-prompts | mlfoundations-dev | "2024-12-01T19:24:55Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T19:22:13Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1297626642
num_examples: 2832744
download_size: 736711189
dataset_size: 1297626642
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/albertbasev2_mrpc_pair_clare | DT4LM | "2024-12-01T19:28:23Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T19:28:19Z" | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 238313
num_examples: 926
download_size: 166354
dataset_size: 238313
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/albertbasev2_mrpc_pair_clare_original | DT4LM | "2024-12-01T19:28:27Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T19:28:24Z" | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 230800
num_examples: 926
download_size: 160872
dataset_size: 230800
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
StephanAkkerman/open-dict-words-ipa | StephanAkkerman | "2024-12-01T19:35:07Z" | 3 | 0 | [
"license:mit",
"size_categories:1M<n<10M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-12-01T19:30:09Z" | ---
license: mit
---
# Open-dict Words IPA
This dataset is a copy of https://github.com/open-dict-data/ipa-dict
## Languages
IPA data is currently available for the following languages:
Code | Language
-------- | ----
ar | Arabic (Modern Standard)
de | German
en_UK | English (Received Pronunciation)
en_US | English (General American)
eo | Esperanto
es_ES | Spanish (Spain)
es_MX | Spanish (Mexico)
fa | Persian
fi | Finnish
fr_FR | French (France)
fr_QC | French (Québec)
is | Icelandic
ja | Japanese
jam | Jamaican Creole
km | Khmer
ko | Korean
ma | Malay (Malaysian and Indonesian)
nb | Norwegian Bokmål
nl | Dutch
or | Odia
ro | Romanian
sv | Swedish
sw | Swahili
tts | Isan
vi_C | Vietnamese (Central)
vi_N | Vietnamese (Northern)
vi_S | Vietnamese (Southern)
yue | Cantonese
zh | Mandarin
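The underlying files follow the upstream `ipa-dict` plain-text format: one entry per line, `word<TAB>/ipa/`, with multiple transcriptions separated by commas. The sketch below parses a single language file; the exact file names mirrored in this repository (e.g. `en_US.txt`) are an assumption, so adjust the path to whatever layout you find here or in the upstream repo.

```python
# Minimal sketch for parsing one ipa-dict style language file.
# Assumptions: the file is UTF-8, tab-separated, and multiple
# transcriptions for a word are separated by ", " (as upstream).
from collections import defaultdict


def load_ipa_lexicon(path):
    lexicon = defaultdict(list)
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            word, transcriptions = line.split("\t", 1)
            lexicon[word].extend(t.strip() for t in transcriptions.split(","))
    return dict(lexicon)


# Example usage (file name is hypothetical):
lexicon = load_ipa_lexicon("en_US.txt")
print(lexicon.get("water"))
```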
|
Nash-pAnDiTa/Moamn-5N6Rl40Hl_g | Nash-pAnDiTa | "2024-12-01T19:41:57Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T19:41:38Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 169832905.0
num_examples: 16
download_size: 169046837
dataset_size: 169832905.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CodeDPO/rl_dataset_20241201 | CodeDPO | "2024-12-01T19:47:04Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T19:46:52Z" | ---
dataset_info:
features:
- name: question_id
dtype: string
- name: sample_id
dtype: int64
- name: prompt_pretokenized
dtype: string
- name: prompt_tokenized
sequence: int64
- name: response
dtype: string
- name: tokenized_response
sequence: int64
- name: accuracy
dtype: float64
- name: logp
dtype: float64
- name: rm_score
dtype: float64
splits:
- name: train
num_bytes: 2286987762
num_examples: 486202
download_size: 213607658
dataset_size: 2286987762
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sdiazlor/my-distiset-8e6109 | sdiazlor | "2024-12-01T20:12:54Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-01T20:12:47Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 279
num_examples: 1
download_size: 3134
dataset_size: 279
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-8e6109
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-8e6109/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-8e6109/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 1,
"text": "The incorporation of quantum entanglement into existing quantum field theory has led to a paradigm shift in our understanding of spacetime and its relationship to matter, but further research is needed to fully elucidate its implications on the cosmological constant."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-8e6109", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-8e6109")
```
</details>
|
yobro4619/mistral_7b_rm | yobro4619 | "2024-12-01T20:22:36Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T20:22:32Z" | ---
dataset_info:
features:
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected_score
dtype: float64
- name: chosen_score
dtype: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: reward_chosen
dtype: float64
- name: reward_rejected
dtype: float64
splits:
- name: train
num_bytes: 16493427
num_examples: 5000
download_size: 9266184
dataset_size: 16493427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sssssssshhhhhu/movielens_dpo_dataset_2 | sssssssshhhhhu | "2024-12-01T21:31:47Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T21:31:44Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 7455876
num_examples: 1000
download_size: 2346699
dataset_size: 7455876
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mathreward/8b_llama31_greedy_pass1 | mathreward | "2024-12-01T21:32:15Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T21:32:14Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: my_solu
dtype: string
- name: pred
sequence: string
splits:
- name: train
num_bytes: 9844437
num_examples: 5000
download_size: 3476225
dataset_size: 9844437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LLMsForHepth/infer_hep-th_hep-ph_gr-qc | LLMsForHepth | "2024-12-01T21:56:55Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T21:56:48Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: submitter
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: journal-ref
dtype: string
- name: doi
dtype: string
- name: report-no
dtype: string
- name: categories
dtype: string
- name: license
dtype: string
- name: orig_abstract
dtype: string
- name: versions
list:
- name: created
dtype: string
- name: version
dtype: string
- name: update_date
dtype: string
- name: authors_parsed
sequence:
sequence: string
- name: abstract
dtype: string
- name: prompt
dtype: string
- name: y_true
dtype: string
- name: comp_s3-L-3.1-8B-base_v3
dtype: string
- name: preds_s3-L-3.1-8B-base_v3
dtype: string
splits:
- name: test
num_bytes: 185022862
num_examples: 45195
download_size: 101502826
dataset_size: 185022862
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
mathreward/8b_llama31_tmp07_pass2 | mathreward | "2024-12-01T21:58:07Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T21:58:06Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
splits:
- name: train
num_bytes: 20851450
num_examples: 5000
download_size: 6199760
dataset_size: 20851450
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mathreward/8b_llama31_tmp1_pass2 | mathreward | "2024-12-01T21:59:53Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T21:59:51Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
splits:
- name: train
num_bytes: 23184018
num_examples: 5000
download_size: 8531235
dataset_size: 23184018
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mathreward/8b_llama31_tmp03_pass2 | mathreward | "2024-12-01T22:11:52Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T22:11:51Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
splits:
- name: train
num_bytes: 19752662
num_examples: 5000
download_size: 5319205
dataset_size: 19752662
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Erland/NLP701_Assignment2_Subtask3_KTO_Dataset_4 | Erland | "2024-12-01T22:12:50Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T22:12:46Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: label
dtype: bool
- name: bertscore_f1
dtype: float64
- name: rank
dtype: int64
- name: file_name
dtype: string
- name: categories
dtype: string
- name: subcategories
dtype: string
- name: reference_explanation
dtype: string
splits:
- name: train
num_bytes: 1772726
num_examples: 440
download_size: 267579
dataset_size: 1772726
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yobro4619/skywork_Llama_3.1 | yobro4619 | "2024-12-01T22:58:48Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T22:58:46Z" | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
- name: reward_chosen
dtype: float64
- name: reward_rejected
dtype: float64
splits:
- name: train
num_bytes: 25019105
num_examples: 5000
download_size: 11754585
dataset_size: 25019105
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mathreward/8b_llama31_selfcorr_horizon2_tmp1 | mathreward | "2024-12-01T23:00:10Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T23:00:07Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: my_solu
dtype: string
- name: pred
sequence: string
splits:
- name: train
num_bytes: 32584970
num_examples: 5000
download_size: 12630264
dataset_size: 32584970
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
myyim/yoga_asana_poses | myyim | "2024-12-01T23:30:21Z" | 3 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-01T23:30:21Z" | ---
license: apache-2.0
---
|
DT4LM/debertav3base_mrpc_pair_clare | DT4LM | "2024-12-01T23:33:07Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T23:33:03Z" | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 221856
num_examples: 844
download_size: 156502
dataset_size: 221856
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/debertav3base_mrpc_pair_clare_original | DT4LM | "2024-12-01T23:33:11Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T23:33:08Z" | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 214638
num_examples: 844
download_size: 150343
dataset_size: 214638
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Dddixyy/latino_italiano_traduzioni_DIRETTE | Dddixyy | "2024-12-01T23:56:44Z" | 3 | 0 | [
"task_categories:translation",
"language:la",
"language:it",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"ancient",
"ancient literature",
"translation",
"ancient latin",
"italian",
"italian datasets"
] | [
"translation"
] | "2024-12-01T23:51:16Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 1517821
num_examples: 735
- name: validation
num_bytes: 181277
num_examples: 82
- name: test
num_bytes: 433785
num_examples: 205
download_size: 1397811
dataset_size: 2132883
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
license: mit
task_categories:
- translation
language:
- la
- it
tags:
- ancient
- ancient literature
- translation
- ancient latin
- italian
- italian datasets
--- |
nielsr/gemini-results-2024-12-01 | nielsr | "2024-12-02T00:06:10Z" | 3 | 0 | [
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T00:06:09Z" | ---
dataset_info:
features:
- name: arxiv_id
dtype: 'null'
- name: github
dtype: 'null'
- name: title
dtype: 'null'
- name: upvotes
dtype: int64
- name: num_comments
dtype: int64
- name: github_mention_hf
dtype: float64
- name: num_models
dtype: float64
- name: num_datasets
dtype: float64
- name: num_spaces
dtype: float64
- name: reached_out_link
dtype: 'null'
- name: reached_out_success
dtype: float64
- name: has_artifact
dtype: bool
- name: submitted_by
dtype: 'null'
- name: reached_out_note
dtype: 'null'
- name: date
dtype: 'null'
- name: gemini_results
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 4099
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/stf_regex_ner_completo | juliadollis | "2024-12-02T00:24:20Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T00:15:09Z" | ---
dataset_info:
features:
- name: inteiro_teor
dtype: string
- name: url_download
dtype: string
- name: dataDecisao
dtype: timestamp[ns]
- name: dataPublicacao
dtype: timestamp[ns]
- name: decisao
dtype: string
- name: descricaoClasse
dtype: string
- name: ementa
dtype: string
- name: id
dtype: string
- name: jurisprudenciaCitada
dtype: string
- name: ministroRelator
dtype: string
- name: nomeOrgaoJulgador
dtype: string
- name: numeroProcesso
dtype: string
- name: referenciasLegislativas
sequence: string
- name: siglaClasse
dtype: string
- name: tipoDeDecisao
dtype: string
- name: titulo
dtype: string
- name: acordaosSimilares
sequence: string
- name: partes_lista_texto
dtype: string
- name: temaProcs
sequence: string
- name: inteiro_teor_regex
dtype: string
- name: NER
struct:
- name: JURISPRUDENCIA
sequence: string
- name: LEGISLACAO
sequence: string
- name: LOCAL
sequence: string
- name: ORGANIZACAO
sequence: string
- name: PESSOA
sequence: string
- name: TEMPO
sequence: string
splits:
- name: train
num_bytes: 8503837647
num_examples: 78477
download_size: 2333511885
dataset_size: 8503837647
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/stf_regex_ner_1_fuzzy_80 | juliadollis | "2024-12-02T00:44:21Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T00:44:10Z" | ---
dataset_info:
features:
- name: inteiro_teor
dtype: string
- name: url_download
dtype: string
- name: dataDecisao
dtype: timestamp[ns]
- name: dataPublicacao
dtype: timestamp[ns]
- name: decisao
dtype: string
- name: descricaoClasse
dtype: string
- name: ementa
dtype: string
- name: id
dtype: string
- name: jurisprudenciaCitada
dtype: string
- name: ministroRelator
dtype: string
- name: nomeOrgaoJulgador
dtype: string
- name: numeroProcesso
dtype: string
- name: referenciasLegislativas
sequence: string
- name: siglaClasse
dtype: string
- name: tipoDeDecisao
dtype: string
- name: titulo
dtype: string
- name: acordaosSimilares
sequence: string
- name: partes_lista_texto
dtype: string
- name: temaProcs
sequence: string
- name: inteiro_teor_regex
dtype: string
- name: NER
struct:
- name: JURISPRUDENCIA
sequence: string
- name: LEGISLACAO
sequence: string
- name: LOCAL
sequence: string
- name: ORGANIZACAO
sequence: string
- name: PESSOA
sequence: string
- name: TEMPO
sequence: string
- name: desambiguacao
list:
- name: class
dtype: string
- name: count
dtype: int64
- name: elements
sequence: string
- name: entity
dtype: string
splits:
- name: train
num_bytes: 122506511
num_examples: 1000
download_size: 33172345
dataset_size: 122506511
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/stf_regex_ner_1_fuzzy_85 | juliadollis | "2024-12-02T00:48:31Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T00:48:21Z" | ---
dataset_info:
features:
- name: inteiro_teor
dtype: string
- name: url_download
dtype: string
- name: dataDecisao
dtype: timestamp[ns]
- name: dataPublicacao
dtype: timestamp[ns]
- name: decisao
dtype: string
- name: descricaoClasse
dtype: string
- name: ementa
dtype: string
- name: id
dtype: string
- name: jurisprudenciaCitada
dtype: string
- name: ministroRelator
dtype: string
- name: nomeOrgaoJulgador
dtype: string
- name: numeroProcesso
dtype: string
- name: referenciasLegislativas
sequence: string
- name: siglaClasse
dtype: string
- name: tipoDeDecisao
dtype: string
- name: titulo
dtype: string
- name: acordaosSimilares
sequence: string
- name: partes_lista_texto
dtype: string
- name: temaProcs
sequence: string
- name: inteiro_teor_regex
dtype: string
- name: NER
struct:
- name: JURISPRUDENCIA
sequence: string
- name: LEGISLACAO
sequence: string
- name: LOCAL
sequence: string
- name: ORGANIZACAO
sequence: string
- name: PESSOA
sequence: string
- name: TEMPO
sequence: string
- name: desambiguacao
list:
- name: class
dtype: string
- name: count
dtype: int64
- name: elements
sequence: string
- name: entity
dtype: string
splits:
- name: train
num_bytes: 122739508
num_examples: 1000
download_size: 33209135
dataset_size: 122739508
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/stf_regex_ner_1_fuzzy_90 | juliadollis | "2024-12-02T00:50:59Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T00:50:49Z" | ---
dataset_info:
features:
- name: inteiro_teor
dtype: string
- name: url_download
dtype: string
- name: dataDecisao
dtype: timestamp[ns]
- name: dataPublicacao
dtype: timestamp[ns]
- name: decisao
dtype: string
- name: descricaoClasse
dtype: string
- name: ementa
dtype: string
- name: id
dtype: string
- name: jurisprudenciaCitada
dtype: string
- name: ministroRelator
dtype: string
- name: nomeOrgaoJulgador
dtype: string
- name: numeroProcesso
dtype: string
- name: referenciasLegislativas
sequence: string
- name: siglaClasse
dtype: string
- name: tipoDeDecisao
dtype: string
- name: titulo
dtype: string
- name: acordaosSimilares
sequence: string
- name: partes_lista_texto
dtype: string
- name: temaProcs
sequence: string
- name: inteiro_teor_regex
dtype: string
- name: NER
struct:
- name: JURISPRUDENCIA
sequence: string
- name: LEGISLACAO
sequence: string
- name: LOCAL
sequence: string
- name: ORGANIZACAO
sequence: string
- name: PESSOA
sequence: string
- name: TEMPO
sequence: string
- name: desambiguacao
list:
- name: class
dtype: string
- name: count
dtype: int64
- name: elements
sequence: string
- name: entity
dtype: string
splits:
- name: train
num_bytes: 122974029
num_examples: 1000
download_size: 33231007
dataset_size: 122974029
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
khairi/pubmed-text-06 | khairi | "2024-12-02T01:48:21Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T01:04:14Z" | ---
dataset_info:
features:
- name: pubMedId
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 2563725779
num_examples: 2498771
- name: test
num_bytes: 1008377
num_examples: 1000
- name: valid
num_bytes: 496320
num_examples: 501
download_size: 1485477306
dataset_size: 2565230476
---
# Dataset Card for "pubmed-text-06"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
koml/smart-hr-synthetic-data-single-image-multiple-queries | koml | "2024-12-02T01:30:30Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T01:30:05Z" | ---
dataset_info:
features:
- name: index
dtype: int64
- name: image
dtype: image
- name: question_en
dtype: string
- name: question_jp
dtype: string
- name: pdf_name
dtype: string
- name: pdf_page
dtype: int64
splits:
- name: train
num_bytes: 420577824.0
num_examples: 1000
download_size: 165094552
dataset_size: 420577824.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dogtooth/tulu_8b_diverse_responses_gold_scored_uf | dogtooth | "2024-12-02T01:42:40Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T01:42:36Z" | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: reference_completion
dtype: string
- name: reference_completion_score
struct:
- name: Skywork/Skywork-Reward-Gemma-2-27B-v0.2
dtype: float64
- name: chosen_score
struct:
- name: Skywork/Skywork-Reward-Gemma-2-27B-v0.2
dtype: float64
- name: rejected_score
struct:
- name: Skywork/Skywork-Reward-Gemma-2-27B-v0.2
dtype: float64
splits:
- name: train
num_bytes: 186053612
num_examples: 37074
download_size: 103375935
dataset_size: 186053612
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
khairi/pubmed-text-07 | khairi | "2024-12-02T02:30:47Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T01:48:21Z" | ---
dataset_info:
features:
- name: pubMedId
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 2455422680
num_examples: 2371946
- name: test
num_bytes: 1042512
num_examples: 999
- name: valid
num_bytes: 503878
num_examples: 500
download_size: 1422858713
dataset_size: 2456969070
---
# Dataset Card for "pubmed-text-07"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mtruong9/gt_smt_grandstaff_random_10percent_max_700_length | mtruong9 | "2024-12-02T01:48:58Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T01:48:52Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 32040144.430020433
num_examples: 2857
- name: val
num_bytes: 3620965.0439108806
num_examples: 323
- name: test
num_bytes: 5979717.989296436
num_examples: 533
download_size: 30972996
dataset_size: 41640827.46322775
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
ashercn97/multi-step-v1-500 | ashercn97 | "2024-12-02T01:50:23Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"arxiv:2408.02442",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | null | "2024-12-02T01:50:19Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: step_labels
sequence: string
splits:
- name: train
num_bytes: 74859
num_examples: 50
download_size: 27417
dataset_size: 74859
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for multi-step-v1-500
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/multi-step-v1-500/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/ashercn97/multi-step-v1-500/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"step_labels": [
"logical",
"logical",
"logical",
"logical",
"logical",
"illogical",
"illogical",
"illogical",
"illogical",
"illogical",
"illogical",
"illogical",
"logical",
"logical",
"illogical",
"logical",
"logical",
"illogical",
"illogical",
"logical",
"illogical",
"illogical",
"illogical",
"logical",
"illogical",
"logical",
"illogical",
"logical",
"logical",
"logical",
"logical",
"illogical",
"logical",
"illogical",
"illogical",
"logical",
"logical",
"illogical",
"illogical",
"illogical",
"logical",
"illogical",
"logical",
"illogical",
"illogical",
"illogical",
"illogical",
"illogical",
"illogical",
"illogical",
"illogical",
"logical",
"illogical",
"logical",
"illogical",
"logical",
"logical",
"logical",
"logical",
"illogical",
"logical",
"logical",
"logical",
"illogical"
],
"text": "Your husband enjoys the intense and competitive nature of playing PUBG, which is popular among many gamers. This could indicate his preference for action-packed activities and online interactions. On the other hand, your love for listening to country music suggests that you appreciate lyrical storytelling and perhaps a more laid-back experience. It\u0027s interesting how gaming and music can coexist in relationships, providing both partners with their own forms of entertainment. Maybe you could introduce him to some country songs that highlight themes of resilience and adventure, similar to the experiences in gaming. Or perhaps he could share his PUBG experience with you in a way that aligns with the storytelling of country music. It\u0027s also possible"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("ashercn97/multi-step-v1-500", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("ashercn97/multi-step-v1-500")
```
</details>
## References
```
@misc{2408.02442,
Author = {Zhi Rui Tam and Cheng-Kuang Wu and Yi-Lin Tsai and Chieh-Yen Lin and Hung-yi Lee and Yun-Nung Chen},
Title = {Let Me Speak Freely? A Study on the Impact of Format Restrictions on Performance of Large Language Models},
Year = {2024},
Eprint = {arXiv:2408.02442},
}
```
|
yguooo/summarize_from_feedback_oai_preprocessing_pythia_scene0_the | yguooo | "2024-12-02T02:04:56Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T01:59:15Z" | ---
dataset_info:
features:
- name: info
struct:
- name: id
dtype: string
- name: post
dtype: string
- name: title
dtype: string
- name: subreddit
dtype: string
- name: site
dtype: string
- name: article
dtype: string
- name: summaries
list:
- name: text
dtype: string
- name: policy
dtype: string
- name: note
dtype: string
- name: choice
dtype: int32
- name: worker
dtype: string
- name: batch
dtype: string
- name: split
dtype: string
- name: extra
struct:
- name: confidence
dtype: int32
- name: query_token
sequence: int64
- name: query
dtype: string
- name: chosen
dtype: string
- name: chosen_token
sequence: int64
- name: chosen_token_len
dtype: int64
- name: rejected
dtype: string
- name: rejected_token
sequence: int64
- name: rejected_token_len
dtype: int64
- name: chosen_policy
dtype: string
- name: rejected_policy
dtype: string
- name: policies
dtype: string
- name: query_chosen
dtype: string
- name: query_chosen_token
sequence: int64
- name: query_chosen_token_len
dtype: int64
- name: query_rejected
dtype: string
- name: query_rejected_token
sequence: int64
- name: query_rejected_token_len
dtype: int64
- name: query_token_len
dtype: int64
- name: query_chosen_token_response_label
sequence: int64
- name: query_rejected_token_response_label
sequence: int64
splits:
- name: train
num_bytes: 3142801508
num_examples: 92858
- name: validation
num_bytes: 2844094875
num_examples: 83802
- name: validation_cnndm
num_bytes: 225359437
num_examples: 2284
download_size: 288101074
dataset_size: 6212255820
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: validation_cnndm
path: data/validation_cnndm-*
---
|
yguooo/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_pythia_scene0_sheboygan | yguooo | "2024-12-02T02:07:32Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T02:04:10Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_token
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
splits:
- name: train
num_bytes: 2127314487
num_examples: 116722
- name: validation
num_bytes: 117534549
num_examples: 6447
- name: test
num_bytes: 119507098
num_examples: 6553
download_size: 560959231
dataset_size: 2364356134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task
The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset
These columns are taken directly from the aforementioned dataset:
* **id**: unique identifier for the post
* **subreddit**: subreddit the post was taken from
* **title**: title of the post
* **post**: body of the post
* **summary**: summary of the post
* **reference_response**: reference response for the post
These columns are added by this preprocessing script:
* **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last `\n`. If it's too short it pads the main text ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either space or `[PAD]` token (see Args below).
* **query_token**: tokenized version of `query`
* **reference_response_token**: tokenized version of `reference_response`
* **reference_response_token_len**: length of `reference_response_token`
* **query_reference_response**: concatenation of `query.strip()` and `reference_response`
* **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens
* **query_reference_response_token_len**: length of `query_reference_response_token`
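The processed splits can be pulled straight from the Hub with `datasets`; this is a small sketch using the split names and columns declared in this card's metadata:

```python
from datasets import load_dataset

# Splits "train", "validation" and "test" are listed in the metadata above.
ds = load_dataset(
    "yguooo/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_pythia_scene0_sheboygan"
)

example = ds["train"][0]
print(example["query"])               # length-limited prompt
print(example["reference_response"])  # human-written TL;DR
```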
# Args
```python
{'base_model': 'EleutherAI/pythia-1b',
'check_length_correctness': True,
'cnndm_params': TaskQueryHParams(length=1919,
format_str='Article:\n{article}\n\nTL;DR:\n',
truncate_field='article',
truncate_text='\n',
padding='pad_token',
pad_token=[50277],
pad_side='left',
max_sft_response_length=None,
max_sft_query_response_length=None,
max_rm_response_length=155,
max_rm_query_response_length=2021),
'debug': False,
'ds_name': 'pythia_scene0_sheboygan',
'hf_entity': 'yguooo',
'push_to_hub': True,
'scenario': 0,
'tldr_params': TaskQueryHParams(length=512,
format_str='SUBREDDIT: '
'r/{subreddit}\\n\\nTITLE: '
'{title}\\n\\nPOST: '
'{post}\\n\\nSheboygan:',
truncate_field='post',
truncate_text='\n',
padding='pad_token',
pad_token=[50277],
pad_side='left',
max_sft_response_length=53,
max_sft_query_response_length=562,
max_rm_response_length=169,
max_rm_query_response_length=634)}
```
|
yguooo/summarize_from_feedback_oai_preprocessing_pythia_scene0_sheboygan | yguooo | "2024-12-02T02:09:34Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T02:05:44Z" | ---
dataset_info:
features:
- name: info
struct:
- name: id
dtype: string
- name: post
dtype: string
- name: title
dtype: string
- name: subreddit
dtype: string
- name: site
dtype: string
- name: article
dtype: string
- name: summaries
list:
- name: text
dtype: string
- name: policy
dtype: string
- name: note
dtype: string
- name: choice
dtype: int32
- name: worker
dtype: string
- name: batch
dtype: string
- name: split
dtype: string
- name: extra
struct:
- name: confidence
dtype: int32
- name: query_token
sequence: int64
- name: query
dtype: string
- name: chosen
dtype: string
- name: chosen_token
sequence: int64
- name: chosen_token_len
dtype: int64
- name: rejected
dtype: string
- name: rejected_token
sequence: int64
- name: rejected_token_len
dtype: int64
- name: chosen_policy
dtype: string
- name: rejected_policy
dtype: string
- name: policies
dtype: string
- name: query_chosen
dtype: string
- name: query_chosen_token
sequence: int64
- name: query_chosen_token_len
dtype: int64
- name: query_rejected
dtype: string
- name: query_rejected_token
sequence: int64
- name: query_rejected_token_len
dtype: int64
- name: query_token_len
dtype: int64
- name: query_chosen_token_response_label
sequence: int64
- name: query_rejected_token_response_label
sequence: int64
splits:
- name: train
num_bytes: 3150117694
num_examples: 92858
- name: validation
num_bytes: 2850640554
num_examples: 83802
- name: validation_cnndm
num_bytes: 225359437
num_examples: 2284
download_size: 288565458
dataset_size: 6226117685
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: validation_cnndm
path: data/validation_cnndm-*
---
|
hula1/Appollo_math_V2 | hula1 | "2024-12-02T02:36:33Z" | 3 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T02:18:23Z" | ---
license: apache-2.0
---
|
rasyosef/2AIRTC-Amharic-Adhoc-Information-Retrieval-Test-Collection | rasyosef | "2024-12-02T02:35:30Z" | 3 | 0 | [
"language:am",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T02:21:05Z" | ---
dataset_info:
features:
- name: doc_no
dtype: int64
- name: doc_text
dtype: string
- name: relevant_topic_nos
sequence: int64
- name: relevant_topic_titles
sequence: string
- name: relevant_topic_descriptions
sequence: string
- name: relevant_topic_narratives
sequence: string
splits:
- name: documents
num_bytes: 68777070
num_examples: 12587
download_size: 27722236
dataset_size: 68777070
configs:
- config_name: default
data_files:
- split: documents
path: data/documents-*
language:
- am
---
## Original Dataset and Paper
Original dataset: https://www.irit.fr/AmharicResources/airtc-the-amharic-adhoc-information-retrieval-test-collection/
> Evaluation is highly important for designing, developing, and maintaining information retrieval (IR) systems. The IR community has developed shared tasks where evaluation framework, evaluation measures and test collections have been developed for different languages. Although Amharic is the official language of Ethiopia currently having an estimated population of over 110 million, it is one of the under-resourced languages and there is no Amharic adhoc IR test collection to date. In this paper, we promote the monolingual Amharic IR test collection that we build for the IR community. Following the framework of Cranfield project and TREC, the collection that we named 2AIRTC consists of 12,583 documents, 240 topics and the corresponding relevance judgments.
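This mirror exposes the document collection as a single `documents` split, with the per-document relevance information alongside the text; a quick way to inspect it (a sketch based on the metadata above):

```python
from datasets import load_dataset

# "documents" is the only split declared in this card's metadata.
docs = load_dataset(
    "rasyosef/2AIRTC-Amharic-Adhoc-Information-Retrieval-Test-Collection",
    split="documents",
)

example = docs[0]
print(example["doc_no"])
print(example["doc_text"][:200])
print(example["relevant_topic_titles"])
```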
```
@inproceedings{yeshambel20202airtc,
title={2AIRTC: The Amharic Adhoc Information Retrieval Test Collection},
author={Yeshambel, Tilahun and Mothe, Josiane and Assabie, Yaregal},
booktitle={International Conference of the Cross-Language Evaluation Forum for European Languages},
pages={55--66},
year={2020},
organization={Springer}
}
``` |
mm-reasoning/EMMA | mm-reasoning | "2024-12-02T02:53:47Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T02:30:33Z" | ---
dataset_info:
- config_name: Chemistry
features:
- name: pid
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: solution
dtype: string
- name: subject
dtype: string
- name: task
dtype: string
- name: category
dtype: string
- name: source
dtype: string
- name: type
dtype: string
- name: context
dtype: string
splits:
- name: test
num_bytes: 6216934.0
num_examples: 105
download_size: 6039924
dataset_size: 6216934.0
- config_name: Math
features:
- name: pid
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: solution
dtype: string
- name: subject
dtype: string
- name: task
dtype: string
- name: category
dtype: string
- name: source
dtype: string
- name: type
dtype: string
- name: context
dtype: string
splits:
- name: test
num_bytes: 55271944.0
num_examples: 892
download_size: 49466220
dataset_size: 55271944.0
configs:
- config_name: Chemistry
data_files:
- split: test
path: Chemistry/test-*
- config_name: Math
data_files:
- split: test
path: Math/test-*
---
|
khairi/pubmed-text-08 | khairi | "2024-12-02T03:11:49Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T02:30:48Z" | ---
dataset_info:
features:
- name: pubMedId
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 2391356165
num_examples: 2290061
- name: test
num_bytes: 1050546
num_examples: 1000
- name: valid
num_bytes: 535648
num_examples: 500
download_size: 1380379923
dataset_size: 2392942359
---
# Dataset Card for "pubmed-text-08"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mlfoundations-dev/oh-dcft-v2.0_no-curation_gpt-4o-mini | mlfoundations-dev | "2024-12-02T02:35:33Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T02:34:07Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 5178468768
num_examples: 2832441
download_size: 2768833612
dataset_size: 5178468768
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/sabersaleh__Llama2-7B-CPO-details | open-llm-leaderboard | "2024-12-02T02:52:47Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T02:49:44Z" | ---
pretty_name: Evaluation run of sabersaleh/Llama2-7B-CPO
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [sabersaleh/Llama2-7B-CPO](https://huggingface.co/sabersaleh/Llama2-7B-CPO)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/sabersaleh__Llama2-7B-CPO-details\"\
,\n\tname=\"sabersaleh__Llama2-7B-CPO__leaderboard_bbh_boolean_expressions\",\n\t\
split=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from\
\ run 2024-12-02T02-49-44.267310](https://huggingface.co/datasets/open-llm-leaderboard/sabersaleh__Llama2-7B-CPO-details/blob/main/sabersaleh__Llama2-7B-CPO/results_2024-12-02T02-49-44.267310.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_strict_acc,none\": 0.10166358595194085,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.013004849611340378,\n \"\
inst_level_strict_acc,none\": 0.20743405275779375,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.006797583081570997,\n \
\ \"exact_match_stderr,none\": 0.002262169974437948,\n \"acc_norm,none\"\
: 0.3370086911402257,\n \"acc_norm_stderr,none\": 0.00511348077103276,\n\
\ \"inst_level_loose_acc,none\": 0.2182254196642686,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.11090573012939002,\n \"prompt_level_loose_acc_stderr,none\": 0.01351306974704948,\n\
\ \"acc,none\": 0.1605718085106383,\n \"acc_stderr,none\"\
: 0.0033471529742732163,\n \"alias\": \"leaderboard\"\n },\n \
\ \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.34264884568651277,\n\
\ \"acc_norm_stderr,none\": 0.00588596195329697,\n \"alias\"\
: \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5187165775401069,\n\
\ \"acc_norm_stderr,none\": 0.03663608375537843\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.372,\n \"acc_norm_stderr,none\":\
\ 0.03063032594455827\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.532,\n\
\ \"acc_norm_stderr,none\": 0.031621252575725574\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.044,\n \"acc_norm_stderr,none\":\
\ 0.012997373846574952\n },\n \"leaderboard_bbh_hyperbaton\": {\n\
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.388,\n \"acc_norm_stderr,none\": 0.030881038748993974\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.668,\n \"acc_norm_stderr,none\": 0.029844039047465857\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.256,\n\
\ \"acc_norm_stderr,none\": 0.027657108718204846\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.3356164383561644,\n \"acc_norm_stderr,none\"\
: 0.039214533254314086\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.232,\n \"acc_norm_stderr,none\":\
\ 0.026750070374865202\n },\n \"leaderboard_bbh_ruin_names\": {\n\
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.22,\n \"acc_norm_stderr,none\": 0.026251792824605793\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.4606741573033708,\n\
\ \"acc_norm_stderr,none\": 0.03746587736387869\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\":\
\ 0.0316851985511992\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.152,\n \"acc_norm_stderr,none\": 0.022752024491765464\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\":\
\ 0.022249407735450245\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\":\
\ 0.021723342617052086\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\":\
\ 0.029844039047465857\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2676174496644295,\n\
\ \"acc_norm_stderr,none\": 0.01282512448593109,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.22727272727272727,\n \"acc_norm_stderr,none\": 0.029857515673386438\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2857142857142857,\n\
\ \"acc_norm_stderr,none\": 0.019351013185102753\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.26339285714285715,\n \"acc_norm_stderr,none\"\
: 0.02083369001657866\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.10166358595194085,\n \"prompt_level_strict_acc_stderr,none\": 0.013004849611340378,\n\
\ \"inst_level_strict_acc,none\": 0.20743405275779375,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.11090573012939002,\n \"prompt_level_loose_acc_stderr,none\": 0.01351306974704948,\n\
\ \"inst_level_loose_acc,none\": 0.2182254196642686,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.006797583081570997,\n \"exact_match_stderr,none\"\
: 0.002262169974437948,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.009771986970684038,\n\
\ \"exact_match_stderr,none\": 0.005623391633915856\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.007575757575757576,\n\
\ \"exact_match_stderr,none\": 0.007575757575757577\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.0035714285714285713,\n \"exact_match_stderr,none\": 0.0035714285714285713\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.010362694300518135,\n \"exact_match_stderr,none\"\
: 0.007308424386792209\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.007407407407407408,\n \"exact_match_stderr,none\"\
: 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.1605718085106383,\n\
\ \"acc_stderr,none\": 0.0033471529742732163\n },\n \"\
leaderboard_musr\": {\n \"acc_norm,none\": 0.40343915343915343,\n \
\ \"acc_norm_stderr,none\": 0.017266770806898563,\n \"alias\"\
: \" - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\"\
: {\n \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \
\ \"acc_norm,none\": 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\"\
: \" - leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.23046875,\n\
\ \"acc_norm_stderr,none\": 0.026372364120563745\n },\n \
\ \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\":\
\ 0.031563285061213475\n }\n },\n \"leaderboard\": {\n \"prompt_level_strict_acc,none\"\
: 0.10166358595194085,\n \"prompt_level_strict_acc_stderr,none\": 0.013004849611340378,\n\
\ \"inst_level_strict_acc,none\": 0.20743405275779375,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.006797583081570997,\n \"exact_match_stderr,none\"\
: 0.002262169974437948,\n \"acc_norm,none\": 0.3370086911402257,\n \
\ \"acc_norm_stderr,none\": 0.00511348077103276,\n \"inst_level_loose_acc,none\"\
: 0.2182254196642686,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"prompt_level_loose_acc,none\": 0.11090573012939002,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.01351306974704948,\n \"acc,none\": 0.1605718085106383,\n \"acc_stderr,none\"\
: 0.0033471529742732163,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.34264884568651277,\n \"acc_norm_stderr,none\"\
: 0.00588596195329697,\n \"alias\": \" - leaderboard_bbh\"\n },\n \"\
leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5187165775401069,\n \"acc_norm_stderr,none\"\
: 0.03663608375537843\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.372,\n \"acc_norm_stderr,none\": 0.03063032594455827\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\": 0.031621252575725574\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.044,\n \"acc_norm_stderr,none\": 0.012997373846574952\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.388,\n \"acc_norm_stderr,none\": 0.030881038748993974\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.668,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.256,\n \"acc_norm_stderr,none\": 0.027657108718204846\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.3356164383561644,\n\
\ \"acc_norm_stderr,none\": 0.039214533254314086\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.232,\n \"acc_norm_stderr,none\": 0.026750070374865202\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.22,\n \"acc_norm_stderr,none\": 0.026251792824605793\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.4606741573033708,\n \"acc_norm_stderr,none\"\
: 0.03746587736387869\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.152,\n \"acc_norm_stderr,none\": 0.022752024491765464\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2676174496644295,\n\
\ \"acc_norm_stderr,none\": 0.01282512448593109,\n \"alias\": \" -\
\ leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\"\
: \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.22727272727272727,\n\
\ \"acc_norm_stderr,none\": 0.029857515673386438\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2857142857142857,\n \"acc_norm_stderr,none\": 0.019351013185102753\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.26339285714285715,\n \"acc_norm_stderr,none\"\
: 0.02083369001657866\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.10166358595194085,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.013004849611340378,\n \
\ \"inst_level_strict_acc,none\": 0.20743405275779375,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.11090573012939002,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01351306974704948,\n \"inst_level_loose_acc,none\"\
: 0.2182254196642686,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.006797583081570997,\n\
\ \"exact_match_stderr,none\": 0.002262169974437948,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.009771986970684038,\n \"exact_match_stderr,none\": 0.005623391633915856\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.007575757575757576,\n \"exact_match_stderr,none\"\
: 0.007575757575757577\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0035714285714285713,\n \"exact_match_stderr,none\"\
: 0.0035714285714285713\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\"\
: 0.010362694300518135,\n \"exact_match_stderr,none\": 0.007308424386792209\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.007407407407407408,\n\
\ \"exact_match_stderr,none\": 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.1605718085106383,\n\
\ \"acc_stderr,none\": 0.0033471529742732163\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.40343915343915343,\n \"acc_norm_stderr,none\"\
: 0.017266770806898563,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.23046875,\n\
\ \"acc_norm_stderr,none\": 0.026372364120563745\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n }\n}\n```"
repo_url: https://huggingface.co/sabersaleh/Llama2-7B-CPO
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_ifeval
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_ifeval_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T02-49-44.267310.jsonl'
- config_name: sabersaleh__Llama2-7B-CPO__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T02_49_44.267310
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T02-49-44.267310.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T02-49-44.267310.jsonl'
---
# Dataset Card for Evaluation run of sabersaleh/Llama2-7B-CPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [sabersaleh/Llama2-7B-CPO](https://huggingface.co/sabersaleh/Llama2-7B-CPO)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/sabersaleh__Llama2-7B-CPO-details",
name="sabersaleh__Llama2-7B-CPO__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
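As a complementary sketch (not part of the auto-generated card), the available configurations can also be enumerated programmatically with the `datasets` library; the config name used below is one of the 38 listed in this card, and the "latest" split points to the most recent run.
```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/sabersaleh__Llama2-7B-CPO-details"

# Enumerate the available configurations (one per evaluated task).
configs = get_dataset_config_names(repo)
print(len(configs), "configurations found")

# Load the per-sample details for one task; the "latest" split always
# resolves to the most recent evaluation run.
ifeval_details = load_dataset(
    repo,
    name="sabersaleh__Llama2-7B-CPO__leaderboard_ifeval",
    split="latest",
)
print(ifeval_details)  # shows the features and the number of rows
```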
## Latest results
These are the [latest results from run 2024-12-02T02-49-44.267310](https://huggingface.co/datasets/open-llm-leaderboard/sabersaleh__Llama2-7B-CPO-details/blob/main/sabersaleh__Llama2-7B-CPO/results_2024-12-02T02-49-44.267310.json) (note that the repository may also contain results for other tasks if successive evaluation runs did not cover the same tasks; each run can be found in the "results" configuration and in the "latest" split of the corresponding eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_strict_acc,none": 0.10166358595194085,
"prompt_level_strict_acc_stderr,none": 0.013004849611340378,
"inst_level_strict_acc,none": 0.20743405275779375,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.002262169974437948,
"acc_norm,none": 0.3370086911402257,
"acc_norm_stderr,none": 0.00511348077103276,
"inst_level_loose_acc,none": 0.2182254196642686,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.11090573012939002,
"prompt_level_loose_acc_stderr,none": 0.01351306974704948,
"acc,none": 0.1605718085106383,
"acc_stderr,none": 0.0033471529742732163,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.34264884568651277,
"acc_norm_stderr,none": 0.00588596195329697,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.372,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.044,
"acc_norm_stderr,none": 0.012997373846574952
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.388,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3356164383561644,
"acc_norm_stderr,none": 0.039214533254314086
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.232,
"acc_norm_stderr,none": 0.026750070374865202
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.22,
"acc_norm_stderr,none": 0.026251792824605793
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.4606741573033708,
"acc_norm_stderr,none": 0.03746587736387869
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.152,
"acc_norm_stderr,none": 0.022752024491765464
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2676174496644295,
"acc_norm_stderr,none": 0.01282512448593109,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.22727272727272727,
"acc_norm_stderr,none": 0.029857515673386438
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2857142857142857,
"acc_norm_stderr,none": 0.019351013185102753
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.26339285714285715,
"acc_norm_stderr,none": 0.02083369001657866
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.10166358595194085,
"prompt_level_strict_acc_stderr,none": 0.013004849611340378,
"inst_level_strict_acc,none": 0.20743405275779375,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.11090573012939002,
"prompt_level_loose_acc_stderr,none": 0.01351306974704948,
"inst_level_loose_acc,none": 0.2182254196642686,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.002262169974437948,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.009771986970684038,
"exact_match_stderr,none": 0.005623391633915856
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.007575757575757576,
"exact_match_stderr,none": 0.007575757575757577
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.010362694300518135,
"exact_match_stderr,none": 0.007308424386792209
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.1605718085106383,
"acc_stderr,none": 0.0033471529742732163
},
"leaderboard_musr": {
"acc_norm,none": 0.40343915343915343,
"acc_norm_stderr,none": 0.017266770806898563,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.23046875,
"acc_norm_stderr,none": 0.026372364120563745
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
}
},
"leaderboard": {
"prompt_level_strict_acc,none": 0.10166358595194085,
"prompt_level_strict_acc_stderr,none": 0.013004849611340378,
"inst_level_strict_acc,none": 0.20743405275779375,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.002262169974437948,
"acc_norm,none": 0.3370086911402257,
"acc_norm_stderr,none": 0.00511348077103276,
"inst_level_loose_acc,none": 0.2182254196642686,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.11090573012939002,
"prompt_level_loose_acc_stderr,none": 0.01351306974704948,
"acc,none": 0.1605718085106383,
"acc_stderr,none": 0.0033471529742732163,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.34264884568651277,
"acc_norm_stderr,none": 0.00588596195329697,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.372,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.044,
"acc_norm_stderr,none": 0.012997373846574952
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.388,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3356164383561644,
"acc_norm_stderr,none": 0.039214533254314086
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.232,
"acc_norm_stderr,none": 0.026750070374865202
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.22,
"acc_norm_stderr,none": 0.026251792824605793
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.4606741573033708,
"acc_norm_stderr,none": 0.03746587736387869
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.152,
"acc_norm_stderr,none": 0.022752024491765464
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2676174496644295,
"acc_norm_stderr,none": 0.01282512448593109,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.22727272727272727,
"acc_norm_stderr,none": 0.029857515673386438
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2857142857142857,
"acc_norm_stderr,none": 0.019351013185102753
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.26339285714285715,
"acc_norm_stderr,none": 0.02083369001657866
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.10166358595194085,
"prompt_level_strict_acc_stderr,none": 0.013004849611340378,
"inst_level_strict_acc,none": 0.20743405275779375,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.11090573012939002,
"prompt_level_loose_acc_stderr,none": 0.01351306974704948,
"inst_level_loose_acc,none": 0.2182254196642686,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.006797583081570997,
"exact_match_stderr,none": 0.002262169974437948,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.009771986970684038,
"exact_match_stderr,none": 0.005623391633915856
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.007575757575757576,
"exact_match_stderr,none": 0.007575757575757577
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.010362694300518135,
"exact_match_stderr,none": 0.007308424386792209
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.1605718085106383,
"acc_stderr,none": 0.0033471529742732163
},
"leaderboard_musr": {
"acc_norm,none": 0.40343915343915343,
"acc_norm_stderr,none": 0.017266770806898563,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.23046875,
"acc_norm_stderr,none": 0.026372364120563745
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
}
}
```
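The aggregated numbers above come from the results JSON file linked in the "Latest results" section. As a minimal, hedged sketch (not part of the original card), that file can be fetched directly with `huggingface_hub`; the file path below is copied from the link above and would need updating for a newer run, and the exact top-level layout of the file is not guaranteed here, so the snippet only inspects the keys.
```python
import json

from huggingface_hub import hf_hub_download

# Download the raw results file referenced in the "Latest results" link.
results_path = hf_hub_download(
    repo_id="open-llm-leaderboard/sabersaleh__Llama2-7B-CPO-details",
    filename="sabersaleh__Llama2-7B-CPO/results_2024-12-02T02-49-44.267310.json",
    repo_type="dataset",
)

with open(results_path) as f:
    results = json.load(f)

# Inspect the top-level keys before drilling into individual task scores.
print(sorted(results.keys()))
```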
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/sabersaleh__Llama2-7B-IPO-details | open-llm-leaderboard | "2024-12-02T02:52:59Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T02:49:58Z" | ---
pretty_name: Evaluation run of sabersaleh/Llama2-7B-IPO
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [sabersaleh/Llama2-7B-IPO](https://huggingface.co/sabersaleh/Llama2-7B-IPO)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/sabersaleh__Llama2-7B-IPO-details\"\
,\n\tname=\"sabersaleh__Llama2-7B-IPO__leaderboard_bbh_boolean_expressions\",\n\t\
split=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from\
\ run 2024-12-02T02-49-58.344530](https://huggingface.co/datasets/open-llm-leaderboard/sabersaleh__Llama2-7B-IPO-details/blob/main/sabersaleh__Llama2-7B-IPO/results_2024-12-02T02-49-58.344530.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.3384355947593722,\n \"acc_norm_stderr,none\"\
: 0.0051477738332105305,\n \"acc,none\": 0.16173537234042554,\n \
\ \"acc_stderr,none\": 0.003356929443211976,\n \"inst_level_strict_acc,none\"\
: 0.22062350119904076,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"exact_match,none\": 0.005287009063444109,\n \"exact_match_stderr,none\"\
: 0.0019919658841467073,\n \"prompt_level_loose_acc,none\": 0.14048059149722736,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.014953371656822667,\n \
\ \"prompt_level_strict_acc,none\": 0.133086876155268,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.014617009342904459,\n \"inst_level_loose_acc,none\": 0.2314148681055156,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.34455823641728867,\n \"acc_norm_stderr,none\": 0.005936889218956276,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5187165775401069,\n\
\ \"acc_norm_stderr,none\": 0.03663608375537843\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.368,\n \"acc_norm_stderr,none\":\
\ 0.03056207062099311\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.532,\n\
\ \"acc_norm_stderr,none\": 0.031621252575725574\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.112,\n \"acc_norm_stderr,none\":\
\ 0.019985536939171485\n },\n \"leaderboard_bbh_hyperbaton\": {\n\
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.264,\n \"acc_norm_stderr,none\": 0.027934518957690866\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.3,\n\
\ \"acc_norm_stderr,none\": 0.029040893477575783\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.3219178082191781,\n \"acc_norm_stderr,none\"\
: 0.038799816296271356\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\":\
\ 0.02572139890141637\n },\n \"leaderboard_bbh_ruin_names\": {\n \
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.196,\n \"acc_norm_stderr,none\": 0.025156857313255926\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.22,\n \"acc_norm_stderr,none\": 0.026251792824605793\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.46629213483146065,\n\
\ \"acc_norm_stderr,none\": 0.0374968006036899\n },\n \"\
leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\":\
\ 0.03148684942554571\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\":\
\ 0.021723342617052086\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.348,\n \"acc_norm_stderr,none\":\
\ 0.030186568464511673\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2676174496644295,\n\
\ \"acc_norm_stderr,none\": 0.012836462594431608,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.25252525252525254,\n \"acc_norm_stderr,none\": 0.03095405547036587\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2765567765567766,\n\
\ \"acc_norm_stderr,none\": 0.019160027479692504\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.26339285714285715,\n \"acc_norm_stderr,none\"\
: 0.02083369001657866\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.133086876155268,\n \"prompt_level_strict_acc_stderr,none\": 0.014617009342904457,\n\
\ \"inst_level_strict_acc,none\": 0.22062350119904076,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.14048059149722736,\n \"prompt_level_loose_acc_stderr,none\": 0.014953371656822667,\n\
\ \"inst_level_loose_acc,none\": 0.2314148681055156,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.005287009063444109,\n \"exact_match_stderr,none\"\
: 0.0019919658841467073,\n \"alias\": \" - leaderboard_math_hard\"\n\
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.006514657980456026,\n\
\ \"exact_match_stderr,none\": 0.004599025618546258\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.015151515151515152,\n\
\ \"exact_match_stderr,none\": 0.01067276863717474\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.014814814814814815,\n \"exact_match_stderr,none\"\
: 0.010436494549594376\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.16173537234042554,\n\
\ \"acc_stderr,none\": 0.0033569294432119765\n },\n \"\
leaderboard_musr\": {\n \"acc_norm,none\": 0.40343915343915343,\n \
\ \"acc_norm_stderr,none\": 0.01729312559671818,\n \"alias\"\
: \" - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\"\
: {\n \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \
\ \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\"\
: \" - leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.23828125,\n\
\ \"acc_norm_stderr,none\": 0.026679160987075002\n },\n \
\ \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ }\n },\n \"leaderboard\": {\n \"acc_norm,none\": 0.3384355947593722,\n\
\ \"acc_norm_stderr,none\": 0.0051477738332105305,\n \"acc,none\"\
: 0.16173537234042554,\n \"acc_stderr,none\": 0.003356929443211976,\n \
\ \"inst_level_strict_acc,none\": 0.22062350119904076,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.005287009063444109,\n \"exact_match_stderr,none\"\
: 0.0019919658841467073,\n \"prompt_level_loose_acc,none\": 0.14048059149722736,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.014953371656822667,\n \
\ \"prompt_level_strict_acc,none\": 0.133086876155268,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.014617009342904459,\n \"inst_level_loose_acc,none\": 0.2314148681055156,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.34455823641728867,\n\
\ \"acc_norm_stderr,none\": 0.005936889218956276,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5187165775401069,\n \"acc_norm_stderr,none\"\
: 0.03663608375537843\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\": 0.031621252575725574\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.112,\n \"acc_norm_stderr,none\": 0.019985536939171485\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.264,\n \"acc_norm_stderr,none\": 0.027934518957690866\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.3219178082191781,\n\
\ \"acc_norm_stderr,none\": 0.038799816296271356\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\": 0.02572139890141637\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.196,\n \"acc_norm_stderr,none\": 0.025156857313255926\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.22,\n \"acc_norm_stderr,none\": 0.026251792824605793\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.46629213483146065,\n \"acc_norm_stderr,none\"\
: 0.0374968006036899\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.348,\n \"acc_norm_stderr,none\": 0.030186568464511673\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2676174496644295,\n\
\ \"acc_norm_stderr,none\": 0.012836462594431608,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.25252525252525254,\n\
\ \"acc_norm_stderr,none\": 0.03095405547036587\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2765567765567766,\n \"acc_norm_stderr,none\": 0.019160027479692504\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.26339285714285715,\n \"acc_norm_stderr,none\"\
: 0.02083369001657866\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.133086876155268,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.014617009342904457,\n \
\ \"inst_level_strict_acc,none\": 0.22062350119904076,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.14048059149722736,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.014953371656822667,\n \"inst_level_loose_acc,none\"\
: 0.2314148681055156,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.005287009063444109,\n\
\ \"exact_match_stderr,none\": 0.0019919658841467073,\n \"alias\"\
: \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.006514657980456026,\n \"exact_match_stderr,none\": 0.004599025618546258\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.015151515151515152,\n \"exact_match_stderr,none\"\
: 0.01067276863717474\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.014814814814814815,\n\
\ \"exact_match_stderr,none\": 0.010436494549594376\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.16173537234042554,\n\
\ \"acc_stderr,none\": 0.0033569294432119765\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.40343915343915343,\n \"acc_norm_stderr,none\"\
: 0.01729312559671818,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.23828125,\n\
\ \"acc_norm_stderr,none\": 0.026679160987075002\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n }\n}\n```"
repo_url: https://huggingface.co/sabersaleh/Llama2-7B-IPO
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_ifeval
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_ifeval_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T02-49-58.344530.jsonl'
- config_name: sabersaleh__Llama2-7B-IPO__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T02_49_58.344530
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T02-49-58.344530.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T02-49-58.344530.jsonl'
---
# Dataset Card for Evaluation run of sabersaleh/Llama2-7B-IPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [sabersaleh/Llama2-7B-IPO](https://huggingface.co/sabersaleh/Llama2-7B-IPO).
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/sabersaleh__Llama2-7B-IPO-details",
name="sabersaleh__Llama2-7B-IPO__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
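The object returned by `load_dataset` is a regular Hugging Face `datasets` split, so the usual inspection helpers apply. Below is a minimal sketch for exploring a task's samples; the column names are printed rather than assumed, since they differ between tasks:
```python
from datasets import load_dataset

# Load the per-sample details for one task (same call as above).
data = load_dataset(
    "open-llm-leaderboard/sabersaleh__Llama2-7B-IPO-details",
    name="sabersaleh__Llama2-7B-IPO__leaderboard_bbh_boolean_expressions",
    split="latest",
)
# Inspect the split: sample count, available columns, and the first record.
print(len(data))
print(data.column_names)
print(data[0])
# Optionally convert to pandas for ad-hoc filtering and aggregation.
df = data.to_pandas()
print(df.head())
```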
## Latest results
These are the [latest results from run 2024-12-02T02-49-58.344530](https://huggingface.co/datasets/open-llm-leaderboard/sabersaleh__Llama2-7B-IPO-details/blob/main/sabersaleh__Llama2-7B-IPO/results_2024-12-02T02-49-58.344530.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its task-specific configuration, under the "latest" split, as well as in the aggregated results):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.3384355947593722,
"acc_norm_stderr,none": 0.0051477738332105305,
"acc,none": 0.16173537234042554,
"acc_stderr,none": 0.003356929443211976,
"inst_level_strict_acc,none": 0.22062350119904076,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.005287009063444109,
"exact_match_stderr,none": 0.0019919658841467073,
"prompt_level_loose_acc,none": 0.14048059149722736,
"prompt_level_loose_acc_stderr,none": 0.014953371656822667,
"prompt_level_strict_acc,none": 0.133086876155268,
"prompt_level_strict_acc_stderr,none": 0.014617009342904459,
"inst_level_loose_acc,none": 0.2314148681055156,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.34455823641728867,
"acc_norm_stderr,none": 0.005936889218956276,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.112,
"acc_norm_stderr,none": 0.019985536939171485
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.264,
"acc_norm_stderr,none": 0.027934518957690866
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3219178082191781,
"acc_norm_stderr,none": 0.038799816296271356
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.196,
"acc_norm_stderr,none": 0.025156857313255926
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.22,
"acc_norm_stderr,none": 0.026251792824605793
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.46629213483146065,
"acc_norm_stderr,none": 0.0374968006036899
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.348,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2676174496644295,
"acc_norm_stderr,none": 0.012836462594431608,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.25252525252525254,
"acc_norm_stderr,none": 0.03095405547036587
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2765567765567766,
"acc_norm_stderr,none": 0.019160027479692504
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.26339285714285715,
"acc_norm_stderr,none": 0.02083369001657866
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.133086876155268,
"prompt_level_strict_acc_stderr,none": 0.014617009342904457,
"inst_level_strict_acc,none": 0.22062350119904076,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.14048059149722736,
"prompt_level_loose_acc_stderr,none": 0.014953371656822667,
"inst_level_loose_acc,none": 0.2314148681055156,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.005287009063444109,
"exact_match_stderr,none": 0.0019919658841467073,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.006514657980456026,
"exact_match_stderr,none": 0.004599025618546258
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.014814814814814815,
"exact_match_stderr,none": 0.010436494549594376
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.16173537234042554,
"acc_stderr,none": 0.0033569294432119765
},
"leaderboard_musr": {
"acc_norm,none": 0.40343915343915343,
"acc_norm_stderr,none": 0.01729312559671818,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.23828125,
"acc_norm_stderr,none": 0.026679160987075002
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.44,
"acc_norm_stderr,none": 0.03145724452223569
}
},
"leaderboard": {
"acc_norm,none": 0.3384355947593722,
"acc_norm_stderr,none": 0.0051477738332105305,
"acc,none": 0.16173537234042554,
"acc_stderr,none": 0.003356929443211976,
"inst_level_strict_acc,none": 0.22062350119904076,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.005287009063444109,
"exact_match_stderr,none": 0.0019919658841467073,
"prompt_level_loose_acc,none": 0.14048059149722736,
"prompt_level_loose_acc_stderr,none": 0.014953371656822667,
"prompt_level_strict_acc,none": 0.133086876155268,
"prompt_level_strict_acc_stderr,none": 0.014617009342904459,
"inst_level_loose_acc,none": 0.2314148681055156,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.34455823641728867,
"acc_norm_stderr,none": 0.005936889218956276,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.112,
"acc_norm_stderr,none": 0.019985536939171485
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.264,
"acc_norm_stderr,none": 0.027934518957690866
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3219178082191781,
"acc_norm_stderr,none": 0.038799816296271356
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.196,
"acc_norm_stderr,none": 0.025156857313255926
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.22,
"acc_norm_stderr,none": 0.026251792824605793
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.46629213483146065,
"acc_norm_stderr,none": 0.0374968006036899
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.348,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2676174496644295,
"acc_norm_stderr,none": 0.012836462594431608,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.25252525252525254,
"acc_norm_stderr,none": 0.03095405547036587
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2765567765567766,
"acc_norm_stderr,none": 0.019160027479692504
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.26339285714285715,
"acc_norm_stderr,none": 0.02083369001657866
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.133086876155268,
"prompt_level_strict_acc_stderr,none": 0.014617009342904457,
"inst_level_strict_acc,none": 0.22062350119904076,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.14048059149722736,
"prompt_level_loose_acc_stderr,none": 0.014953371656822667,
"inst_level_loose_acc,none": 0.2314148681055156,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.005287009063444109,
"exact_match_stderr,none": 0.0019919658841467073,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.006514657980456026,
"exact_match_stderr,none": 0.004599025618546258
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.014814814814814815,
"exact_match_stderr,none": 0.010436494549594376
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.16173537234042554,
"acc_stderr,none": 0.0033569294432119765
},
"leaderboard_musr": {
"acc_norm,none": 0.40343915343915343,
"acc_norm_stderr,none": 0.01729312559671818,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.23828125,
"acc_norm_stderr,none": 0.026679160987075002
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.44,
"acc_norm_stderr,none": 0.03145724452223569
}
}
```
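If you prefer to work with the raw aggregated-results file directly rather than through `load_dataset`, one option is to download the JSON linked above with `huggingface_hub` and parse it yourself. This is only a sketch: it assumes the file keeps roughly the layout shown in the excerpt (top-level groups such as `leaderboard` and `leaderboard_bbh`), so it prints the available keys before reading any of them:
```python
import json
from huggingface_hub import hf_hub_download

# Download the aggregated results file from the details repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/sabersaleh__Llama2-7B-IPO-details",
    filename="sabersaleh__Llama2-7B-IPO/results_2024-12-02T02-49-58.344530.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)
# The exact key layout may differ slightly from the excerpt above, so list it first.
print(list(results.keys()))
# Then read the aggregated leaderboard scores if they are present at the top level.
for metric, value in results.get("leaderboard", {}).items():
    print(metric, value)
```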
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
dgambettaphd/D_gen5_run1_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-12-02T03:11:51Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T03:11:48Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 584258
num_examples: 1000
download_size: 352003
dataset_size: 584258
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/sabersaleh__Llama2-7B-SPO-details | open-llm-leaderboard | "2024-12-02T03:38:36Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T03:35:44Z" | ---
pretty_name: Evaluation run of sabersaleh/Llama2-7B-SPO
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [sabersaleh/Llama2-7B-SPO](https://huggingface.co/sabersaleh/Llama2-7B-SPO)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/sabersaleh__Llama2-7B-SPO-details\"\
,\n\tname=\"sabersaleh__Llama2-7B-SPO__leaderboard_bbh_boolean_expressions\",\n\t\
split=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from\
\ run 2024-12-02T03-35-43.412546](https://huggingface.co/datasets/open-llm-leaderboard/sabersaleh__Llama2-7B-SPO-details/blob/main/sabersaleh__Llama2-7B-SPO/results_2024-12-02T03-35-43.412546.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"exact_match,none\": 0.014350453172205438,\n \"exact_match_stderr,none\"\
: 0.0032717098633934637,\n \"acc,none\": 0.17569813829787234,\n \
\ \"acc_stderr,none\": 0.003469571620440863,\n \"prompt_level_strict_acc,none\"\
: 0.10351201478743069,\n \"prompt_level_strict_acc_stderr,none\": 0.013109035446484243,\n\
\ \"inst_level_loose_acc,none\": 0.2158273381294964,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.20983213429256595,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"prompt_level_loose_acc,none\": 0.10905730129390019,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.013413909746312102,\n \"\
acc_norm,none\": 0.3309119211311454,\n \"acc_norm_stderr,none\": 0.005128975300894813,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.334837701787884,\n \"acc_norm_stderr,none\"\
: 0.005901746897666035,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.576,\n\
\ \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5187165775401069,\n \"acc_norm_stderr,none\"\
: 0.03663608375537843\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.388,\n \"acc_norm_stderr,none\": 0.030881038748993974\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.376,\n\
\ \"acc_norm_stderr,none\": 0.03069633626739458\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\":\
\ 0.031621252575725574\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.484,\n \
\ \"acc_norm_stderr,none\": 0.03166998503010743\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.244,\n \"acc_norm_stderr,none\":\
\ 0.02721799546455311\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.18,\n \"acc_norm_stderr,none\": 0.02434689065029351\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.248,\n\
\ \"acc_norm_stderr,none\": 0.027367497504863593\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.3150684931506849,\n \"acc_norm_stderr,none\"\
: 0.03857820876541411\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\":\
\ 0.025901884690541117\n },\n \"leaderboard_bbh_ruin_names\": {\n\
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.22,\n \"acc_norm_stderr,none\": 0.026251792824605793\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.4943820224719101,\n\
\ \"acc_norm_stderr,none\": 0.03757992900475984\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\":\
\ 0.03153986449255664\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.128,\n \"acc_norm_stderr,none\":\
\ 0.021172081336336534\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.128,\n \"acc_norm_stderr,none\":\
\ 0.021172081336336534\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\":\
\ 0.029462657598578648\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.27684563758389263,\n\
\ \"acc_norm_stderr,none\": 0.012960912249614355,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.25757575757575757,\n \"acc_norm_stderr,none\": 0.031156269519646826\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.30036630036630035,\n\
\ \"acc_norm_stderr,none\": 0.019636438043304946\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.25669642857142855,\n \"acc_norm_stderr,none\"\
: 0.020660425491724744\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.10351201478743069,\n \"prompt_level_strict_acc_stderr,none\": 0.013109035446484243,\n\
\ \"inst_level_strict_acc,none\": 0.20983213429256595,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.10905730129390019,\n \"prompt_level_loose_acc_stderr,none\": 0.013413909746312102,\n\
\ \"inst_level_loose_acc,none\": 0.2158273381294964,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.014350453172205438,\n \"exact_match_stderr,none\"\
: 0.0032717098633934637,\n \"alias\": \" - leaderboard_math_hard\"\n\
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.019543973941368076,\n\
\ \"exact_match_stderr,none\": 0.007913339243755165\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.015151515151515152,\n\
\ \"exact_match_stderr,none\": 0.01067276863717474\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.007142857142857143,\n \"exact_match_stderr,none\": 0.005041703051390571\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.012987012987012988,\n\
\ \"exact_match_stderr,none\": 0.009153145279150204\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.010362694300518135,\n \"exact_match_stderr,none\"\
: 0.007308424386792209\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.02962962962962963,\n \"exact_match_stderr,none\"\
: 0.014648038602753809\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.17569813829787234,\n\
\ \"acc_stderr,none\": 0.003469571620440863\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.3862433862433862,\n \"acc_norm_stderr,none\"\
: 0.017179183382758968,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.536,\n\
\ \"acc_norm_stderr,none\": 0.031603975145223735\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.23828125,\n \"acc_norm_stderr,none\"\
: 0.026679160987075002\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.388,\n \"acc_norm_stderr,none\": 0.030881038748993974\n\
\ }\n },\n \"leaderboard\": {\n \"exact_match,none\": 0.014350453172205438,\n\
\ \"exact_match_stderr,none\": 0.0032717098633934637,\n \"acc,none\"\
: 0.17569813829787234,\n \"acc_stderr,none\": 0.003469571620440863,\n \
\ \"prompt_level_strict_acc,none\": 0.10351201478743069,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.013109035446484243,\n \"inst_level_loose_acc,none\": 0.2158273381294964,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.20983213429256595,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"prompt_level_loose_acc,none\": 0.10905730129390019,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.013413909746312102,\n \"acc_norm,none\": 0.3309119211311454,\n \
\ \"acc_norm_stderr,none\": 0.005128975300894813,\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.334837701787884,\n\
\ \"acc_norm_stderr,none\": 0.005901746897666035,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5187165775401069,\n \"acc_norm_stderr,none\"\
: 0.03663608375537843\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.388,\n \"acc_norm_stderr,none\": 0.030881038748993974\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\": 0.031621252575725574\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.244,\n \"acc_norm_stderr,none\": 0.02721799546455311\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.18,\n \"acc_norm_stderr,none\": 0.02434689065029351\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.3150684931506849,\n\
\ \"acc_norm_stderr,none\": 0.03857820876541411\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.22,\n \"acc_norm_stderr,none\": 0.026251792824605793\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.4943820224719101,\n \"acc_norm_stderr,none\"\
: 0.03757992900475984\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.128,\n \"acc_norm_stderr,none\": 0.021172081336336534\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.128,\n \"acc_norm_stderr,none\": 0.021172081336336534\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.27684563758389263,\n\
\ \"acc_norm_stderr,none\": 0.012960912249614355,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.25757575757575757,\n\
\ \"acc_norm_stderr,none\": 0.031156269519646826\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.30036630036630035,\n \"acc_norm_stderr,none\": 0.019636438043304946\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.25669642857142855,\n \"acc_norm_stderr,none\"\
: 0.020660425491724744\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.10351201478743069,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.013109035446484243,\n \
\ \"inst_level_strict_acc,none\": 0.20983213429256595,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.10905730129390019,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.013413909746312102,\n \"inst_level_loose_acc,none\"\
: 0.2158273381294964,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.014350453172205438,\n\
\ \"exact_match_stderr,none\": 0.0032717098633934637,\n \"alias\"\
: \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.019543973941368076,\n \"exact_match_stderr,none\": 0.007913339243755165\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.015151515151515152,\n \"exact_match_stderr,none\"\
: 0.01067276863717474\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.007142857142857143,\n \"exact_match_stderr,none\"\
: 0.005041703051390571\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.012987012987012988,\n \"exact_match_stderr,none\": 0.009153145279150204\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.010362694300518135,\n \"exact_match_stderr,none\"\
: 0.007308424386792209\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.02962962962962963,\n \"exact_match_stderr,none\": 0.014648038602753809\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.17569813829787234,\n \"acc_stderr,none\": 0.003469571620440863\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.3862433862433862,\n\
\ \"acc_norm_stderr,none\": 0.017179183382758968,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.23828125,\n \"acc_norm_stderr,none\": 0.026679160987075002\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.388,\n \"acc_norm_stderr,none\": 0.030881038748993974\n\
\ }\n}\n```"
repo_url: https://huggingface.co/sabersaleh/Llama2-7B-SPO
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_ifeval
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_ifeval_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T03-35-43.412546.jsonl'
- config_name: sabersaleh__Llama2-7B-SPO__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T03_35_43.412546
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T03-35-43.412546.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T03-35-43.412546.jsonl'
---
# Dataset Card for Evaluation run of sabersaleh/Llama2-7B-SPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [sabersaleh/Llama2-7B-SPO](https://huggingface.co/sabersaleh/Llama2-7B-SPO)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/sabersaleh__Llama2-7B-SPO-details",
name="sabersaleh__Llama2-7B-SPO__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
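If you are unsure which configuration names are available (for example, the exact name of the aggregated "results" configuration mentioned above), a minimal sketch like the following can enumerate them before loading. It only assumes the `datasets` library and the repository id already used above; the printed names are whatever the repository actually exposes:
```python
from datasets import get_dataset_config_names, load_dataset

# List every available configuration: one per evaluated task,
# plus the aggregated results configuration.
configs = get_dataset_config_names(
    "open-llm-leaderboard/sabersaleh__Llama2-7B-SPO-details"
)
print(configs)

# Load the "latest" split of any configuration discovered above;
# the other split name is the timestamp of the corresponding run.
data = load_dataset(
    "open-llm-leaderboard/sabersaleh__Llama2-7B-SPO-details",
    name=configs[0],
    split="latest",
)
print(data)
```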
## Latest results
These are the [latest results from run 2024-12-02T03-35-43.412546](https://huggingface.co/datasets/open-llm-leaderboard/sabersaleh__Llama2-7B-SPO-details/blob/main/sabersaleh__Llama2-7B-SPO/results_2024-12-02T03-35-43.412546.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; each task's results can be found in its configuration's "latest" split):
```python
{
"all": {
"leaderboard": {
"exact_match,none": 0.014350453172205438,
"exact_match_stderr,none": 0.0032717098633934637,
"acc,none": 0.17569813829787234,
"acc_stderr,none": 0.003469571620440863,
"prompt_level_strict_acc,none": 0.10351201478743069,
"prompt_level_strict_acc_stderr,none": 0.013109035446484243,
"inst_level_loose_acc,none": 0.2158273381294964,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.20983213429256595,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.10905730129390019,
"prompt_level_loose_acc_stderr,none": 0.013413909746312102,
"acc_norm,none": 0.3309119211311454,
"acc_norm_stderr,none": 0.005128975300894813,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.334837701787884,
"acc_norm_stderr,none": 0.005901746897666035,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.388,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.18,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3150684931506849,
"acc_norm_stderr,none": 0.03857820876541411
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.22,
"acc_norm_stderr,none": 0.026251792824605793
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.4943820224719101,
"acc_norm_stderr,none": 0.03757992900475984
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.128,
"acc_norm_stderr,none": 0.021172081336336534
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.128,
"acc_norm_stderr,none": 0.021172081336336534
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.27684563758389263,
"acc_norm_stderr,none": 0.012960912249614355,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.25757575757575757,
"acc_norm_stderr,none": 0.031156269519646826
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.30036630036630035,
"acc_norm_stderr,none": 0.019636438043304946
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.25669642857142855,
"acc_norm_stderr,none": 0.020660425491724744
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.10351201478743069,
"prompt_level_strict_acc_stderr,none": 0.013109035446484243,
"inst_level_strict_acc,none": 0.20983213429256595,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.10905730129390019,
"prompt_level_loose_acc_stderr,none": 0.013413909746312102,
"inst_level_loose_acc,none": 0.2158273381294964,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.014350453172205438,
"exact_match_stderr,none": 0.0032717098633934637,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.019543973941368076,
"exact_match_stderr,none": 0.007913339243755165
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.007142857142857143,
"exact_match_stderr,none": 0.005041703051390571
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.012987012987012988,
"exact_match_stderr,none": 0.009153145279150204
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.010362694300518135,
"exact_match_stderr,none": 0.007308424386792209
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.02962962962962963,
"exact_match_stderr,none": 0.014648038602753809
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.17569813829787234,
"acc_stderr,none": 0.003469571620440863
},
"leaderboard_musr": {
"acc_norm,none": 0.3862433862433862,
"acc_norm_stderr,none": 0.017179183382758968,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.23828125,
"acc_norm_stderr,none": 0.026679160987075002
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.388,
"acc_norm_stderr,none": 0.030881038748993974
}
},
"leaderboard": {
"exact_match,none": 0.014350453172205438,
"exact_match_stderr,none": 0.0032717098633934637,
"acc,none": 0.17569813829787234,
"acc_stderr,none": 0.003469571620440863,
"prompt_level_strict_acc,none": 0.10351201478743069,
"prompt_level_strict_acc_stderr,none": 0.013109035446484243,
"inst_level_loose_acc,none": 0.2158273381294964,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.20983213429256595,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.10905730129390019,
"prompt_level_loose_acc_stderr,none": 0.013413909746312102,
"acc_norm,none": 0.3309119211311454,
"acc_norm_stderr,none": 0.005128975300894813,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.334837701787884,
"acc_norm_stderr,none": 0.005901746897666035,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.388,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.18,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3150684931506849,
"acc_norm_stderr,none": 0.03857820876541411
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.22,
"acc_norm_stderr,none": 0.026251792824605793
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.4943820224719101,
"acc_norm_stderr,none": 0.03757992900475984
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.128,
"acc_norm_stderr,none": 0.021172081336336534
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.128,
"acc_norm_stderr,none": 0.021172081336336534
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.27684563758389263,
"acc_norm_stderr,none": 0.012960912249614355,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.25757575757575757,
"acc_norm_stderr,none": 0.031156269519646826
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.30036630036630035,
"acc_norm_stderr,none": 0.019636438043304946
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.25669642857142855,
"acc_norm_stderr,none": 0.020660425491724744
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.10351201478743069,
"prompt_level_strict_acc_stderr,none": 0.013109035446484243,
"inst_level_strict_acc,none": 0.20983213429256595,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.10905730129390019,
"prompt_level_loose_acc_stderr,none": 0.013413909746312102,
"inst_level_loose_acc,none": 0.2158273381294964,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.014350453172205438,
"exact_match_stderr,none": 0.0032717098633934637,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.019543973941368076,
"exact_match_stderr,none": 0.007913339243755165
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.007142857142857143,
"exact_match_stderr,none": 0.005041703051390571
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.012987012987012988,
"exact_match_stderr,none": 0.009153145279150204
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.010362694300518135,
"exact_match_stderr,none": 0.007308424386792209
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.02962962962962963,
"exact_match_stderr,none": 0.014648038602753809
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.17569813829787234,
"acc_stderr,none": 0.003469571620440863
},
"leaderboard_musr": {
"acc_norm,none": 0.3862433862433862,
"acc_norm_stderr,none": 0.017179183382758968,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.23828125,
"acc_norm_stderr,none": 0.026679160987075002
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.388,
"acc_norm_stderr,none": 0.030881038748993974
}
}
```
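For quick inspection outside of `load_dataset`, the raw results file linked in the "Latest results" section can also be fetched directly. This is a minimal sketch using `huggingface_hub.hf_hub_download`; the filename is taken from the link above, and the exact JSON layout of the downloaded file may differ slightly from the excerpt shown here:
```python
import json
from huggingface_hub import hf_hub_download

# Fetch the raw results file for this run from the dataset repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/sabersaleh__Llama2-7B-SPO-details",
    filename="sabersaleh__Llama2-7B-SPO/results_2024-12-02T03-35-43.412546.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# The per-task metrics shown above should appear under these keys;
# the exact nesting of the raw file is not guaranteed by this card.
print(sorted(results.keys()))
```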
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/sabersaleh__Llama2-7B-SimPO-details | open-llm-leaderboard | "2024-12-02T04:13:06Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:10:11Z" | ---
pretty_name: Evaluation run of sabersaleh/Llama2-7B-SimPO
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [sabersaleh/Llama2-7B-SimPO](https://huggingface.co/sabersaleh/Llama2-7B-SimPO)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/sabersaleh__Llama2-7B-SimPO-details\"\
,\n\tname=\"sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_boolean_expressions\",\n\
\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-02T04-10-10.920662](https://huggingface.co/datasets/open-llm-leaderboard/sabersaleh__Llama2-7B-SimPO-details/blob/main/sabersaleh__Llama2-7B-SimPO/results_2024-12-02T04-10-10.920662.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_strict_acc,none\": 0.21342925659472423,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"exact_match,none\"\
: 0.0075528700906344415,\n \"exact_match_stderr,none\": 0.0023779536000938383,\n\
\ \"inst_level_loose_acc,none\": 0.22422062350119903,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"acc,none\": 0.16414561170212766,\n\
\ \"acc_stderr,none\": 0.0033769846642746843,\n \"prompt_level_strict_acc,none\"\
: 0.11829944547134935,\n \"prompt_level_strict_acc_stderr,none\": 0.013898087176706528,\n\
\ \"prompt_level_loose_acc,none\": 0.12384473197781885,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.014175305492766679,\n \"\
acc_norm,none\": 0.33947334284602415,\n \"acc_norm_stderr,none\": 0.0051545867942471195,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.3457733032459642,\n \"acc_norm_stderr,none\"\
: 0.005941764797774342,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.548,\n\
\ \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5240641711229946,\n \"acc_norm_stderr,none\"\
: 0.03661929361528698\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.356,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.544,\n\
\ \"acc_norm_stderr,none\": 0.031563285061213475\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\":\
\ 0.031621252575725574\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.084,\n \"acc_norm_stderr,none\": 0.017578738526776348\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.484,\n \
\ \"acc_norm_stderr,none\": 0.03166998503010743\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\":\
\ 0.02806876238252672\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\":\
\ 0.024760377727750513\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.4,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.284,\n\
\ \"acc_norm_stderr,none\": 0.02857695873043744\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.3356164383561644,\n \"acc_norm_stderr,none\"\
: 0.039214533254314086\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.224,\n \"acc_norm_stderr,none\":\
\ 0.026421361687347884\n },\n \"leaderboard_bbh_ruin_names\": {\n\
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.172,\n \"acc_norm_stderr,none\": 0.02391551394448624\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.22,\n \"acc_norm_stderr,none\": 0.026251792824605793\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.46629213483146065,\n\
\ \"acc_norm_stderr,none\": 0.0374968006036899\n },\n \"\
leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\":\
\ 0.031621252575725574\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.164,\n \"acc_norm_stderr,none\": 0.02346526100207671\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\":\
\ 0.021723342617052086\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.344,\n \"acc_norm_stderr,none\":\
\ 0.03010450339231644\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2709731543624161,\n\
\ \"acc_norm_stderr,none\": 0.01288270601869643,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.24242424242424243,\n \"acc_norm_stderr,none\": 0.030532892233932022\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2838827838827839,\n\
\ \"acc_norm_stderr,none\": 0.01931360450766325\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.26785714285714285,\n \"acc_norm_stderr,none\"\
: 0.02094574294163546\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.11829944547134935,\n \"prompt_level_strict_acc_stderr,none\": 0.013898087176706528,\n\
\ \"inst_level_strict_acc,none\": 0.21342925659472423,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.12384473197781885,\n \"prompt_level_loose_acc_stderr,none\": 0.014175305492766679,\n\
\ \"inst_level_loose_acc,none\": 0.22422062350119903,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.0075528700906344415,\n \"exact_match_stderr,none\"\
: 0.0023779536000938383,\n \"alias\": \" - leaderboard_math_hard\"\n\
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.016286644951140065,\n\
\ \"exact_match_stderr,none\": 0.007235847161303936\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.010362694300518135,\n \"exact_match_stderr,none\"\
: 0.007308424386792209\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.014814814814814815,\n \"exact_match_stderr,none\"\
: 0.010436494549594376\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.16414561170212766,\n\
\ \"acc_stderr,none\": 0.0033769846642746843\n },\n \"\
leaderboard_musr\": {\n \"acc_norm,none\": 0.3994708994708995,\n \
\ \"acc_norm_stderr,none\": 0.01732130388562696,\n \"alias\":\
\ \" - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\"\
: {\n \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \
\ \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\": 0.031621252575725574\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\"\
: \" - leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.24609375,\n\
\ \"acc_norm_stderr,none\": 0.026973597563786113\n },\n \
\ \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\":\
\ 0.03131803437491622\n }\n },\n \"leaderboard\": {\n \"inst_level_strict_acc,none\"\
: 0.21342925659472423,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"exact_match,none\": 0.0075528700906344415,\n \"exact_match_stderr,none\"\
: 0.0023779536000938383,\n \"inst_level_loose_acc,none\": 0.22422062350119903,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"acc,none\": 0.16414561170212766,\n\
\ \"acc_stderr,none\": 0.0033769846642746843,\n \"prompt_level_strict_acc,none\"\
: 0.11829944547134935,\n \"prompt_level_strict_acc_stderr,none\": 0.013898087176706528,\n\
\ \"prompt_level_loose_acc,none\": 0.12384473197781885,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.014175305492766679,\n \"acc_norm,none\": 0.33947334284602415,\n \
\ \"acc_norm_stderr,none\": 0.0051545867942471195,\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.3457733032459642,\n\
\ \"acc_norm_stderr,none\": 0.005941764797774342,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5240641711229946,\n \"acc_norm_stderr,none\"\
: 0.03661929361528698\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.356,\n \"acc_norm_stderr,none\": 0.0303436806571532\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\"\
: 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n },\n \"\
leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\": 0.031621252575725574\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.084,\n \"acc_norm_stderr,none\": 0.017578738526776348\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.4,\n \"acc_norm_stderr,none\": 0.031046021028253316\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.3356164383561644,\n\
\ \"acc_norm_stderr,none\": 0.039214533254314086\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.224,\n \"acc_norm_stderr,none\": 0.026421361687347884\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.172,\n \"acc_norm_stderr,none\": 0.02391551394448624\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.22,\n \"acc_norm_stderr,none\": 0.026251792824605793\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.46629213483146065,\n \"acc_norm_stderr,none\"\
: 0.0374968006036899\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.532,\n \"acc_norm_stderr,none\": 0.031621252575725574\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.164,\n \"acc_norm_stderr,none\": 0.02346526100207671\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.344,\n \"acc_norm_stderr,none\": 0.03010450339231644\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2709731543624161,\n\
\ \"acc_norm_stderr,none\": 0.01288270601869643,\n \"alias\": \" -\
\ leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\"\
: \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.24242424242424243,\n\
\ \"acc_norm_stderr,none\": 0.030532892233932022\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2838827838827839,\n \"acc_norm_stderr,none\": 0.01931360450766325\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.26785714285714285,\n \"acc_norm_stderr,none\"\
: 0.02094574294163546\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.11829944547134935,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.013898087176706528,\n \
\ \"inst_level_strict_acc,none\": 0.21342925659472423,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.12384473197781885,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.014175305492766679,\n \"inst_level_loose_acc,none\"\
: 0.22422062350119903,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n\
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.0075528700906344415,\n\
\ \"exact_match_stderr,none\": 0.0023779536000938383,\n \"alias\"\
: \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.016286644951140065,\n \"exact_match_stderr,none\": 0.007235847161303936\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\"\
: 0.010362694300518135,\n \"exact_match_stderr,none\": 0.007308424386792209\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.014814814814814815,\n\
\ \"exact_match_stderr,none\": 0.010436494549594376\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.16414561170212766,\n\
\ \"acc_stderr,none\": 0.0033769846642746843\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.3994708994708995,\n \"acc_norm_stderr,none\"\
: 0.01732130388562696,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.532,\n \"acc_norm_stderr,none\": 0.031621252575725574\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.24609375,\n\
\ \"acc_norm_stderr,none\": 0.026973597563786113\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n }\n}\n```"
repo_url: https://huggingface.co/sabersaleh/Llama2-7B-SimPO
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_ifeval
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-10-10.920662.jsonl'
- config_name: sabersaleh__Llama2-7B-SimPO__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T04_10_10.920662
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-10-10.920662.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-10-10.920662.jsonl'
---
# Dataset Card for Evaluation run of sabersaleh/Llama2-7B-SimPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [sabersaleh/Llama2-7B-SimPO](https://huggingface.co/sabersaleh/Llama2-7B-SimPO)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/sabersaleh__Llama2-7B-SimPO-details",
name="sabersaleh__Llama2-7B-SimPO__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
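If you want to see every configuration available in this repository (for example, to locate the aggregated "results" configuration mentioned above), the `datasets` library can enumerate them. This is a minimal sketch, not part of the original card, and only assumes the repository name shown above:
```python
from datasets import get_dataset_config_names

# Enumerate all configurations of this details repository:
# one per evaluated task, plus the aggregated "results" configuration
# described above.
configs = get_dataset_config_names(
    "open-llm-leaderboard/sabersaleh__Llama2-7B-SimPO-details"
)
print(configs)

# Any of these names can then be passed to load_dataset() exactly as in
# the example above, with split="latest" for the most recent run.
```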
## Latest results
These are the [latest results from run 2024-12-02T04-10-10.920662](https://huggingface.co/datasets/open-llm-leaderboard/sabersaleh__Llama2-7B-SimPO-details/blob/main/sabersaleh__Llama2-7B-SimPO/results_2024-12-02T04-10-10.920662.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split of the corresponding eval):
```python
{
"all": {
"leaderboard": {
"inst_level_strict_acc,none": 0.21342925659472423,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.0075528700906344415,
"exact_match_stderr,none": 0.0023779536000938383,
"inst_level_loose_acc,none": 0.22422062350119903,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.16414561170212766,
"acc_stderr,none": 0.0033769846642746843,
"prompt_level_strict_acc,none": 0.11829944547134935,
"prompt_level_strict_acc_stderr,none": 0.013898087176706528,
"prompt_level_loose_acc,none": 0.12384473197781885,
"prompt_level_loose_acc_stderr,none": 0.014175305492766679,
"acc_norm,none": 0.33947334284602415,
"acc_norm_stderr,none": 0.0051545867942471195,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.3457733032459642,
"acc_norm_stderr,none": 0.005941764797774342,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5240641711229946,
"acc_norm_stderr,none": 0.03661929361528698
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.356,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.084,
"acc_norm_stderr,none": 0.017578738526776348
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.4,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3356164383561644,
"acc_norm_stderr,none": 0.039214533254314086
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.224,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.172,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.22,
"acc_norm_stderr,none": 0.026251792824605793
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.46629213483146065,
"acc_norm_stderr,none": 0.0374968006036899
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.164,
"acc_norm_stderr,none": 0.02346526100207671
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.344,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2709731543624161,
"acc_norm_stderr,none": 0.01288270601869643,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.24242424242424243,
"acc_norm_stderr,none": 0.030532892233932022
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2838827838827839,
"acc_norm_stderr,none": 0.01931360450766325
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.26785714285714285,
"acc_norm_stderr,none": 0.02094574294163546
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.11829944547134935,
"prompt_level_strict_acc_stderr,none": 0.013898087176706528,
"inst_level_strict_acc,none": 0.21342925659472423,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.12384473197781885,
"prompt_level_loose_acc_stderr,none": 0.014175305492766679,
"inst_level_loose_acc,none": 0.22422062350119903,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0075528700906344415,
"exact_match_stderr,none": 0.0023779536000938383,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.016286644951140065,
"exact_match_stderr,none": 0.007235847161303936
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.010362694300518135,
"exact_match_stderr,none": 0.007308424386792209
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.014814814814814815,
"exact_match_stderr,none": 0.010436494549594376
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.16414561170212766,
"acc_stderr,none": 0.0033769846642746843
},
"leaderboard_musr": {
"acc_norm,none": 0.3994708994708995,
"acc_norm_stderr,none": 0.01732130388562696,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.24609375,
"acc_norm_stderr,none": 0.026973597563786113
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
}
},
"leaderboard": {
"inst_level_strict_acc,none": 0.21342925659472423,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.0075528700906344415,
"exact_match_stderr,none": 0.0023779536000938383,
"inst_level_loose_acc,none": 0.22422062350119903,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.16414561170212766,
"acc_stderr,none": 0.0033769846642746843,
"prompt_level_strict_acc,none": 0.11829944547134935,
"prompt_level_strict_acc_stderr,none": 0.013898087176706528,
"prompt_level_loose_acc,none": 0.12384473197781885,
"prompt_level_loose_acc_stderr,none": 0.014175305492766679,
"acc_norm,none": 0.33947334284602415,
"acc_norm_stderr,none": 0.0051545867942471195,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.3457733032459642,
"acc_norm_stderr,none": 0.005941764797774342,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5240641711229946,
"acc_norm_stderr,none": 0.03661929361528698
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.356,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.084,
"acc_norm_stderr,none": 0.017578738526776348
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.4,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3356164383561644,
"acc_norm_stderr,none": 0.039214533254314086
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.224,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.172,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.22,
"acc_norm_stderr,none": 0.026251792824605793
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.46629213483146065,
"acc_norm_stderr,none": 0.0374968006036899
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.164,
"acc_norm_stderr,none": 0.02346526100207671
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.344,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2709731543624161,
"acc_norm_stderr,none": 0.01288270601869643,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.24242424242424243,
"acc_norm_stderr,none": 0.030532892233932022
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2838827838827839,
"acc_norm_stderr,none": 0.01931360450766325
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.26785714285714285,
"acc_norm_stderr,none": 0.02094574294163546
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.11829944547134935,
"prompt_level_strict_acc_stderr,none": 0.013898087176706528,
"inst_level_strict_acc,none": 0.21342925659472423,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.12384473197781885,
"prompt_level_loose_acc_stderr,none": 0.014175305492766679,
"inst_level_loose_acc,none": 0.22422062350119903,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0075528700906344415,
"exact_match_stderr,none": 0.0023779536000938383,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.016286644951140065,
"exact_match_stderr,none": 0.007235847161303936
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.010362694300518135,
"exact_match_stderr,none": 0.007308424386792209
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.014814814814814815,
"exact_match_stderr,none": 0.010436494549594376
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.16414561170212766,
"acc_stderr,none": 0.0033769846642746843
},
"leaderboard_musr": {
"acc_norm,none": 0.3994708994708995,
"acc_norm_stderr,none": 0.01732130388562696,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.532,
"acc_norm_stderr,none": 0.031621252575725574
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.24609375,
"acc_norm_stderr,none": 0.026973597563786113
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
haukur/enwik9 | haukur | "2024-12-02T04:18:39Z" | 3 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:18:15Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1039441079
num_examples: 13147026
download_size: 550864096
dataset_size: 1039441079
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen6_run1_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-12-02T04:18:57Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:18:54Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 584256
num_examples: 1000
download_size: 351933
dataset_size: 584256
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gswamy/pythia-1.4B-tldr-vllm-pair-iter-1 | gswamy | "2024-12-02T20:48:45Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:23:48Z" | ---
dataset_info:
features:
- name: info
struct:
- name: id
dtype: string
- name: post
dtype: string
- name: title
dtype: string
- name: subreddit
dtype: string
- name: site
dtype: string
- name: article
dtype: string
- name: summaries
list:
- name: text
dtype: string
- name: policy
dtype: string
- name: note
dtype: string
- name: choice
dtype: int32
- name: worker
dtype: string
- name: batch
dtype: string
- name: split
dtype: string
- name: extra
struct:
- name: confidence
dtype: int32
- name: query_token
sequence: int64
- name: query
dtype: string
- name: response0
dtype: string
- name: response0_token
sequence: int64
- name: response0_token_len
dtype: int64
- name: response0_policy
dtype: string
- name: query_response0
dtype: string
- name: query_response0_token
sequence: int64
- name: query_response0_token_len
dtype: int64
- name: query_response0_token_response_label
sequence: int64
- name: response1
dtype: string
- name: response1_token
sequence: int64
- name: response1_token_len
dtype: int64
- name: response1_policy
dtype: string
- name: query_response1
dtype: string
- name: query_response1_token
sequence: int64
- name: query_response1_token_len
dtype: int64
- name: query_response1_token_response_label
sequence: int64
- name: query_token_len
dtype: int64
- name: policies
dtype: string
- name: iter_1_best_query_response
sequence: int64
- name: iter_1_worst_query_response
sequence: int64
- name: iter_1_best_mask
sequence: int64
- name: iter_1_worst_mask
sequence: int64
- name: iter_1_best_reward
dtype: float64
- name: iter_1_worst_reward
dtype: float64
splits:
- name: train
num_bytes: 4841788931
num_examples: 92858
download_size: 180270073
dataset_size: 4841788931
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/SultanR__SmolTulu-1.7b-it-v0-details | open-llm-leaderboard | "2024-12-02T04:27:57Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:24:30Z" | ---
pretty_name: Evaluation run of SultanR/SmolTulu-1.7b-it-v0
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [SultanR/SmolTulu-1.7b-it-v0](https://huggingface.co/SultanR/SmolTulu-1.7b-it-v0)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/SultanR__SmolTulu-1.7b-it-v0-details\"\
,\n\tname=\"SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-02T04-24-29.671146](https://huggingface.co/datasets/open-llm-leaderboard/SultanR__SmolTulu-1.7b-it-v0-details/blob/main/SultanR__SmolTulu-1.7b-it-v0/results_2024-12-02T04-24-29.671146.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"exact_match,none\": 0.026435045317220542,\n \"exact_match_stderr,none\"\
: 0.004359122520460206,\n \"prompt_level_loose_acc,none\": 0.6358595194085028,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n \
\ \"acc,none\": 0.17104388297872342,\n \"acc_stderr,none\": 0.0034329595047432816,\n\
\ \"acc_norm,none\": 0.3514074458425217,\n \"acc_norm_stderr,none\"\
: 0.005165884234442981,\n \"inst_level_loose_acc,none\": 0.7386091127098321,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.7074340527577938,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"prompt_level_strict_acc,none\": 0.600739371534196,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.021075331332701255,\n \"\
alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \
\ \"acc_norm,none\": 0.36816524908869985,\n \"acc_norm_stderr,none\"\
: 0.005979183471724429,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.572,\n\
\ \"acc_norm_stderr,none\": 0.031355968923772626\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5668449197860963,\n \"acc_norm_stderr,none\"\
: 0.03633267411102591\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.448,\n\
\ \"acc_norm_stderr,none\": 0.03151438761115349\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\":\
\ 0.0316851985511992\n },\n \"leaderboard_bbh_geometric_shapes\":\
\ {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.512,\n \
\ \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.244,\n \"acc_norm_stderr,none\":\
\ 0.02721799546455311\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.216,\n \"acc_norm_stderr,none\":\
\ 0.02607865766373279\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.364,\n \"acc_norm_stderr,none\":\
\ 0.030491555220405475\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.576,\n \
\ \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.2328767123287671,\n \"acc_norm_stderr,none\": 0.03510036341139227\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.28,\n \"acc_norm_stderr,none\": 0.02845414827783231\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.108,\n \
\ \"acc_norm_stderr,none\": 0.019669559381568776\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.651685393258427,\n \"acc_norm_stderr,none\"\
: 0.035811144737534356\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.132,\n\
\ \"acc_norm_stderr,none\": 0.021450980824038166\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\":\
\ 0.029658294924545567\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.26929530201342283,\n\
\ \"acc_norm_stderr,none\": 0.01285318594753383,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.23737373737373738,\n \"acc_norm_stderr,none\": 0.030313710538198924\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.26373626373626374,\n\
\ \"acc_norm_stderr,none\": 0.018875713580372433\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.29017857142857145,\n \"acc_norm_stderr,none\"\
: 0.021466115440571226\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.600739371534196,\n \"prompt_level_strict_acc_stderr,none\": 0.021075331332701255,\n\
\ \"inst_level_strict_acc,none\": 0.7074340527577938,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.6358595194085028,\n \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n\
\ \"inst_level_loose_acc,none\": 0.7386091127098321,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.026435045317220542,\n \"exact_match_stderr,none\"\
: 0.004359122520460206,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.06514657980456026,\n\
\ \"exact_match_stderr,none\": 0.014107720843558174\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.0035714285714285713,\n \"exact_match_stderr,none\"\
: 0.0035714285714285713\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.012987012987012988,\n \"exact_match_stderr,none\"\
: 0.009153145279150204\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.05181347150259067,\n \"exact_match_stderr,none\"\
: 0.015996229320244134\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.007407407407407408,\n \"exact_match_stderr,none\"\
: 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.17104388297872342,\n\
\ \"acc_stderr,none\": 0.003432959504743281\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.3531746031746032,\n \"acc_norm_stderr,none\"\
: 0.01697485324642576,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.5,\n\
\ \"acc_norm_stderr,none\": 0.031686212526223896\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.24609375,\n \"acc_norm_stderr,none\"\
: 0.026973597563786113\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ }\n },\n \"leaderboard\": {\n \"exact_match,none\": 0.026435045317220542,\n\
\ \"exact_match_stderr,none\": 0.004359122520460206,\n \"prompt_level_loose_acc,none\"\
: 0.6358595194085028,\n \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n\
\ \"acc,none\": 0.17104388297872342,\n \"acc_stderr,none\": 0.0034329595047432816,\n\
\ \"acc_norm,none\": 0.3514074458425217,\n \"acc_norm_stderr,none\"\
: 0.005165884234442981,\n \"inst_level_loose_acc,none\": 0.7386091127098321,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.7074340527577938,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"prompt_level_strict_acc,none\": 0.600739371534196,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.021075331332701255,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.36816524908869985,\n \"acc_norm_stderr,none\"\
: 0.005979183471724429,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5668449197860963,\n \"acc_norm_stderr,none\"\
: 0.03633267411102591\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.244,\n \"acc_norm_stderr,none\": 0.02721799546455311\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.216,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.364,\n \"acc_norm_stderr,none\": 0.030491555220405475\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.2328767123287671,\n\
\ \"acc_norm_stderr,none\": 0.03510036341139227\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.28,\n \"acc_norm_stderr,none\": 0.02845414827783231\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.108,\n \"acc_norm_stderr,none\": 0.019669559381568776\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.651685393258427,\n \"acc_norm_stderr,none\"\
: 0.035811144737534356\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.132,\n \"acc_norm_stderr,none\": 0.021450980824038166\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.26929530201342283,\n\
\ \"acc_norm_stderr,none\": 0.01285318594753383,\n \"alias\": \" -\
\ leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\"\
: \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.23737373737373738,\n\
\ \"acc_norm_stderr,none\": 0.030313710538198924\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.26373626373626374,\n \"acc_norm_stderr,none\": 0.018875713580372433\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.29017857142857145,\n \"acc_norm_stderr,none\"\
: 0.021466115440571226\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.600739371534196,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021075331332701255,\n \
\ \"inst_level_strict_acc,none\": 0.7074340527577938,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.6358595194085028,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n \"inst_level_loose_acc,none\"\
: 0.7386091127098321,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.026435045317220542,\n\
\ \"exact_match_stderr,none\": 0.004359122520460206,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.06514657980456026,\n \"exact_match_stderr,none\": 0.014107720843558174\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.0035714285714285713,\n \"exact_match_stderr,none\": 0.0035714285714285713\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.012987012987012988,\n \"exact_match_stderr,none\"\
: 0.009153145279150204\n },\n \"leaderboard_math_prealgebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\"\
: 0.05181347150259067,\n \"exact_match_stderr,none\": 0.015996229320244134\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.007407407407407408,\n\
\ \"exact_match_stderr,none\": 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.17104388297872342,\n\
\ \"acc_stderr,none\": 0.003432959504743281\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.3531746031746032,\n \"acc_norm_stderr,none\"\
: 0.01697485324642576,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.5,\n \"acc_norm_stderr,none\": 0.031686212526223896\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.24609375,\n\
\ \"acc_norm_stderr,none\": 0.026973597563786113\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n }\n}\n```"
repo_url: https://huggingface.co/SultanR/SmolTulu-1.7b-it-v0
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_ifeval
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-24-29.671146.jsonl'
- config_name: SultanR__SmolTulu-1.7b-it-v0__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T04_24_29.671146
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-24-29.671146.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-24-29.671146.jsonl'
---
# Dataset Card for Evaluation run of SultanR/SmolTulu-1.7b-it-v0
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [SultanR/SmolTulu-1.7b-it-v0](https://huggingface.co/SultanR/SmolTulu-1.7b-it-v0)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/SultanR__SmolTulu-1.7b-it-v0-details",
name="SultanR__SmolTulu-1.7b-it-v0__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
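If you are not sure which of the 38 configurations you need, a minimal sketch (assuming a recent version of the `datasets` library) is to enumerate the available configuration names first and then load the "latest" split of the one you want, as in the example above:
```python
from datasets import get_dataset_config_names, load_dataset

# List every per-task configuration exposed by this details repository.
configs = get_dataset_config_names(
    "open-llm-leaderboard/SultanR__SmolTulu-1.7b-it-v0-details"
)
print(configs)

# Load the "latest" split of one configuration picked from the list above.
data = load_dataset(
    "open-llm-leaderboard/SultanR__SmolTulu-1.7b-it-v0-details",
    name=configs[0],
    split="latest",
)
print(data)
```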
## Latest results
These are the [latest results from run 2024-12-02T04-24-29.671146](https://huggingface.co/datasets/open-llm-leaderboard/SultanR__SmolTulu-1.7b-it-v0-details/blob/main/SultanR__SmolTulu-1.7b-it-v0/results_2024-12-02T04-24-29.671146.json) (note that the repository might also contain results for other tasks if successive evals didn't cover the same tasks; you can find each of them in the aggregated results and in the "latest" split of the corresponding configuration):
```python
{
"all": {
"leaderboard": {
"exact_match,none": 0.026435045317220542,
"exact_match_stderr,none": 0.004359122520460206,
"prompt_level_loose_acc,none": 0.6358595194085028,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"acc,none": 0.17104388297872342,
"acc_stderr,none": 0.0034329595047432816,
"acc_norm,none": 0.3514074458425217,
"acc_norm_stderr,none": 0.005165884234442981,
"inst_level_loose_acc,none": 0.7386091127098321,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.7074340527577938,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.600739371534196,
"prompt_level_strict_acc_stderr,none": 0.021075331332701255,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.36816524908869985,
"acc_norm_stderr,none": 0.005979183471724429,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5668449197860963,
"acc_norm_stderr,none": 0.03633267411102591
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.216,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2328767123287671,
"acc_norm_stderr,none": 0.03510036341139227
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.28,
"acc_norm_stderr,none": 0.02845414827783231
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.108,
"acc_norm_stderr,none": 0.019669559381568776
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.651685393258427,
"acc_norm_stderr,none": 0.035811144737534356
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.132,
"acc_norm_stderr,none": 0.021450980824038166
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_gpqa": {
"acc_norm,none": 0.26929530201342283,
"acc_norm_stderr,none": 0.01285318594753383,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.23737373737373738,
"acc_norm_stderr,none": 0.030313710538198924
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.26373626373626374,
"acc_norm_stderr,none": 0.018875713580372433
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.29017857142857145,
"acc_norm_stderr,none": 0.021466115440571226
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.600739371534196,
"prompt_level_strict_acc_stderr,none": 0.021075331332701255,
"inst_level_strict_acc,none": 0.7074340527577938,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.6358595194085028,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"inst_level_loose_acc,none": 0.7386091127098321,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.026435045317220542,
"exact_match_stderr,none": 0.004359122520460206,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.06514657980456026,
"exact_match_stderr,none": 0.014107720843558174
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.012987012987012988,
"exact_match_stderr,none": 0.009153145279150204
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.05181347150259067,
"exact_match_stderr,none": 0.015996229320244134
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.17104388297872342,
"acc_stderr,none": 0.003432959504743281
},
"leaderboard_musr": {
"acc_norm,none": 0.3531746031746032,
"acc_norm_stderr,none": 0.01697485324642576,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.24609375,
"acc_norm_stderr,none": 0.026973597563786113
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
}
},
"leaderboard": {
"exact_match,none": 0.026435045317220542,
"exact_match_stderr,none": 0.004359122520460206,
"prompt_level_loose_acc,none": 0.6358595194085028,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"acc,none": 0.17104388297872342,
"acc_stderr,none": 0.0034329595047432816,
"acc_norm,none": 0.3514074458425217,
"acc_norm_stderr,none": 0.005165884234442981,
"inst_level_loose_acc,none": 0.7386091127098321,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.7074340527577938,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.600739371534196,
"prompt_level_strict_acc_stderr,none": 0.021075331332701255,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.36816524908869985,
"acc_norm_stderr,none": 0.005979183471724429,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5668449197860963,
"acc_norm_stderr,none": 0.03633267411102591
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.216,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2328767123287671,
"acc_norm_stderr,none": 0.03510036341139227
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.28,
"acc_norm_stderr,none": 0.02845414827783231
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.108,
"acc_norm_stderr,none": 0.019669559381568776
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.651685393258427,
"acc_norm_stderr,none": 0.035811144737534356
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.132,
"acc_norm_stderr,none": 0.021450980824038166
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_gpqa": {
"acc_norm,none": 0.26929530201342283,
"acc_norm_stderr,none": 0.01285318594753383,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.23737373737373738,
"acc_norm_stderr,none": 0.030313710538198924
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.26373626373626374,
"acc_norm_stderr,none": 0.018875713580372433
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.29017857142857145,
"acc_norm_stderr,none": 0.021466115440571226
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.600739371534196,
"prompt_level_strict_acc_stderr,none": 0.021075331332701255,
"inst_level_strict_acc,none": 0.7074340527577938,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.6358595194085028,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"inst_level_loose_acc,none": 0.7386091127098321,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.026435045317220542,
"exact_match_stderr,none": 0.004359122520460206,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.06514657980456026,
"exact_match_stderr,none": 0.014107720843558174
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.012987012987012988,
"exact_match_stderr,none": 0.009153145279150204
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.05181347150259067,
"exact_match_stderr,none": 0.015996229320244134
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.17104388297872342,
"acc_stderr,none": 0.003432959504743281
},
"leaderboard_musr": {
"acc_norm,none": 0.3531746031746032,
"acc_norm_stderr,none": 0.01697485324642576,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.24609375,
"acc_norm_stderr,none": 0.026973597563786113
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
}
}
```
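If you only need the aggregated numbers shown above rather than the per-sample details, one hedged option (assuming the results file keeps the path linked in the "Latest results" section) is to download the JSON directly with `huggingface_hub` and inspect it:
```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results file referenced above from the dataset repo.
# The filename is taken from the "Latest results" link and is assumed unchanged.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/SultanR__SmolTulu-1.7b-it-v0-details",
    filename="SultanR__SmolTulu-1.7b-it-v0/results_2024-12-02T04-24-29.671146.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Inspect the top-level layout before drilling into individual metrics,
# since the exact nesting of the results file may differ from the excerpt above.
print(list(results.keys()))
```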
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/synergetic__FrankenQwen2.5-14B-details | open-llm-leaderboard | "2024-12-02T04:28:28Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:24:32Z" | ---
pretty_name: Evaluation run of synergetic/FrankenQwen2.5-14B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [synergetic/FrankenQwen2.5-14B](https://huggingface.co/synergetic/FrankenQwen2.5-14B)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/synergetic__FrankenQwen2.5-14B-details\"\
,\n\tname=\"synergetic__FrankenQwen2.5-14B__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-02T04-24-32.113248](https://huggingface.co/datasets/open-llm-leaderboard/synergetic__FrankenQwen2.5-14B-details/blob/main/synergetic__FrankenQwen2.5-14B/results_2024-12-02T04-24-32.113248.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_loose_acc,none\": 0.2529976019184652,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.2482014388489209,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0,\n \"prompt_level_loose_acc,none\": 0.12939001848428835,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.014443263302194753,\n \
\ \"prompt_level_strict_acc,none\": 0.1256931608133087,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.014265627567173898,\n \"acc_norm,none\": 0.5296406797249967,\n \
\ \"acc_norm_stderr,none\": 0.005187639459014372,\n \"acc,none\"\
: 0.43816489361702127,\n \"acc_stderr,none\": 0.004523476746563679,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.6024995660475612,\n \"acc_norm_stderr,none\"\
: 0.005986317228384322,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.892,\n\
\ \"acc_norm_stderr,none\": 0.019669559381568776\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6577540106951871,\n \"acc_norm_stderr,none\"\
: 0.03478920176906822\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.516,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.56,\n\
\ \"acc_norm_stderr,none\": 0.03145724452223569\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\"\
: \" - leaderboard_bbh_geometric_shapes\",\n \"acc_norm,none\": 0.524,\n\
\ \"acc_norm_stderr,none\": 0.03164968895968774\n },\n \
\ \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\":\
\ 0.02496069198917196\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\":\
\ 0.03151438761115348\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\":\
\ 0.0316851985511992\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.92,\n \"acc_norm_stderr,none\": 0.017192507941463025\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.72,\n \"acc_norm_stderr,none\": 0.02845414827783231\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.792,\n \"acc_norm_stderr,none\":\
\ 0.025721398901416368\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.6506849315068494,\n \"acc_norm_stderr,none\": 0.039592236387765004\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.676,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.756,\n \
\ \"acc_norm_stderr,none\": 0.02721799546455311\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\":\
\ 0.031355968923772626\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6741573033707865,\n \"acc_norm_stderr,none\": 0.03522881089181037\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.668,\n\
\ \"acc_norm_stderr,none\": 0.029844039047465857\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\":\
\ 0.028928939388379694\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2701342281879195,\n\
\ \"acc_norm_stderr,none\": 0.012870216178521153,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.26262626262626265,\n \"acc_norm_stderr,none\": 0.031353050095330834\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2857142857142857,\n\
\ \"acc_norm_stderr,none\": 0.019351013185102753\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.2544642857142857,\n \"acc_norm_stderr,none\"\
: 0.02060126475832284\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.1256931608133087,\n \"prompt_level_strict_acc_stderr,none\": 0.014265627567173898,\n\
\ \"inst_level_strict_acc,none\": 0.24820143884892087,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.12939001848428835,\n \"prompt_level_loose_acc_stderr,none\": 0.014443263302194753,\n\
\ \"inst_level_loose_acc,none\": 0.2529976019184652,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0,\n \"alias\": \" - leaderboard_math_hard\"\n },\n \
\ \"leaderboard_math_algebra_hard\": {\n \"alias\": \" - leaderboard_math_algebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_counting_and_prob_hard\": {\n \
\ \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\"\
: \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\":\
\ 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\"\
: 0.43816489361702127,\n \"acc_stderr,none\": 0.004523476746563679\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.3835978835978836,\n\
\ \"acc_norm_stderr,none\": 0.01747918490333201,\n \"alias\"\
: \" - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\"\
: {\n \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \
\ \"acc_norm,none\": 0.492,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\"\
: \" - leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.30078125,\n\
\ \"acc_norm_stderr,none\": 0.02871850463421181\n },\n \
\ \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ }\n },\n \"leaderboard\": {\n \"inst_level_loose_acc,none\"\
: 0.2529976019184652,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"inst_level_strict_acc,none\": 0.2482014388489209,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0,\n \"prompt_level_loose_acc,none\": 0.12939001848428835,\n \"\
prompt_level_loose_acc_stderr,none\": 0.014443263302194753,\n \"prompt_level_strict_acc,none\"\
: 0.1256931608133087,\n \"prompt_level_strict_acc_stderr,none\": 0.014265627567173898,\n\
\ \"acc_norm,none\": 0.5296406797249967,\n \"acc_norm_stderr,none\"\
: 0.005187639459014372,\n \"acc,none\": 0.43816489361702127,\n \"\
acc_stderr,none\": 0.004523476746563679,\n \"alias\": \"leaderboard\"\n \
\ },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.6024995660475612,\n\
\ \"acc_norm_stderr,none\": 0.005986317228384322,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.892,\n \"acc_norm_stderr,none\": 0.019669559381568776\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6577540106951871,\n \"acc_norm_stderr,none\"\
: 0.03478920176906822\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.516,\n \"acc_norm_stderr,none\": 0.03166998503010743\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.524,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.92,\n \"acc_norm_stderr,none\": 0.017192507941463025\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.72,\n \"acc_norm_stderr,none\": 0.02845414827783231\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.792,\n \"acc_norm_stderr,none\": 0.025721398901416368\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.6506849315068494,\n\
\ \"acc_norm_stderr,none\": 0.039592236387765004\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.676,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.756,\n \"acc_norm_stderr,none\": 0.02721799546455311\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6741573033707865,\n \"acc_norm_stderr,none\"\
: 0.03522881089181037\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.668,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\": 0.028928939388379694\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2701342281879195,\n\
\ \"acc_norm_stderr,none\": 0.012870216178521153,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.26262626262626265,\n\
\ \"acc_norm_stderr,none\": 0.031353050095330834\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2857142857142857,\n \"acc_norm_stderr,none\": 0.019351013185102753\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.2544642857142857,\n \"acc_norm_stderr,none\"\
: 0.02060126475832284\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.1256931608133087,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.014265627567173898,\n \
\ \"inst_level_strict_acc,none\": 0.24820143884892087,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.12939001848428835,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.014443263302194753,\n \"inst_level_loose_acc,none\"\
: 0.2529976019184652,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.0,\n \
\ \"exact_match_stderr,none\": 0.0,\n \"alias\": \" - leaderboard_math_hard\"\
\n },\n \"leaderboard_math_algebra_hard\": {\n \"alias\": \" - leaderboard_math_algebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.0,\n \
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\": {\n\
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.43816489361702127,\n\
\ \"acc_stderr,none\": 0.004523476746563679\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.3835978835978836,\n \"acc_norm_stderr,none\"\
: 0.01747918490333201,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.492,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.30078125,\n\
\ \"acc_norm_stderr,none\": 0.02871850463421181\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n }\n}\n```"
repo_url: https://huggingface.co/synergetic/FrankenQwen2.5-14B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_ifeval
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-24-32.113248.jsonl'
- config_name: synergetic__FrankenQwen2.5-14B__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T04_24_32.113248
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-24-32.113248.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-24-32.113248.jsonl'
---
# Dataset Card for Evaluation run of synergetic/FrankenQwen2.5-14B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [synergetic/FrankenQwen2.5-14B](https://huggingface.co/synergetic/FrankenQwen2.5-14B)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/synergetic__FrankenQwen2.5-14B-details",
name="synergetic__FrankenQwen2.5-14B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
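If you are not sure which configurations are available in this repository, you can list them first and then pick one; the per-task config names follow the pattern used in the example above. This is a minimal sketch assuming only that the `datasets` library is installed:
```python
from datasets import get_dataset_config_names, load_dataset

# List every available configuration (one per evaluated task).
configs = get_dataset_config_names(
    "open-llm-leaderboard/synergetic__FrankenQwen2.5-14B-details"
)
print(configs)

# Load the "latest" split of whichever task you are interested in.
data = load_dataset(
    "open-llm-leaderboard/synergetic__FrankenQwen2.5-14B-details",
    name=configs[0],
    split="latest",
)
print(data)
```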
## Latest results
These are the [latest results from run 2024-12-02T04-24-32.113248](https://huggingface.co/datasets/open-llm-leaderboard/synergetic__FrankenQwen2.5-14B-details/blob/main/synergetic__FrankenQwen2.5-14B/results_2024-12-02T04-24-32.113248.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its per-eval results and "latest" splits):
```python
{
"all": {
"leaderboard": {
"inst_level_loose_acc,none": 0.2529976019184652,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.2482014388489209,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"prompt_level_loose_acc,none": 0.12939001848428835,
"prompt_level_loose_acc_stderr,none": 0.014443263302194753,
"prompt_level_strict_acc,none": 0.1256931608133087,
"prompt_level_strict_acc_stderr,none": 0.014265627567173898,
"acc_norm,none": 0.5296406797249967,
"acc_norm_stderr,none": 0.005187639459014372,
"acc,none": 0.43816489361702127,
"acc_stderr,none": 0.004523476746563679,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6024995660475612,
"acc_norm_stderr,none": 0.005986317228384322,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.892,
"acc_norm_stderr,none": 0.019669559381568776
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6577540106951871,
"acc_norm_stderr,none": 0.03478920176906822
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.516,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.92,
"acc_norm_stderr,none": 0.017192507941463025
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.72,
"acc_norm_stderr,none": 0.02845414827783231
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.792,
"acc_norm_stderr,none": 0.025721398901416368
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6506849315068494,
"acc_norm_stderr,none": 0.039592236387765004
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.756,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6741573033707865,
"acc_norm_stderr,none": 0.03522881089181037
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.26,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2701342281879195,
"acc_norm_stderr,none": 0.012870216178521153,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.26262626262626265,
"acc_norm_stderr,none": 0.031353050095330834
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2857142857142857,
"acc_norm_stderr,none": 0.019351013185102753
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.2544642857142857,
"acc_norm_stderr,none": 0.02060126475832284
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.1256931608133087,
"prompt_level_strict_acc_stderr,none": 0.014265627567173898,
"inst_level_strict_acc,none": 0.24820143884892087,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.12939001848428835,
"prompt_level_loose_acc_stderr,none": 0.014443263302194753,
"inst_level_loose_acc,none": 0.2529976019184652,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43816489361702127,
"acc_stderr,none": 0.004523476746563679
},
"leaderboard_musr": {
"acc_norm,none": 0.3835978835978836,
"acc_norm_stderr,none": 0.01747918490333201,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.492,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.30078125,
"acc_norm_stderr,none": 0.02871850463421181
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
}
},
"leaderboard": {
"inst_level_loose_acc,none": 0.2529976019184652,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.2482014388489209,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"prompt_level_loose_acc,none": 0.12939001848428835,
"prompt_level_loose_acc_stderr,none": 0.014443263302194753,
"prompt_level_strict_acc,none": 0.1256931608133087,
"prompt_level_strict_acc_stderr,none": 0.014265627567173898,
"acc_norm,none": 0.5296406797249967,
"acc_norm_stderr,none": 0.005187639459014372,
"acc,none": 0.43816489361702127,
"acc_stderr,none": 0.004523476746563679,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6024995660475612,
"acc_norm_stderr,none": 0.005986317228384322,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.892,
"acc_norm_stderr,none": 0.019669559381568776
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6577540106951871,
"acc_norm_stderr,none": 0.03478920176906822
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.516,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.92,
"acc_norm_stderr,none": 0.017192507941463025
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.72,
"acc_norm_stderr,none": 0.02845414827783231
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.792,
"acc_norm_stderr,none": 0.025721398901416368
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6506849315068494,
"acc_norm_stderr,none": 0.039592236387765004
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.756,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6741573033707865,
"acc_norm_stderr,none": 0.03522881089181037
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.26,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2701342281879195,
"acc_norm_stderr,none": 0.012870216178521153,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.26262626262626265,
"acc_norm_stderr,none": 0.031353050095330834
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2857142857142857,
"acc_norm_stderr,none": 0.019351013185102753
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.2544642857142857,
"acc_norm_stderr,none": 0.02060126475832284
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.1256931608133087,
"prompt_level_strict_acc_stderr,none": 0.014265627567173898,
"inst_level_strict_acc,none": 0.24820143884892087,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.12939001848428835,
"prompt_level_loose_acc_stderr,none": 0.014443263302194753,
"inst_level_loose_acc,none": 0.2529976019184652,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43816489361702127,
"acc_stderr,none": 0.004523476746563679
},
"leaderboard_musr": {
"acc_norm,none": 0.3835978835978836,
"acc_norm_stderr,none": 0.01747918490333201,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.492,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.30078125,
"acc_norm_stderr,none": 0.02871850463421181
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
}
}
```
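To work with these aggregated numbers programmatically, one option is to download the results JSON linked above and flatten the per-task metrics into rows. This is a minimal sketch assuming the `huggingface_hub` library is installed; the exact layout of the file may differ slightly from the preview above, so the code falls back to the whole object if there is no "all" section:
```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results file linked above (repo_type must be "dataset").
path = hf_hub_download(
    repo_id="open-llm-leaderboard/synergetic__FrankenQwen2.5-14B-details",
    filename="synergetic__FrankenQwen2.5-14B/results_2024-12-02T04-24-32.113248.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Fall back to the whole object if there is no "all" section.
summary = results.get("all", results)

# Print the metrics per task, skipping stderr fields, aliases, and non-dict entries.
for task, metrics in sorted(summary.items()):
    if not isinstance(metrics, dict):
        continue
    for key, value in metrics.items():
        if key.endswith("_stderr,none") or key == "alias":
            continue
        print(f"{task:60s} {key:35s} {value}")
```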
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/MTSAIR__Cotype-Nano-details | open-llm-leaderboard | "2024-12-02T04:30:13Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:26:41Z" | ---
pretty_name: Evaluation run of MTSAIR/Cotype-Nano
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [MTSAIR/Cotype-Nano](https://huggingface.co/MTSAIR/Cotype-Nano)\nThe dataset is\
\ composed of 38 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 1 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/MTSAIR__Cotype-Nano-details\"\
,\n\tname=\"MTSAIR__Cotype-Nano__leaderboard_bbh_boolean_expressions\",\n\tsplit=\"\
latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from run\
\ 2024-12-02T04-26-40.216058](https://huggingface.co/datasets/open-llm-leaderboard/MTSAIR__Cotype-Nano-details/blob/main/MTSAIR__Cotype-Nano/results_2024-12-02T04-26-40.216058.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.36414048059149723,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n \"prompt_level_strict_acc,none\"\
: 0.3179297597042514,\n \"prompt_level_strict_acc_stderr,none\": 0.02003933297102034,\n\
\ \"acc,none\": 0.24767287234042554,\n \"acc_stderr,none\"\
: 0.003935425705552356,\n \"acc_norm,none\": 0.36061746011155793,\n \
\ \"acc_norm_stderr,none\": 0.005169325806483379,\n \"inst_level_loose_acc,none\"\
: 0.47601918465227816,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"exact_match,none\": 0.06419939577039276,\n \"exact_match_stderr,none\"\
: 0.006530898429299409,\n \"inst_level_strict_acc,none\": 0.4316546762589928,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.38361395591043224,\n \"acc_norm_stderr,none\": 0.006010807277181961,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.792,\n \"acc_norm_stderr,none\": 0.025721398901416368\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5347593582887701,\n\
\ \"acc_norm_stderr,none\": 0.036573080985189216\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.264,\n \"acc_norm_stderr,none\":\
\ 0.027934518957690866\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.492,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.512,\n\
\ \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.336,\n \"acc_norm_stderr,none\":\
\ 0.02993325909419153\n },\n \"leaderboard_bbh_hyperbaton\": {\n \
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.256,\n\
\ \"acc_norm_stderr,none\": 0.027657108718204846\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.3698630136986301,\n \"acc_norm_stderr,none\"\
: 0.04009165058801775\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\":\
\ 0.02936106757521985\n },\n \"leaderboard_bbh_ruin_names\": {\n \
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.152,\n \"acc_norm_stderr,none\": 0.022752024491765464\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.5112359550561798,\n\
\ \"acc_norm_stderr,none\": 0.03757281091983857\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\":\
\ 0.03153986449255664\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\":\
\ 0.022249407735450245\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.104,\n \"acc_norm_stderr,none\":\
\ 0.019345100974843932\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\":\
\ 0.029844039047465857\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2701342281879195,\n\
\ \"acc_norm_stderr,none\": 0.012875997640285002,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.25757575757575757,\n \"acc_norm_stderr,none\": 0.031156269519646826\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.27472527472527475,\n\
\ \"acc_norm_stderr,none\": 0.019120635768881563\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.2700892857142857,\n \"acc_norm_stderr,none\"\
: 0.021000749078822437\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.3179297597042514,\n \"prompt_level_strict_acc_stderr,none\": 0.02003933297102034,\n\
\ \"inst_level_strict_acc,none\": 0.4316546762589928,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.36414048059149723,\n \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n\
\ \"inst_level_loose_acc,none\": 0.47601918465227816,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.06419939577039276,\n \"exact_match_stderr,none\"\
: 0.006530898429299409,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.14006514657980457,\n\
\ \"exact_match_stderr,none\": 0.019839791442658312\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.024390243902439025,\n \"exact_match_stderr,none\": 0.013965813032045565\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.015151515151515152,\n\
\ \"exact_match_stderr,none\": 0.01067276863717474\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.02142857142857143,\n \"exact_match_stderr,none\": 0.008669434577665551\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.006493506493506494,\n\
\ \"exact_match_stderr,none\": 0.006493506493506494\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.15025906735751296,\n \"exact_match_stderr,none\"\
: 0.025787723180723855\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.007407407407407408,\n \"exact_match_stderr,none\"\
: 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.24767287234042554,\n\
\ \"acc_stderr,none\": 0.003935425705552356\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.328042328042328,\n \"acc_norm_stderr,none\"\
: 0.016381978023430912,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.52,\n\
\ \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.21875,\n \"acc_norm_stderr,none\"\
: 0.025888027174359812\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.36414048059149723,\n \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n\
\ \"prompt_level_strict_acc,none\": 0.3179297597042514,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.02003933297102034,\n \"acc,none\": 0.24767287234042554,\n \"acc_stderr,none\"\
: 0.003935425705552356,\n \"acc_norm,none\": 0.36061746011155793,\n \
\ \"acc_norm_stderr,none\": 0.005169325806483379,\n \"inst_level_loose_acc,none\"\
: 0.47601918465227816,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"exact_match,none\": 0.06419939577039276,\n \"exact_match_stderr,none\"\
: 0.006530898429299409,\n \"inst_level_strict_acc,none\": 0.4316546762589928,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"alias\": \"\
leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.38361395591043224,\n\
\ \"acc_norm_stderr,none\": 0.006010807277181961,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.792,\n \"acc_norm_stderr,none\": 0.025721398901416368\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5347593582887701,\n \"acc_norm_stderr,none\"\
: 0.036573080985189216\n },\n \"leaderboard_bbh_date_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.264,\n \"acc_norm_stderr,none\": 0.027934518957690866\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.492,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.336,\n \"acc_norm_stderr,none\": 0.02993325909419153\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.256,\n \"acc_norm_stderr,none\": 0.027657108718204846\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.3698630136986301,\n\
\ \"acc_norm_stderr,none\": 0.04009165058801775\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\": 0.02936106757521985\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.152,\n \"acc_norm_stderr,none\": 0.022752024491765464\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.5112359550561798,\n \"acc_norm_stderr,none\"\
: 0.03757281091983857\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.104,\n \"acc_norm_stderr,none\": 0.019345100974843932\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2701342281879195,\n\
\ \"acc_norm_stderr,none\": 0.012875997640285002,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.25757575757575757,\n\
\ \"acc_norm_stderr,none\": 0.031156269519646826\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.27472527472527475,\n \"acc_norm_stderr,none\": 0.019120635768881563\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.2700892857142857,\n \"acc_norm_stderr,none\"\
: 0.021000749078822437\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.3179297597042514,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02003933297102034,\n \
\ \"inst_level_strict_acc,none\": 0.4316546762589928,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.36414048059149723,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n \"inst_level_loose_acc,none\"\
: 0.47601918465227816,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n\
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.06419939577039276,\n\
\ \"exact_match_stderr,none\": 0.006530898429299409,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.14006514657980457,\n \"exact_match_stderr,none\": 0.019839791442658312\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.024390243902439025,\n \"exact_match_stderr,none\": 0.013965813032045565\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.015151515151515152,\n \"exact_match_stderr,none\"\
: 0.01067276863717474\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.02142857142857143,\n \"exact_match_stderr,none\"\
: 0.008669434577665551\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.006493506493506494,\n \"exact_match_stderr,none\": 0.006493506493506494\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.15025906735751296,\n \"exact_match_stderr,none\"\
: 0.025787723180723855\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.007407407407407408,\n \"exact_match_stderr,none\": 0.007407407407407408\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.24767287234042554,\n \"acc_stderr,none\": 0.003935425705552356\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.328042328042328,\n\
\ \"acc_norm_stderr,none\": 0.016381978023430912,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \"leaderboard_musr_object_placements\"\
: {\n \"alias\": \" - leaderboard_musr_object_placements\",\n \"\
acc_norm,none\": 0.21875,\n \"acc_norm_stderr,none\": 0.025888027174359812\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ }\n}\n```"
repo_url: https://huggingface.co/MTSAIR/Cotype-Nano
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_ifeval
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-26-40.216058.jsonl'
- config_name: MTSAIR__Cotype-Nano__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T04_26_40.216058
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-26-40.216058.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-26-40.216058.jsonl'
---
# Dataset Card for Evaluation run of MTSAIR/Cotype-Nano
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [MTSAIR/Cotype-Nano](https://huggingface.co/MTSAIR/Cotype-Nano)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/MTSAIR__Cotype-Nano-details",
name="MTSAIR__Cotype-Nano__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
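If you want to see which per-task configurations exist before loading one, a minimal sketch (assuming the `datasets` library is installed and this repository is publicly readable) is:
```python
from datasets import get_dataset_config_names, load_dataset

# List every per-task configuration stored in this repository
configs = get_dataset_config_names("open-llm-leaderboard/MTSAIR__Cotype-Nano-details")
print(configs)

# Load the latest split of one task and inspect a single sample
ifeval = load_dataset(
    "open-llm-leaderboard/MTSAIR__Cotype-Nano-details",
    name="MTSAIR__Cotype-Nano__leaderboard_ifeval",
    split="latest",
)
print(ifeval[0])
```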
## Latest results
These are the [latest results from run 2024-12-02T04-26-40.216058](https://huggingface.co/datasets/open-llm-leaderboard/MTSAIR__Cotype-Nano-details/blob/main/MTSAIR__Cotype-Nano/results_2024-12-02T04-26-40.216058.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split of each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.36414048059149723,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"prompt_level_strict_acc,none": 0.3179297597042514,
"prompt_level_strict_acc_stderr,none": 0.02003933297102034,
"acc,none": 0.24767287234042554,
"acc_stderr,none": 0.003935425705552356,
"acc_norm,none": 0.36061746011155793,
"acc_norm_stderr,none": 0.005169325806483379,
"inst_level_loose_acc,none": 0.47601918465227816,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.06419939577039276,
"exact_match_stderr,none": 0.006530898429299409,
"inst_level_strict_acc,none": 0.4316546762589928,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.38361395591043224,
"acc_norm_stderr,none": 0.006010807277181961,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.792,
"acc_norm_stderr,none": 0.025721398901416368
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5347593582887701,
"acc_norm_stderr,none": 0.036573080985189216
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.264,
"acc_norm_stderr,none": 0.027934518957690866
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.492,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.336,
"acc_norm_stderr,none": 0.02993325909419153
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3698630136986301,
"acc_norm_stderr,none": 0.04009165058801775
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.152,
"acc_norm_stderr,none": 0.022752024491765464
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.5112359550561798,
"acc_norm_stderr,none": 0.03757281091983857
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.104,
"acc_norm_stderr,none": 0.019345100974843932
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2701342281879195,
"acc_norm_stderr,none": 0.012875997640285002,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.25757575757575757,
"acc_norm_stderr,none": 0.031156269519646826
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.27472527472527475,
"acc_norm_stderr,none": 0.019120635768881563
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.2700892857142857,
"acc_norm_stderr,none": 0.021000749078822437
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.3179297597042514,
"prompt_level_strict_acc_stderr,none": 0.02003933297102034,
"inst_level_strict_acc,none": 0.4316546762589928,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.36414048059149723,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"inst_level_loose_acc,none": 0.47601918465227816,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.06419939577039276,
"exact_match_stderr,none": 0.006530898429299409,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.14006514657980457,
"exact_match_stderr,none": 0.019839791442658312
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.024390243902439025,
"exact_match_stderr,none": 0.013965813032045565
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02142857142857143,
"exact_match_stderr,none": 0.008669434577665551
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.006493506493506494,
"exact_match_stderr,none": 0.006493506493506494
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.15025906735751296,
"exact_match_stderr,none": 0.025787723180723855
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.24767287234042554,
"acc_stderr,none": 0.003935425705552356
},
"leaderboard_musr": {
"acc_norm,none": 0.328042328042328,
"acc_norm_stderr,none": 0.016381978023430912,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.21875,
"acc_norm_stderr,none": 0.025888027174359812
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.36414048059149723,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"prompt_level_strict_acc,none": 0.3179297597042514,
"prompt_level_strict_acc_stderr,none": 0.02003933297102034,
"acc,none": 0.24767287234042554,
"acc_stderr,none": 0.003935425705552356,
"acc_norm,none": 0.36061746011155793,
"acc_norm_stderr,none": 0.005169325806483379,
"inst_level_loose_acc,none": 0.47601918465227816,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.06419939577039276,
"exact_match_stderr,none": 0.006530898429299409,
"inst_level_strict_acc,none": 0.4316546762589928,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.38361395591043224,
"acc_norm_stderr,none": 0.006010807277181961,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.792,
"acc_norm_stderr,none": 0.025721398901416368
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5347593582887701,
"acc_norm_stderr,none": 0.036573080985189216
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.264,
"acc_norm_stderr,none": 0.027934518957690866
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.492,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.336,
"acc_norm_stderr,none": 0.02993325909419153
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3698630136986301,
"acc_norm_stderr,none": 0.04009165058801775
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.152,
"acc_norm_stderr,none": 0.022752024491765464
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.5112359550561798,
"acc_norm_stderr,none": 0.03757281091983857
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.104,
"acc_norm_stderr,none": 0.019345100974843932
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2701342281879195,
"acc_norm_stderr,none": 0.012875997640285002,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.25757575757575757,
"acc_norm_stderr,none": 0.031156269519646826
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.27472527472527475,
"acc_norm_stderr,none": 0.019120635768881563
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.2700892857142857,
"acc_norm_stderr,none": 0.021000749078822437
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.3179297597042514,
"prompt_level_strict_acc_stderr,none": 0.02003933297102034,
"inst_level_strict_acc,none": 0.4316546762589928,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.36414048059149723,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"inst_level_loose_acc,none": 0.47601918465227816,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.06419939577039276,
"exact_match_stderr,none": 0.006530898429299409,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.14006514657980457,
"exact_match_stderr,none": 0.019839791442658312
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.024390243902439025,
"exact_match_stderr,none": 0.013965813032045565
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02142857142857143,
"exact_match_stderr,none": 0.008669434577665551
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.006493506493506494,
"exact_match_stderr,none": 0.006493506493506494
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.15025906735751296,
"exact_match_stderr,none": 0.025787723180723855
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.24767287234042554,
"acc_stderr,none": 0.003935425705552356
},
"leaderboard_musr": {
"acc_norm,none": 0.328042328042328,
"acc_norm_stderr,none": 0.016381978023430912,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.21875,
"acc_norm_stderr,none": 0.025888027174359812
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
}
}
```
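If you prefer to work directly with the raw results file linked above rather than the per-task splits, a minimal sketch (assuming the `huggingface_hub` client is installed and the file path from the link above is still current) is:
```python
import json
from huggingface_hub import hf_hub_download

# Download the exact results file referenced in the link above
path = hf_hub_download(
    repo_id="open-llm-leaderboard/MTSAIR__Cotype-Nano-details",
    filename="MTSAIR__Cotype-Nano/results_2024-12-02T04-26-40.216058.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)

# Inspect the top-level structure before digging into individual metrics
print(list(results.keys()))
```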
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/SultanR__SmolTulu-1.7b-Instruct-details | open-llm-leaderboard | "2024-12-02T04:31:08Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:27:40Z" | ---
pretty_name: Evaluation run of SultanR/SmolTulu-1.7b-Instruct
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [SultanR/SmolTulu-1.7b-Instruct](https://huggingface.co/SultanR/SmolTulu-1.7b-Instruct)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/SultanR__SmolTulu-1.7b-Instruct-details\"\
,\n\tname=\"SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-02T04-27-39.293748](https://huggingface.co/datasets/open-llm-leaderboard/SultanR__SmolTulu-1.7b-Instruct-details/blob/main/SultanR__SmolTulu-1.7b-Instruct/results_2024-12-02T04-27-39.293748.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.3514074458425217,\n \"acc_norm_stderr,none\"\
: 0.005165884234442981,\n \"prompt_level_loose_acc,none\": 0.6358595194085028,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n \
\ \"inst_level_strict_acc,none\": 0.7074340527577938,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_strict_acc,none\": 0.600739371534196,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.021075331332701255,\n \
\ \"inst_level_loose_acc,none\": 0.7386091127098321,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.026435045317220542,\n \
\ \"exact_match_stderr,none\": 0.004359122520460206,\n \"acc,none\"\
: 0.17104388297872342,\n \"acc_stderr,none\": 0.0034329595047432816,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.36816524908869985,\n \"acc_norm_stderr,none\"\
: 0.005979183471724429,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.572,\n\
\ \"acc_norm_stderr,none\": 0.031355968923772626\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5668449197860963,\n \"acc_norm_stderr,none\"\
: 0.03633267411102591\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.448,\n\
\ \"acc_norm_stderr,none\": 0.03151438761115349\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\":\
\ 0.0316851985511992\n },\n \"leaderboard_bbh_geometric_shapes\":\
\ {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.512,\n \
\ \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.244,\n \"acc_norm_stderr,none\":\
\ 0.02721799546455311\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.216,\n \"acc_norm_stderr,none\":\
\ 0.02607865766373279\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.364,\n \"acc_norm_stderr,none\":\
\ 0.030491555220405475\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.576,\n \
\ \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.2328767123287671,\n \"acc_norm_stderr,none\": 0.03510036341139227\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.28,\n \"acc_norm_stderr,none\": 0.02845414827783231\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.108,\n \
\ \"acc_norm_stderr,none\": 0.019669559381568776\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.651685393258427,\n \"acc_norm_stderr,none\"\
: 0.035811144737534356\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.132,\n\
\ \"acc_norm_stderr,none\": 0.021450980824038166\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\":\
\ 0.029658294924545567\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.26929530201342283,\n\
\ \"acc_norm_stderr,none\": 0.01285318594753383,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.23737373737373738,\n \"acc_norm_stderr,none\": 0.030313710538198924\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.26373626373626374,\n\
\ \"acc_norm_stderr,none\": 0.018875713580372433\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.29017857142857145,\n \"acc_norm_stderr,none\"\
: 0.021466115440571226\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.600739371534196,\n \"prompt_level_strict_acc_stderr,none\": 0.021075331332701255,\n\
\ \"inst_level_strict_acc,none\": 0.7074340527577938,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.6358595194085028,\n \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n\
\ \"inst_level_loose_acc,none\": 0.7386091127098321,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.026435045317220542,\n \"exact_match_stderr,none\"\
: 0.004359122520460206,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.06514657980456026,\n\
\ \"exact_match_stderr,none\": 0.014107720843558174\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.0035714285714285713,\n \"exact_match_stderr,none\"\
: 0.0035714285714285713\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.012987012987012988,\n \"exact_match_stderr,none\"\
: 0.009153145279150204\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.05181347150259067,\n \"exact_match_stderr,none\"\
: 0.015996229320244134\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.007407407407407408,\n \"exact_match_stderr,none\"\
: 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.17104388297872342,\n\
\ \"acc_stderr,none\": 0.003432959504743281\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.3531746031746032,\n \"acc_norm_stderr,none\"\
: 0.01697485324642576,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.5,\n\
\ \"acc_norm_stderr,none\": 0.031686212526223896\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.24609375,\n \"acc_norm_stderr,none\"\
: 0.026973597563786113\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ }\n },\n \"leaderboard\": {\n \"acc_norm,none\": 0.3514074458425217,\n\
\ \"acc_norm_stderr,none\": 0.005165884234442981,\n \"prompt_level_loose_acc,none\"\
: 0.6358595194085028,\n \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n\
\ \"inst_level_strict_acc,none\": 0.7074340527577938,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_strict_acc,none\": 0.600739371534196,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.021075331332701255,\n \"inst_level_loose_acc,none\"\
: 0.7386091127098321,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"exact_match,none\": 0.026435045317220542,\n \"exact_match_stderr,none\"\
: 0.004359122520460206,\n \"acc,none\": 0.17104388297872342,\n \"\
acc_stderr,none\": 0.0034329595047432816,\n \"alias\": \"leaderboard\"\n\
\ },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.36816524908869985,\n\
\ \"acc_norm_stderr,none\": 0.005979183471724429,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5668449197860963,\n \"acc_norm_stderr,none\"\
: 0.03633267411102591\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.244,\n \"acc_norm_stderr,none\": 0.02721799546455311\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.216,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.364,\n \"acc_norm_stderr,none\": 0.030491555220405475\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.2328767123287671,\n\
\ \"acc_norm_stderr,none\": 0.03510036341139227\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.28,\n \"acc_norm_stderr,none\": 0.02845414827783231\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.108,\n \"acc_norm_stderr,none\": 0.019669559381568776\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.651685393258427,\n \"acc_norm_stderr,none\"\
: 0.035811144737534356\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.132,\n \"acc_norm_stderr,none\": 0.021450980824038166\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.26929530201342283,\n\
\ \"acc_norm_stderr,none\": 0.01285318594753383,\n \"alias\": \" -\
\ leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\"\
: \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.23737373737373738,\n\
\ \"acc_norm_stderr,none\": 0.030313710538198924\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.26373626373626374,\n \"acc_norm_stderr,none\": 0.018875713580372433\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.29017857142857145,\n \"acc_norm_stderr,none\"\
: 0.021466115440571226\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.600739371534196,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021075331332701255,\n \
\ \"inst_level_strict_acc,none\": 0.7074340527577938,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.6358595194085028,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02070704795859195,\n \"inst_level_loose_acc,none\"\
: 0.7386091127098321,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.026435045317220542,\n\
\ \"exact_match_stderr,none\": 0.004359122520460206,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.06514657980456026,\n \"exact_match_stderr,none\": 0.014107720843558174\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.008130081300813009,\n \"exact_match_stderr,none\": 0.008130081300813007\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.0035714285714285713,\n \"exact_match_stderr,none\": 0.0035714285714285713\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.012987012987012988,\n \"exact_match_stderr,none\"\
: 0.009153145279150204\n },\n \"leaderboard_math_prealgebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\"\
: 0.05181347150259067,\n \"exact_match_stderr,none\": 0.015996229320244134\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.007407407407407408,\n\
\ \"exact_match_stderr,none\": 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.17104388297872342,\n\
\ \"acc_stderr,none\": 0.003432959504743281\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.3531746031746032,\n \"acc_norm_stderr,none\"\
: 0.01697485324642576,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.5,\n \"acc_norm_stderr,none\": 0.031686212526223896\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.24609375,\n\
\ \"acc_norm_stderr,none\": 0.026973597563786113\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n }\n}\n```"
repo_url: https://huggingface.co/SultanR/SmolTulu-1.7b-Instruct
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_ifeval
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-27-39.293748.jsonl'
- config_name: SultanR__SmolTulu-1.7b-Instruct__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T04_27_39.293748
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-27-39.293748.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-27-39.293748.jsonl'
---
# Dataset Card for Evaluation run of SultanR/SmolTulu-1.7b-Instruct
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [SultanR/SmolTulu-1.7b-Instruct](https://huggingface.co/SultanR/SmolTulu-1.7b-Instruct)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/SultanR__SmolTulu-1.7b-Instruct-details",
name="SultanR__SmolTulu-1.7b-Instruct__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
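The aggregated results mentioned above can be loaded the same way. The snippet below is a minimal sketch; the configuration name is an assumption based on the `<org>__<model>__<task>` pattern of the per-task configurations listed in this card and is not confirmed by it:
```python
from datasets import load_dataset

# Hedged sketch: the "results" configuration name below is assumed to follow
# the same naming pattern as the per-task configurations in this card.
results = load_dataset(
    "open-llm-leaderboard/SultanR__SmolTulu-1.7b-Instruct-details",
    name="SultanR__SmolTulu-1.7b-Instruct__results",
    split="latest"
)
```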
## Latest results
These are the [latest results from run 2024-12-02T04-27-39.293748](https://huggingface.co/datasets/open-llm-leaderboard/SultanR__SmolTulu-1.7b-Instruct-details/blob/main/SultanR__SmolTulu-1.7b-Instruct/results_2024-12-02T04-27-39.293748.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.3514074458425217,
"acc_norm_stderr,none": 0.005165884234442981,
"prompt_level_loose_acc,none": 0.6358595194085028,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"inst_level_strict_acc,none": 0.7074340527577938,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.600739371534196,
"prompt_level_strict_acc_stderr,none": 0.021075331332701255,
"inst_level_loose_acc,none": 0.7386091127098321,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.026435045317220542,
"exact_match_stderr,none": 0.004359122520460206,
"acc,none": 0.17104388297872342,
"acc_stderr,none": 0.0034329595047432816,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.36816524908869985,
"acc_norm_stderr,none": 0.005979183471724429,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5668449197860963,
"acc_norm_stderr,none": 0.03633267411102591
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.216,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2328767123287671,
"acc_norm_stderr,none": 0.03510036341139227
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.28,
"acc_norm_stderr,none": 0.02845414827783231
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.108,
"acc_norm_stderr,none": 0.019669559381568776
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.651685393258427,
"acc_norm_stderr,none": 0.035811144737534356
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.132,
"acc_norm_stderr,none": 0.021450980824038166
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_gpqa": {
"acc_norm,none": 0.26929530201342283,
"acc_norm_stderr,none": 0.01285318594753383,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.23737373737373738,
"acc_norm_stderr,none": 0.030313710538198924
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.26373626373626374,
"acc_norm_stderr,none": 0.018875713580372433
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.29017857142857145,
"acc_norm_stderr,none": 0.021466115440571226
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.600739371534196,
"prompt_level_strict_acc_stderr,none": 0.021075331332701255,
"inst_level_strict_acc,none": 0.7074340527577938,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.6358595194085028,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"inst_level_loose_acc,none": 0.7386091127098321,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.026435045317220542,
"exact_match_stderr,none": 0.004359122520460206,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.06514657980456026,
"exact_match_stderr,none": 0.014107720843558174
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.012987012987012988,
"exact_match_stderr,none": 0.009153145279150204
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.05181347150259067,
"exact_match_stderr,none": 0.015996229320244134
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.17104388297872342,
"acc_stderr,none": 0.003432959504743281
},
"leaderboard_musr": {
"acc_norm,none": 0.3531746031746032,
"acc_norm_stderr,none": 0.01697485324642576,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.24609375,
"acc_norm_stderr,none": 0.026973597563786113
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
}
},
"leaderboard": {
"acc_norm,none": 0.3514074458425217,
"acc_norm_stderr,none": 0.005165884234442981,
"prompt_level_loose_acc,none": 0.6358595194085028,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"inst_level_strict_acc,none": 0.7074340527577938,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.600739371534196,
"prompt_level_strict_acc_stderr,none": 0.021075331332701255,
"inst_level_loose_acc,none": 0.7386091127098321,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.026435045317220542,
"exact_match_stderr,none": 0.004359122520460206,
"acc,none": 0.17104388297872342,
"acc_stderr,none": 0.0034329595047432816,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.36816524908869985,
"acc_norm_stderr,none": 0.005979183471724429,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5668449197860963,
"acc_norm_stderr,none": 0.03633267411102591
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.216,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2328767123287671,
"acc_norm_stderr,none": 0.03510036341139227
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.28,
"acc_norm_stderr,none": 0.02845414827783231
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.108,
"acc_norm_stderr,none": 0.019669559381568776
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.651685393258427,
"acc_norm_stderr,none": 0.035811144737534356
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.132,
"acc_norm_stderr,none": 0.021450980824038166
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_gpqa": {
"acc_norm,none": 0.26929530201342283,
"acc_norm_stderr,none": 0.01285318594753383,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.23737373737373738,
"acc_norm_stderr,none": 0.030313710538198924
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.26373626373626374,
"acc_norm_stderr,none": 0.018875713580372433
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.29017857142857145,
"acc_norm_stderr,none": 0.021466115440571226
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.600739371534196,
"prompt_level_strict_acc_stderr,none": 0.021075331332701255,
"inst_level_strict_acc,none": 0.7074340527577938,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.6358595194085028,
"prompt_level_loose_acc_stderr,none": 0.02070704795859195,
"inst_level_loose_acc,none": 0.7386091127098321,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.026435045317220542,
"exact_match_stderr,none": 0.004359122520460206,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.06514657980456026,
"exact_match_stderr,none": 0.014107720843558174
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.008130081300813009,
"exact_match_stderr,none": 0.008130081300813007
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.012987012987012988,
"exact_match_stderr,none": 0.009153145279150204
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.05181347150259067,
"exact_match_stderr,none": 0.015996229320244134
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.17104388297872342,
"acc_stderr,none": 0.003432959504743281
},
"leaderboard_musr": {
"acc_norm,none": 0.3531746031746032,
"acc_norm_stderr,none": 0.01697485324642576,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.24609375,
"acc_norm_stderr,none": 0.026973597563786113
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
mlfoundations-dev/unnatural_instructions_gpt-4o-mini_scale_x4 | mlfoundations-dev | "2024-12-02T04:49:35Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:49:28Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: constraints
dtype: string
- name: output
dtype: string
- name: alternative_formulation
dtype: string
- name: alternative_formulation_inlined
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 370037851
num_examples: 227604
download_size: 143079298
dataset_size: 370037851
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/asharsha30__LLAMA_Harsha_8_B_ORDP_10k-details | open-llm-leaderboard | "2024-12-02T05:03:31Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T04:58:39Z" | ---
pretty_name: Evaluation run of asharsha30/LLAMA_Harsha_8_B_ORDP_10k
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [asharsha30/LLAMA_Harsha_8_B_ORDP_10k](https://huggingface.co/asharsha30/LLAMA_Harsha_8_B_ORDP_10k)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/asharsha30__LLAMA_Harsha_8_B_ORDP_10k-details\"\
,\n\tname=\"asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-02T04-58-31.780769](https://huggingface.co/datasets/open-llm-leaderboard/asharsha30__LLAMA_Harsha_8_B_ORDP_10k-details/blob/main/asharsha30__LLAMA_Harsha_8_B_ORDP_10k/results_2024-12-02T04-58-31.780769.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_loose_acc,none\": 0.434052757793765,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.05211480362537765,\n \
\ \"exact_match_stderr,none\": 0.0060252620719431545,\n \"acc,none\"\
: 0.281000664893617,\n \"acc_stderr,none\": 0.004097953770325976,\n \
\ \"inst_level_strict_acc,none\": 0.4136690647482014,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.29944547134935307,\n \"prompt_level_loose_acc_stderr,none\": 0.019709834029672937,\n\
\ \"prompt_level_strict_acc,none\": 0.27911275415896486,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.019303080958497275,\n \"\
acc_norm,none\": 0.42729277467894666,\n \"acc_norm_stderr,none\": 0.005252183131190879,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.4667592431869467,\n \"acc_norm_stderr,none\"\
: 0.006106218522241726,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.684,\n\
\ \"acc_norm_stderr,none\": 0.02946265759857865\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6149732620320856,\n \"acc_norm_stderr,none\"\
: 0.03567936280544673\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.576,\n\
\ \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\":\
\ 0.031235856237014505\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.732,\n \
\ \"acc_norm_stderr,none\": 0.02806876238252672\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.344,\n \"acc_norm_stderr,none\":\
\ 0.03010450339231644\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.788,\n \"acc_norm_stderr,none\": 0.025901884690541117\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\":\
\ 0.030938207620401222\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.4,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.410958904109589,\n \"acc_norm_stderr,none\": 0.04085902451640228\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.54,\n \
\ \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.392,\n \"acc_norm_stderr,none\":\
\ 0.030938207620401222\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.42696629213483145,\n \"acc_norm_stderr,none\": 0.03717921762559315\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.796,\n \"acc_norm_stderr,none\": 0.025537121574548162\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.076,\n\
\ \"acc_norm_stderr,none\": 0.01679357306785969\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\":\
\ 0.022249407735450245\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.348,\n \"acc_norm_stderr,none\":\
\ 0.030186568464511673\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.492,\n \"acc_norm_stderr,none\": 0.03168215643141386\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.27348993288590606,\n\
\ \"acc_norm_stderr,none\": 0.012922093286405052,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.25252525252525254,\n \"acc_norm_stderr,none\": 0.03095405547036587\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2857142857142857,\n\
\ \"acc_norm_stderr,none\": 0.019351013185102753\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.26785714285714285,\n \"acc_norm_stderr,none\"\
: 0.02094574294163546\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.27911275415896486,\n \"prompt_level_strict_acc_stderr,none\": 0.019303080958497275,\n\
\ \"inst_level_strict_acc,none\": 0.4136690647482014,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.29944547134935307,\n \"prompt_level_loose_acc_stderr,none\": 0.019709834029672937,\n\
\ \"inst_level_loose_acc,none\": 0.434052757793765,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.05211480362537765,\n \"exact_match_stderr,none\": 0.0060252620719431545,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \
\ \"exact_match,none\": 0.08143322475570032,\n \"exact_match_stderr,none\"\
: 0.015634913029180107\n },\n \"leaderboard_math_counting_and_prob_hard\"\
: {\n \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.04878048780487805,\n \"exact_match_stderr,none\"\
: 0.01950219655858808\n },\n \"leaderboard_math_geometry_hard\": {\n\
\ \"alias\": \" - leaderboard_math_geometry_hard\",\n \"\
exact_match,none\": 0.022727272727272728,\n \"exact_match_stderr,none\"\
: 0.0130210469090637\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.0035714285714285713,\n \"exact_match_stderr,none\"\
: 0.0035714285714285713\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.05844155844155844,\n \"exact_match_stderr,none\"\
: 0.018964387451957845\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.11917098445595854,\n \"exact_match_stderr,none\"\
: 0.02338193534812143\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.014814814814814815,\n \"exact_match_stderr,none\"\
: 0.010436494549594376\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.281000664893617,\n\
\ \"acc_stderr,none\": 0.004097953770325976\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.36904761904761907,\n \"acc_norm_stderr,none\"\
: 0.016972951603926475,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.548,\n\
\ \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.29296875,\n \"acc_norm_stderr,none\"\
: 0.028500984607927556\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ }\n },\n \"leaderboard\": {\n \"inst_level_loose_acc,none\"\
: 0.434052757793765,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \
\ \"exact_match,none\": 0.05211480362537765,\n \"exact_match_stderr,none\"\
: 0.0060252620719431545,\n \"acc,none\": 0.281000664893617,\n \"acc_stderr,none\"\
: 0.004097953770325976,\n \"inst_level_strict_acc,none\": 0.4136690647482014,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.29944547134935307,\n \"prompt_level_loose_acc_stderr,none\": 0.019709834029672937,\n\
\ \"prompt_level_strict_acc,none\": 0.27911275415896486,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.019303080958497275,\n \"acc_norm,none\": 0.42729277467894666,\n \
\ \"acc_norm_stderr,none\": 0.005252183131190879,\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.4667592431869467,\n\
\ \"acc_norm_stderr,none\": 0.006106218522241726,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6149732620320856,\n \"acc_norm_stderr,none\"\
: 0.03567936280544673\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\"\
: 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.732,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.344,\n \"acc_norm_stderr,none\": 0.03010450339231644\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.788,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.4,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.410958904109589,\n\
\ \"acc_norm_stderr,none\": 0.04085902451640228\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.392,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.42696629213483145,\n \"acc_norm_stderr,none\"\
: 0.03717921762559315\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.796,\n \"acc_norm_stderr,none\": 0.025537121574548162\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.076,\n \"acc_norm_stderr,none\": 0.01679357306785969\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.348,\n \"acc_norm_stderr,none\": 0.030186568464511673\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.492,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.27348993288590606,\n\
\ \"acc_norm_stderr,none\": 0.012922093286405052,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.25252525252525254,\n\
\ \"acc_norm_stderr,none\": 0.03095405547036587\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2857142857142857,\n \"acc_norm_stderr,none\": 0.019351013185102753\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.26785714285714285,\n \"acc_norm_stderr,none\"\
: 0.02094574294163546\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.27911275415896486,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.019303080958497275,\n \
\ \"inst_level_strict_acc,none\": 0.4136690647482014,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.29944547134935307,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.019709834029672937,\n \"inst_level_loose_acc,none\"\
: 0.434052757793765,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.05211480362537765,\n\
\ \"exact_match_stderr,none\": 0.0060252620719431545,\n \"alias\"\
: \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.08143322475570032,\n \"exact_match_stderr,none\": 0.015634913029180107\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.04878048780487805,\n \"exact_match_stderr,none\": 0.01950219655858808\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.022727272727272728,\n \"exact_match_stderr,none\"\
: 0.0130210469090637\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0035714285714285713,\n \"exact_match_stderr,none\"\
: 0.0035714285714285713\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.05844155844155844,\n \"exact_match_stderr,none\": 0.018964387451957845\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.11917098445595854,\n \"exact_match_stderr,none\"\
: 0.02338193534812143\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.014814814814814815,\n \"exact_match_stderr,none\": 0.010436494549594376\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.281000664893617,\n \"acc_stderr,none\": 0.004097953770325976\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.36904761904761907,\n\
\ \"acc_norm_stderr,none\": 0.016972951603926475,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.29296875,\n \"acc_norm_stderr,none\": 0.028500984607927556\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ }\n}\n```"
repo_url: https://huggingface.co/asharsha30/LLAMA_Harsha_8_B_ORDP_10k
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_ifeval
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T04-58-31.780769.jsonl'
- config_name: asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T04_58_31.780769
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-58-31.780769.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T04-58-31.780769.jsonl'
---
# Dataset Card for Evaluation run of asharsha30/LLAMA_Harsha_8_B_ORDP_10k
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [asharsha30/LLAMA_Harsha_8_B_ORDP_10k](https://huggingface.co/asharsha30/LLAMA_Harsha_8_B_ORDP_10k).
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/asharsha30__LLAMA_Harsha_8_B_ORDP_10k-details",
name="asharsha30__LLAMA_Harsha_8_B_ORDP_10k__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
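If you are not sure which configuration or split names are available, the `datasets` library can enumerate them. The sketch below is illustrative: it assumes the configuration list matches the `configs` section of this card and that each configuration exposes the timestamped split plus a `latest` alias.
```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo = "open-llm-leaderboard/asharsha30__LLAMA_Harsha_8_B_ORDP_10k-details"

# One configuration per evaluated task (38 in total for this run).
configs = get_dataset_config_names(repo)
print(len(configs), configs[:3])

# Each configuration has one split per run timestamp plus a "latest" alias.
splits = get_dataset_split_names(repo, config_name=configs[0])
print(splits)
```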
## Latest results
These are the [latest results from run 2024-12-02T04-58-31.780769](https://huggingface.co/datasets/open-llm-leaderboard/asharsha30__LLAMA_Harsha_8_B_ORDP_10k-details/blob/main/asharsha30__LLAMA_Harsha_8_B_ORDP_10k/results_2024-12-02T04-58-31.780769.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each set in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"leaderboard": {
"inst_level_loose_acc,none": 0.434052757793765,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.05211480362537765,
"exact_match_stderr,none": 0.0060252620719431545,
"acc,none": 0.281000664893617,
"acc_stderr,none": 0.004097953770325976,
"inst_level_strict_acc,none": 0.4136690647482014,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.29944547134935307,
"prompt_level_loose_acc_stderr,none": 0.019709834029672937,
"prompt_level_strict_acc,none": 0.27911275415896486,
"prompt_level_strict_acc_stderr,none": 0.019303080958497275,
"acc_norm,none": 0.42729277467894666,
"acc_norm_stderr,none": 0.005252183131190879,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4667592431869467,
"acc_norm_stderr,none": 0.006106218522241726,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.344,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.4,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.410958904109589,
"acc_norm_stderr,none": 0.04085902451640228
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.392,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.42696629213483145,
"acc_norm_stderr,none": 0.03717921762559315
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.796,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.076,
"acc_norm_stderr,none": 0.01679357306785969
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.348,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.492,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_gpqa": {
"acc_norm,none": 0.27348993288590606,
"acc_norm_stderr,none": 0.012922093286405052,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.25252525252525254,
"acc_norm_stderr,none": 0.03095405547036587
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2857142857142857,
"acc_norm_stderr,none": 0.019351013185102753
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.26785714285714285,
"acc_norm_stderr,none": 0.02094574294163546
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.27911275415896486,
"prompt_level_strict_acc_stderr,none": 0.019303080958497275,
"inst_level_strict_acc,none": 0.4136690647482014,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.29944547134935307,
"prompt_level_loose_acc_stderr,none": 0.019709834029672937,
"inst_level_loose_acc,none": 0.434052757793765,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.05211480362537765,
"exact_match_stderr,none": 0.0060252620719431545,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.08143322475570032,
"exact_match_stderr,none": 0.015634913029180107
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.04878048780487805,
"exact_match_stderr,none": 0.01950219655858808
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.05844155844155844,
"exact_match_stderr,none": 0.018964387451957845
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.11917098445595854,
"exact_match_stderr,none": 0.02338193534812143
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.014814814814814815,
"exact_match_stderr,none": 0.010436494549594376
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.281000664893617,
"acc_stderr,none": 0.004097953770325976
},
"leaderboard_musr": {
"acc_norm,none": 0.36904761904761907,
"acc_norm_stderr,none": 0.016972951603926475,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.29296875,
"acc_norm_stderr,none": 0.028500984607927556
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
}
},
"leaderboard": {
"inst_level_loose_acc,none": 0.434052757793765,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.05211480362537765,
"exact_match_stderr,none": 0.0060252620719431545,
"acc,none": 0.281000664893617,
"acc_stderr,none": 0.004097953770325976,
"inst_level_strict_acc,none": 0.4136690647482014,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.29944547134935307,
"prompt_level_loose_acc_stderr,none": 0.019709834029672937,
"prompt_level_strict_acc,none": 0.27911275415896486,
"prompt_level_strict_acc_stderr,none": 0.019303080958497275,
"acc_norm,none": 0.42729277467894666,
"acc_norm_stderr,none": 0.005252183131190879,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4667592431869467,
"acc_norm_stderr,none": 0.006106218522241726,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.344,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.4,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.410958904109589,
"acc_norm_stderr,none": 0.04085902451640228
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.392,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.42696629213483145,
"acc_norm_stderr,none": 0.03717921762559315
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.796,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.076,
"acc_norm_stderr,none": 0.01679357306785969
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.348,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.492,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_gpqa": {
"acc_norm,none": 0.27348993288590606,
"acc_norm_stderr,none": 0.012922093286405052,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.25252525252525254,
"acc_norm_stderr,none": 0.03095405547036587
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2857142857142857,
"acc_norm_stderr,none": 0.019351013185102753
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.26785714285714285,
"acc_norm_stderr,none": 0.02094574294163546
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.27911275415896486,
"prompt_level_strict_acc_stderr,none": 0.019303080958497275,
"inst_level_strict_acc,none": 0.4136690647482014,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.29944547134935307,
"prompt_level_loose_acc_stderr,none": 0.019709834029672937,
"inst_level_loose_acc,none": 0.434052757793765,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.05211480362537765,
"exact_match_stderr,none": 0.0060252620719431545,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.08143322475570032,
"exact_match_stderr,none": 0.015634913029180107
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.04878048780487805,
"exact_match_stderr,none": 0.01950219655858808
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.05844155844155844,
"exact_match_stderr,none": 0.018964387451957845
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.11917098445595854,
"exact_match_stderr,none": 0.02338193534812143
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.014814814814814815,
"exact_match_stderr,none": 0.010436494549594376
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.281000664893617,
"acc_stderr,none": 0.004097953770325976
},
"leaderboard_musr": {
"acc_norm,none": 0.36904761904761907,
"acc_norm_stderr,none": 0.016972951603926475,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.29296875,
"acc_norm_stderr,none": 0.028500984607927556
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
}
}
```
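To work with the aggregated numbers programmatically instead of copying them from the snippet above, one option is to download the linked results file directly. This is a minimal sketch using `huggingface_hub`; the key layout referenced in the comments is an assumption based on the snippet printed in this card.
```python
import json
from huggingface_hub import hf_hub_download

# Fetch the aggregated results file referenced in "Latest results" above.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/asharsha30__LLAMA_Harsha_8_B_ORDP_10k-details",
    filename="asharsha30__LLAMA_Harsha_8_B_ORDP_10k/results_2024-12-02T04-58-31.780769.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)

# The exact key layout inside the file may differ from the printed snippet,
# so start by listing the top-level keys.
print(list(results.keys()))
# Following the structure shown above, the BBH aggregate would be reachable as:
# results["all"]["leaderboard_bbh"]["acc_norm,none"]
```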
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
keikhosrotav/Pen-data-1 | keikhosrotav | "2024-12-02T06:02:19Z" | 3 | 0 | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"task_categories:image-feature-extraction",
"task_categories:feature-extraction",
"language:en",
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"image-classification",
"image-segmentation",
"image-feature-extraction",
"feature-extraction"
] | "2024-12-02T05:01:23Z" | ---
license: mit
task_categories:
- image-classification
- image-segmentation
- image-feature-extraction
- feature-extraction
language:
- en
tags:
- code
pretty_name: keikhosro tavakoli
size_categories:
- 100K<n<1M
--- |
open-llm-leaderboard/qingy2019__Qwen2.5-Math-14B-Instruct-details | open-llm-leaderboard | "2024-12-02T05:17:54Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T05:07:12Z" | ---
pretty_name: Evaluation run of qingy2019/Qwen2.5-Math-14B-Instruct
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [qingy2019/Qwen2.5-Math-14B-Instruct](https://huggingface.co/qingy2019/Qwen2.5-Math-14B-Instruct)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/qingy2019__Qwen2.5-Math-14B-Instruct-details\"\
,\n\tname=\"qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-02T05-08-40.315655](https://huggingface.co/datasets/open-llm-leaderboard/qingy2019__Qwen2.5-Math-14B-Instruct-details/blob/main/qingy2019__Qwen2.5-Math-14B-Instruct/results_2024-12-02T05-08-40.315655.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_loose_acc,none\": 0.737410071942446,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_strict_acc,none\": 0.5415896487985212,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021442010560476468,\n \
\ \"acc_norm,none\": 0.5764690621351667,\n \"acc_norm_stderr,none\"\
: 0.005208747427765323,\n \"exact_match,none\": 0.2764350453172205,\n\
\ \"exact_match_stderr,none\": 0.011708266026998851,\n \"\
inst_level_strict_acc,none\": 0.6594724220623501,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.5339095744680851,\n \"acc_stderr,none\"\
: 0.004547975138689626,\n \"prompt_level_loose_acc,none\": 0.6414048059149723,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.020638182918873173,\n \
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n\
\ \"acc_norm,none\": 0.6327026557889255,\n \"acc_norm_stderr,none\"\
: 0.005886473721108874,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.884,\n\
\ \"acc_norm_stderr,none\": 0.020293429803083823\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6042780748663101,\n \"acc_norm_stderr,none\"\
: 0.035855600715925424\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.612,\n\
\ \"acc_norm_stderr,none\": 0.030881038748993974\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.684,\n \"acc_norm_stderr,none\":\
\ 0.02946265759857865\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.788,\n \
\ \"acc_norm_stderr,none\": 0.025901884690541117\n },\n \"\
leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\": \" \
\ - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.896,\n \"acc_norm_stderr,none\": 0.019345100974843932\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.732,\n \"acc_norm_stderr,none\": 0.02806876238252672\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.484,\n\
\ \"acc_norm_stderr,none\": 0.03166998503010743\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.7328767123287672,\n \"acc_norm_stderr,none\"\
: 0.03674407640319397\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.792,\n \"acc_norm_stderr,none\":\
\ 0.025721398901416368\n },\n \"leaderboard_bbh_ruin_names\": {\n\
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.804,\n \"acc_norm_stderr,none\": 0.025156857313255922\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.7584269662921348,\n\
\ \"acc_norm_stderr,none\": 0.032173216138332565\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\":\
\ 0.03139181076542941\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.848,\n \"acc_norm_stderr,none\": 0.022752024491765464\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.236,\n \"acc_norm_stderr,none\":\
\ 0.026909337594953852\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\":\
\ 0.02572139890141637\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.556,\n\
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3691275167785235,\n\
\ \"acc_norm_stderr,none\": 0.013986495045275629,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.37373737373737376,\n \"acc_norm_stderr,none\": 0.03446897738659336\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.38461538461538464,\n\
\ \"acc_norm_stderr,none\": 0.02083955266087989\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3482142857142857,\n \"acc_norm_stderr,none\"\
: 0.022533152157915175\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.5415896487985212,\n \"prompt_level_strict_acc_stderr,none\": 0.021442010560476468,\n\
\ \"inst_level_strict_acc,none\": 0.6594724220623501,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.6414048059149723,\n \"prompt_level_loose_acc_stderr,none\": 0.020638182918873173,\n\
\ \"inst_level_loose_acc,none\": 0.737410071942446,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.2764350453172205,\n \"exact_match_stderr,none\": 0.011708266026998851,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \
\ \"exact_match,none\": 0.43322475570032576,\n \"exact_match_stderr,none\"\
: 0.028327050442298423\n },\n \"leaderboard_math_counting_and_prob_hard\"\
: {\n \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.2926829268292683,\n \"exact_match_stderr,none\"\
: 0.04119323030208565\n },\n \"leaderboard_math_geometry_hard\": {\n\
\ \"alias\": \" - leaderboard_math_geometry_hard\",\n \"\
exact_match,none\": 0.17424242424242425,\n \"exact_match_stderr,none\"\
: 0.03314115103435667\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.09642857142857143,\n \"exact_match_stderr,none\"\
: 0.017671849720607317\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.2662337662337662,\n \"exact_match_stderr,none\"\
: 0.03573260790443323\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.44559585492227977,\n \"exact_match_stderr,none\"\
: 0.03587014986075661\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.14814814814814814,\n \"exact_match_stderr,none\"\
: 0.030688647610352705\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.5339095744680851,\n\
\ \"acc_stderr,none\": 0.004547975138689626\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4748677248677249,\n \"acc_norm_stderr,none\"\
: 0.017959022038877108,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.572,\n\
\ \"acc_norm_stderr,none\": 0.031355968923772626\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.375,\n \"acc_norm_stderr,none\":\
\ 0.03031695312954162\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ }\n },\n \"leaderboard\": {\n \"inst_level_loose_acc,none\"\
: 0.737410071942446,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \
\ \"prompt_level_strict_acc,none\": 0.5415896487985212,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.021442010560476468,\n \"acc_norm,none\": 0.5764690621351667,\n \
\ \"acc_norm_stderr,none\": 0.005208747427765323,\n \"exact_match,none\"\
: 0.2764350453172205,\n \"exact_match_stderr,none\": 0.011708266026998851,\n\
\ \"inst_level_strict_acc,none\": 0.6594724220623501,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.5339095744680851,\n \"acc_stderr,none\"\
: 0.004547975138689626,\n \"prompt_level_loose_acc,none\": 0.6414048059149723,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.020638182918873173,\n \
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.6327026557889255,\n \"acc_norm_stderr,none\": 0.005886473721108874,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"\
acc_norm,none\": 0.884,\n \"acc_norm_stderr,none\": 0.020293429803083823\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6042780748663101,\n \"acc_norm_stderr,none\"\
: 0.035855600715925424\n },\n \"leaderboard_bbh_date_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.612,\n \"acc_norm_stderr,none\": 0.030881038748993974\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.788,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.896,\n \"acc_norm_stderr,none\": 0.019345100974843932\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.732,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.7328767123287672,\n\
\ \"acc_norm_stderr,none\": 0.03674407640319397\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.792,\n \"acc_norm_stderr,none\": 0.025721398901416368\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.804,\n \"acc_norm_stderr,none\": 0.025156857313255922\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.7584269662921348,\n \"acc_norm_stderr,none\"\
: 0.032173216138332565\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.848,\n \"acc_norm_stderr,none\": 0.022752024491765464\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.236,\n \"acc_norm_stderr,none\": 0.026909337594953852\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\": 0.02572139890141637\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3691275167785235,\n\
\ \"acc_norm_stderr,none\": 0.013986495045275629,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.37373737373737376,\n\
\ \"acc_norm_stderr,none\": 0.03446897738659336\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.38461538461538464,\n \"acc_norm_stderr,none\": 0.02083955266087989\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3482142857142857,\n \"acc_norm_stderr,none\"\
: 0.022533152157915175\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.5415896487985212,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021442010560476468,\n \
\ \"inst_level_strict_acc,none\": 0.6594724220623501,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.6414048059149723,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.020638182918873173,\n \"inst_level_loose_acc,none\"\
: 0.737410071942446,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.2764350453172205,\n\
\ \"exact_match_stderr,none\": 0.011708266026998851,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.43322475570032576,\n \"exact_match_stderr,none\": 0.028327050442298423\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.2926829268292683,\n \"exact_match_stderr,none\": 0.04119323030208565\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.17424242424242425,\n \"exact_match_stderr,none\"\
: 0.03314115103435667\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.09642857142857143,\n \"exact_match_stderr,none\"\
: 0.017671849720607317\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.2662337662337662,\n \"exact_match_stderr,none\": 0.03573260790443323\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.44559585492227977,\n \"exact_match_stderr,none\"\
: 0.03587014986075661\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.14814814814814814,\n \"exact_match_stderr,none\": 0.030688647610352705\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.5339095744680851,\n \"acc_stderr,none\": 0.004547975138689626\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4748677248677249,\n\
\ \"acc_norm_stderr,none\": 0.017959022038877108,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.375,\n \"acc_norm_stderr,none\": 0.03031695312954162\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ }\n}\n```"
repo_url: https://huggingface.co/qingy2019/Qwen2.5-Math-14B-Instruct
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_ifeval
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_ifeval_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T05-08-40.315655.jsonl'
- config_name: qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T05_08_40.315655
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T05-08-40.315655.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T05-08-40.315655.jsonl'
---
# Dataset Card for Evaluation run of qingy2019/Qwen2.5-Math-14B-Instruct
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [qingy2019/Qwen2.5-Math-14B-Instruct](https://huggingface.co/qingy2019/Qwen2.5-Math-14B-Instruct)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/qingy2019__Qwen2.5-Math-14B-Instruct-details",
name="qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
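As a hedged sketch of how you might explore the other configurations, the snippet below lists the available configurations and loads another task split. The repository and configuration names are taken from the `configs` list in this card, and `get_dataset_config_names` is a standard helper in the `datasets` library:
```python
from datasets import get_dataset_config_names, load_dataset

# List every configuration in this repository (one per evaluated task,
# plus the aggregated "results" configuration described above).
configs = get_dataset_config_names(
    "open-llm-leaderboard/qingy2019__Qwen2.5-Math-14B-Instruct-details"
)
print(configs)

# Load the per-sample details of another task (here IFEval) at the latest run.
ifeval_details = load_dataset(
    "open-llm-leaderboard/qingy2019__Qwen2.5-Math-14B-Instruct-details",
    name="qingy2019__Qwen2.5-Math-14B-Instruct__leaderboard_ifeval",
    split="latest",
)
print(ifeval_details)
```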
## Latest results
These are the [latest results from run 2024-12-02T05-08-40.315655](https://huggingface.co/datasets/open-llm-leaderboard/qingy2019__Qwen2.5-Math-14B-Instruct-details/blob/main/qingy2019__Qwen2.5-Math-14B-Instruct/results_2024-12-02T05-08-40.315655.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"leaderboard": {
"inst_level_loose_acc,none": 0.737410071942446,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.5415896487985212,
"prompt_level_strict_acc_stderr,none": 0.021442010560476468,
"acc_norm,none": 0.5764690621351667,
"acc_norm_stderr,none": 0.005208747427765323,
"exact_match,none": 0.2764350453172205,
"exact_match_stderr,none": 0.011708266026998851,
"inst_level_strict_acc,none": 0.6594724220623501,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.5339095744680851,
"acc_stderr,none": 0.004547975138689626,
"prompt_level_loose_acc,none": 0.6414048059149723,
"prompt_level_loose_acc_stderr,none": 0.020638182918873173,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6327026557889255,
"acc_norm_stderr,none": 0.005886473721108874,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.884,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6042780748663101,
"acc_norm_stderr,none": 0.035855600715925424
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.896,
"acc_norm_stderr,none": 0.019345100974843932
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.7328767123287672,
"acc_norm_stderr,none": 0.03674407640319397
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.792,
"acc_norm_stderr,none": 0.025721398901416368
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.804,
"acc_norm_stderr,none": 0.025156857313255922
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.7584269662921348,
"acc_norm_stderr,none": 0.032173216138332565
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.848,
"acc_norm_stderr,none": 0.022752024491765464
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.236,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3691275167785235,
"acc_norm_stderr,none": 0.013986495045275629,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.37373737373737376,
"acc_norm_stderr,none": 0.03446897738659336
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.38461538461538464,
"acc_norm_stderr,none": 0.02083955266087989
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3482142857142857,
"acc_norm_stderr,none": 0.022533152157915175
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.5415896487985212,
"prompt_level_strict_acc_stderr,none": 0.021442010560476468,
"inst_level_strict_acc,none": 0.6594724220623501,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.6414048059149723,
"prompt_level_loose_acc_stderr,none": 0.020638182918873173,
"inst_level_loose_acc,none": 0.737410071942446,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.2764350453172205,
"exact_match_stderr,none": 0.011708266026998851,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.43322475570032576,
"exact_match_stderr,none": 0.028327050442298423
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.2926829268292683,
"exact_match_stderr,none": 0.04119323030208565
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.17424242424242425,
"exact_match_stderr,none": 0.03314115103435667
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.09642857142857143,
"exact_match_stderr,none": 0.017671849720607317
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.2662337662337662,
"exact_match_stderr,none": 0.03573260790443323
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.44559585492227977,
"exact_match_stderr,none": 0.03587014986075661
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.14814814814814814,
"exact_match_stderr,none": 0.030688647610352705
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.5339095744680851,
"acc_stderr,none": 0.004547975138689626
},
"leaderboard_musr": {
"acc_norm,none": 0.4748677248677249,
"acc_norm_stderr,none": 0.017959022038877108,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.375,
"acc_norm_stderr,none": 0.03031695312954162
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
}
},
"leaderboard": {
"inst_level_loose_acc,none": 0.737410071942446,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.5415896487985212,
"prompt_level_strict_acc_stderr,none": 0.021442010560476468,
"acc_norm,none": 0.5764690621351667,
"acc_norm_stderr,none": 0.005208747427765323,
"exact_match,none": 0.2764350453172205,
"exact_match_stderr,none": 0.011708266026998851,
"inst_level_strict_acc,none": 0.6594724220623501,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.5339095744680851,
"acc_stderr,none": 0.004547975138689626,
"prompt_level_loose_acc,none": 0.6414048059149723,
"prompt_level_loose_acc_stderr,none": 0.020638182918873173,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6327026557889255,
"acc_norm_stderr,none": 0.005886473721108874,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.884,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6042780748663101,
"acc_norm_stderr,none": 0.035855600715925424
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.896,
"acc_norm_stderr,none": 0.019345100974843932
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.7328767123287672,
"acc_norm_stderr,none": 0.03674407640319397
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.792,
"acc_norm_stderr,none": 0.025721398901416368
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.804,
"acc_norm_stderr,none": 0.025156857313255922
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.7584269662921348,
"acc_norm_stderr,none": 0.032173216138332565
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.848,
"acc_norm_stderr,none": 0.022752024491765464
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.236,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3691275167785235,
"acc_norm_stderr,none": 0.013986495045275629,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.37373737373737376,
"acc_norm_stderr,none": 0.03446897738659336
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.38461538461538464,
"acc_norm_stderr,none": 0.02083955266087989
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3482142857142857,
"acc_norm_stderr,none": 0.022533152157915175
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.5415896487985212,
"prompt_level_strict_acc_stderr,none": 0.021442010560476468,
"inst_level_strict_acc,none": 0.6594724220623501,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.6414048059149723,
"prompt_level_loose_acc_stderr,none": 0.020638182918873173,
"inst_level_loose_acc,none": 0.737410071942446,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.2764350453172205,
"exact_match_stderr,none": 0.011708266026998851,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.43322475570032576,
"exact_match_stderr,none": 0.028327050442298423
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.2926829268292683,
"exact_match_stderr,none": 0.04119323030208565
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.17424242424242425,
"exact_match_stderr,none": 0.03314115103435667
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.09642857142857143,
"exact_match_stderr,none": 0.017671849720607317
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.2662337662337662,
"exact_match_stderr,none": 0.03573260790443323
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.44559585492227977,
"exact_match_stderr,none": 0.03587014986075661
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.14814814814814814,
"exact_match_stderr,none": 0.030688647610352705
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.5339095744680851,
"acc_stderr,none": 0.004547975138689626
},
"leaderboard_musr": {
"acc_norm,none": 0.4748677248677249,
"acc_norm_stderr,none": 0.017959022038877108,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.375,
"acc_norm_stderr,none": 0.03031695312954162
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
}
}
```
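If you prefer to work with the raw aggregated file rather than the `datasets` configurations, a minimal sketch is shown below. The file path is taken from the link above; the exact layout of the JSON beyond the keys quoted in this card is not guaranteed, so the snippet only inspects the top-level keys:
```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results file linked above from the dataset repository.
results_path = hf_hub_download(
    repo_id="open-llm-leaderboard/qingy2019__Qwen2.5-Math-14B-Instruct-details",
    filename="qingy2019__Qwen2.5-Math-14B-Instruct/results_2024-12-02T05-08-40.315655.json",
    repo_type="dataset",
)

with open(results_path) as f:
    results = json.load(f)

# Inspect the top-level structure before digging into individual metrics.
print(list(results.keys()))
```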
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Adelante/Query_Augmentation_by_re-explain | Adelante | "2024-12-02T05:23:17Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T05:14:25Z" | ---
dataset_info:
features:
- name: ID
dtype: string
- name: Informal name
dtype: string
- name: Gen result
dtype: string
- name: Informal statement
dtype: string
splits:
- name: train
num_bytes: 1172824
num_examples: 2000
download_size: 552066
dataset_size: 1172824
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/zelk12__MT1-Gen3-gemma-2-9B-details | open-llm-leaderboard | "2024-12-02T05:29:32Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T05:25:24Z" | ---
pretty_name: Evaluation run of zelk12/MT1-Gen3-gemma-2-9B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [zelk12/MT1-Gen3-gemma-2-9B](https://huggingface.co/zelk12/MT1-Gen3-gemma-2-9B)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/zelk12__MT1-Gen3-gemma-2-9B-details\"\
,\n\tname=\"zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_boolean_expressions\",\n\
\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-02T05-25-21.661198](https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT1-Gen3-gemma-2-9B-details/blob/main/zelk12__MT1-Gen3-gemma-2-9B/results_2024-12-02T05-25-21.661198.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.789279112754159,\n \"\
prompt_level_loose_acc_stderr,none\": 0.017549801883664215,\n \"exact_match,none\"\
: 0.11782477341389729,\n \"exact_match_stderr,none\": 0.008505757757465697,\n\
\ \"inst_level_strict_acc,none\": 0.8285371702637889,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\": 0.5506550784796991,\n\
\ \"acc_norm_stderr,none\": 0.005283406997706799,\n \"prompt_level_strict_acc,none\"\
: 0.7634011090573013,\n \"prompt_level_strict_acc_stderr,none\": 0.018288827582625598,\n\
\ \"acc,none\": 0.43492353723404253,\n \"acc_stderr,none\"\
: 0.004519695757201688,\n \"inst_level_loose_acc,none\": 0.8501199040767387,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.6090956431175143,\n \"acc_norm_stderr,none\": 0.006039109302546755,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6363636363636364,\n\
\ \"acc_norm_stderr,none\": 0.03527198153014412\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.636,\n\
\ \"acc_norm_stderr,none\": 0.030491555220405475\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\":\
\ 0.030491555220405475\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.712,\n \
\ \"acc_norm_stderr,none\": 0.028697004587398257\n },\n \"\
leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\": \" \
\ - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.816,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\":\
\ 0.029933259094191533\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.308,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.6164383561643836,\n \"acc_norm_stderr,none\": 0.04038112474853568\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.728,\n \"acc_norm_stderr,none\": 0.028200088296309975\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.812,\n \
\ \"acc_norm_stderr,none\": 0.02476037772775051\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.6629213483146067,\n\
\ \"acc_norm_stderr,none\": 0.03553120966481325\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.84,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.856,\n\
\ \"acc_norm_stderr,none\": 0.022249407735450245\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\":\
\ 0.028697004587398253\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.512,\n\
\ \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.348993288590604,\n \
\ \"acc_norm_stderr,none\": 0.013822053559016792,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.35858585858585856,\n \"acc_norm_stderr,none\": 0.034169036403915276\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3516483516483517,\n\
\ \"acc_norm_stderr,none\": 0.02045320407062836\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.34151785714285715,\n \"acc_norm_stderr,none\"\
: 0.022429776589214533\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7634011090573013,\n \"prompt_level_strict_acc_stderr,none\": 0.018288827582625598,\n\
\ \"inst_level_strict_acc,none\": 0.8285371702637889,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.789279112754159,\n \"prompt_level_loose_acc_stderr,none\": 0.017549801883664215,\n\
\ \"inst_level_loose_acc,none\": 0.8501199040767387,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.11782477341389729,\n \"exact_match_stderr,none\"\
: 0.008505757757465697,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.247557003257329,\n\
\ \"exact_match_stderr,none\": 0.024672530661985218\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.07317073170731707,\n \"exact_match_stderr,none\": 0.023577005978097667\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.045454545454545456,\n\
\ \"exact_match_stderr,none\": 0.018199158975632696\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.025,\n \"exact_match_stderr,none\": 0.009346956263824575\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \"\
\ - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.08441558441558442,\n\
\ \"exact_match_stderr,none\": 0.022475781231866967\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.21243523316062177,\n \"exact_match_stderr,none\"\
: 0.02951928261681729\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.02962962962962963,\n \"exact_match_stderr,none\"\
: 0.014648038602753809\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.43492353723404253,\n\
\ \"acc_stderr,none\": 0.004519695757201688\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.42328042328042326,\n \"acc_norm_stderr,none\"\
: 0.01759794820604658,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.548,\n\
\ \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.29296875,\n \"acc_norm_stderr,none\"\
: 0.028500984607927556\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.789279112754159,\n \"prompt_level_loose_acc_stderr,none\": 0.017549801883664215,\n\
\ \"exact_match,none\": 0.11782477341389729,\n \"exact_match_stderr,none\"\
: 0.008505757757465697,\n \"inst_level_strict_acc,none\": 0.8285371702637889,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\"\
: 0.5506550784796991,\n \"acc_norm_stderr,none\": 0.005283406997706799,\n\
\ \"prompt_level_strict_acc,none\": 0.7634011090573013,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.018288827582625598,\n \"acc,none\": 0.43492353723404253,\n \"\
acc_stderr,none\": 0.004519695757201688,\n \"inst_level_loose_acc,none\"\
: 0.8501199040767387,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \
\ \"acc_norm,none\": 0.6090956431175143,\n \"acc_norm_stderr,none\": 0.006039109302546755,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"\
acc_norm,none\": 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6363636363636364,\n \"acc_norm_stderr,none\"\
: 0.03527198153014412\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\"\
: 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n },\n \"\
leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.816,\n \"acc_norm_stderr,none\": 0.02455581299422255\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\": 0.029933259094191533\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.308,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.6164383561643836,\n\
\ \"acc_norm_stderr,none\": 0.04038112474853568\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.728,\n \"acc_norm_stderr,none\": 0.028200088296309975\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.812,\n \"acc_norm_stderr,none\": 0.02476037772775051\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6629213483146067,\n \"acc_norm_stderr,none\"\
: 0.03553120966481325\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.84,\n \"acc_norm_stderr,none\": 0.023232714782060626\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.856,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.348993288590604,\n\
\ \"acc_norm_stderr,none\": 0.013822053559016792,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.35858585858585856,\n\
\ \"acc_norm_stderr,none\": 0.034169036403915276\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.3516483516483517,\n \"acc_norm_stderr,none\": 0.02045320407062836\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.34151785714285715,\n \"acc_norm_stderr,none\"\
: 0.022429776589214533\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.7634011090573013,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018288827582625598,\n \
\ \"inst_level_strict_acc,none\": 0.8285371702637889,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.789279112754159,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.017549801883664215,\n \"inst_level_loose_acc,none\"\
: 0.8501199040767387,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.11782477341389729,\n\
\ \"exact_match_stderr,none\": 0.008505757757465697,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.247557003257329,\n \"exact_match_stderr,none\": 0.024672530661985218\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.07317073170731707,\n \"exact_match_stderr,none\": 0.023577005978097667\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.045454545454545456,\n \"exact_match_stderr,none\"\
: 0.018199158975632696\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.025,\n \"exact_match_stderr,none\": 0.009346956263824575\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.08441558441558442,\n \"exact_match_stderr,none\"\
: 0.022475781231866967\n },\n \"leaderboard_math_prealgebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\"\
: 0.21243523316062177,\n \"exact_match_stderr,none\": 0.02951928261681729\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.02962962962962963,\n\
\ \"exact_match_stderr,none\": 0.014648038602753809\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.43492353723404253,\n\
\ \"acc_stderr,none\": 0.004519695757201688\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.42328042328042326,\n \"acc_norm_stderr,none\"\
: 0.01759794820604658,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.29296875,\n\
\ \"acc_norm_stderr,none\": 0.028500984607927556\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n }\n}\n```"
repo_url: https://huggingface.co/zelk12/MT1-Gen3-gemma-2-9B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_navigate
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_snarks
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_gpqa_extended
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_gpqa_main
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_ifeval
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_ifeval_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_mmlu_pro
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_musr_object_placements
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-02T05-25-21.661198.jsonl'
- config_name: zelk12__MT1-Gen3-gemma-2-9B__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_02T05_25_21.661198
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T05-25-21.661198.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-02T05-25-21.661198.jsonl'
---
# Dataset Card for Evaluation run of zelk12/MT1-Gen3-gemma-2-9B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [zelk12/MT1-Gen3-gemma-2-9B](https://huggingface.co/zelk12/MT1-Gen3-gemma-2-9B)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/zelk12__MT1-Gen3-gemma-2-9B-details",
name="zelk12__MT1-Gen3-gemma-2-9B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-02T05-25-21.661198](https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT1-Gen3-gemma-2-9B-details/blob/main/zelk12__MT1-Gen3-gemma-2-9B/results_2024-12-02T05-25-21.661198.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.789279112754159,
"prompt_level_loose_acc_stderr,none": 0.017549801883664215,
"exact_match,none": 0.11782477341389729,
"exact_match_stderr,none": 0.008505757757465697,
"inst_level_strict_acc,none": 0.8285371702637889,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.5506550784796991,
"acc_norm_stderr,none": 0.005283406997706799,
"prompt_level_strict_acc,none": 0.7634011090573013,
"prompt_level_strict_acc_stderr,none": 0.018288827582625598,
"acc,none": 0.43492353723404253,
"acc_stderr,none": 0.004519695757201688,
"inst_level_loose_acc,none": 0.8501199040767387,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6090956431175143,
"acc_norm_stderr,none": 0.006039109302546755,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6363636363636364,
"acc_norm_stderr,none": 0.03527198153014412
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.308,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6164383561643836,
"acc_norm_stderr,none": 0.04038112474853568
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.728,
"acc_norm_stderr,none": 0.028200088296309975
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.812,
"acc_norm_stderr,none": 0.02476037772775051
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6629213483146067,
"acc_norm_stderr,none": 0.03553120966481325
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.84,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.856,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.348993288590604,
"acc_norm_stderr,none": 0.013822053559016792,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.35858585858585856,
"acc_norm_stderr,none": 0.034169036403915276
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3516483516483517,
"acc_norm_stderr,none": 0.02045320407062836
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.34151785714285715,
"acc_norm_stderr,none": 0.022429776589214533
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7634011090573013,
"prompt_level_strict_acc_stderr,none": 0.018288827582625598,
"inst_level_strict_acc,none": 0.8285371702637889,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.789279112754159,
"prompt_level_loose_acc_stderr,none": 0.017549801883664215,
"inst_level_loose_acc,none": 0.8501199040767387,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.11782477341389729,
"exact_match_stderr,none": 0.008505757757465697,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.247557003257329,
"exact_match_stderr,none": 0.024672530661985218
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.07317073170731707,
"exact_match_stderr,none": 0.023577005978097667
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.045454545454545456,
"exact_match_stderr,none": 0.018199158975632696
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.025,
"exact_match_stderr,none": 0.009346956263824575
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.08441558441558442,
"exact_match_stderr,none": 0.022475781231866967
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.21243523316062177,
"exact_match_stderr,none": 0.02951928261681729
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.02962962962962963,
"exact_match_stderr,none": 0.014648038602753809
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43492353723404253,
"acc_stderr,none": 0.004519695757201688
},
"leaderboard_musr": {
"acc_norm,none": 0.42328042328042326,
"acc_norm_stderr,none": 0.01759794820604658,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.29296875,
"acc_norm_stderr,none": 0.028500984607927556
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.789279112754159,
"prompt_level_loose_acc_stderr,none": 0.017549801883664215,
"exact_match,none": 0.11782477341389729,
"exact_match_stderr,none": 0.008505757757465697,
"inst_level_strict_acc,none": 0.8285371702637889,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.5506550784796991,
"acc_norm_stderr,none": 0.005283406997706799,
"prompt_level_strict_acc,none": 0.7634011090573013,
"prompt_level_strict_acc_stderr,none": 0.018288827582625598,
"acc,none": 0.43492353723404253,
"acc_stderr,none": 0.004519695757201688,
"inst_level_loose_acc,none": 0.8501199040767387,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6090956431175143,
"acc_norm_stderr,none": 0.006039109302546755,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6363636363636364,
"acc_norm_stderr,none": 0.03527198153014412
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.308,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6164383561643836,
"acc_norm_stderr,none": 0.04038112474853568
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.728,
"acc_norm_stderr,none": 0.028200088296309975
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.812,
"acc_norm_stderr,none": 0.02476037772775051
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6629213483146067,
"acc_norm_stderr,none": 0.03553120966481325
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.84,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.856,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.348993288590604,
"acc_norm_stderr,none": 0.013822053559016792,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.35858585858585856,
"acc_norm_stderr,none": 0.034169036403915276
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3516483516483517,
"acc_norm_stderr,none": 0.02045320407062836
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.34151785714285715,
"acc_norm_stderr,none": 0.022429776589214533
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7634011090573013,
"prompt_level_strict_acc_stderr,none": 0.018288827582625598,
"inst_level_strict_acc,none": 0.8285371702637889,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.789279112754159,
"prompt_level_loose_acc_stderr,none": 0.017549801883664215,
"inst_level_loose_acc,none": 0.8501199040767387,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.11782477341389729,
"exact_match_stderr,none": 0.008505757757465697,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.247557003257329,
"exact_match_stderr,none": 0.024672530661985218
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.07317073170731707,
"exact_match_stderr,none": 0.023577005978097667
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.045454545454545456,
"exact_match_stderr,none": 0.018199158975632696
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.025,
"exact_match_stderr,none": 0.009346956263824575
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.08441558441558442,
"exact_match_stderr,none": 0.022475781231866967
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.21243523316062177,
"exact_match_stderr,none": 0.02951928261681729
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.02962962962962963,
"exact_match_stderr,none": 0.014648038602753809
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43492353723404253,
"acc_stderr,none": 0.004519695757201688
},
"leaderboard_musr": {
"acc_norm,none": 0.42328042328042326,
"acc_norm_stderr,none": 0.01759794820604658,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.29296875,
"acc_norm_stderr,none": 0.028500984607927556
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
dgambettaphd/D_gen7_run1_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-12-02T05:25:53Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T05:25:50Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 584248
num_examples: 1000
download_size: 351952
dataset_size: 584248
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Bruece/domainnet-126-edge-image-painting | Bruece | "2024-12-02T05:59:04Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T05:33:08Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: edge_image
dtype: image
splits:
- name: train
num_bytes: 2018430048.984
num_examples: 24032
download_size: 2029006402
dataset_size: 2018430048.984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Bruece/domainnet-126-edge-image-real | Bruece | "2024-12-02T06:02:42Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T05:35:57Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: edge_image
dtype: image
splits:
- name: train
num_bytes: 3089508693.173
num_examples: 55697
download_size: 3766376456
dataset_size: 3089508693.173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
richmondsin/truthfulqa_en_mc1_results | richmondsin | "2024-12-02T05:40:08Z" | 3 | 0 | [
"region:us"
] | null | "2024-12-02T05:39:57Z" | ---
pretty_name: Evaluation run of google/gemma-2-2b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)\nThe dataset is\
\ composed of 0 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 3 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"richmondsin/truthfulqa_en_mc1_results\"\
,\n\tname=\"google__gemma-2-2b__truthfulqa_en_mc1\",\n\tsplit=\"latest\"\n)\n```\n\
\n## Latest results\n\nThese are the [latest results from run 2024-12-02T00-39-57.674643](https://huggingface.co/datasets/richmondsin/truthfulqa_en_mc1_results/blob/main/google/gemma-2-2b/results_2024-12-02T00-39-57.674643.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"truthfulqa_en_mc1\"\
: {\n \"alias\": \"truthfulqa_en_mc1\",\n \"acc,none\": 0.2579250720461095,\n\
\ \"acc_stderr,none\": 0.016618967642626447,\n \"acc_norm,none\"\
: 0.27521613832853026,\n \"acc_norm_stderr,none\": 0.016965809584321628\n\
\ }\n },\n \"truthfulqa_en_mc1\": {\n \"alias\": \"truthfulqa_en_mc1\"\
,\n \"acc,none\": 0.2579250720461095,\n \"acc_stderr,none\": 0.016618967642626447,\n\
\ \"acc_norm,none\": 0.27521613832853026,\n \"acc_norm_stderr,none\"\
: 0.016965809584321628\n }\n}\n```"
repo_url: https://huggingface.co/google/gemma-2-2b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: google__gemma-2-2b__truthfulqa_en_mc1
data_files:
- split: 2024_12_02T00_39_57.674643
path:
- '**/samples_truthfulqa_en_mc1_2024-12-02T00-39-57.674643.jsonl'
- split: latest
path:
- '**/samples_truthfulqa_en_mc1_2024-12-02T00-39-57.674643.jsonl'
---
# Dataset Card for Evaluation run of google/gemma-2-2b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
The dataset is composed of 0 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"richmondsin/truthfulqa_en_mc1_results",
name="google__gemma-2-2b__truthfulqa_en_mc1",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-02T00-39-57.674643](https://huggingface.co/datasets/richmondsin/truthfulqa_en_mc1_results/blob/main/google/gemma-2-2b/results_2024-12-02T00-39-57.674643.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"truthfulqa_en_mc1": {
"alias": "truthfulqa_en_mc1",
"acc,none": 0.2579250720461095,
"acc_stderr,none": 0.016618967642626447,
"acc_norm,none": 0.27521613832853026,
"acc_norm_stderr,none": 0.016965809584321628
}
},
"truthfulqa_en_mc1": {
"alias": "truthfulqa_en_mc1",
"acc,none": 0.2579250720461095,
"acc_stderr,none": 0.016618967642626447,
"acc_norm,none": 0.27521613832853026,
"acc_norm_stderr,none": 0.016965809584321628
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Harold328/OmniBench-99 | Harold328 | "2024-12-02T08:08:11Z" | 3 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-12-02T05:41:53Z" | ---
license: apache-2.0
---
<!-- # OmniBench-99 -->
## Overview
The OmniBench-99 benchmark is published in [OmniCreator](https://haroldchen19.github.io/OmniCreator-Page/). It contains 99 videos with varied content (*i.e.*, Environment, Human/Animal, and Object) and is designed to offer a comprehensive platform
for evaluating generative video editing, focusing on both editing **types** and **scenarios**.
[Paper Link](https://haroldchen19.github.io/OmniCreator-Page/)
[Project Page](https://haroldchen19.github.io/OmniCreator-Page/)
## Dataset Structure
Unlike previous benchmarks that evaluate only four editing types, **OmniBench-99** expands the scope to include both editing types and scenarios. Specifically:
* *Environment*: Scenarios are developed for **Background**, **Weather**, and **Time** edits.
* *Object*: Scenarios are created for **Addition**, **Removal**, and **Replacement** edits.
* *Human/Animal*: Scenarios are designed for **Appearance** and **Motion/Pose** edits.
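As a minimal usage sketch, the benchmark files can be fetched locally with `huggingface_hub` (the repo id comes from this card; the video file extension and any per-category folder layout are assumptions, not documented here):
```python
import pathlib
from huggingface_hub import snapshot_download

# Download the OmniBench-99 videos and metadata into a local folder.
local_dir = snapshot_download(
    repo_id="Harold328/OmniBench-99",
    repo_type="dataset",
)

# Walk the downloaded files; the "*.mp4" pattern (and any per-category
# subfolders such as Environment/, Object/, Human/) is an illustrative guess.
for path in sorted(pathlib.Path(local_dir).rglob("*.mp4")):
    print(path)
```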
|
Sin2pi/Whisper_like_model | Sin2pi | "2024-12-02T06:22:53Z" | 3 | 0 | [
"license:mit",
"region:us"
] | null | "2024-12-02T05:42:40Z" | ---
license: mit
---
OpenAI's original implementation of their Whisper model, integrated with the HF Trainer and datasets. Experiment with OpenAI's original version without the need for an OpenAI-to-HF conversion. Ready-to-go script; just install the dependencies.
Also included is a copy of a training loop in pure PyTorch that includes datasets, dataloaders, collators, etc. The other two scripts include experimental models that I've been working on. |
akhooli/mmarco_111k_test_qs | akhooli | "2024-12-02T05:56:41Z" | 3 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T05:54:45Z" | ---
license: mit
dataset_info:
features:
- name: query_id
dtype: int64
- name: text
dtype: string
- name: document_ids
sequence: string
- name: scores
sequence: float64
- name: means
dtype: float64
- name: stds
dtype: float64
- name: maxmins
dtype: float64
- name: includes
dtype: string
splits:
- name: train
num_bytes: 79143903
num_examples: 111869
download_size: 44316635
dataset_size: 79143903
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dogtooth/tulu_8b_generated_uf_gold_scored_iter2 | dogtooth | "2024-12-02T06:52:47Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T06:52:45Z" | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: reference_completion
dtype: string
- name: reference_completion_score
struct:
- name: Skywork/Skywork-Reward-Gemma-2-27B-v0.2
dtype: float64
- name: chosen_score
struct:
- name: Skywork/Skywork-Reward-Gemma-2-27B-v0.2
dtype: float64
- name: rejected_score
struct:
- name: Skywork/Skywork-Reward-Gemma-2-27B-v0.2
dtype: float64
splits:
- name: train
num_bytes: 28411723
num_examples: 5678
download_size: 15737941
dataset_size: 28411723
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
siqi00/llama3_gsm8k_question_0.8_0.95_-1_256 | siqi00 | "2024-12-02T07:07:59Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T07:07:55Z" | ---
dataset_info:
features:
- name: real
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_1
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_2
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_3
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_4
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_5
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_6
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_7
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_8
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_9
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_10
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_11
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_12
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_13
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_14
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_15
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_16
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_17
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_18
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_19
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_20
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_21
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_22
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated_23
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 154728513
num_examples: 7473
download_size: 69082303
dataset_size: 154728513
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
krigeta/dragonballonly | krigeta | "2024-12-02T07:47:02Z" | 3 | 1 | [
"license:mit",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"art"
] | null | "2024-12-02T07:11:02Z" | ---
license: mit
tags:
- art
size_categories:
- n<1K
---
# Bangumi Image Base of Dragon_ball_only
This is the image base of bangumi dragon_ball_only; we detected 6 characters and 353 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 169 | [Download](0\dataset.zip) | ![preview 1](0\preview_1.png) | ![preview 2](0\preview_2.png) | ![preview 3](0\preview_3.png) | ![preview 4](0\preview_4.png) | ![preview 5](0\preview_5.png) | ![preview 6](0\preview_6.png) | ![preview 7](0\preview_7.png) | ![preview 8](0\preview_8.png) |
| 1 | 10 | [Download](1\dataset.zip) | ![preview 1](1\preview_1.png) | ![preview 2](1\preview_2.png) | ![preview 3](1\preview_3.png) | ![preview 4](1\preview_4.png) | ![preview 5](1\preview_5.png) | ![preview 6](1\preview_6.png) | ![preview 7](1\preview_7.png) | ![preview 8](1\preview_8.png) |
| 2 | 134 | [Download](2\dataset.zip) | ![preview 1](2\preview_1.png) | ![preview 2](2\preview_2.png) | ![preview 3](2\preview_3.png) | ![preview 4](2\preview_4.png) | ![preview 5](2\preview_5.png) | ![preview 6](2\preview_6.png) | ![preview 7](2\preview_7.png) | ![preview 8](2\preview_8.png) |
| 3 | 12 | [Download](3\dataset.zip) | ![preview 1](3\preview_1.png) | ![preview 2](3\preview_2.png) | ![preview 3](3\preview_3.png) | ![preview 4](3\preview_4.png) | ![preview 5](3\preview_5.png) | ![preview 6](3\preview_6.png) | ![preview 7](3\preview_7.png) | ![preview 8](3\preview_8.png) |
| 4 | 7 | [Download](4\dataset.zip) | ![preview 1](4\preview_1.png) | ![preview 2](4\preview_2.png) | ![preview 3](4\preview_3.png) | ![preview 4](4\preview_4.png) | ![preview 5](4\preview_5.png) | ![preview 6](4\preview_6.png) | ![preview 7](4\preview_7.png) | N/A |
| noise | 21 | [Download](-1\dataset.zip) | ![preview 1](-1\preview_1.png) | ![preview 2](-1\preview_2.png) | ![preview 3](-1\preview_3.png) | ![preview 4](-1\preview_4.png) | ![preview 5](-1\preview_5.png) | ![preview 6](-1\preview_6.png) | ![preview 7](-1\preview_7.png) | ![preview 8](-1\preview_8.png) |
|
mesolitica/Malaysian-STT-Whisper | mesolitica | "2024-12-01T06:11:20Z" | 2 | 1 | [
"task_categories:automatic-speech-recognition",
"language:ms",
"language:en",
"language:zh",
"language:ta",
"language:id",
"region:us"
] | [
"automatic-speech-recognition"
] | "2024-08-28T16:10:17Z" | ---
task_categories:
- automatic-speech-recognition
language:
- ms
- en
- zh
- ta
- id
---
# Malaysian STT Whisper format
Up to 15k hours annotated; we did heavy post-filtering, post-processing, and post-translation to improve the Whisper Large V3 pseudolabels.
Source code at https://github.com/mesolitica/malaysian-dataset/tree/master/speech-to-text-semisupervised/distilled-malaysian-whisper
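As an illustration, here is a minimal loading sketch with the `datasets` library (the repo id comes from this card; the split name, streaming choice, and column names are assumptions, since the file layout is not documented here):
```python
from datasets import load_dataset

# Stream the Whisper-format STT data rather than downloading ~15k hours up front.
ds = load_dataset(
    "mesolitica/Malaysian-STT-Whisper",
    split="train",   # assumed split name
    streaming=True,
)

# Inspect a single example; the exact columns depend on the Whisper-format schema.
print(next(iter(ds)))
```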
## Dataset involved
1. Malaysian context, https://huggingface.co/datasets/mesolitica/pseudolabel-malaya-speech-stt-train-whisper-large-v3-timestamp
2. Malaysian context, https://huggingface.co/datasets/mesolitica/pseudolabel-malaysian-youtube-whisper-large-v3-timestamp
3. Malay audiobook, https://huggingface.co/datasets/mesolitica/pseudolabel-nusantara-large-v3-timestamp
4. Singaporean context, https://huggingface.co/datasets/mesolitica/pseudolabel-imda-large-v3-timestamp
5. Indonesian context, https://huggingface.co/datasets/mesolitica/pseudolabel-indonesian-large-v3-timestamp
6. Mandarin audio, https://huggingface.co/datasets/mesolitica/pseudolabel-mandarin-large-v3-timestamp
7. Tamil audio, https://huggingface.co/datasets/mesolitica/pseudolabel-tamil-large-v3-timestamp
8. Science context, https://huggingface.co/datasets/mesolitica/pseudolabel-science-large-v3-timestamp
9. Malay sarawak, https://huggingface.co/datasets/malaysia-ai/sarawakmalay-whisper-format
10. Scripted Malay Daily Use Speech Corpus, https://huggingface.co/datasets/malaysia-ai/scripted-malay-daily-use-speech-corpus-whisper-format
11. Malay Conversational Speech Corpus, https://huggingface.co/datasets/malaysia-ai/malay-conversational-speech-corpus-whisper-format
12. Iban, https://huggingface.co/datasets/malaysia-ai/iban-whisper-format
13. Malay dialects, https://huggingface.co/datasets/mesolitica/pseudolabel-malay-dialects-large-v3-timestamp |
BACKENDAPI2024/radarpoliticaldatasetredditscrap11272024 | BACKENDAPI2024 | "2024-11-28T05:26:38Z" | 2 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-28T01:52:06Z" | ---
license: mit
---
|
Turbo-AI/data-cross | Turbo-AI | "2024-11-28T04:02:53Z" | 2 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-28T04:02:20Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: context
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1845906517
num_examples: 1059592
download_size: 599220748
dataset_size: 1845906517
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Babelscape/LLM-Oasis_unfactual_text_generation | Babelscape | "2024-12-02T13:58:58Z" | 2 | 0 | [
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2411.19655",
"region:us"
] | null | "2024-11-28T11:19:53Z" | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: unfactual_claims
sequence: string
- name: paraphrase
dtype: string
- name: unfactual_text
dtype: string
splits:
- name: validation
num_bytes: 29003066
num_examples: 13838
- name: train
num_bytes: 139294158
num_examples: 67385
download_size: 113033236
dataset_size: 168297224
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
language: en
license:
- cc-by-nc-sa-4.0
---
# Babelscape/LLM-Oasis_unfactual_text_generation
## Dataset Description
**LLM-Oasis_unfactual_text_generation** is part of the LLM-Oasis suite and contains unfactual texts generated from a set of falsified claims extracted from a Wikipedia passage and its paraphrase.
This dataset corresponds to the unfactual text generation step described in Section 3.4 of the [LLM-Oasis paper](https://arxiv.org/abs/2411.19655). Please refer to our [GitHub repository](https://github.com/Babelscape/LLM-Oasis) for more information on the overall data generation pipeline of LLM-Oasis.
### Features
- **title**: The title of the Wikipedia page.
- **text**: A passage of 5 sentences from the Wikipedia page.
- **unfactual_claims**: A sequence of claims (including one unfactual claim) extracted from the text.
- **paraphrase**: A paraphrased version of the original text.
- **unfactual_text**: The final unfactual text generated from the unfactual claims and paraphrase.
### Dataset Statistics
- **Train Split**:
- Number of examples: 67,385
- **Validation Split**:
- Number of examples: 13,838
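As a usage sketch, the splits and features listed above can be loaded with the `datasets` library (shown for illustration only; swap the split as needed):
```python
from datasets import load_dataset

# Load the train split of the unfactual text generation data.
ds = load_dataset("Babelscape/LLM-Oasis_unfactual_text_generation", split="train")

# Each example carries the passage, its paraphrase, the falsified claims,
# and the final unfactual text (see the feature list above).
example = ds[0]
print(example["title"])
print(example["unfactual_text"])
```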
## License
This work is under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
## Citation
If you use LLM-Oasis in your work, please cite our [paper](https://arxiv.org/abs/2411.19655):
```
@misc{scirè2024truthmirageendtoendfactuality,
title={Truth or Mirage? Towards End-to-End Factuality Evaluation with LLM-OASIS},
author={Alessandro Scirè and Andrei Stefan Bejgu and Simone Tedeschi and Karim Ghonim and Federico Martelli and Roberto Navigli},
year={2024},
eprint={2411.19655},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.19655},
}
```
 |
RawrWoofMeow/Scenes | RawrWoofMeow | "2024-11-30T23:01:49Z" | 2 | 0 | [
"license:unlicense",
"region:us"
] | null | "2024-11-29T00:22:49Z" | ---
license: unlicense
---
|
kojikubota/Advanced-Code-Review-Agent | kojikubota | "2024-11-30T03:39:00Z" | 2 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-30T03:38:31Z" | ---
license: mit
---
# Advanced Code Review Support Agent
An AI agent specialized in code review, capable of handling any code-related task. It provides comprehensive support for code analysis, improvement suggestions, and optimization across a variety of programming languages and development environments.
![Status: Experimental](https://img.shields.io/badge/Status-Experimental-orange)
## Overview
The Advanced Code Review Support Agent is designed to enhance code quality and maintainability by offering detailed analysis and suggestions. It supports multi-language environments and adapts to different coding styles and project scales.
### Key Features
- **Multi-language Support**: Handles all programming languages, considering specific syntax and conventions.
- **Project Scale Adaptability**: Suitable for small scripts to large-scale applications.
- **Diverse Environment Compatibility**: Works across desktop, web, and mobile platforms.
- **Style Adaptation**: Learns and aligns with user or team coding styles.
## Core Components
### 1. Code Analysis
- **Code Review**: Identifies issues in readability, maintainability, and naming conventions.
- **Bug Detection**: Finds syntax and logic errors, and suggests corrections.
- **Performance Optimization**: Proposes efficient algorithms and data structures.
### 2. Security and Style
- **Security Review**: Detects vulnerabilities and suggests mitigations.
- **Style Consistency**: Aligns with common style guides like PEP 8.
### 3. Documentation and Testing
- **Documentation Generation**: Adds descriptions to functions and classes.
- **Test Case Generation**: Proposes unit and integration tests.
## Implementation Process
1. **Input Analysis**: Detailed code analysis to identify potential issues.
2. **Issue Identification**: Enumerates issues from various perspectives.
3. **Improvement Suggestions**: Provides corrected code and proposals (see the sketch after this list).
4. **Additional Comments**: Explains benefits and best practices.
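A purely hypothetical illustration of steps 2 and 3 is sketched below; the function and its revision are invented for this example, and the agent itself returns free-form review text rather than executable output.
```python
# Before review: works for non-empty input, but a review would flag two issues:
# a ZeroDivisionError on an empty list and a missing docstring.
def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)

# After review: suggested correction with the edge case handled and documented.
def average(numbers):
    """Return the arithmetic mean of `numbers`, or None for an empty sequence."""
    if not numbers:
        return None
    return sum(numbers) / len(numbers)
```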
## Limitations and Considerations
- **No External Tool Support**: Relies solely on text analysis.
- **Knowledge Cutoff**: Limited to the knowledge scope of the GPT model.
- **No Legal Advice**: Does not provide legal interpretations.
## Future Development
- **Enhanced Pattern Recognition**: Advanced emergent learning.
- **Improved Knowledge Synthesis**: Deeper conceptual integration.
- **Extended Context Management**: More sophisticated session handling.
## License
This project is licensed under the [MIT License](LICENSE). |
chiyuanhsiao/Magpie_rank0_chunk1_interleaf | chiyuanhsiao | "2024-11-30T05:18:18Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T05:08:08Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: gen_input_config
struct:
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: task_category
dtype: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: reward_model
dtype: string
- name: instruct_reward
dtype: float64
- name: base_output
dtype: string
- name: base_reward
dtype: float64
- name: reward_difference
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: input_length
dtype: int64
- name: output_length
dtype: int64
- name: input_speech
dtype: audio
- name: output_speech
dtype: audio
- name: output_speech_cmu-arctic-xvectors_7306
dtype: audio
- name: input_unit
sequence: int64
- name: output_unit
sequence: int64
- name: output_unit_7306
sequence: int64
- name: output_7306_interleaf
dtype: string
- name: output_pseudo
dtype: string
- name: input_pseudo
dtype: string
splits:
- name: train
num_bytes: 11294661417.0
num_examples: 10024
download_size: 11030126549
dataset_size: 11294661417.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chiyuanhsiao/Magpie_rank0_chunk2_interleaf | chiyuanhsiao | "2024-11-30T07:48:10Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T07:37:46Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: gen_input_config
struct:
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: task_category
dtype: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: reward_model
dtype: string
- name: instruct_reward
dtype: float64
- name: base_output
dtype: string
- name: base_reward
dtype: float64
- name: reward_difference
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: input_length
dtype: int64
- name: output_length
dtype: int64
- name: input_speech
dtype: audio
- name: output_speech
dtype: audio
- name: output_speech_cmu-arctic-xvectors_7306
dtype: audio
- name: input_unit
sequence: int64
- name: output_unit
sequence: int64
- name: output_unit_7306
sequence: int64
- name: output_7306_interleaf
dtype: string
- name: output_pseudo
dtype: string
- name: input_pseudo
dtype: string
splits:
- name: train
num_bytes: 11332213004.0
num_examples: 10024
download_size: 11067047070
dataset_size: 11332213004.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ricardoSLabs/TIG_ss304_dataset | ricardoSLabs | "2024-12-01T01:59:09Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T08:23:34Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': burn through
'1': contamination
'2': good weld
'3': high travel speed
'4': lack of fusion
'5': lack of shielding gas
splits:
- name: train
num_bytes: 4549552261.38
num_examples: 24204
- name: validation
num_bytes: 1644249513.366
num_examples: 9694
- name: test
num_bytes: 2200199170.92
num_examples: 11160
download_size: 9894813192
dataset_size: 8394000945.666
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
DT4LM/albertbasev2_rte_faster-alzantot | DT4LM | "2024-11-30T08:45:55Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T08:45:52Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 41801
num_examples: 132
download_size: 36120
dataset_size: 41801
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/albertbasev2_rte_faster-alzantot_original | DT4LM | "2024-11-30T08:45:58Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T08:45:56Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 41523
num_examples: 132
download_size: 35609
dataset_size: 41523
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chiyuanhsiao/Magpie_rank0_chunk3_interleaf | chiyuanhsiao | "2024-11-30T10:13:23Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T10:04:30Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: gen_input_config
struct:
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: task_category
dtype: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: reward_model
dtype: string
- name: instruct_reward
dtype: float64
- name: base_output
dtype: string
- name: base_reward
dtype: float64
- name: reward_difference
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: input_length
dtype: int64
- name: output_length
dtype: int64
- name: input_speech
dtype: audio
- name: output_speech
dtype: audio
- name: output_speech_cmu-arctic-xvectors_7306
dtype: audio
- name: input_unit
sequence: int64
- name: output_unit
sequence: int64
- name: output_unit_7306
sequence: int64
- name: output_7306_interleaf
dtype: string
- name: output_pseudo
dtype: string
- name: input_pseudo
dtype: string
splits:
- name: train
num_bytes: 11358050213.5
num_examples: 10020
download_size: 11098322658
dataset_size: 11358050213.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LongThan/HalLinkLLM_mod_dataset_2 | LongThan | "2024-11-30T12:06:25Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T12:06:23Z" | ---
dataset_info:
features:
- name: knowledge
dtype: string
- name: question
dtype: string
- name: right_answer
dtype: string
- name: hallucinated_answer
dtype: string
splits:
- name: train
num_bytes: 5452144
num_examples: 10000
download_size: 3729897
dataset_size: 5452144
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LongThan/HalLinkLLM_mod_dataset_3 | LongThan | "2024-11-30T12:06:27Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T12:06:25Z" | ---
dataset_info:
features:
- name: knowledge
dtype: string
- name: question
dtype: string
- name: right_answer
dtype: string
- name: hallucinated_answer
dtype: string
splits:
- name: train
num_bytes: 5453425
num_examples: 10000
download_size: 3730403
dataset_size: 5453425
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nntdgrs/test11 | nntdgrs | "2024-11-30T12:29:05Z" | 2 | 0 | [
"license:llama3.1",
"region:us"
] | null | "2024-11-30T12:29:05Z" | ---
license: llama3.1
---
|
Newvel/entailment_dataset | Newvel | "2024-11-30T16:32:10Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T16:32:09Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: entailment
dtype: string
- name: task
dtype: string
- name: text
dtype: string
- name: hypothesis
dtype: string
splits:
- name: train
num_bytes: 482246
num_examples: 1841
- name: validation
num_bytes: 84907
num_examples: 326
download_size: 376660
dataset_size: 567153
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
DT4LM/albertbasev2_rte_pair_clare | DT4LM | "2024-11-30T17:09:29Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T17:05:08Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 78459
num_examples: 246
download_size: 58369
dataset_size: 78459
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/albertbasev2_rte_pair_clare_original | DT4LM | "2024-11-30T17:10:35Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T17:09:30Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 77270
num_examples: 246
download_size: 57254
dataset_size: 77270
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Yahya-Mohamed/fate7a | Yahya-Mohamed | "2024-11-30T18:12:48Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T18:12:45Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 975091.0
num_examples: 5
download_size: 976593
dataset_size: 975091.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Vera-ZWY/reddite2024elections_posterdemographic | Vera-ZWY | "2024-11-30T20:23:44Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T19:03:10Z" | ---
dataset_info:
features:
- name: title
dtype: string
- name: score
dtype: string
- name: id
dtype: string
- name: url
dtype: string
- name: num_comments
dtype: string
- name: created
dtype: timestamp[s]
- name: body
dtype: string
- name: content
dtype: string
- name: subreddit
dtype: string
- name: authors
dtype: string
- name: submission_exists
dtype: string
splits:
- name: train
num_bytes: 349264
num_examples: 429
download_size: 183276
dataset_size: 349264
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dnth/pixmo-ask-model-anything-images | dnth | "2024-12-01T03:46:23Z" | 2 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-01T03:30:53Z" | ---
dataset_info:
features:
- name: image_sha256
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 21974810178.04
num_examples: 153592
download_size: 15883131179
dataset_size: 21974810178.04
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|