datasetId (string) | author (string) | last_modified | downloads (int64) | likes (int64) | tags (sequence) | task_categories (sequence) | createdAt | card (string)
---|---|---|---|---|---|---|---|---
neoneye/simon-arc-solve-rotate-v12 | neoneye | "2024-11-21T22:31:10Z" | 0 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-21T22:29:55Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve rotate version 12
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the image gets rotated cw/ccw/180 and transposed.
The image sizes are between 1 and 4 pixels.
Predict the number of rows in the output image.
# Version 2
image size: 1-5.
# Version 3
image size: 1-5.
Added `flipx` and `flipy` transformations.
# Version 4
image size: 1-5.
number of tests: 1-2. Previously there was always just 1 test.
Added `flipa` and `flipb` transformations, which flip over the diagonals.
# Version 5
image size: 1-5.
number of tests: 1-2.
# Version 6
image size: 1-13.
# Version 7
Earlier predictions added to some of the rows.
# Version 8
Earlier predictions with focus on repair 1 bad pixel.
# Version 9
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 10
Replaced RLE compressed response with raw pixel response.
# Version 11
image size: 1-16.
# Version 12
I think the image sizes were too big for the model to make sense of the data. Trying with smaller images.
image size: 1-5.
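The default config above maps the `train` split to `data.jsonl`, so a minimal loading sketch with the Hugging Face `datasets` library looks like the following (exact field names are not documented in this card, so inspect them after loading):
```python
from datasets import load_dataset

# Minimal sketch: the frontmatter's default config points the train split at data.jsonl,
# so the datasets library can resolve it directly from the Hub.
ds = load_dataset("neoneye/simon-arc-solve-rotate-v12", split="train")
print(ds[0])  # inspect one task row; field names beyond the changelog are not documented here
```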
|
liuyanchen1015/Llama-3.2-1B_jailbroken_responses_all | liuyanchen1015 | "2024-11-21T22:30:20Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T22:30:19Z" | ---
dataset_info:
features:
- name: prompt_name
dtype: string
- name: jailbreak_prompt_name
dtype: string
- name: jailbreak_prompt_text
dtype: string
- name: original_prompt_text
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 30035209
num_examples: 9888
download_size: 5010265
dataset_size: 30035209
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nitaibezerra/govbrnews | nitaibezerra | "2024-11-21T23:24:01Z" | 0 | 0 | [
"license:agpl-3.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T22:38:24Z" | ---
license: agpl-3.0
dataset_info:
features:
- name: unique_id
dtype: string
- name: agency
dtype: string
- name: published_at
dtype: date32
- name: title
dtype: string
- name: url
dtype: string
- name: category
dtype: string
- name: tags
sequence: string
- name: content
dtype: string
- name: extracted_at
dtype: timestamp[us]
splits:
- name: train
num_bytes: 103687847
num_examples: 28704
download_size: 52139131
dataset_size: 103687847
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/code_contests_codegemma_passk-part1-of-1_graded | Asap7772 | "2024-11-21T22:38:30Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T22:38:26Z" | ---
dataset_info:
features:
- name: timeout
dtype: float64
- name: test_cases
struct:
- name: input
sequence: string
- name: output
sequence: string
- name: solutions
sequence: string
- name: name
dtype: string
- name: is_corrects
sequence: bool
splits:
- name: train
num_bytes: 37056176
num_examples: 102
download_size: 10725719
dataset_size: 37056176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tippawan/Finetune-mt-story-telling-221124 | Tippawan | "2024-11-21T22:45:33Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T22:45:32Z" | ---
dataset_info:
features:
- name: en
dtype: string
- name: th
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 719000
num_examples: 5629
- name: test
num_bytes: 145238
num_examples: 1126
- name: validation
num_bytes: 145163
num_examples: 1126
download_size: 577598
dataset_size: 1009401
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
huggingface/my-distiset-838c6e9d | huggingface | "2024-11-21T22:47:27Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-11-21T22:47:24Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: system_prompt
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 3955
num_examples: 1
download_size: 19437
dataset_size: 3955
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-838c6e9d
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/huggingface/my-distiset-838c6e9d/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/huggingface/my-distiset-838c6e9d/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"completion": "SEAL Team 6, also known as DEVGRU (Development Group) or Task Force Blue, is the United States Navy\u0027s most elite special operations force. They are a special operations force of the Naval Special Warfare Command (NSWC) and are responsible for conducting a variety of missions, including counterterrorism, direct action, special reconnaissance, high-risk arrests, and other operations that require specialized expertise and skill.\n\nHere\u0027s an overview of how SEAL Team 6 operates and their situation in today\u0027s military:\n\n**Organization and Structure:**\n\nSEAL Team 6 is a part of the Naval Special Warfare Command (NSWC), which is headquartered at Naval Base Coronado in California. The team is divided into several units, including:\n\n* 1st Commanding Officer (XO): responsible for commanding the team\n* 1st Sergeant (1stSGT): responsible for leading the team\u0027s operations\n* Platoon Leaders (PLs): responsible for leading individual platoons\n* Team Leaders (TLs): responsible for leading individual teams within a platoon\n* SEALs (Special Warfare Operators): the individual operators who make up the team\n\n**Training and Selection:**\n\nSEAL Team 6 operators undergo an extremely challenging selection process, known as STS (SEAL Training Command). This process includes:\n\n* Basic Underwater Demolition/SEAL (BUD/S) training: a 24-week course that includes physical conditioning, combat training, and swimming and diving.\n* Advanced training: a 26-week course that includes advanced combat skills, language training, and medical training.\n* Qualification Training: a 7-week course that includes specialized skills training, such as parachuting, helicopter operations, and advanced marksmanship.\n\n**Operations:**\n\nSEAL Team 6 conducts a variety of operations, including:\n\n* Counterterrorism: SEALS are trained to conduct high-risk missions to neutralize or capture high-value targets, such as terrorist leaders or high-priority personnel.\n* Direct action: SEALS conduct raids, ambushes, and other operations to disrupt or destroy enemy forces or infrastructure.\n* Special reconnaissance: SEALS conduct reconnaissance missions to gather intelligence, identify targets, and prepare for future operations.\n* High-risk arrests: SEALS conduct operations to apprehend high-priority targets, such as terrorists or fugitives.\n\n**Situation in Today\u0027s Military:**\n\nSEAL Team 6 continues to operate in various parts of the world, conducting a range of missions to support national security objectives. 
Some of the current operations of SEAL Team 6 include:\n\n* Counterterrorism: SEAL Team 6 continues to conduct counterterrorism operations against terrorist organizations, such as ISIS and Al-Qaeda.\n* Irregular warfare: SEALS conduct operations against non-state actors and irregular forces, such as insurgent groups and militias.\n* Direct action: SEALs continue to conduct direct action missions to neutralize or capture high-value targets.\n* Special reconnaissance: SEALS conduct reconnaissance missions to support future operations and gather intelligence on enemy forces.\n\nIn terms of their organization, SEAL Team 6 has undergone significant changes in recent years, including:\n\n* Reorganization: In 2020, the Naval Special Warfare Command (NSWC) underwent a major reorganization, including the establishment of a new unit, the Naval Special Warfare Command (NSWC) Special Operations Command (NSWCOM).\n* Modernization: SEALS have been equipped with new technologies, including advanced night vision goggles, drones, and body armor.\n* Integration: SEALS continue to integrate with other special operations forces, such as the Army\u0027s Delta Force and the Air Force\u0027s 24th Special Tactics Squadron.\n\nOverall, SEAL Team 6 remains a highly trained and effective special operations force, conducting a range of high-risk missions to support national security objectives.",
"prompt": "How does SEAL Team 6 meet with operations? What is there situation in today\u0027s military? \n\n",
"system_prompt": ""
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("huggingface/my-distiset-838c6e9d", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("huggingface/my-distiset-838c6e9d")
```
</details>
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_33 | ZixuanKe | "2024-11-21T22:48:35Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T22:48:33Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 208604
num_examples: 41
download_size: 34372
dataset_size: 208604
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
huggingface/my-distiset-555f6edc | huggingface | "2024-11-21T22:50:05Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-11-21T22:50:02Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: system_prompt
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 2521
num_examples: 1
download_size: 13149
dataset_size: 2521
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-555f6edc
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/huggingface/my-distiset-555f6edc/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/huggingface/my-distiset-555f6edc/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"completion": "To create a comprehensive dataset for a personal finance app, I recommend including the following essential data fields:\n\n1. **User Information**:\n * User ID (unique identifier)\n * User name\n * Email address\n * Password (hashed for security)\n * Date of birth\n2. **Financial Accounts**:\n * Account type (checking, savings, credit card, loan, investment, etc.)\n * Account number (masked for security)\n * Account balance (updated periodically)\n * Interest rate (if applicable)\n * Minimum payment (if applicable)\n3. **Transactions**:\n * Transaction ID (unique identifier)\n * Date\n * Description\n * Category (income, expense, transfer, etc.)\n * Type (deposit, withdrawal, payment, etc.)\n * Amount\n * Currency\n4. **Income**:\n * Income source (job, investment, etc.)\n * Income amount\n * Frequency (monthly, bi-weekly, etc.)\n * Start date\n5. **Expenses**:\n * Expense category (housing, food, transportation, etc.)\n * Expense amount\n * Frequency (monthly, bi-weekly, etc.)\n * Due date\n6. **Budgeting**:\n * Budget category (income, expenses, savings, debt repayment, etc.)\n * Budget amount\n * Target percentage\n7. **Debt**:\n * Debt type (credit card, loan, etc.)\n * Balance\n * Interest rate\n * Minimum payment\n * Due date\n8. **Savings**:\n * Savings goal (emergency fund, retirement, etc.)\n * Target amount\n * Current savings amount\n * Progress percentage\n9. **Notifications**:\n * Notification type (expense alert, payment due, etc.)\n * Trigger date\n * Description\n10. **Settings**:\n * Currency\n * Currency symbol\n * Language\n * Unit of measurement (e.g., pounds, dollars, etc.)",
"prompt": "I\u0027d like to create a dataset for a personal finance app to help users manage their budget and expenses. What are the essential data fields I should include? \n\n",
"system_prompt": "You are an AI assistant responsible for building comprehensive datasets for various purposes. Your purpose is to assist users in designing and generating datasets that accurately reflect their requirements, needs, and objectives. Offer clear guidance on dataset creation, provide suggestions for relevant data points, and ensure the dataset is well-structured, organized, and scalable. Assist users in ensuring data quality, relevance, and consistency while anticipating potential data requirements and maintenance. User questions are direct and concise."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("huggingface/my-distiset-555f6edc", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("huggingface/my-distiset-555f6edc")
```
</details>
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_34 | ZixuanKe | "2024-11-21T22:50:26Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T22:50:25Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 59231
num_examples: 12
download_size: 12012
dataset_size: 59231
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tippawan/Finetune-mt-story-telling-221124-messages | Tippawan | "2024-11-21T22:51:05Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T22:51:04Z" | ---
dataset_info:
features:
- name: en
dtype: string
- name: th
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 719000
num_examples: 5629
- name: test
num_bytes: 145238
num_examples: 1126
- name: validation
num_bytes: 145163
num_examples: 1126
download_size: 577598
dataset_size: 1009401
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
WARSO46/ForgetRetainBooks | WARSO46 | "2024-11-22T00:29:08Z" | 0 | 0 | [
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2024-11-21T22:55:31Z" | ---
license: mit
language:
- en
---
# ForgetRetainBooks
This dataset is derived from the NarrativeQA dataset, created by Kocisky et al. (2018). NarrativeQA is a dataset for evaluating reading comprehension and narrative understanding.
This dataset is an extraction of the book content from the original NarrativeQA dataset.
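The repo is tagged as JSON files loadable with the `datasets` library, so a minimal loading sketch (splits and column names are not documented in this card, so inspect them after loading) is:
```python
from datasets import load_dataset

# Minimal sketch, assuming the JSON files load with the default builder.
books = load_dataset("WARSO46/ForgetRetainBooks")
print(books)  # shows the available splits, columns, and row counts
```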
## Citation
If you want to use this dataset, please also cite the original NarrativeQA dataset.
```bibtex
@article{narrativeqa,
author = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and
Chris Dyer and Karl Moritz Hermann and G\'abor Melis and
Edward Grefenstette},
title = {The {NarrativeQA} Reading Comprehension Challenge},
journal = {Transactions of the Association for Computational Linguistics},
url = {https://TBD},
volume = {TBD},
year = {2018},
pages = {TBD},
}
```
|
Tippawan/Finetune-mt-story-telling-221124-messages2 | Tippawan | "2024-11-21T23:04:09Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:04:09Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1090514
num_examples: 5629
download_size: 370384
dataset_size: 1090514
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DiegoP-S/DatasetSynthesis | DiegoP-S | "2024-11-22T01:16:08Z" | 0 | 0 | [
"license:apache-2.0",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:05:45Z" | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: file_name
dtype: 'null'
- name: image
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 749
dataset_size: 0
---
|
fatlonder/i24 | fatlonder | "2024-11-22T01:25:33Z" | 0 | 0 | [
"license:other",
"region:us"
] | null | "2024-11-21T23:08:22Z" | ---
license: other
license_name: i24
license_link: LICENSE
---
Data use agreement
By acquiring and using the I-24 MOTION data, you agree to the following data use agreement (updated 19 September 2023).
You are free to use the data in academic and commercial work.
The dataset contains anonymous trajectories. Any activities to re-identify individuals in the dataset or activities that may cause harm to individuals in the dataset are prohibited.
When you use I-24 MOTION data in published academic work, you are required to include the following citation. This allows us to aggregate statistics on data use in publications:
Gloudemans, D., Wang, Y., Ji, J., Zachar, G., Barbour, W., Hall, E., Cebelak, M., Smith, L. and Work, D.B., 2023. I-24 MOTION: An instrument for freeway traffic science. Transportation Research Part C: Emerging Technologies, 155, p.104311.
The bibtex version of the reference is:
@article{gloudemans202324,
title={I-24 MOTION: An instrument for freeway traffic science},
author={Gloudemans, Derek and Wang, Yanbing and Ji, Junyi and Zachar, Gergely and Barbour, William and Hall, Eric and Cebelak, Meredith and Smith, Lee and Work, Daniel B},
journal={Transportation Research Part C: Emerging Technologies},
volume={155},
pages={104311},
year={2023},
publisher={Elsevier}
}
You are free to create and share derivative products as long as you maintain the terms above.
The data is provided “As is.” We make no other warranties, express or implied, and hereby disclaim all implied warranties, including any warranty of merchantability and warranty of fitness for a particular purpose.
|
neoneye/simon-arc-solve-scale-v9 | neoneye | "2024-11-21T23:16:54Z" | 0 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-21T23:14:58Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve scale version 9
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the image gets scaled up/down in both the x and y directions.
example count: 2-4.
test count: 1-2.
image size: 3-10.
scale factor: 1-3.
# Version 2
image size: 1-20.
scale factor: 1-7.
# Version 3
image size: 1-30.
scale factor: 1-7.
# Version 4
Added a small amount of noise to the images.
image size: 1-10.
scale factor: 1-7.
Only scale down.
Number of noise pixels per pixel cell: 0-2.
# Version 5
More noisy images for downscaling.
image size: 1-12.
Number of noise pixels per pixel cell: 0-half.
# Version 6
Earlier predictions added to some of the rows.
# Version 7
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 8
Replaced RLE compressed response with raw pixel response.
image size: 1-5.
scale factor: 1-7.
# Version 9
image size: 1-7.
scale factor: 1-3.
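As with the other solve datasets, the default config maps the `train` split to `data.jsonl`; a minimal sketch that loads it and peeks at the fields named in the Version 7 changelog (`arc_task`, `test_index`, `earlier_output`) is:
```python
from datasets import load_dataset

# Minimal sketch: load the train split declared in the frontmatter and peek at the
# fields named in the changelog; other field names are not documented in this card.
ds = load_dataset("neoneye/simon-arc-solve-scale-v9", split="train")
row = ds[0]
print(sorted(row.keys()))
print(row.get("arc_task"), row.get("test_index"))
```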
|
LiveTaro/parler-litagin-dataset-split-normalized-renamed-desc | LiveTaro | "2024-11-21T23:18:41Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:18:37Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: text_original
dtype: string
- name: gender
dtype: string
- name: text
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
splits:
- name: train
num_bytes: 84886568
num_examples: 244609
download_size: 56011007
dataset_size: 84886568
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen5_run0_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | "2024-11-21T23:21:27Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:21:24Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 523777
num_examples: 1000
download_size: 288170
dataset_size: 523777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
neoneye/simon-arc-solve-skew-v5 | neoneye | "2024-11-21T23:33:46Z" | 0 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-21T23:32:47Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve skew version 5
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the job is to apply skew/unskew in the directions up/down/left/right.
example count: 2-4.
test count: 1-2.
image size: 1-4.
# Version 2
image size: 1-7.
# Version 3
Earlier predictions added to some of the rows.
# Version 4
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 5
Replaced RLE compressed response with raw pixel response.
|
adipanda/anime-characters-3M | adipanda | "2024-11-21T23:36:23Z" | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:35:35Z" | ---
dataset_info:
features:
- name: filename
dtype: string
- name: tags
dtype: string
- name: bangumi_name
dtype: string
- name: has_face
dtype: bool
- name: face_count
dtype: int64
- name: face_x0
dtype: float64
- name: face_y0
dtype: float64
- name: face_x1
dtype: float64
- name: face_y1
dtype: float64
- name: face_width
dtype: float64
- name: face_height
dtype: float64
- name: face_max
dtype: float64
- name: face_min
dtype: float64
- name: face_area
dtype: float64
- name: face_max_ratio
dtype: float64
- name: face_min_ratio
dtype: float64
- name: face_width_ratio
dtype: float64
- name: face_height_ratio
dtype: float64
- name: face_area_ratio
dtype: float64
- name: image_url
dtype: string
splits:
- name: train
num_bytes: 1149205732
num_examples: 2947126
download_size: 367004538
dataset_size: 1149205732
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MattMcG/HookerHTRTest | MattMcG | "2024-11-22T00:26:36Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:44:34Z" | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 260242215.0
num_examples: 263
- name: test
num_bytes: 65736026.0
num_examples: 66
download_size: 324566842
dataset_size: 325978241.0
---
|
open-llm-leaderboard/FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit-details | open-llm-leaderboard | "2024-11-22T00:32:39Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:44:41Z" | ---
pretty_name: Evaluation run of FlofloB/test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [FlofloB/test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit](https://huggingface.co/FlofloB/test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit-details\"\
,\n\tname=\"FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-22T00-23-19.646165](https://huggingface.co/datasets/open-llm-leaderboard/FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit-details/blob/main/FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit/results_2024-11-22T00-23-19.646165.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.4879852125693161,\n \"\
prompt_level_loose_acc_stderr,none\": 0.02151036119343917,\n \"inst_level_loose_acc,none\"\
: 0.60431654676259,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"acc,none\": 0.3720910904255319,\n \"acc_stderr,none\"\
: 0.004406787358080629,\n \"prompt_level_strict_acc,none\": 0.46395563770794823,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02146059282373674,\n \
\ \"inst_level_strict_acc,none\": 0.579136690647482,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.10876132930513595,\n \
\ \"exact_match_stderr,none\": 0.00819586222508565,\n \"acc_norm,none\"\
: 0.48112595667401736,\n \"acc_norm_stderr,none\": 0.005323368568863841,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5238673841346988,\n \"acc_norm_stderr,none\"\
: 0.006146238124194547,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.816,\n\
\ \"acc_norm_stderr,none\": 0.02455581299422255\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5828877005347594,\n \"acc_norm_stderr,none\"\
: 0.0361545093114083\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.748,\n\
\ \"acc_norm_stderr,none\": 0.027513851933031318\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\":\
\ 0.030993197854577898\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.824,\n \
\ \"acc_norm_stderr,none\": 0.024133497525457123\n },\n \"\
leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\": \" \
\ - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.524,\n \"acc_norm_stderr,none\": 0.03164968895968774\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.452,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.744,\n \"acc_norm_stderr,none\": 0.027657108718204846\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.616,\n \"acc_norm_stderr,none\":\
\ 0.030821679117375447\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.38,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.4315068493150685,\n \"acc_norm_stderr,none\": 0.041131302645371945\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.624,\n \
\ \"acc_norm_stderr,none\": 0.03069633626739458\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\":\
\ 0.0316851985511992\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6235955056179775,\n \"acc_norm_stderr,none\": 0.03641600124728159\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.424,\n\
\ \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\":\
\ 0.02936106757521985\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.311241610738255,\n\
\ \"acc_norm_stderr,none\": 0.013409889215684433,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3383838383838384,\n \"acc_norm_stderr,none\": 0.033711241426263\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2857142857142857,\n\
\ \"acc_norm_stderr,none\": 0.019351013185102753\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.33035714285714285,\n \"acc_norm_stderr,none\"\
: 0.022246398347131557\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.46395563770794823,\n \"prompt_level_strict_acc_stderr,none\": 0.02146059282373674,\n\
\ \"inst_level_strict_acc,none\": 0.579136690647482,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.4879852125693161,\n \"prompt_level_loose_acc_stderr,none\": 0.02151036119343917,\n\
\ \"inst_level_loose_acc,none\": 0.60431654676259,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.10876132930513595,\n \"exact_match_stderr,none\": 0.00819586222508565,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \
\ \"exact_match,none\": 0.19543973941368079,\n \"exact_match_stderr,none\"\
: 0.022668616386127674\n },\n \"leaderboard_math_counting_and_prob_hard\"\
: {\n \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.0975609756097561,\n \"exact_match_stderr,none\"\
: 0.026863777740489123\n },\n \"leaderboard_math_geometry_hard\":\
\ {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \
\ \"exact_match,none\": 0.045454545454545456,\n \"exact_match_stderr,none\"\
: 0.018199158975632696\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.010714285714285714,\n \"exact_match_stderr,none\"\
: 0.006163684194761604\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.06493506493506493,\n \"exact_match_stderr,none\"\
: 0.01992116854149014\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.25906735751295334,\n \"exact_match_stderr,none\"\
: 0.03161877917935408\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.022222222222222223,\n \"exact_match_stderr,none\"\
: 0.01273389971505968\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.3720910904255319,\n\
\ \"acc_stderr,none\": 0.004406787358080629\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.42328042328042326,\n \"acc_norm_stderr,none\"\
: 0.01749054545808027,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.568,\n\
\ \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.28125,\n \"acc_norm_stderr,none\"\
: 0.028155620586096754\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.4879852125693161,\n \"prompt_level_loose_acc_stderr,none\": 0.02151036119343917,\n\
\ \"inst_level_loose_acc,none\": 0.60431654676259,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.3720910904255319,\n \"acc_stderr,none\"\
: 0.004406787358080629,\n \"prompt_level_strict_acc,none\": 0.46395563770794823,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02146059282373674,\n \
\ \"inst_level_strict_acc,none\": 0.579136690647482,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.10876132930513595,\n \"exact_match_stderr,none\"\
: 0.00819586222508565,\n \"acc_norm,none\": 0.48112595667401736,\n \
\ \"acc_norm_stderr,none\": 0.005323368568863841,\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.5238673841346988,\n\
\ \"acc_norm_stderr,none\": 0.006146238124194547,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.816,\n \"acc_norm_stderr,none\": 0.02455581299422255\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5828877005347594,\n \"acc_norm_stderr,none\"\
: 0.0361545093114083\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.748,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.824,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.524,\n \"acc_norm_stderr,none\": 0.03164968895968774\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.452,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.744,\n \"acc_norm_stderr,none\": 0.027657108718204846\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.38,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.4315068493150685,\n\
\ \"acc_norm_stderr,none\": 0.041131302645371945\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.624,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6235955056179775,\n \"acc_norm_stderr,none\"\
: 0.03641600124728159\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\": 0.02936106757521985\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.311241610738255,\n\
\ \"acc_norm_stderr,none\": 0.013409889215684433,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3383838383838384,\n\
\ \"acc_norm_stderr,none\": 0.033711241426263\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2857142857142857,\n \"acc_norm_stderr,none\": 0.019351013185102753\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.33035714285714285,\n \"acc_norm_stderr,none\"\
: 0.022246398347131557\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.46395563770794823,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02146059282373674,\n \
\ \"inst_level_strict_acc,none\": 0.579136690647482,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.4879852125693161,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02151036119343917,\n \"inst_level_loose_acc,none\"\
: 0.60431654676259,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.10876132930513595,\n\
\ \"exact_match_stderr,none\": 0.00819586222508565,\n \"alias\": \"\
\ - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.19543973941368079,\n \"exact_match_stderr,none\": 0.022668616386127674\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0975609756097561,\n \"exact_match_stderr,none\": 0.026863777740489123\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.045454545454545456,\n \"exact_match_stderr,none\"\
: 0.018199158975632696\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.010714285714285714,\n \"exact_match_stderr,none\"\
: 0.006163684194761604\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.06493506493506493,\n \"exact_match_stderr,none\": 0.01992116854149014\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.25906735751295334,\n \"exact_match_stderr,none\"\
: 0.03161877917935408\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.022222222222222223,\n \"exact_match_stderr,none\": 0.01273389971505968\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.3720910904255319,\n \"acc_stderr,none\": 0.004406787358080629\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.42328042328042326,\n\
\ \"acc_norm_stderr,none\": 0.01749054545808027,\n \"alias\": \" -\
\ leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.28125,\n \"acc_norm_stderr,none\": 0.028155620586096754\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ }\n}\n```"
repo_url: https://huggingface.co/FlofloB/test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_ifeval
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-23-19.646165.jsonl'
- config_name: FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T00_23_19.646165
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-23-19.646165.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-23-19.646165.jsonl'
---
# Dataset Card for Evaluation run of FlofloB/test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [FlofloB/test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit](https://huggingface.co/FlofloB/test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
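To see which task configurations are available before picking one, they can be listed programmatically; a minimal sketch, assuming only that the `datasets` library is installed:
```python
from datasets import get_dataset_config_names

# List the task configurations of this details repository.
configs = get_dataset_config_names(
    "open-llm-leaderboard/FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit-details"
)
print(len(configs), configs[:3])
```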
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit-details",
name="FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
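For a quick look at what a per-sample split contains, the loaded split can be converted to a pandas DataFrame; a minimal sketch, assuming `pandas` is installed alongside `datasets`:
```python
# `data` is the Dataset loaded in the snippet above.
df = data.to_pandas()

# Shape and column names of the per-sample records.
print(df.shape)
print(df.columns.tolist())
```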
## Latest results
These are the [latest results from run 2024-11-22T00-23-19.646165](https://huggingface.co/datasets/open-llm-leaderboard/FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit-details/blob/main/FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit/results_2024-11-22T00-23-19.646165.json) (note that there might be results for other tasks in the repository if successive evaluations didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each task configuration):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.4879852125693161,
"prompt_level_loose_acc_stderr,none": 0.02151036119343917,
"inst_level_loose_acc,none": 0.60431654676259,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.3720910904255319,
"acc_stderr,none": 0.004406787358080629,
"prompt_level_strict_acc,none": 0.46395563770794823,
"prompt_level_strict_acc_stderr,none": 0.02146059282373674,
"inst_level_strict_acc,none": 0.579136690647482,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.10876132930513595,
"exact_match_stderr,none": 0.00819586222508565,
"acc_norm,none": 0.48112595667401736,
"acc_norm_stderr,none": 0.005323368568863841,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5238673841346988,
"acc_norm_stderr,none": 0.006146238124194547,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5828877005347594,
"acc_norm_stderr,none": 0.0361545093114083
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.748,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.452,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.744,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.38,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4315068493150685,
"acc_norm_stderr,none": 0.041131302645371945
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.624,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6235955056179775,
"acc_norm_stderr,none": 0.03641600124728159
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_gpqa": {
"acc_norm,none": 0.311241610738255,
"acc_norm_stderr,none": 0.013409889215684433,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3383838383838384,
"acc_norm_stderr,none": 0.033711241426263
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2857142857142857,
"acc_norm_stderr,none": 0.019351013185102753
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.33035714285714285,
"acc_norm_stderr,none": 0.022246398347131557
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.46395563770794823,
"prompt_level_strict_acc_stderr,none": 0.02146059282373674,
"inst_level_strict_acc,none": 0.579136690647482,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.4879852125693161,
"prompt_level_loose_acc_stderr,none": 0.02151036119343917,
"inst_level_loose_acc,none": 0.60431654676259,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.10876132930513595,
"exact_match_stderr,none": 0.00819586222508565,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.19543973941368079,
"exact_match_stderr,none": 0.022668616386127674
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0975609756097561,
"exact_match_stderr,none": 0.026863777740489123
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.045454545454545456,
"exact_match_stderr,none": 0.018199158975632696
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.010714285714285714,
"exact_match_stderr,none": 0.006163684194761604
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.06493506493506493,
"exact_match_stderr,none": 0.01992116854149014
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.25906735751295334,
"exact_match_stderr,none": 0.03161877917935408
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3720910904255319,
"acc_stderr,none": 0.004406787358080629
},
"leaderboard_musr": {
"acc_norm,none": 0.42328042328042326,
"acc_norm_stderr,none": 0.01749054545808027,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.28125,
"acc_norm_stderr,none": 0.028155620586096754
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.4879852125693161,
"prompt_level_loose_acc_stderr,none": 0.02151036119343917,
"inst_level_loose_acc,none": 0.60431654676259,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.3720910904255319,
"acc_stderr,none": 0.004406787358080629,
"prompt_level_strict_acc,none": 0.46395563770794823,
"prompt_level_strict_acc_stderr,none": 0.02146059282373674,
"inst_level_strict_acc,none": 0.579136690647482,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.10876132930513595,
"exact_match_stderr,none": 0.00819586222508565,
"acc_norm,none": 0.48112595667401736,
"acc_norm_stderr,none": 0.005323368568863841,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5238673841346988,
"acc_norm_stderr,none": 0.006146238124194547,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5828877005347594,
"acc_norm_stderr,none": 0.0361545093114083
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.748,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.452,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.744,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.38,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4315068493150685,
"acc_norm_stderr,none": 0.041131302645371945
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.624,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6235955056179775,
"acc_norm_stderr,none": 0.03641600124728159
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_gpqa": {
"acc_norm,none": 0.311241610738255,
"acc_norm_stderr,none": 0.013409889215684433,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3383838383838384,
"acc_norm_stderr,none": 0.033711241426263
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2857142857142857,
"acc_norm_stderr,none": 0.019351013185102753
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.33035714285714285,
"acc_norm_stderr,none": 0.022246398347131557
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.46395563770794823,
"prompt_level_strict_acc_stderr,none": 0.02146059282373674,
"inst_level_strict_acc,none": 0.579136690647482,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.4879852125693161,
"prompt_level_loose_acc_stderr,none": 0.02151036119343917,
"inst_level_loose_acc,none": 0.60431654676259,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.10876132930513595,
"exact_match_stderr,none": 0.00819586222508565,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.19543973941368079,
"exact_match_stderr,none": 0.022668616386127674
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0975609756097561,
"exact_match_stderr,none": 0.026863777740489123
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.045454545454545456,
"exact_match_stderr,none": 0.018199158975632696
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.010714285714285714,
"exact_match_stderr,none": 0.006163684194761604
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.06493506493506493,
"exact_match_stderr,none": 0.01992116854149014
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.25906735751295334,
"exact_match_stderr,none": 0.03161877917935408
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3720910904255319,
"acc_stderr,none": 0.004406787358080629
},
"leaderboard_musr": {
"acc_norm,none": 0.42328042328042326,
"acc_norm_stderr,none": 0.01749054545808027,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.28125,
"acc_norm_stderr,none": 0.028155620586096754
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
}
}
```
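If you prefer working with the aggregated JSON directly rather than the per-sample splits, the file linked above can be downloaded from the dataset repository; a minimal sketch, assuming the `huggingface_hub` package is installed (repo_id and filename are taken from the link above):
```python
import json

from huggingface_hub import hf_hub_download

# Fetch the aggregated results file for this run from the dataset repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit-details",
    filename="FlofloB__test_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit/results_2024-11-22T00-23-19.646165.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Top-level keys; the per-task metrics mirror the excerpt shown above.
print(sorted(results.keys()))
```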
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/icefog72__Ice0.40-20.11-RP-details | open-llm-leaderboard | "2024-11-22T00:28:41Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:44:57Z" | ---
pretty_name: Evaluation run of icefog72/Ice0.40-20.11-RP
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [icefog72/Ice0.40-20.11-RP](https://huggingface.co/icefog72/Ice0.40-20.11-RP)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/icefog72__Ice0.40-20.11-RP-details\"\
,\n\tname=\"icefog72__Ice0.40-20.11-RP__leaderboard_bbh_boolean_expressions\",\n\
\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-22T00-20-22.881636](https://huggingface.co/datasets/open-llm-leaderboard/icefog72__Ice0.40-20.11-RP-details/blob/main/icefog72__Ice0.40-20.11-RP/results_2024-11-22T00-20-22.881636.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_loose_acc,none\": 0.592326139088729,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"acc_norm,none\": 0.47035932027500327,\n \"\
acc_norm_stderr,none\": 0.005330323393972458,\n \"prompt_level_loose_acc,none\"\
: 0.4787430683918669,\n \"prompt_level_loose_acc_stderr,none\": 0.02149712051598776,\n\
\ \"inst_level_strict_acc,none\": 0.5347721822541966,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc,none\": 0.30992353723404253,\n\
\ \"acc_stderr,none\": 0.004216237086078009,\n \"prompt_level_strict_acc,none\"\
: 0.4177449168207024,\n \"prompt_level_strict_acc_stderr,none\": 0.02122341916161409,\n\
\ \"exact_match,none\": 0.06268882175226587,\n \"exact_match_stderr,none\"\
: 0.006525049774700846,\n \"alias\": \"leaderboard\"\n },\n \
\ \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.5075507724353411,\n\
\ \"acc_norm_stderr,none\": 0.006146177305130497,\n \"alias\"\
: \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6149732620320856,\n\
\ \"acc_norm_stderr,none\": 0.03567936280544673\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.416,\n \"acc_norm_stderr,none\":\
\ 0.031235856237014505\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.572,\n\
\ \"acc_norm_stderr,none\": 0.031355968923772626\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\":\
\ 0.03160397514522374\n },\n \"leaderboard_bbh_hyperbaton\": {\n \
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.68,\n \"acc_norm_stderr,none\": 0.02956172495524098\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\":\
\ 0.031191596026022818\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.4246575342465753,\n \"acc_norm_stderr,none\": 0.04104862657656195\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.556,\n \
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\":\
\ 0.03160397514522374\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6797752808988764,\n \"acc_norm_stderr,none\": 0.03506900770722058\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.82,\n \"acc_norm_stderr,none\": 0.02434689065029351\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.252,\n\
\ \"acc_norm_stderr,none\": 0.027513851933031318\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\":\
\ 0.028697004587398253\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3070469798657718,\n\
\ \"acc_norm_stderr,none\": 0.013371083374985824,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2828282828282828,\n \"acc_norm_stderr,none\": 0.032087795587867514\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.32051282051282054,\n\
\ \"acc_norm_stderr,none\": 0.019990105460697117\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3013392857142857,\n \"acc_norm_stderr,none\"\
: 0.021702375698545707\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.4177449168207024,\n \"prompt_level_strict_acc_stderr,none\": 0.02122341916161409,\n\
\ \"inst_level_strict_acc,none\": 0.5347721822541966,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.4787430683918669,\n \"prompt_level_loose_acc_stderr,none\": 0.02149712051598776,\n\
\ \"inst_level_loose_acc,none\": 0.592326139088729,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.06268882175226587,\n \"exact_match_stderr,none\": 0.006525049774700846,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \
\ \"exact_match,none\": 0.09446254071661238,\n \"exact_match_stderr,none\"\
: 0.016719462370368424\n },\n \"leaderboard_math_counting_and_prob_hard\"\
: {\n \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.024390243902439025,\n \"exact_match_stderr,none\"\
: 0.013965813032045565\n },\n \"leaderboard_math_geometry_hard\":\
\ {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \
\ \"exact_match,none\": 0.015151515151515152,\n \"exact_match_stderr,none\"\
: 0.01067276863717474\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.02142857142857143,\n \"exact_match_stderr,none\"\
: 0.008669434577665551\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.05194805194805195,\n \"exact_match_stderr,none\"\
: 0.017941344490765\n },\n \"leaderboard_math_prealgebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"\
exact_match,none\": 0.16580310880829016,\n \"exact_match_stderr,none\"\
: 0.026839845022314426\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.022222222222222223,\n \"exact_match_stderr,none\"\
: 0.01273389971505968\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.30992353723404253,\n\
\ \"acc_stderr,none\": 0.004216237086078009\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4444444444444444,\n \"acc_norm_stderr,none\"\
: 0.017783559448746142,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.568,\n\
\ \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.42578125,\n \"acc_norm_stderr,none\"\
: 0.030964342373467638\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ }\n },\n \"leaderboard\": {\n \"inst_level_loose_acc,none\"\
: 0.592326139088729,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \
\ \"acc_norm,none\": 0.47035932027500327,\n \"acc_norm_stderr,none\"\
: 0.005330323393972458,\n \"prompt_level_loose_acc,none\": 0.4787430683918669,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.02149712051598776,\n \"\
inst_level_strict_acc,none\": 0.5347721822541966,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.30992353723404253,\n \"acc_stderr,none\"\
: 0.004216237086078009,\n \"prompt_level_strict_acc,none\": 0.4177449168207024,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02122341916161409,\n \
\ \"exact_match,none\": 0.06268882175226587,\n \"exact_match_stderr,none\"\
: 0.006525049774700846,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5075507724353411,\n \"acc_norm_stderr,none\"\
: 0.006146177305130497,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6149732620320856,\n \"acc_norm_stderr,none\"\
: 0.03567936280544673\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.68,\n \"acc_norm_stderr,none\": 0.02956172495524098\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.4246575342465753,\n\
\ \"acc_norm_stderr,none\": 0.04104862657656195\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6797752808988764,\n \"acc_norm_stderr,none\"\
: 0.03506900770722058\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.82,\n \"acc_norm_stderr,none\": 0.02434689065029351\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3070469798657718,\n\
\ \"acc_norm_stderr,none\": 0.013371083374985824,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2828282828282828,\n\
\ \"acc_norm_stderr,none\": 0.032087795587867514\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.32051282051282054,\n \"acc_norm_stderr,none\": 0.019990105460697117\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3013392857142857,\n \"acc_norm_stderr,none\"\
: 0.021702375698545707\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.4177449168207024,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02122341916161409,\n \
\ \"inst_level_strict_acc,none\": 0.5347721822541966,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.4787430683918669,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02149712051598776,\n \"inst_level_loose_acc,none\"\
: 0.592326139088729,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.06268882175226587,\n\
\ \"exact_match_stderr,none\": 0.006525049774700846,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.09446254071661238,\n \"exact_match_stderr,none\": 0.016719462370368424\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.024390243902439025,\n \"exact_match_stderr,none\": 0.013965813032045565\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.015151515151515152,\n \"exact_match_stderr,none\"\
: 0.01067276863717474\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.02142857142857143,\n \"exact_match_stderr,none\"\
: 0.008669434577665551\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.05194805194805195,\n \"exact_match_stderr,none\": 0.017941344490765\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.16580310880829016,\n \"exact_match_stderr,none\"\
: 0.026839845022314426\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.022222222222222223,\n \"exact_match_stderr,none\": 0.01273389971505968\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.30992353723404253,\n \"acc_stderr,none\": 0.004216237086078009\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4444444444444444,\n\
\ \"acc_norm_stderr,none\": 0.017783559448746142,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.42578125,\n \"acc_norm_stderr,none\": 0.030964342373467638\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ }\n}\n```"
repo_url: https://huggingface.co/icefog72/Ice0.40-20.11-RP
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_ifeval
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-20-22.881636.jsonl'
- config_name: icefog72__Ice0.40-20.11-RP__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T00_20_22.881636
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-20-22.881636.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-20-22.881636.jsonl'
---
# Dataset Card for Evaluation run of icefog72/Ice0.40-20.11-RP
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [icefog72/Ice0.40-20.11-RP](https://huggingface.co/icefog72/Ice0.40-20.11-RP)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/icefog72__Ice0.40-20.11-RP-details",
name="icefog72__Ice0.40-20.11-RP__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
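If you want to see which task configurations are available before loading one, the snippet below is a minimal sketch using `get_dataset_config_names` from the `datasets` library; the repository id is the one shown above, and the exact list returned depends on the tasks covered by this run.
```python
from datasets import get_dataset_config_names

# List every task configuration stored in this details repository.
configs = get_dataset_config_names(
    "open-llm-leaderboard/icefog72__Ice0.40-20.11-RP-details"
)
for name in configs:
    print(name)
```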
## Latest results
These are the [latest results from run 2024-11-22T00-20-22.881636](https://huggingface.co/datasets/open-llm-leaderboard/icefog72__Ice0.40-20.11-RP-details/blob/main/icefog72__Ice0.40-20.11-RP/results_2024-11-22T00-20-22.881636.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"leaderboard": {
"inst_level_loose_acc,none": 0.592326139088729,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.47035932027500327,
"acc_norm_stderr,none": 0.005330323393972458,
"prompt_level_loose_acc,none": 0.4787430683918669,
"prompt_level_loose_acc_stderr,none": 0.02149712051598776,
"inst_level_strict_acc,none": 0.5347721822541966,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.30992353723404253,
"acc_stderr,none": 0.004216237086078009,
"prompt_level_strict_acc,none": 0.4177449168207024,
"prompt_level_strict_acc_stderr,none": 0.02122341916161409,
"exact_match,none": 0.06268882175226587,
"exact_match_stderr,none": 0.006525049774700846,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5075507724353411,
"acc_norm_stderr,none": 0.006146177305130497,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.68,
"acc_norm_stderr,none": 0.02956172495524098
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4246575342465753,
"acc_norm_stderr,none": 0.04104862657656195
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6797752808988764,
"acc_norm_stderr,none": 0.03506900770722058
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.82,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3070469798657718,
"acc_norm_stderr,none": 0.013371083374985824,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2828282828282828,
"acc_norm_stderr,none": 0.032087795587867514
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.32051282051282054,
"acc_norm_stderr,none": 0.019990105460697117
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3013392857142857,
"acc_norm_stderr,none": 0.021702375698545707
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.4177449168207024,
"prompt_level_strict_acc_stderr,none": 0.02122341916161409,
"inst_level_strict_acc,none": 0.5347721822541966,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.4787430683918669,
"prompt_level_loose_acc_stderr,none": 0.02149712051598776,
"inst_level_loose_acc,none": 0.592326139088729,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.06268882175226587,
"exact_match_stderr,none": 0.006525049774700846,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.09446254071661238,
"exact_match_stderr,none": 0.016719462370368424
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.024390243902439025,
"exact_match_stderr,none": 0.013965813032045565
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02142857142857143,
"exact_match_stderr,none": 0.008669434577665551
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.05194805194805195,
"exact_match_stderr,none": 0.017941344490765
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.16580310880829016,
"exact_match_stderr,none": 0.026839845022314426
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.30992353723404253,
"acc_stderr,none": 0.004216237086078009
},
"leaderboard_musr": {
"acc_norm,none": 0.4444444444444444,
"acc_norm_stderr,none": 0.017783559448746142,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.42578125,
"acc_norm_stderr,none": 0.030964342373467638
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
}
},
"leaderboard": {
"inst_level_loose_acc,none": 0.592326139088729,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.47035932027500327,
"acc_norm_stderr,none": 0.005330323393972458,
"prompt_level_loose_acc,none": 0.4787430683918669,
"prompt_level_loose_acc_stderr,none": 0.02149712051598776,
"inst_level_strict_acc,none": 0.5347721822541966,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.30992353723404253,
"acc_stderr,none": 0.004216237086078009,
"prompt_level_strict_acc,none": 0.4177449168207024,
"prompt_level_strict_acc_stderr,none": 0.02122341916161409,
"exact_match,none": 0.06268882175226587,
"exact_match_stderr,none": 0.006525049774700846,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5075507724353411,
"acc_norm_stderr,none": 0.006146177305130497,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.68,
"acc_norm_stderr,none": 0.02956172495524098
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4246575342465753,
"acc_norm_stderr,none": 0.04104862657656195
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6797752808988764,
"acc_norm_stderr,none": 0.03506900770722058
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.82,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3070469798657718,
"acc_norm_stderr,none": 0.013371083374985824,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2828282828282828,
"acc_norm_stderr,none": 0.032087795587867514
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.32051282051282054,
"acc_norm_stderr,none": 0.019990105460697117
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3013392857142857,
"acc_norm_stderr,none": 0.021702375698545707
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.4177449168207024,
"prompt_level_strict_acc_stderr,none": 0.02122341916161409,
"inst_level_strict_acc,none": 0.5347721822541966,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.4787430683918669,
"prompt_level_loose_acc_stderr,none": 0.02149712051598776,
"inst_level_loose_acc,none": 0.592326139088729,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.06268882175226587,
"exact_match_stderr,none": 0.006525049774700846,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.09446254071661238,
"exact_match_stderr,none": 0.016719462370368424
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.024390243902439025,
"exact_match_stderr,none": 0.013965813032045565
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02142857142857143,
"exact_match_stderr,none": 0.008669434577665551
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.05194805194805195,
"exact_match_stderr,none": 0.017941344490765
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.16580310880829016,
"exact_match_stderr,none": 0.026839845022314426
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.30992353723404253,
"acc_stderr,none": 0.004216237086078009
},
"leaderboard_musr": {
"acc_norm,none": 0.4444444444444444,
"acc_norm_stderr,none": 0.017783559448746142,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.42578125,
"acc_norm_stderr,none": 0.030964342373467638
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
}
}
```
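To work with the aggregated results file linked above programmatically, the sketch below downloads it with `hf_hub_download` from `huggingface_hub` and parses it as JSON; the file path is the one in the link above, and since the exact keys depend on the harness version, the example only inspects the top-level structure.
```python
import json
from huggingface_hub import hf_hub_download

# Download the aggregated results file from the details dataset repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/icefog72__Ice0.40-20.11-RP-details",
    filename="icefog72__Ice0.40-20.11-RP/results_2024-11-22T00-20-22.881636.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Inspect the top-level structure before drilling into specific metrics.
print(list(results.keys()))
```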
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
dogtooth/llama-31-diverse-generations-hs | dogtooth | "2024-11-21T23:50:12Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:50:10Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
sequence: string
splits:
- name: train
num_bytes: 48336201
num_examples: 10163
download_size: 20400362
dataset_size: 48336201
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hellomomiji/llm-judge-dataset | hellomomiji | "2024-11-22T00:14:30Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:52:14Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: explanation
dtype: string
splits:
- name: train
num_bytes: 120707
num_examples: 46
download_size: 77337
dataset_size: 120707
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Shozol/translated_gsm8k_to_bangla_train | Shozol | "2024-11-21T23:53:02Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:52:56Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 9733028
num_examples: 7473
download_size: 3453459
dataset_size: 9733028
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bigstupidhats/aya_dataset_conversations | bigstupidhats | "2024-11-22T00:07:19Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:56:25Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
- name: Qwen2.5-7B_ifd
dtype: float64
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: Finnish
num_bytes: 1189456
num_examples: 742
- name: Danish
num_bytes: 125638
num_examples: 95
- name: Ukrainian
num_bytes: 1185732
num_examples: 522
- name: Greek
num_bytes: 1340758
num_examples: 622
- name: Lithuanian
num_bytes: 1110382
num_examples: 912
- name: Korean
num_bytes: 595628
num_examples: 361
- name: Irish
num_bytes: 1684920
num_examples: 1244
download_size: 3590770
dataset_size: 7232514
configs:
- config_name: default
data_files:
- split: Finnish
path: data/Finnish-*
- split: Danish
path: data/Danish-*
- split: Ukrainian
path: data/Ukrainian-*
- split: Greek
path: data/Greek-*
- split: Lithuanian
path: data/Lithuanian-*
- split: Korean
path: data/Korean-*
- split: Irish
path: data/Irish-*
---
|
LLMsForHepth/infer_hep_th | LLMsForHepth | "2024-11-22T00:00:53Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:00:49Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: submitter
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: journal-ref
dtype: string
- name: doi
dtype: string
- name: report-no
dtype: string
- name: categories
dtype: string
- name: license
dtype: string
- name: orig_abstract
dtype: string
- name: versions
list:
- name: created
dtype: string
- name: version
dtype: string
- name: update_date
dtype: string
- name: authors_parsed
sequence:
sequence: string
- name: abstract
dtype: string
- name: prompt
dtype: string
- name: y_true
dtype: string
- name: comp_s1-L-3.1-8B-base
dtype: string
- name: preds_s1-L-3.1-8B-base
dtype: string
splits:
- name: test
num_bytes: 62376789
num_examples: 15808
download_size: 33324312
dataset_size: 62376789
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
ChatterjeeLab/FusOn-DB | ChatterjeeLab | "2024-11-22T00:03:14Z" | 0 | 0 | [
"license:mit",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:02:01Z" | ---
license: mit
---
|
Gramacho/complete_pira_train_val_corpus1_en_llama3_alpaca_1484 | Gramacho | "2024-11-22T00:04:56Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:04:53Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: id
dtype: string
- name: context
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3292965
num_examples: 1484
download_size: 1038412
dataset_size: 3292965
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TSOWatch/1001NightsTreasureKnowledge | TSOWatch | "2024-11-22T00:09:09Z" | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:08:33Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsBesiegedCity | TSOWatch | "2024-11-22T00:14:26Z" | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:14:05Z" | ---
license: creativeml-openrail-m
---
|
Gramacho/complete_pira_test_corpus1_en_llama3_alpaca_181 | Gramacho | "2024-11-22T00:15:17Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:15:16Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: id
dtype: string
- name: context
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 387591
num_examples: 181
download_size: 181256
dataset_size: 387591
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sartifyllc/tulu-3-sft-mixture-conversation | sartifyllc | "2024-11-22T00:34:07Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:15:24Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
- name: conversation
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 5731918871
num_examples: 939343
download_size: 2805331792
dataset_size: 5731918871
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shekhar1536/CategorizedPromptData | shekhar1536 | "2024-11-22T01:04:16Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:16:30Z" | ---
license: apache-2.0
---
|
TSOWatch/1001NightsOilLamp | TSOWatch | "2024-11-22T00:56:29Z" | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:18:03Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsSeaSnake | TSOWatch | "2024-11-22T00:19:25Z" | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:19:10Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsWoodcutter | TSOWatch | "2024-11-22T00:23:30Z" | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:23:14Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsTreasureWisdom | TSOWatch | "2024-11-22T00:24:26Z" | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:24:05Z" | ---
license: creativeml-openrail-m
---
|
DiegoP-S/fruit | DiegoP-S | "2024-11-22T00:57:17Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:24:23Z" | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: image
dtype: binary
splits:
- name: train
num_bytes: 3602634
num_examples: 8
download_size: 2578872
dataset_size: 3602634
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TSOWatch/1001NightsFirstThief | TSOWatch | "2024-11-22T00:25:06Z" | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:24:54Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsSecondThief | TSOWatch | "2024-11-22T00:25:46Z" | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:25:35Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsThirdThief | TSOWatch | "2024-11-22T00:26:27Z" | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:26:15Z" | ---
license: creativeml-openrail-m
---
|
Cha-Imaa/bringupbench_compiled | Cha-Imaa | "2024-11-22T00:26:27Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:26:21Z" | ---
dataset_info:
features:
- name: c_file
dtype: string
- name: arm_file
dtype: string
- name: x86_file
dtype: string
- name: arm_assembly
dtype: string
- name: x86_assembly
dtype: string
splits:
- name: train
num_bytes: 3198525
num_examples: 65
download_size: 943285
dataset_size: 3198525
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ADT1999/my-dataset-project-PIPE-original_image | ADT1999 | "2024-11-22T01:11:44Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:27:13Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: original_image
dtype: image
splits:
- name: train
num_bytes: 3582721591.0
num_examples: 10000
download_size: 3582968243
dataset_size: 3582721591.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details | open-llm-leaderboard | "2024-11-22T00:34:22Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:31:09Z" | ---
pretty_name: Evaluation run of allenai/Llama-3.1-Tulu-3-8B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details\"\
,\n\tname=\"allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-22T00-31-08.901515](https://huggingface.co/datasets/open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details/blob/main/allenai__Llama-3.1-Tulu-3-8B/results_2024-11-22T00-31-08.901515.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.38785834738617203,\n \"acc_norm_stderr,none\"\
: 0.005273329157943381,\n \"inst_level_loose_acc,none\": 0.8752997601918465,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.8585131894484412,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"exact_match,none\": 0.19637462235649547,\n \"exact_match_stderr,none\"\
: 0.009854609082277298,\n \"acc,none\": 0.2826628989361702,\n \
\ \"acc_stderr,none\": 0.0041053027261143855,\n \"prompt_level_strict_acc,none\"\
: 0.7948243992606284,\n \"prompt_level_strict_acc_stderr,none\": 0.01737807119675965,\n\
\ \"prompt_level_loose_acc,none\": 0.8151571164510166,\n \"\
prompt_level_loose_acc_stderr,none\": 0.01670417955850395,\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.4025342822426662,\n \"acc_norm_stderr,none\": 0.006072426154807149,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.8,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5187165775401069,\n\
\ \"acc_norm_stderr,none\": 0.03663608375537843\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\":\
\ 0.028697004587398253\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.472,\n\
\ \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\":\
\ 0.029752391824475363\n },\n \"leaderboard_bbh_hyperbaton\": {\n\
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.256,\n \"acc_norm_stderr,none\": 0.027657108718204846\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.288,\n\
\ \"acc_norm_stderr,none\": 0.028697004587398253\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.3904109589041096,\n \"acc_norm_stderr,none\"\
: 0.040513109165891854\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\":\
\ 0.031563285061213475\n },\n \"leaderboard_bbh_ruin_names\": {\n\
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.5224719101123596,\n\
\ \"acc_norm_stderr,none\": 0.03754432508487191\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\":\
\ 0.0316851985511992\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.116,\n \"acc_norm_stderr,none\": 0.020293429803083823\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\":\
\ 0.021723342617052086\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\":\
\ 0.022249407735450245\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.292,\n \"acc_norm_stderr,none\":\
\ 0.02881432040220563\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2986577181208054,\n\
\ \"acc_norm_stderr,none\": 0.013264655332365493,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.30303030303030304,\n \"acc_norm_stderr,none\": 0.03274287914026869\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.28205128205128205,\n\
\ \"acc_norm_stderr,none\": 0.019275803929950375\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3169642857142857,\n \"acc_norm_stderr,none\"\
: 0.0220076215848248\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7948243992606284,\n \"prompt_level_strict_acc_stderr,none\": 0.01737807119675965,\n\
\ \"inst_level_strict_acc,none\": 0.8585131894484412,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.8151571164510166,\n \"prompt_level_loose_acc_stderr,none\": 0.01670417955850395,\n\
\ \"inst_level_loose_acc,none\": 0.8752997601918465,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.19637462235649547,\n \"exact_match_stderr,none\"\
: 0.009854609082277298,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.3811074918566775,\n\
\ \"exact_match_stderr,none\": 0.02776327166045321\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \" \
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.06060606060606061,\n\
\ \"exact_match_stderr,none\": 0.020847129156682045\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.03214285714285714,\n \"exact_match_stderr,none\": 0.01055955866175321\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.12987012987012986,\n\
\ \"exact_match_stderr,none\": 0.02717696535667076\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.46113989637305697,\n \"exact_match_stderr,none\"\
: 0.03597524411734576\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.022222222222222223,\n \"exact_match_stderr,none\"\
: 0.01273389971505968\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.2826628989361702,\n\
\ \"acc_stderr,none\": 0.004105302726114385\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4166666666666667,\n \"acc_norm_stderr,none\"\
: 0.01768575862518651,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.528,\n\
\ \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.31640625,\n \"acc_norm_stderr,none\"\
: 0.02912403057115479\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ }\n },\n \"leaderboard\": {\n \"acc_norm,none\": 0.38785834738617203,\n\
\ \"acc_norm_stderr,none\": 0.005273329157943381,\n \"inst_level_loose_acc,none\"\
: 0.8752997601918465,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"inst_level_strict_acc,none\": 0.8585131894484412,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.19637462235649547,\n \"exact_match_stderr,none\"\
: 0.009854609082277298,\n \"acc,none\": 0.2826628989361702,\n \"acc_stderr,none\"\
: 0.0041053027261143855,\n \"prompt_level_strict_acc,none\": 0.7948243992606284,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.01737807119675965,\n \
\ \"prompt_level_loose_acc,none\": 0.8151571164510166,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.01670417955850395,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.4025342822426662,\n \"acc_norm_stderr,none\"\
: 0.006072426154807149,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.8,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5187165775401069,\n \"acc_norm_stderr,none\"\
: 0.03663608375537843\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.256,\n \"acc_norm_stderr,none\": 0.027657108718204846\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.3904109589041096,\n\
\ \"acc_norm_stderr,none\": 0.040513109165891854\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.5224719101123596,\n \"acc_norm_stderr,none\"\
: 0.03754432508487191\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.116,\n \"acc_norm_stderr,none\": 0.020293429803083823\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.292,\n \"acc_norm_stderr,none\": 0.02881432040220563\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2986577181208054,\n\
\ \"acc_norm_stderr,none\": 0.013264655332365493,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.30303030303030304,\n\
\ \"acc_norm_stderr,none\": 0.03274287914026869\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.28205128205128205,\n \"acc_norm_stderr,none\": 0.019275803929950375\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3169642857142857,\n \"acc_norm_stderr,none\"\
: 0.0220076215848248\n },\n \"leaderboard_ifeval\": {\n \"alias\":\
\ \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.7948243992606284,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.01737807119675965,\n \
\ \"inst_level_strict_acc,none\": 0.8585131894484412,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.8151571164510166,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01670417955850395,\n \"inst_level_loose_acc,none\"\
: 0.8752997601918465,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.19637462235649547,\n\
\ \"exact_match_stderr,none\": 0.009854609082277298,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.3811074918566775,\n \"exact_match_stderr,none\": 0.02776327166045321\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.06060606060606061,\n \"exact_match_stderr,none\"\
: 0.020847129156682045\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.03214285714285714,\n \"exact_match_stderr,none\"\
: 0.01055955866175321\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.12987012987012986,\n \"exact_match_stderr,none\": 0.02717696535667076\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.46113989637305697,\n \"exact_match_stderr,none\"\
: 0.03597524411734576\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.022222222222222223,\n \"exact_match_stderr,none\": 0.01273389971505968\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.2826628989361702,\n \"acc_stderr,none\": 0.004105302726114385\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4166666666666667,\n\
\ \"acc_norm_stderr,none\": 0.01768575862518651,\n \"alias\": \" -\
\ leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.31640625,\n \"acc_norm_stderr,none\": 0.02912403057115479\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ }\n}\n```"
repo_url: https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_ifeval
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-31-08.901515.jsonl'
---
# Dataset Card for Evaluation run of allenai/Llama-3.1-Tulu-3-8B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details",
name="allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
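If you are unsure which configuration name to pass, a minimal sketch (assuming the `datasets` library with access to the Hugging Face Hub) is to enumerate the available configurations first:
```python
from datasets import get_dataset_config_names

# List every per-task configuration stored in this details repository.
configs = get_dataset_config_names(
    "open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details"
)
print(configs)  # e.g. ['allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_boolean_expressions', ...]
```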
## Latest results
These are the [latest results from run 2024-11-22T00-31-08.901515](https://huggingface.co/datasets/open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details/blob/main/allenai__Llama-3.1-Tulu-3-8B/results_2024-11-22T00-31-08.901515.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.38785834738617203,
"acc_norm_stderr,none": 0.005273329157943381,
"inst_level_loose_acc,none": 0.8752997601918465,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8585131894484412,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.19637462235649547,
"exact_match_stderr,none": 0.009854609082277298,
"acc,none": 0.2826628989361702,
"acc_stderr,none": 0.0041053027261143855,
"prompt_level_strict_acc,none": 0.7948243992606284,
"prompt_level_strict_acc_stderr,none": 0.01737807119675965,
"prompt_level_loose_acc,none": 0.8151571164510166,
"prompt_level_loose_acc_stderr,none": 0.01670417955850395,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4025342822426662,
"acc_norm_stderr,none": 0.006072426154807149,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.8,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3904109589041096,
"acc_norm_stderr,none": 0.040513109165891854
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.5224719101123596,
"acc_norm_stderr,none": 0.03754432508487191
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.116,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.292,
"acc_norm_stderr,none": 0.02881432040220563
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2986577181208054,
"acc_norm_stderr,none": 0.013264655332365493,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.30303030303030304,
"acc_norm_stderr,none": 0.03274287914026869
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.28205128205128205,
"acc_norm_stderr,none": 0.019275803929950375
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3169642857142857,
"acc_norm_stderr,none": 0.0220076215848248
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7948243992606284,
"prompt_level_strict_acc_stderr,none": 0.01737807119675965,
"inst_level_strict_acc,none": 0.8585131894484412,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.8151571164510166,
"prompt_level_loose_acc_stderr,none": 0.01670417955850395,
"inst_level_loose_acc,none": 0.8752997601918465,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.19637462235649547,
"exact_match_stderr,none": 0.009854609082277298,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.3811074918566775,
"exact_match_stderr,none": 0.02776327166045321
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.06060606060606061,
"exact_match_stderr,none": 0.020847129156682045
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.03214285714285714,
"exact_match_stderr,none": 0.01055955866175321
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.12987012987012986,
"exact_match_stderr,none": 0.02717696535667076
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.46113989637305697,
"exact_match_stderr,none": 0.03597524411734576
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.2826628989361702,
"acc_stderr,none": 0.004105302726114385
},
"leaderboard_musr": {
"acc_norm,none": 0.4166666666666667,
"acc_norm_stderr,none": 0.01768575862518651,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.31640625,
"acc_norm_stderr,none": 0.02912403057115479
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
}
},
"leaderboard": {
"acc_norm,none": 0.38785834738617203,
"acc_norm_stderr,none": 0.005273329157943381,
"inst_level_loose_acc,none": 0.8752997601918465,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8585131894484412,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.19637462235649547,
"exact_match_stderr,none": 0.009854609082277298,
"acc,none": 0.2826628989361702,
"acc_stderr,none": 0.0041053027261143855,
"prompt_level_strict_acc,none": 0.7948243992606284,
"prompt_level_strict_acc_stderr,none": 0.01737807119675965,
"prompt_level_loose_acc,none": 0.8151571164510166,
"prompt_level_loose_acc_stderr,none": 0.01670417955850395,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4025342822426662,
"acc_norm_stderr,none": 0.006072426154807149,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.8,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3904109589041096,
"acc_norm_stderr,none": 0.040513109165891854
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.5224719101123596,
"acc_norm_stderr,none": 0.03754432508487191
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.116,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.292,
"acc_norm_stderr,none": 0.02881432040220563
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2986577181208054,
"acc_norm_stderr,none": 0.013264655332365493,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.30303030303030304,
"acc_norm_stderr,none": 0.03274287914026869
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.28205128205128205,
"acc_norm_stderr,none": 0.019275803929950375
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3169642857142857,
"acc_norm_stderr,none": 0.0220076215848248
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7948243992606284,
"prompt_level_strict_acc_stderr,none": 0.01737807119675965,
"inst_level_strict_acc,none": 0.8585131894484412,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.8151571164510166,
"prompt_level_loose_acc_stderr,none": 0.01670417955850395,
"inst_level_loose_acc,none": 0.8752997601918465,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.19637462235649547,
"exact_match_stderr,none": 0.009854609082277298,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.3811074918566775,
"exact_match_stderr,none": 0.02776327166045321
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.06060606060606061,
"exact_match_stderr,none": 0.020847129156682045
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.03214285714285714,
"exact_match_stderr,none": 0.01055955866175321
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.12987012987012986,
"exact_match_stderr,none": 0.02717696535667076
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.46113989637305697,
"exact_match_stderr,none": 0.03597524411734576
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.2826628989361702,
"acc_stderr,none": 0.004105302726114385
},
"leaderboard_musr": {
"acc_norm,none": 0.4166666666666667,
"acc_norm_stderr,none": 0.01768575862518651,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.31640625,
"acc_norm_stderr,none": 0.02912403057115479
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
}
}
```
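As a hedged sketch (assuming the `huggingface_hub` library and network access to the Hub), the aggregated results file linked above can also be downloaded and parsed directly; the in-repo path below is taken from the URL in this card:
```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results file referenced in the "Latest results" link above.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details",
    filename="allenai__Llama-3.1-Tulu-3-8B/results_2024-11-22T00-31-08.901515.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Inspect the top-level keys before indexing into the payload.
print(sorted(results.keys()))
```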
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Yuanxin-Liu/Test-Dataset | Yuanxin-Liu | "2024-11-22T00:31:56Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:31:55Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: original_question
dtype: string
- name: idx
dtype: int64
splits:
- name: train
num_bytes: 108719
num_examples: 100
download_size: 64293
dataset_size: 108719
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details | open-llm-leaderboard | "2024-11-22T00:35:43Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:32:12Z" | ---
pretty_name: Evaluation run of ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ZeroXClem/Qwen2.5-7B-HomerCreative-Mix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerCreative-Mix)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details\"\
,\n\tname=\"ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-22T00-32-11.693490](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details/blob/main/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix/results_2024-11-22T00-32-11.693490.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc,none\": 0.4447307180851064,\n \"acc_stderr,none\"\
: 0.004530535363926051,\n \"inst_level_loose_acc,none\": 0.8285371702637889,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.8165467625899281,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"exact_match,none\": 0.32326283987915405,\n \"exact_match_stderr,none\"\
: 0.011761711608666757,\n \"prompt_level_loose_acc,none\": 0.7634011090573013,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.018288827582625598,\n \
\ \"acc_norm,none\": 0.5014917628745622,\n \"acc_norm_stderr,none\"\
: 0.005340969872084893,\n \"prompt_level_strict_acc,none\": 0.7504621072088724,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018622404509805804,\n \
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\":\
\ {\n \"acc_norm,none\": 0.5521610831452872,\n \"acc_norm_stderr,none\"\
: 0.006179016832046109,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.86,\n\
\ \"acc_norm_stderr,none\": 0.021989409645240245\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5668449197860963,\n \"acc_norm_stderr,none\"\
: 0.03633267411102591\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.632,\n\
\ \"acc_norm_stderr,none\": 0.03056207062099311\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\":\
\ 0.030993197854577898\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.556,\n \
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.528,\n \"acc_norm_stderr,none\":\
\ 0.031636489531544396\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.468,\n \"acc_norm_stderr,none\":\
\ 0.03162125257572558\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.784,\n \"acc_norm_stderr,none\":\
\ 0.02607865766373279\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.632,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.7,\n \
\ \"acc_norm_stderr,none\": 0.029040893477575786\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.5958904109589042,\n \"acc_norm_stderr,none\": 0.0407519857003932\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.628,\n \"acc_norm_stderr,none\": 0.03063032594455827\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.58,\n \
\ \"acc_norm_stderr,none\": 0.03127799950463661\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\":\
\ 0.031603975145223735\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6966292134831461,\n \"acc_norm_stderr,none\": 0.03455421944400101\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.548,\n\
\ \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.168,\n \"acc_norm_stderr,none\":\
\ 0.023692813205492536\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.548,\n\
\ \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.29949664429530204,\n\
\ \"acc_norm_stderr,none\": 0.013278959534799928,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2878787878787879,\n \"acc_norm_stderr,none\": 0.03225883512300998\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.29120879120879123,\n\
\ \"acc_norm_stderr,none\": 0.019460910297288078\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.31473214285714285,\n \"acc_norm_stderr,none\"\
: 0.021965797142222607\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7504621072088724,\n \"prompt_level_strict_acc_stderr,none\": 0.018622404509805804,\n\
\ \"inst_level_strict_acc,none\": 0.8165467625899281,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7634011090573013,\n \"prompt_level_loose_acc_stderr,none\": 0.018288827582625598,\n\
\ \"inst_level_loose_acc,none\": 0.8285371702637889,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.32326283987915405,\n \"exact_match_stderr,none\"\
: 0.011761711608666757,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.6091205211726385,\n\
\ \"exact_match_stderr,none\": 0.027894098976471507\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.2032520325203252,\n \"exact_match_stderr,none\": 0.03643325851749072\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.20454545454545456,\n\
\ \"exact_match_stderr,none\": 0.03524251981380333\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.1392857142857143,\n \"exact_match_stderr,none\": 0.02072911170255923\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.3051948051948052,\n\
\ \"exact_match_stderr,none\": 0.0372284008596668\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.46113989637305697,\n \"exact_match_stderr,none\"\
: 0.03597524411734576\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.1037037037037037,\n \"exact_match_stderr,none\"\
: 0.02633725661744443\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.4447307180851064,\n\
\ \"acc_stderr,none\": 0.004530535363926052\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.43386243386243384,\n \"acc_norm_stderr,none\"\
: 0.01762618265060195,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.56,\n\
\ \"acc_norm_stderr,none\": 0.03145724452223569\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.296875,\n \"acc_norm_stderr,none\"\
: 0.028610997088737832\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ }\n },\n \"leaderboard\": {\n \"acc,none\": 0.4447307180851064,\n\
\ \"acc_stderr,none\": 0.004530535363926051,\n \"inst_level_loose_acc,none\"\
: 0.8285371702637889,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"inst_level_strict_acc,none\": 0.8165467625899281,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.32326283987915405,\n \"exact_match_stderr,none\"\
: 0.011761711608666757,\n \"prompt_level_loose_acc,none\": 0.7634011090573013,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.018288827582625598,\n \
\ \"acc_norm,none\": 0.5014917628745622,\n \"acc_norm_stderr,none\": 0.005340969872084893,\n\
\ \"prompt_level_strict_acc,none\": 0.7504621072088724,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.018622404509805804,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5521610831452872,\n \"acc_norm_stderr,none\"\
: 0.006179016832046109,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.86,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5668449197860963,\n \"acc_norm_stderr,none\"\
: 0.03633267411102591\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.632,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.468,\n \"acc_norm_stderr,none\": 0.03162125257572558\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.784,\n \"acc_norm_stderr,none\": 0.02607865766373279\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.632,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.5958904109589042,\n\
\ \"acc_norm_stderr,none\": 0.0407519857003932\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.628,\n \"acc_norm_stderr,none\": 0.03063032594455827\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6966292134831461,\n \"acc_norm_stderr,none\"\
: 0.03455421944400101\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.168,\n \"acc_norm_stderr,none\": 0.023692813205492536\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.29949664429530204,\n\
\ \"acc_norm_stderr,none\": 0.013278959534799928,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2878787878787879,\n\
\ \"acc_norm_stderr,none\": 0.03225883512300998\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.29120879120879123,\n \"acc_norm_stderr,none\": 0.019460910297288078\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.31473214285714285,\n \"acc_norm_stderr,none\"\
: 0.021965797142222607\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.7504621072088724,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018622404509805804,\n \
\ \"inst_level_strict_acc,none\": 0.8165467625899281,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.7634011090573013,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.018288827582625598,\n \"inst_level_loose_acc,none\"\
: 0.8285371702637889,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.32326283987915405,\n\
\ \"exact_match_stderr,none\": 0.011761711608666757,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.6091205211726385,\n \"exact_match_stderr,none\": 0.027894098976471507\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.2032520325203252,\n \"exact_match_stderr,none\": 0.03643325851749072\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.20454545454545456,\n \"exact_match_stderr,none\"\
: 0.03524251981380333\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.1392857142857143,\n \"exact_match_stderr,none\"\
: 0.02072911170255923\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.3051948051948052,\n \"exact_match_stderr,none\": 0.0372284008596668\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.46113989637305697,\n \"exact_match_stderr,none\"\
: 0.03597524411734576\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.1037037037037037,\n \"exact_match_stderr,none\": 0.02633725661744443\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.4447307180851064,\n \"acc_stderr,none\": 0.004530535363926052\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.43386243386243384,\n\
\ \"acc_norm_stderr,none\": 0.01762618265060195,\n \"alias\": \" -\
\ leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n },\n \"leaderboard_musr_object_placements\"\
: {\n \"alias\": \" - leaderboard_musr_object_placements\",\n \"\
acc_norm,none\": 0.296875,\n \"acc_norm_stderr,none\": 0.028610997088737832\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_ifeval
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-32-11.693490.jsonl'
---
# Dataset Card for Evaluation run of ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ZeroXClem/Qwen2.5-7B-HomerCreative-Mix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerCreative-Mix)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details",
name="ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
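If you just want to see which task configurations and splits exist before loading anything, you can enumerate them first. This is a minimal sketch (it assumes the `datasets` library is installed and that you have network access to the Hub):
```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo = "open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details"

# List every per-task configuration stored in this details repository.
configs = get_dataset_config_names(repo)
print(f"{len(configs)} configurations found")

# Inspect the splits of one configuration: one split per run timestamp, plus "latest".
print(configs[0], "->", get_dataset_split_names(repo, configs[0]))
```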
## Latest results
These are the [latest results from run 2024-11-22T00-32-11.693490](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details/blob/main/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix/results_2024-11-22T00-32-11.693490.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split of the corresponding eval):
```python
{
"all": {
"leaderboard": {
"acc,none": 0.4447307180851064,
"acc_stderr,none": 0.004530535363926051,
"inst_level_loose_acc,none": 0.8285371702637889,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8165467625899281,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.32326283987915405,
"exact_match_stderr,none": 0.011761711608666757,
"prompt_level_loose_acc,none": 0.7634011090573013,
"prompt_level_loose_acc_stderr,none": 0.018288827582625598,
"acc_norm,none": 0.5014917628745622,
"acc_norm_stderr,none": 0.005340969872084893,
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5521610831452872,
"acc_norm_stderr,none": 0.006179016832046109,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.86,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5668449197860963,
"acc_norm_stderr,none": 0.03633267411102591
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.784,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5958904109589042,
"acc_norm_stderr,none": 0.0407519857003932
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.628,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.168,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_gpqa": {
"acc_norm,none": 0.29949664429530204,
"acc_norm_stderr,none": 0.013278959534799928,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2878787878787879,
"acc_norm_stderr,none": 0.03225883512300998
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.29120879120879123,
"acc_norm_stderr,none": 0.019460910297288078
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.31473214285714285,
"acc_norm_stderr,none": 0.021965797142222607
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"inst_level_strict_acc,none": 0.8165467625899281,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7634011090573013,
"prompt_level_loose_acc_stderr,none": 0.018288827582625598,
"inst_level_loose_acc,none": 0.8285371702637889,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.32326283987915405,
"exact_match_stderr,none": 0.011761711608666757,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.6091205211726385,
"exact_match_stderr,none": 0.027894098976471507
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.2032520325203252,
"exact_match_stderr,none": 0.03643325851749072
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.20454545454545456,
"exact_match_stderr,none": 0.03524251981380333
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.1392857142857143,
"exact_match_stderr,none": 0.02072911170255923
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.3051948051948052,
"exact_match_stderr,none": 0.0372284008596668
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.46113989637305697,
"exact_match_stderr,none": 0.03597524411734576
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.1037037037037037,
"exact_match_stderr,none": 0.02633725661744443
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.4447307180851064,
"acc_stderr,none": 0.004530535363926052
},
"leaderboard_musr": {
"acc_norm,none": 0.43386243386243384,
"acc_norm_stderr,none": 0.01762618265060195,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.296875,
"acc_norm_stderr,none": 0.028610997088737832
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
}
},
"leaderboard": {
"acc,none": 0.4447307180851064,
"acc_stderr,none": 0.004530535363926051,
"inst_level_loose_acc,none": 0.8285371702637889,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8165467625899281,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.32326283987915405,
"exact_match_stderr,none": 0.011761711608666757,
"prompt_level_loose_acc,none": 0.7634011090573013,
"prompt_level_loose_acc_stderr,none": 0.018288827582625598,
"acc_norm,none": 0.5014917628745622,
"acc_norm_stderr,none": 0.005340969872084893,
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5521610831452872,
"acc_norm_stderr,none": 0.006179016832046109,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.86,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5668449197860963,
"acc_norm_stderr,none": 0.03633267411102591
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.784,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5958904109589042,
"acc_norm_stderr,none": 0.0407519857003932
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.628,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.168,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_gpqa": {
"acc_norm,none": 0.29949664429530204,
"acc_norm_stderr,none": 0.013278959534799928,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2878787878787879,
"acc_norm_stderr,none": 0.03225883512300998
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.29120879120879123,
"acc_norm_stderr,none": 0.019460910297288078
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.31473214285714285,
"acc_norm_stderr,none": 0.021965797142222607
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"inst_level_strict_acc,none": 0.8165467625899281,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7634011090573013,
"prompt_level_loose_acc_stderr,none": 0.018288827582625598,
"inst_level_loose_acc,none": 0.8285371702637889,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.32326283987915405,
"exact_match_stderr,none": 0.011761711608666757,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.6091205211726385,
"exact_match_stderr,none": 0.027894098976471507
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.2032520325203252,
"exact_match_stderr,none": 0.03643325851749072
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.20454545454545456,
"exact_match_stderr,none": 0.03524251981380333
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.1392857142857143,
"exact_match_stderr,none": 0.02072911170255923
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.3051948051948052,
"exact_match_stderr,none": 0.0372284008596668
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.46113989637305697,
"exact_match_stderr,none": 0.03597524411734576
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.1037037037037037,
"exact_match_stderr,none": 0.02633725661744443
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.4447307180851064,
"acc_stderr,none": 0.004530535363926052
},
"leaderboard_musr": {
"acc_norm,none": 0.43386243386243384,
"acc_norm_stderr,none": 0.01762618265060195,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.296875,
"acc_norm_stderr,none": 0.028610997088737832
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
}
}
```
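The same aggregated numbers can also be pulled down as a plain JSON file through the link above. Below is a minimal sketch (it assumes the `huggingface_hub` package is installed and that the downloaded file mirrors the structure printed here; adjust the key lookup if the layout differs):
```python
import json
from huggingface_hub import hf_hub_download

# Fetch the aggregated results file referenced above (repo_type must be "dataset").
path = hf_hub_download(
    repo_id="open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details",
    filename="ZeroXClem__Qwen2.5-7B-HomerCreative-Mix/results_2024-11-22T00-32-11.693490.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Print one headline metric per task, falling back to the top level
# if the file does not nest the summary under "all".
summary = results.get("all", results)
for task, metrics in sorted(summary.items()):
    if isinstance(metrics, dict):
        score = metrics.get("acc_norm,none") or metrics.get("exact_match,none") or metrics.get("acc,none")
        print(f"{task:60s} {score}")
```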
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details | open-llm-leaderboard | "2024-11-22T00:37:56Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:34:21Z" | ---
pretty_name: Evaluation run of ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details\"\
,\n\tname=\"ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-22T00-34-20.371295](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details/blob/main/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix/results_2024-11-22T00-34-20.371295.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.7578558225508318,\n \"\
prompt_level_loose_acc_stderr,none\": 0.018434587800223168,\n \"acc,none\"\
: 0.4431515957446808,\n \"acc_stderr,none\": 0.00452891098809217,\n \
\ \"acc_norm,none\": 0.5046050071345181,\n \"acc_norm_stderr,none\"\
: 0.005356894928628325,\n \"inst_level_strict_acc,none\": 0.802158273381295,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.7393715341959335,\n \"prompt_level_strict_acc_stderr,none\": 0.018890584986760186,\n\
\ \"exact_match,none\": 0.29531722054380666,\n \"exact_match_stderr,none\"\
: 0.011453860732395094,\n \"inst_level_loose_acc,none\": 0.8201438848920863,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.551640340218712,\n \"acc_norm_stderr,none\": 0.006182534734432989,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5614973262032086,\n\
\ \"acc_norm_stderr,none\": 0.03638341809400991\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\":\
\ 0.03139181076542941\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.6,\n\
\ \"acc_norm_stderr,none\": 0.031046021028253316\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.552,\n \
\ \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.524,\n \"acc_norm_stderr,none\":\
\ 0.03164968895968774\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\":\
\ 0.03166998503010743\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.364,\n\
\ \"acc_norm_stderr,none\": 0.030491555220405475\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.589041095890411,\n \"acc_norm_stderr,none\"\
: 0.04085902451640228\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\":\
\ 0.030491555220405475\n },\n \"leaderboard_bbh_ruin_names\": {\n\
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.6966292134831461,\n\
\ \"acc_norm_stderr,none\": 0.03455421944400101\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.732,\n \"acc_norm_stderr,none\":\
\ 0.02806876238252672\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\":\
\ 0.025537121574548162\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\":\
\ 0.027367497504863593\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3196308724832215,\n\
\ \"acc_norm_stderr,none\": 0.013522572199065146,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3181818181818182,\n \"acc_norm_stderr,none\": 0.0331847733384533\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3131868131868132,\n\
\ \"acc_norm_stderr,none\": 0.01986656558013767\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.328125,\n \"acc_norm_stderr,none\"\
: 0.0222080353262888\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7393715341959335,\n \"prompt_level_strict_acc_stderr,none\": 0.018890584986760186,\n\
\ \"inst_level_strict_acc,none\": 0.802158273381295,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7578558225508318,\n \"prompt_level_loose_acc_stderr,none\": 0.018434587800223168,\n\
\ \"inst_level_loose_acc,none\": 0.8201438848920863,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.29531722054380666,\n \"exact_match_stderr,none\"\
: 0.011453860732395094,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.5635179153094463,\n\
\ \"exact_match_stderr,none\": 0.028351520946552713\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.17073170731707318,\n \"exact_match_stderr,none\": 0.034066279591320504\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.1590909090909091,\n\
\ \"exact_match_stderr,none\": 0.03195667292673137\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.11785714285714285,\n \"exact_match_stderr,none\": 0.019303911310421605\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.23376623376623376,\n\
\ \"exact_match_stderr,none\": 0.034215730598256215\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.47668393782383417,\n \"exact_match_stderr,none\"\
: 0.03604513672442202\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.1111111111111111,\n \"exact_match_stderr,none\"\
: 0.027148765412512273\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.4431515957446808,\n\
\ \"acc_stderr,none\": 0.00452891098809217\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.43783068783068785,\n \"acc_norm_stderr,none\"\
: 0.017595964155130817,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.556,\n\
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.28515625,\n \"acc_norm_stderr,none\"\
: 0.028273327213286358\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.7578558225508318,\n \"prompt_level_loose_acc_stderr,none\": 0.018434587800223168,\n\
\ \"acc,none\": 0.4431515957446808,\n \"acc_stderr,none\": 0.00452891098809217,\n\
\ \"acc_norm,none\": 0.5046050071345181,\n \"acc_norm_stderr,none\"\
: 0.005356894928628325,\n \"inst_level_strict_acc,none\": 0.802158273381295,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.7393715341959335,\n \"prompt_level_strict_acc_stderr,none\": 0.018890584986760186,\n\
\ \"exact_match,none\": 0.29531722054380666,\n \"exact_match_stderr,none\"\
: 0.011453860732395094,\n \"inst_level_loose_acc,none\": 0.8201438848920863,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.551640340218712,\n\
\ \"acc_norm_stderr,none\": 0.006182534734432989,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5614973262032086,\n \"acc_norm_stderr,none\"\
: 0.03638341809400991\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.524,\n \"acc_norm_stderr,none\": 0.03164968895968774\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.364,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.589041095890411,\n\
\ \"acc_norm_stderr,none\": 0.04085902451640228\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6966292134831461,\n \"acc_norm_stderr,none\"\
: 0.03455421944400101\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.732,\n \"acc_norm_stderr,none\": 0.02806876238252672\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\": 0.025537121574548162\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3196308724832215,\n\
\ \"acc_norm_stderr,none\": 0.013522572199065146,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3181818181818182,\n\
\ \"acc_norm_stderr,none\": 0.0331847733384533\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.3131868131868132,\n \"acc_norm_stderr,none\": 0.01986656558013767\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.328125,\n \"acc_norm_stderr,none\": 0.0222080353262888\n\
\ },\n \"leaderboard_ifeval\": {\n \"alias\": \" - leaderboard_ifeval\"\
,\n \"prompt_level_strict_acc,none\": 0.7393715341959335,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.018890584986760186,\n \"inst_level_strict_acc,none\": 0.802158273381295,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7578558225508318,\n \"prompt_level_loose_acc_stderr,none\": 0.018434587800223168,\n\
\ \"inst_level_loose_acc,none\": 0.8201438848920863,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.29531722054380666,\n \"exact_match_stderr,none\": 0.011453860732395094,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.5635179153094463,\n \"exact_match_stderr,none\": 0.028351520946552713\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.17073170731707318,\n \"exact_match_stderr,none\": 0.034066279591320504\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.1590909090909091,\n \"exact_match_stderr,none\"\
: 0.03195667292673137\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.11785714285714285,\n \"exact_match_stderr,none\"\
: 0.019303911310421605\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.23376623376623376,\n \"exact_match_stderr,none\": 0.034215730598256215\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.47668393782383417,\n \"exact_match_stderr,none\"\
: 0.03604513672442202\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.1111111111111111,\n \"exact_match_stderr,none\": 0.027148765412512273\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.4431515957446808,\n \"acc_stderr,none\": 0.00452891098809217\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.43783068783068785,\n\
\ \"acc_norm_stderr,none\": 0.017595964155130817,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.28515625,\n \"acc_norm_stderr,none\": 0.028273327213286358\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_ifeval
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-34-20.371295.jsonl'
---
# Dataset Card for Evaluation run of ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details",
name="ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-11-22T00-34-20.371295](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details/blob/main/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix/results_2024-11-22T00-34-20.371295.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.7578558225508318,
"prompt_level_loose_acc_stderr,none": 0.018434587800223168,
"acc,none": 0.4431515957446808,
"acc_stderr,none": 0.00452891098809217,
"acc_norm,none": 0.5046050071345181,
"acc_norm_stderr,none": 0.005356894928628325,
"inst_level_strict_acc,none": 0.802158273381295,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.7393715341959335,
"prompt_level_strict_acc_stderr,none": 0.018890584986760186,
"exact_match,none": 0.29531722054380666,
"exact_match_stderr,none": 0.011453860732395094,
"inst_level_loose_acc,none": 0.8201438848920863,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.551640340218712,
"acc_norm_stderr,none": 0.006182534734432989,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5614973262032086,
"acc_norm_stderr,none": 0.03638341809400991
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.589041095890411,
"acc_norm_stderr,none": 0.04085902451640228
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3196308724832215,
"acc_norm_stderr,none": 0.013522572199065146,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3181818181818182,
"acc_norm_stderr,none": 0.0331847733384533
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3131868131868132,
"acc_norm_stderr,none": 0.01986656558013767
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.328125,
"acc_norm_stderr,none": 0.0222080353262888
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7393715341959335,
"prompt_level_strict_acc_stderr,none": 0.018890584986760186,
"inst_level_strict_acc,none": 0.802158273381295,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7578558225508318,
"prompt_level_loose_acc_stderr,none": 0.018434587800223168,
"inst_level_loose_acc,none": 0.8201438848920863,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.29531722054380666,
"exact_match_stderr,none": 0.011453860732395094,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.5635179153094463,
"exact_match_stderr,none": 0.028351520946552713
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.17073170731707318,
"exact_match_stderr,none": 0.034066279591320504
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.1590909090909091,
"exact_match_stderr,none": 0.03195667292673137
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.11785714285714285,
"exact_match_stderr,none": 0.019303911310421605
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.23376623376623376,
"exact_match_stderr,none": 0.034215730598256215
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.47668393782383417,
"exact_match_stderr,none": 0.03604513672442202
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.1111111111111111,
"exact_match_stderr,none": 0.027148765412512273
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.4431515957446808,
"acc_stderr,none": 0.00452891098809217
},
"leaderboard_musr": {
"acc_norm,none": 0.43783068783068785,
"acc_norm_stderr,none": 0.017595964155130817,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.28515625,
"acc_norm_stderr,none": 0.028273327213286358
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.7578558225508318,
"prompt_level_loose_acc_stderr,none": 0.018434587800223168,
"acc,none": 0.4431515957446808,
"acc_stderr,none": 0.00452891098809217,
"acc_norm,none": 0.5046050071345181,
"acc_norm_stderr,none": 0.005356894928628325,
"inst_level_strict_acc,none": 0.802158273381295,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.7393715341959335,
"prompt_level_strict_acc_stderr,none": 0.018890584986760186,
"exact_match,none": 0.29531722054380666,
"exact_match_stderr,none": 0.011453860732395094,
"inst_level_loose_acc,none": 0.8201438848920863,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.551640340218712,
"acc_norm_stderr,none": 0.006182534734432989,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5614973262032086,
"acc_norm_stderr,none": 0.03638341809400991
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.589041095890411,
"acc_norm_stderr,none": 0.04085902451640228
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3196308724832215,
"acc_norm_stderr,none": 0.013522572199065146,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3181818181818182,
"acc_norm_stderr,none": 0.0331847733384533
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3131868131868132,
"acc_norm_stderr,none": 0.01986656558013767
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.328125,
"acc_norm_stderr,none": 0.0222080353262888
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7393715341959335,
"prompt_level_strict_acc_stderr,none": 0.018890584986760186,
"inst_level_strict_acc,none": 0.802158273381295,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7578558225508318,
"prompt_level_loose_acc_stderr,none": 0.018434587800223168,
"inst_level_loose_acc,none": 0.8201438848920863,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.29531722054380666,
"exact_match_stderr,none": 0.011453860732395094,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.5635179153094463,
"exact_match_stderr,none": 0.028351520946552713
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.17073170731707318,
"exact_match_stderr,none": 0.034066279591320504
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.1590909090909091,
"exact_match_stderr,none": 0.03195667292673137
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.11785714285714285,
"exact_match_stderr,none": 0.019303911310421605
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.23376623376623376,
"exact_match_stderr,none": 0.034215730598256215
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.47668393782383417,
"exact_match_stderr,none": 0.03604513672442202
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.1111111111111111,
"exact_match_stderr,none": 0.027148765412512273
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.4431515957446808,
"acc_stderr,none": 0.00452891098809217
},
"leaderboard_musr": {
"acc_norm,none": 0.43783068783068785,
"acc_norm_stderr,none": 0.017595964155130817,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.28515625,
"acc_norm_stderr,none": 0.028273327213286358
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
neoneye/simon-arc-solve-color-v17 | neoneye | "2024-11-22T00:37:15Z" | 0 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-22T00:36:02Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve color version 17
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the colors get manipulated.
Currently it's two-color images, where the transformation is to swap colors.
The image sizes are between 1 and 5 pixels.
Predict the number of rows in the output image.
# Version 2
Number of tests: 1-2. Previously there was always just 1 test.
# Version 3
input image size: 1-3.
Number of tests: 1.
Identify most popular color, and least popular color. The output size is always 1x1.
# Version 4
input image size: 1-4.
Number of tests: 1.
Identify most popular color, and least popular color. The output size is always 1x1.
# Version 5
input image size: 1-5.
Number of tests: 1-2.
Identify most popular color, and least popular color. The output size is always 1x1.
# Version 6
input image size: 1-5.
Number of tests: 1-2.
Identify most popular color, and least popular color. Multiple output sizes: the output is either 1x1 or the same size as the input.
Swap colors.
# Version 7
Focus on `generate_task_replace_color`.
image size: 3-6.
padding size: 1-5.
# Version 8
Focus on `generate_task_replace_color`.
image size: 3-8.
padding size: 1-10.
# Version 9
Focus on `generate_task_replace_color`.
image size: 3-10.
padding size: 1-20.
# Version 10
Enabled all the task generators.
# Version 11
Focus on `generate_task_replace_color_pairs_with_different_palettes`.
image size: 3-5.
padding size: 1-4.
# Version 12
Focus on `generate_task_replace_color_pairs_with_different_palettes`.
image size: 3-8.
padding size: 1-10.
# Version 13
Focus on `generate_task_replace_color_pairs_with_different_palettes`.
image size: 3-10.
padding size: 1-20.
# Version 14
Extended `generate_task_replace_color_pairs_with_different_palettes` with 2 new palette modes.
Enabled all transformations.
# Version 15
Earlier predictions added to some of the rows.
# Version 16
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 17
Replaced RLE compressed response with raw pixel response.
image size: 1-7.
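# Usage example
A minimal sketch of loading this dataset with the `datasets` library. The exact record fields inside `data.jsonl` are not documented above, so the snippet simply inspects the first row.
```python
from datasets import load_dataset

# The default config points at data.jsonl with a single "train" split (see the YAML above).
dataset = load_dataset("neoneye/simon-arc-solve-color-v17", split="train")

# Field names are not documented here, so inspect the first record.
print(dataset[0].keys())
```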
|
preetam8/accent_correction_dataset_sample_3 | preetam8 | "2024-11-22T00:39:44Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:39:42Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: speaker
dtype: string
- name: waveform
sequence: float16
splits:
- name: train
num_bytes: 1137004
num_examples: 4
download_size: 1117919
dataset_size: 1137004
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yuyuriri/btsdata_TEST | yuyuriri | "2024-11-22T00:43:45Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:43:44Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29225
num_examples: 43
download_size: 19249
dataset_size: 29225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
su0529/btsdata_TEST | su0529 | "2024-11-22T00:45:04Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:44:54Z" | ---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29225
num_examples: 43
download_size: 19249
dataset_size: 29225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LeeSence/llamastudy_bts_ko | LeeSence | "2024-11-22T00:45:36Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:45:04Z" | ---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29225
num_examples: 43
download_size: 19249
dataset_size: 29225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rssaem/llama_bts_ko | rssaem | "2024-11-22T00:46:53Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:45:07Z" | ---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29225
num_examples: 43
download_size: 19249
dataset_size: 29225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yuyuriri/llama_bts_ko | yuyuriri | "2024-11-22T00:46:02Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:45:08Z" | ---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29225
num_examples: 43
download_size: 19249
dataset_size: 29225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
smui/llama_bts_ko | smui | "2024-11-22T00:47:34Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:45:10Z" | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29225
num_examples: 43
download_size: 19249
dataset_size: 29225
---
|
Kimsusung/LSS_BTS_KR | Kimsusung | "2024-11-22T00:46:27Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:45:10Z" | ---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29225
num_examples: 43
download_size: 19249
dataset_size: 29225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
isuh/llama_bts_ko | isuh | "2024-11-22T00:48:24Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:45:12Z" | ---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29225
num_examples: 43
download_size: 19249
dataset_size: 29225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
preetam8/accent_correction_dataset_sample_4 | preetam8 | "2024-11-22T00:48:26Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:48:24Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: speaker
dtype: string
- name: waveform
sequence: float16
splits:
- name: train
num_bytes: 2226284
num_examples: 4
download_size: 2138977
dataset_size: 2226284
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
answerdotai/trivia_mcqa_6 | answerdotai | "2024-11-22T00:50:26Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:49:49Z" | ---
dataset_info:
features:
- name: question_id
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: options
sequence: string
- name: answer_index
dtype: int64
- name: answer
dtype: string
splits:
- name: validation
num_bytes: 20498466
num_examples: 1154
- name: train
num_bytes: 249005104
num_examples: 13986
download_size: 154522714
dataset_size: 269503570
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen6_run0_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | "2024-11-22T00:56:30Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:56:27Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 523754
num_examples: 1000
download_size: 288210
dataset_size: 523754
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jeonjin11/11ama_bts_ko | jeonjin11 | "2024-11-22T01:03:43Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T01:01:48Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29225
num_examples: 43
download_size: 19249
dataset_size: 29225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aisuko/DPO-dataset | aisuko | "2024-11-22T01:02:45Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T01:02:38Z" | ---
dataset_info:
features:
- name: origin
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 154357530
num_examples: 37587
download_size: 81215310
dataset_size: 154357530
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nuprl-staging/training_classifier_intermediate_depth2 | nuprl-staging | "2024-11-22T01:03:19Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T01:03:17Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: answer
dtype: string
- name: pythoncode
dtype: string
- name: depth0
dtype: string
- name: depth1
dtype: string
- name: depth2
dtype: string
- name: depth3
dtype: string
- name: depth4
dtype: string
- name: depth5
dtype: string
- name: depth6
dtype: string
- name: depth7
dtype: string
- name: depthn0
dtype: string
- name: depthn1
dtype: string
- name: depthn2
dtype: string
- name: depthn3
dtype: string
- name: depthn4
dtype: string
- name: depthn5
dtype: string
- name: depthn6
dtype: string
- name: depthn7
dtype: string
- name: num_nodes
dtype: int64
- name: num_edges
dtype: int64
- name: num_classes
dtype: int64
- name: path_length
dtype: int64
- name: num_cycle
dtype: int64
- name: correctness
dtype: bool
- name: one_correct
dtype: bool
splits:
- name: train
num_bytes: 4222364
num_examples: 3000
- name: test
num_bytes: 698164
num_examples: 500
download_size: 1082498
dataset_size: 4920528
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ahmedheakl/ar_sharegpt4v_instruct | ahmedheakl | "2024-11-22T01:13:43Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"modality:image",
"modality:text",
"region:us"
] | null | "2024-11-22T01:04:05Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: image_path
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 10203009235
num_examples: 45123
download_size: 10157685442
dataset_size: 10203009235
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
neoneye/simon-arc-solve-skew-v6 | neoneye | "2024-11-22T01:18:26Z" | 0 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-22T01:17:49Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve skew version 6
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the job is to apply skew/unskew in the directions up/down/left/right.
example count: 2-4.
test count: 1-2.
image size: 1-4.
# Version 2
image size: 1-7.
# Version 3
Earlier predictions added to some of the rows.
# Version 4
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 5
Replaced RLE compressed response with raw pixel response.
# Version 6
image size: 1-9. |
The-reader1111/hello | The-reader1111 | "2024-11-22T01:23:21Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T01:23:07Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29225
num_examples: 43
download_size: 19249
dataset_size: 29225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sirjoy/so100_test_2 | sirjoy | "2024-11-22T01:24:03Z" | 0 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | "2024-11-22T01:23:51Z" | ---
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
UniDataPro/2d-masks-pad-attacks | UniDataPro | "2024-11-22T01:27:43Z" | 0 | 0 | [
"license:cc-by-nc-nd-4.0",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-22T01:27:21Z" | ---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
tags:
- people
- mask
- computer vision
- facial recognition
- verification
- cyber security
size_categories:
- 10K<n<100K
---
# 2D Masks with Eyeholes Attacks
The dataset comprises **11,200+** videos of people wearing or holding 2D printed masks with eyeholes, captured using **5** different devices. This extensive collection is designed for research in presentation attacks, focusing on various **detection methods**, primarily aimed at meeting the requirements for **iBeta Level 1 & 2 certification.** It is specifically engineered to challenge **facial recognition** systems and enhance **spoofing detection** techniques.
By utilizing this dataset, researchers and developers can advance their understanding and capabilities in **biometric security** and **liveness detection technologies**. - **[Get the data](https://unidata.pro/datasets/2d-masks/?utm_source=huggingface&utm_medium=cpc&utm_campaign=2d-masks-pad-attacks)**
## Attacks in the dataset
![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F22059654%2F589023f5a5202dd588e5d9df56747037%2FFrame%20166.png?generation=1731498279480466&alt=media)
The attacks were recorded in various settings, showcasing individuals with different attributes. Each video features human faces adorned with 2D masks, simulating potential spoofing attempts in facial recognition systems.
**Variants of backgrounds and attributes in the dataset**:
![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F22059654%2Fbf3ed70a8a519ebb4bbdd8fc634bd4f3%2FFrame%20146%20(2).png?generation=1730208154622175&alt=media)
# 💵 Buy the Dataset: This is a limited preview of the data. To access the full dataset, please contact us at [https://unidata.pro](https://unidata.pro/datasets/2d-masks/?utm_source=huggingface&utm_medium=cpc&utm_campaign=2d-masks-pad-attacks) to discuss your requirements and pricing options.
Researchers can utilize this dataset to explore detection technology and recognition algorithms that aim to prevent impostor attacks and improve authentication processes.
## Metadata for the dataset
![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F22059654%2F599c49641aa40b28eef7684a086fb98b%2FFrame%20161%20(4).png?generation=1731498379460267&alt=media)
**Variables in .csv files:**
- **name:** filename of the printed 2D mask
- **path:** link-path for the original video
- **type:** type (wearing or holding) of the printed mask
The dataset provides a robust foundation for achieving higher detection accuracy and advancing liveness detection methods, which are essential for preventing identity fraud and ensuring reliable biometric verification.
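For illustration only, a minimal sketch of reading the metadata with pandas. The file name `metadata.csv` is an assumption; the `name`, `path`, and `type` columns come from the variable list above.
```python
import pandas as pd

# "metadata.csv" is an assumed file name for the .csv files described above.
df = pd.read_csv("metadata.csv")

# Columns taken from the variable list: name, path, type.
# Count videos where the mask is worn versus held.
print(df["type"].value_counts())
```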
# 🌐 [UniData](https://unidata.pro/datasets/2d-masks/utm_source=huggingface&utm_medium=cpc&utm_campaign=2d-masks-pad-attacks) provides high-quality datasets, content moderation, data collection and annotation for your AI/ML projects |