datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---|
ADHIZ/ghh | ADHIZ | "2024-11-29T09:26:18Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T09:26:14Z" | ---
dataset_info:
features:
- name: year
dtype: int64
- name: industry_code_ANZSIC
dtype: string
- name: industry_name_ANZSIC
dtype: string
- name: rme_size_grp
dtype: string
- name: variable
dtype: string
- name: value
dtype: string
- name: unit
dtype: string
splits:
- name: train
num_bytes: 2151896
num_examples: 20124
download_size: 173304
dataset_size: 2151896
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ADHIZ/ghh34r5 | ADHIZ | "2024-11-29T09:29:36Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T09:29:34Z" | ---
dataset_info:
features:
- name: code_language
dtype: string
- name: code
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 202
num_examples: 2
download_size: 2217
dataset_size: 202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dnth/pixmo-cap-qa-images-chunk-0 | dnth | "2024-11-29T09:54:42Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T09:54:20Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 831484746.516
num_examples: 942
download_size: 480669579
dataset_size: 831484746.516
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EnKop/primal-chaos | EnKop | "2024-11-29T12:09:31Z" | 3 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T10:47:29Z" | ---
license: apache-2.0
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
mathildebindslev/MiniProjectML | mathildebindslev | "2024-11-29T11:13:42Z" | 3 | 0 | [
"task_categories:audio-classification",
"size_categories:n<1K",
"format:json",
"modality:text",
"modality:audio",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"trash",
"audio"
] | [
"audio-classification"
] | "2024-11-29T10:53:51Z" | ---
task_categories:
- audio-classification
tags:
- trash
- audio
---
This dataset is designed for training an audio classification model that identifies the type of trash being thrown into a bucket.
The model classifies sounds into the following categories: Metal, Glass, Plastic, Cardboard, and Noise (non-trash-related sounds).
The dataset was recorded and organized as part of an Edge Impulse project to create a system that sorts trash based on sound.
Link to Edge Impulse: https://studio.edgeimpulse.com/public/556872/live |
DT4LM/albertbase_mr_faster-alzantot_differential | DT4LM | "2024-11-29T11:00:00Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T10:55:28Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 44372.732447817834
num_examples: 337
download_size: 33583
dataset_size: 44372.732447817834
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/debertav3ba_mr_faster-alzantot_differential | DT4LM | "2024-11-29T11:04:47Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:00:21Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 29331.135135135137
num_examples: 226
download_size: 22921
dataset_size: 29331.135135135137
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/debertav3ba_mr_leap_differential | DT4LM | "2024-11-29T11:03:13Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:00:50Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 52885.45275590551
num_examples: 405
download_size: 37426
dataset_size: 52885.45275590551
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/debertav3ba_mr_leap_differential_original | DT4LM | "2024-11-29T11:04:23Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:03:13Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 52121.69291338583
num_examples: 405
download_size: 36193
dataset_size: 52121.69291338583
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/debertav3ba_mr_faster-alzantot_differential_original | DT4LM | "2024-11-29T11:04:51Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:04:48Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 28956.87469287469
num_examples: 226
download_size: 21990
dataset_size: 28956.87469287469
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ngochuyen2504/jenny-tts-6h-descriptions-v1 | Ngochuyen2504 | "2024-11-29T11:09:52Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T11:09:49Z" | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: text
dtype: string
- name: transcription_normalised
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
- name: text_description
dtype: string
splits:
- name: train
num_bytes: 2668518
num_examples: 4000
download_size: 1223613
dataset_size: 2668518
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
davidberenstein1957/daily-papers-docling-full-dataset | davidberenstein1957 | "2024-11-29T12:04:10Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:03:37Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: tags
sequence: 'null'
- name: properties
dtype: 'null'
- name: error
dtype: 'null'
- name: raw_response
dtype: string
- name: version
dtype: string
- name: mime_type
dtype: string
- name: label
dtype: string
- name: filename
dtype: string
- name: page_no
dtype: int64
- name: mimetype
dtype: string
- name: dpi
dtype: int64
- name: width
dtype: int64
- name: height
dtype: int64
- name: text
dtype: string
- name: text_length
dtype: int64
- name: synced_at
dtype: 'null'
- name: file_name
dtype: image
splits:
- name: train
num_bytes: 744487980.33
num_examples: 14165
download_size: 657766213
dataset_size: 744487980.33
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/albertbasev2_agnews_leap | DT4LM | "2024-11-29T12:21:43Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:19:53Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 170904
num_examples: 681
download_size: 122381
dataset_size: 170904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/albertbasev2_agnews_leap_original | DT4LM | "2024-11-29T12:21:48Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:21:44Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 165950
num_examples: 681
download_size: 115911
dataset_size: 165950
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Nash-pAnDiTa/youssef-science-street-peroxide | Nash-pAnDiTa | "2024-11-29T12:23:23Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:23:14Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 152301488.0
num_examples: 15
download_size: 152136065
dataset_size: 152301488.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ylacombe/peoples_speech-tags-annotated | ylacombe | "2024-11-29T12:33:18Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:32:44Z" | ---
dataset_info:
config_name: clean
features:
- name: id
dtype: string
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
splits:
- name: train
num_bytes: 1151010413
num_examples: 1501271
- name: validation
num_bytes: 9529506
num_examples: 18622
- name: test
num_bytes: 17609193
num_examples: 34898
download_size: 490187975
dataset_size: 1178149112
configs:
- config_name: clean
data_files:
- split: train
path: clean/train-*
- split: validation
path: clean/validation-*
- split: test
path: clean/test-*
---
|
Tobius/f667011f-cd37-4dd8-9bc7-c5e95dc10170 | Tobius | "2024-11-29T12:43:03Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T12:42:57Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 108838.4
num_examples: 800
- name: test
num_bytes: 27209.6
num_examples: 200
download_size: 11886
dataset_size: 136048.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
kite4869xc/waymo | kite4869xc | "2024-11-29T12:57:12Z" | 3 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-29T12:57:12Z" | ---
license: apache-2.0
---
|
kite4869xc/waymo_dataset | kite4869xc | "2024-11-29T13:40:41Z" | 3 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-29T13:35:15Z" | ---
license: apache-2.0
---
|
Nash-pAnDiTa/youssef-Using-ChatGPT-to-learn-programming | Nash-pAnDiTa | "2024-11-29T13:38:00Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T13:37:51Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 169594818.0
num_examples: 16
download_size: 154450733
dataset_size: 169594818.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/debertav3base_rte_leap | DT4LM | "2024-11-30T08:05:50Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T13:39:48Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 58605
num_examples: 190
download_size: 47788
dataset_size: 58605
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/debertav3base_rte_leap_original | DT4LM | "2024-11-30T08:05:54Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T13:39:54Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 58190
num_examples: 190
download_size: 44535
dataset_size: 58190
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Nash-pAnDiTa/youssef-NobleInMedicine2024 | Nash-pAnDiTa | "2024-11-29T13:49:04Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T13:48:44Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 233672373.0
num_examples: 22
download_size: 230946683
dataset_size: 233672373.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tobius/e2f42034-1079-4a1e-996d-3e81ae5c78f3 | Tobius | "2024-11-29T13:54:39Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T13:54:35Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 108838.4
num_examples: 800
- name: test
num_bytes: 27209.6
num_examples: 200
download_size: 12123
dataset_size: 136048.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
DigiGreen/Kenya_Agri_queries | DigiGreen | "2024-11-29T14:22:34Z" | 3 | 0 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"region:us",
"agriculture",
"farming",
"farmerq-a",
"farmer_queries"
] | [
"question-answering"
] | "2024-11-29T14:17:02Z" | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- agriculture
- farming
- farmerq-a
- farmer_queries
size_categories:
- 100K<n<1M
---
This is a dataset of anonymised farmer queries from Kenya, collected on version 1 of farmer.chat (a Telegram bot).
The data was generated through use of the bot over a period of 10 months, from September 2023 till June 2024. |
hugosenet/request_denial_evaluation_of_responses_of_a_restrained_model_test | hugosenet | "2024-11-29T14:20:41Z" | 3 | 0 | [
"task_categories:text-generation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:extended",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | "2024-11-29T14:20:36Z" | ---
language_creators:
- machine-generated
language:
- en
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- extended
task_categories:
- text-generation
pretty_name: Request denial evaluation of responses to illicit queries with a restrained
model without primers
---
|
chaichangkun/so100_grasp_cube | chaichangkun | "2024-11-29T14:32:40Z" | 3 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"so100",
"grasp_cube"
] | [
"robotics"
] | "2024-11-29T14:31:29Z" | ---
task_categories:
- robotics
tags:
- LeRobot
- so100
- grasp_cube
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
rhfeiyang/Art-Free-SAM | rhfeiyang | "2024-11-29T14:55:57Z" | 3 | 0 | [
"task_categories:text-to-image",
"region:us",
"Art-Free"
] | [
"text-to-image"
] | "2024-11-29T14:37:51Z" | ---
task_categories:
- text-to-image
tags:
- Art-Free
---
Our Art-Free-SAM contains the IDs from the original SA-1B dataset [here](https://ai.meta.com/datasets/segment-anything-downloads/).
We used the captions from [SAM-LLaVA-Captions10M](https://huggingface.co/datasets/PixArt-alpha/SAM-LLaVA-Captions10M/tree/main).
The folder structure should be like:
```
sam_dataset
├── captions
│ ├── 0.txt
│ ├── 1.txt
│ └── ...
├── images
│ ├── sa_000000
│ ├── 0.jpg
│ ├── 1.jpg
│ └── ...
│ ├── sa_000001
│ ├── 0.jpg
│ ├── 1.jpg
│ └── ...
│ ├── ...
│ └── sa_000999
└──
```
Download our [id_dict.pickle](https://huggingface.co/datasets/rhfeiyang/Art-Free-SAM/blob/main/id_dict.pickle), [art-free-sam-loader.py](https://huggingface.co/datasets/rhfeiyang/Art-Free-SAM/blob/main/art-free-sam-loader.py), and [ids_train.pickle](https://huggingface.co/datasets/rhfeiyang/Art-Free-SAM/blob/main/ids_train.pickle); then you can load the dataset by:
```python
# rename art-free-sam-loader.py to art_free_sam_loader.py before importing
from art_free_sam_loader import SamDataset
art_free_sam = SamDataset(image_folder_path=<path-to-sam-images>, caption_folder_path=<path-to-captions>, id_file=<path-to-ids>, id_dict_file=<path-to-id_dict>)
``` |
DT4LM/debertav3base_rte_clare | DT4LM | "2024-11-30T07:44:40Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T15:02:46Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 66539
num_examples: 212
download_size: 51207
dataset_size: 66539
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/debertav3base_rte_clare_original | DT4LM | "2024-11-30T07:44:44Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T15:02:49Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 65807
num_examples: 212
download_size: 49805
dataset_size: 65807
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
oshvartz/so100_test | oshvartz | "2024-11-29T15:25:17Z" | 3 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | "2024-11-29T15:24:46Z" | ---
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
ferrazzipietro/tmp_results | ferrazzipietro | "2024-11-29T15:43:40Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T15:43:37Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 172478
num_examples: 94
- name: test
num_bytes: 1556215
num_examples: 738
download_size: 308569
dataset_size: 1728693
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
Ngochuyen2504/infore1_25hours-tags-v1 | Ngochuyen2504 | "2024-11-29T17:57:43Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T17:57:39Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
splits:
- name: train
num_bytes: 4740783
num_examples: 14935
download_size: 2013880
dataset_size: 4740783
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hafizasania/duet_transport_chatbot | hafizasania | "2024-11-29T19:38:03Z" | 3 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T18:39:45Z" | ---
license: apache-2.0
---
|
open-llm-leaderboard/Qwen__QwQ-32B-Preview-details | open-llm-leaderboard | "2024-11-29T18:51:00Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T18:47:20Z" | ---
pretty_name: Evaluation run of Qwen/QwQ-32B-Preview
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)\nThe dataset\
\ is composed of 38 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 1 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/Qwen__QwQ-32B-Preview-details\"\
,\n\tname=\"Qwen__QwQ-32B-Preview__leaderboard_bbh_boolean_expressions\",\n\tsplit=\"\
latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from run\
\ 2024-11-29T18-47-19.440839](https://huggingface.co/datasets/open-llm-leaderboard/Qwen__QwQ-32B-Preview-details/blob/main/Qwen__QwQ-32B-Preview/results_2024-11-29T18-47-19.440839.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc,none\": 0.5678191489361702,\n \"acc_stderr,none\"\
: 0.004516342962611267,\n \"inst_level_strict_acc,none\": 0.46882494004796166,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\"\
: 0.581787521079258,\n \"acc_norm_stderr,none\": 0.004984831150161566,\n\
\ \"inst_level_loose_acc,none\": 0.4880095923261391,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.3567467652495379,\n \"prompt_level_loose_acc_stderr,none\": 0.020614562936479897,\n\
\ \"prompt_level_strict_acc,none\": 0.33826247689463956,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.020359772138166046,\n \"\
exact_match,none\": 0.22885196374622357,\n \"exact_match_stderr,none\"\
: 0.010715465924617387,\n \"alias\": \"leaderboard\"\n },\n \
\ \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.6663773650407915,\n\
\ \"acc_norm_stderr,none\": 0.005642651971847929,\n \"alias\"\
: \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.92,\n \"acc_norm_stderr,none\": 0.017192507941463025\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6470588235294118,\n\
\ \"acc_norm_stderr,none\": 0.03504019983419238\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.716,\n \"acc_norm_stderr,none\":\
\ 0.028576958730437443\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.828,\n\
\ \"acc_norm_stderr,none\": 0.02391551394448624\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.856,\n \
\ \"acc_norm_stderr,none\": 0.022249407735450245\n },\n \"\
leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\": \" \
\ - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.884,\n \"acc_norm_stderr,none\": 0.020293429803083823\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.744,\n \"acc_norm_stderr,none\": 0.027657108718204846\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.776,\n \"acc_norm_stderr,none\":\
\ 0.026421361687347884\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.7191780821917808,\n \"acc_norm_stderr,none\": 0.037320694849458984\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.832,\n \
\ \"acc_norm_stderr,none\": 0.023692813205492536\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.664,\n \"acc_norm_stderr,none\": 0.029933259094191533\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.8370786516853933,\n \"acc_norm_stderr,none\"\
: 0.02775782910660744\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.748,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.836,\n\
\ \"acc_norm_stderr,none\": 0.023465261002076715\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.256,\n \"acc_norm_stderr,none\": 0.027657108718204846\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.192,\n \"acc_norm_stderr,none\":\
\ 0.024960691989171963\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.32,\n \"acc_norm_stderr,none\": 0.029561724955240978\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.556,\n\
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2818791946308725,\n\
\ \"acc_norm_stderr,none\": 0.013046291338577345,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2777777777777778,\n \"acc_norm_stderr,none\": 0.03191178226713548\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2893772893772894,\n\
\ \"acc_norm_stderr,none\": 0.019424663872261782\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.27455357142857145,\n \"acc_norm_stderr,none\"\
: 0.021108747290633768\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.33826247689463956,\n \"prompt_level_strict_acc_stderr,none\": 0.020359772138166046,\n\
\ \"inst_level_strict_acc,none\": 0.46882494004796166,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.3567467652495379,\n \"prompt_level_loose_acc_stderr,none\": 0.020614562936479897,\n\
\ \"inst_level_loose_acc,none\": 0.4880095923261391,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.22885196374622357,\n \"exact_match_stderr,none\"\
: 0.010715465924617387,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.4006514657980456,\n\
\ \"exact_match_stderr,none\": 0.028013177848580824\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.23577235772357724,\n \"exact_match_stderr,none\": 0.03843066495214836\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.09090909090909091,\n\
\ \"exact_match_stderr,none\": 0.0251172256361608\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.05357142857142857,\n \"exact_match_stderr,none\": 0.01348057551341636\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.21428571428571427,\n\
\ \"exact_match_stderr,none\": 0.03317288314377314\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.44041450777202074,\n \"exact_match_stderr,none\"\
: 0.035827245300360966\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.044444444444444446,\n \"exact_match_stderr,none\"\
: 0.01780263602032457\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.5678191489361702,\n\
\ \"acc_stderr,none\": 0.004516342962611267\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.41005291005291006,\n \"acc_norm_stderr,none\"\
: 0.017653759371565242,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.488,\n\
\ \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.29296875,\n \"acc_norm_stderr,none\"\
: 0.028500984607927556\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.452,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ }\n },\n \"leaderboard\": {\n \"acc,none\": 0.5678191489361702,\n\
\ \"acc_stderr,none\": 0.004516342962611267,\n \"inst_level_strict_acc,none\"\
: 0.46882494004796166,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"acc_norm,none\": 0.581787521079258,\n \"acc_norm_stderr,none\"\
: 0.004984831150161566,\n \"inst_level_loose_acc,none\": 0.4880095923261391,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.3567467652495379,\n \"prompt_level_loose_acc_stderr,none\": 0.020614562936479897,\n\
\ \"prompt_level_strict_acc,none\": 0.33826247689463956,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.020359772138166046,\n \"exact_match,none\": 0.22885196374622357,\n \
\ \"exact_match_stderr,none\": 0.010715465924617387,\n \"alias\": \"\
leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.6663773650407915,\n\
\ \"acc_norm_stderr,none\": 0.005642651971847929,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.92,\n \"acc_norm_stderr,none\": 0.017192507941463025\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6470588235294118,\n \"acc_norm_stderr,none\"\
: 0.03504019983419238\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.828,\n \"acc_norm_stderr,none\": 0.02391551394448624\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.856,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.884,\n \"acc_norm_stderr,none\": 0.020293429803083823\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.744,\n \"acc_norm_stderr,none\": 0.027657108718204846\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.776,\n \"acc_norm_stderr,none\": 0.026421361687347884\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.7191780821917808,\n\
\ \"acc_norm_stderr,none\": 0.037320694849458984\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.832,\n \"acc_norm_stderr,none\": 0.023692813205492536\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\": 0.029933259094191533\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.8370786516853933,\n \"acc_norm_stderr,none\"\
: 0.02775782910660744\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.748,\n \"acc_norm_stderr,none\": 0.027513851933031318\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.836,\n \"acc_norm_stderr,none\": 0.023465261002076715\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.256,\n \"acc_norm_stderr,none\": 0.027657108718204846\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.192,\n \"acc_norm_stderr,none\": 0.024960691989171963\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.32,\n \"acc_norm_stderr,none\": 0.029561724955240978\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2818791946308725,\n\
\ \"acc_norm_stderr,none\": 0.013046291338577345,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2777777777777778,\n\
\ \"acc_norm_stderr,none\": 0.03191178226713548\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2893772893772894,\n \"acc_norm_stderr,none\": 0.019424663872261782\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.27455357142857145,\n \"acc_norm_stderr,none\"\
: 0.021108747290633768\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.33826247689463956,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.020359772138166046,\n \
\ \"inst_level_strict_acc,none\": 0.46882494004796166,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.3567467652495379,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.020614562936479897,\n \"inst_level_loose_acc,none\"\
: 0.4880095923261391,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.22885196374622357,\n\
\ \"exact_match_stderr,none\": 0.010715465924617387,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.4006514657980456,\n \"exact_match_stderr,none\": 0.028013177848580824\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.23577235772357724,\n \"exact_match_stderr,none\": 0.03843066495214836\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.09090909090909091,\n \"exact_match_stderr,none\"\
: 0.0251172256361608\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.05357142857142857,\n \"exact_match_stderr,none\"\
: 0.01348057551341636\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.21428571428571427,\n \"exact_match_stderr,none\": 0.03317288314377314\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.44041450777202074,\n \"exact_match_stderr,none\"\
: 0.035827245300360966\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.044444444444444446,\n \"exact_match_stderr,none\": 0.01780263602032457\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.5678191489361702,\n \"acc_stderr,none\": 0.004516342962611267\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.41005291005291006,\n\
\ \"acc_norm_stderr,none\": 0.017653759371565242,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.29296875,\n \"acc_norm_stderr,none\": 0.028500984607927556\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.452,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Qwen/QwQ-32B-Preview
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_navigate
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_snarks
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_gpqa_extended
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_gpqa_main
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_gpqa_main_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_ifeval
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_ifeval_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_mmlu_pro
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_musr_object_placements
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-29T18-47-19.440839.jsonl'
- config_name: Qwen__QwQ-32B-Preview__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_29T18_47_19.440839
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-29T18-47-19.440839.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-29T18-47-19.440839.jsonl'
---
# Dataset Card for Evaluation run of Qwen/QwQ-32B-Preview
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview).
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration; the split is named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/Qwen__QwQ-32B-Preview-details",
name="Qwen__QwQ-32B-Preview__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
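The configuration names above all follow the pattern `Qwen__QwQ-32B-Preview__<task>`, so a small helper can build them programmatically instead of copying strings by hand. This is a minimal sketch; `config_name` and `load_task` are illustrative helpers, not part of the `datasets` API, and loading requires network access to the Hub.

```python
MODEL = "Qwen__QwQ-32B-Preview"
REPO = f"open-llm-leaderboard/{MODEL}-details"

def config_name(task: str) -> str:
    """Build the configuration name for a given leaderboard task."""
    return f"{MODEL}__{task}"

def load_task(task: str, split: str = "latest"):
    """Load the detail samples for one task (requires network access)."""
    # Imported lazily so config_name() works without `datasets` installed.
    from datasets import load_dataset
    return load_dataset(REPO, name=config_name(task), split=split)

# Example: the GPQA diamond subset
print(config_name("leaderboard_gpqa_diamond"))
# Qwen__QwQ-32B-Preview__leaderboard_gpqa_diamond
```

The same helper works for any task listed in the configurations above, e.g. `load_task("leaderboard_ifeval")`.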
## Latest results
These are the [latest results from run 2024-11-29T18-47-19.440839](https://huggingface.co/datasets/open-llm-leaderboard/Qwen__QwQ-32B-Preview-details/blob/main/Qwen__QwQ-32B-Preview/results_2024-11-29T18-47-19.440839.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task's results in its "latest" split):
```python
{
"all": {
"leaderboard": {
"acc,none": 0.5678191489361702,
"acc_stderr,none": 0.004516342962611267,
"inst_level_strict_acc,none": 0.46882494004796166,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.581787521079258,
"acc_norm_stderr,none": 0.004984831150161566,
"inst_level_loose_acc,none": 0.4880095923261391,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.3567467652495379,
"prompt_level_loose_acc_stderr,none": 0.020614562936479897,
"prompt_level_strict_acc,none": 0.33826247689463956,
"prompt_level_strict_acc_stderr,none": 0.020359772138166046,
"exact_match,none": 0.22885196374622357,
"exact_match_stderr,none": 0.010715465924617387,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6663773650407915,
"acc_norm_stderr,none": 0.005642651971847929,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.92,
"acc_norm_stderr,none": 0.017192507941463025
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6470588235294118,
"acc_norm_stderr,none": 0.03504019983419238
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.828,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.856,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.884,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.744,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.776,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.7191780821917808,
"acc_norm_stderr,none": 0.037320694849458984
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.832,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.8370786516853933,
"acc_norm_stderr,none": 0.02775782910660744
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.748,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.836,
"acc_norm_stderr,none": 0.023465261002076715
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.192,
"acc_norm_stderr,none": 0.024960691989171963
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.32,
"acc_norm_stderr,none": 0.029561724955240978
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2818791946308725,
"acc_norm_stderr,none": 0.013046291338577345,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2777777777777778,
"acc_norm_stderr,none": 0.03191178226713548
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2893772893772894,
"acc_norm_stderr,none": 0.019424663872261782
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.27455357142857145,
"acc_norm_stderr,none": 0.021108747290633768
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.33826247689463956,
"prompt_level_strict_acc_stderr,none": 0.020359772138166046,
"inst_level_strict_acc,none": 0.46882494004796166,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.3567467652495379,
"prompt_level_loose_acc_stderr,none": 0.020614562936479897,
"inst_level_loose_acc,none": 0.4880095923261391,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.22885196374622357,
"exact_match_stderr,none": 0.010715465924617387,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.4006514657980456,
"exact_match_stderr,none": 0.028013177848580824
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.23577235772357724,
"exact_match_stderr,none": 0.03843066495214836
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.09090909090909091,
"exact_match_stderr,none": 0.0251172256361608
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.05357142857142857,
"exact_match_stderr,none": 0.01348057551341636
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.21428571428571427,
"exact_match_stderr,none": 0.03317288314377314
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.44041450777202074,
"exact_match_stderr,none": 0.035827245300360966
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.044444444444444446,
"exact_match_stderr,none": 0.01780263602032457
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.5678191489361702,
"acc_stderr,none": 0.004516342962611267
},
"leaderboard_musr": {
"acc_norm,none": 0.41005291005291006,
"acc_norm_stderr,none": 0.017653759371565242,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.29296875,
"acc_norm_stderr,none": 0.028500984607927556
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.452,
"acc_norm_stderr,none": 0.03153986449255664
}
},
"leaderboard": {
"acc,none": 0.5678191489361702,
"acc_stderr,none": 0.004516342962611267,
"inst_level_strict_acc,none": 0.46882494004796166,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.581787521079258,
"acc_norm_stderr,none": 0.004984831150161566,
"inst_level_loose_acc,none": 0.4880095923261391,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.3567467652495379,
"prompt_level_loose_acc_stderr,none": 0.020614562936479897,
"prompt_level_strict_acc,none": 0.33826247689463956,
"prompt_level_strict_acc_stderr,none": 0.020359772138166046,
"exact_match,none": 0.22885196374622357,
"exact_match_stderr,none": 0.010715465924617387,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6663773650407915,
"acc_norm_stderr,none": 0.005642651971847929,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.92,
"acc_norm_stderr,none": 0.017192507941463025
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6470588235294118,
"acc_norm_stderr,none": 0.03504019983419238
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.828,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.856,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.884,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.744,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.776,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.7191780821917808,
"acc_norm_stderr,none": 0.037320694849458984
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.832,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.8370786516853933,
"acc_norm_stderr,none": 0.02775782910660744
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.748,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.836,
"acc_norm_stderr,none": 0.023465261002076715
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.192,
"acc_norm_stderr,none": 0.024960691989171963
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.32,
"acc_norm_stderr,none": 0.029561724955240978
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2818791946308725,
"acc_norm_stderr,none": 0.013046291338577345,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2777777777777778,
"acc_norm_stderr,none": 0.03191178226713548
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2893772893772894,
"acc_norm_stderr,none": 0.019424663872261782
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.27455357142857145,
"acc_norm_stderr,none": 0.021108747290633768
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.33826247689463956,
"prompt_level_strict_acc_stderr,none": 0.020359772138166046,
"inst_level_strict_acc,none": 0.46882494004796166,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.3567467652495379,
"prompt_level_loose_acc_stderr,none": 0.020614562936479897,
"inst_level_loose_acc,none": 0.4880095923261391,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.22885196374622357,
"exact_match_stderr,none": 0.010715465924617387,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.4006514657980456,
"exact_match_stderr,none": 0.028013177848580824
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.23577235772357724,
"exact_match_stderr,none": 0.03843066495214836
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.09090909090909091,
"exact_match_stderr,none": 0.0251172256361608
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.05357142857142857,
"exact_match_stderr,none": 0.01348057551341636
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.21428571428571427,
"exact_match_stderr,none": 0.03317288314377314
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.44041450777202074,
"exact_match_stderr,none": 0.035827245300360966
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.044444444444444446,
"exact_match_stderr,none": 0.01780263602032457
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.5678191489361702,
"acc_stderr,none": 0.004516342962611267
},
"leaderboard_musr": {
"acc_norm,none": 0.41005291005291006,
"acc_norm_stderr,none": 0.017653759371565242,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.29296875,
"acc_norm_stderr,none": 0.028500984607927556
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.452,
"acc_norm_stderr,none": 0.03153986449255664
}
}
```
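The block above appears to be lm-evaluation-harness output, where each task's entry carries its primary metric under a key such as `acc_norm,none`. A minimal sketch of scanning such a results object and ranking subtasks by score (the small dict below is a hypothetical stand-in for the full JSON above):

```python
import json

# A small stand-in for the full results object shown above.
results_json = """
{
  "leaderboard_bbh_boolean_expressions": {"acc_norm,none": 0.92},
  "leaderboard_bbh_web_of_lies": {"acc_norm,none": 0.556},
  "leaderboard_gpqa_main": {"acc_norm,none": 0.27455357142857145}
}
"""

results = json.loads(results_json)

# Pull the primary metric for each task, preferring acc_norm over acc.
scores = {
    task: metrics.get("acc_norm,none", metrics.get("acc,none"))
    for task, metrics in results.items()
}

# Rank tasks from strongest to weakest.
for task, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{task}: {score:.3f}")
```

The same loop works on the full object, since every subtask entry follows the `"<metric>,none"` naming convention.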
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
anastasiafrosted/endpoint0_300 | anastasiafrosted | "2024-11-29T19:10:47Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T19:10:45Z" | ---
dataset_info:
features:
- name: n_invocations
dtype: int64
- name: avg_loc
dtype: float64
- name: avg_cyc_complexity
dtype: float64
- name: avg_num_of_imports
dtype: float64
- name: avg_argument_size
dtype: float64
- name: e_type_LSFProvider
dtype: int64
- name: e_type_CobaltProvider
dtype: int64
- name: e_type_PBSProProvider
dtype: int64
- name: e_type_LocalProvider
dtype: int64
- name: e_type_KubernetesProvider
dtype: int64
- name: e_type_SlurmProvider
dtype: int64
- name: timestamp
dtype: timestamp[us]
splits:
- name: train
num_bytes: 2457600
num_examples: 25600
download_size: 596720
dataset_size: 2457600
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anastasiafrosted/endpoint2_30 | anastasiafrosted | "2024-11-29T19:13:56Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T19:13:53Z" | ---
dataset_info:
features:
- name: n_invocations
dtype: int64
- name: avg_loc
dtype: float64
- name: avg_cyc_complexity
dtype: float64
- name: avg_num_of_imports
dtype: float64
- name: avg_argument_size
dtype: float64
- name: e_type_LSFProvider
dtype: int64
- name: e_type_CobaltProvider
dtype: int64
- name: e_type_PBSProProvider
dtype: int64
- name: e_type_LocalProvider
dtype: int64
- name: e_type_KubernetesProvider
dtype: int64
- name: e_type_SlurmProvider
dtype: int64
- name: timestamp
dtype: timestamp[us]
splits:
- name: train
num_bytes: 3324384
num_examples: 34629
download_size: 392307
dataset_size: 3324384
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
marcov/docred_promptsource | marcov | "2024-11-29T19:37:04Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T19:30:07Z" | ---
dataset_info:
features:
- name: title
dtype: string
- name: sents
sequence:
sequence: string
- name: vertexSet
list:
list:
- name: name
dtype: string
- name: sent_id
dtype: int32
- name: pos
sequence: int32
- name: type
dtype: string
- name: labels
sequence:
- name: head
dtype: int32
- name: tail
dtype: int32
- name: relation_id
dtype: string
- name: relation_text
dtype: string
- name: evidence
sequence: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: validation
num_bytes: 50528660.3252505
num_examples: 8941
- name: test
num_bytes: 34870267.540125
num_examples: 7047
- name: train_annotated
num_bytes: 153818832.47173274
num_examples: 27398
- name: train_distant
num_bytes: 5037034967.125294
num_examples: 897390
download_size: 1755032299
dataset_size: 5276252727.462402
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- split: train_annotated
path: data/train_annotated-*
- split: train_distant
path: data/train_distant-*
---
|
marcov/web_questions_promptsource | marcov | "2024-11-29T19:45:28Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T19:45:23Z" | ---
dataset_info:
features:
- name: url
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 6322517.0
num_examples: 18890
- name: test
num_bytes: 3423767.0
num_examples: 10160
download_size: 3256824
dataset_size: 9746284.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
marcov/glue_mrpc_promptsource | marcov | "2024-11-29T19:46:43Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T19:46:35Z" | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_equivalent
'1': equivalent
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 16113182.701978501
num_examples: 23288
- name: validation
num_bytes: 1810423.9411764706
num_examples: 2598
- name: test
num_bytes: 7530577.036604555
num_examples: 10919
download_size: 11786564
dataset_size: 25454183.679759525
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
marcov/glue_qqp_promptsource | marcov | "2024-11-29T20:00:36Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T19:54:40Z" | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_duplicate
'1': duplicate
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 1122875522.0
num_examples: 2183076
- name: validation
num_bytes: 124744941.0
num_examples: 242580
- name: test
num_bytes: 1212532028.0
num_examples: 2345790
download_size: 861173708
dataset_size: 2460152491.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
marcov/glue_mnli_mismatched_promptsource | marcov | "2024-11-29T20:05:26Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:04:58Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: validation
num_bytes: 91111255.0
num_examples: 147480
- name: test
num_bytes: 91034428.0
num_examples: 147705
download_size: 74363557
dataset_size: 182145683.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
marcov/glue_qnli_promptsource | marcov | "2024-11-29T20:08:10Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:07:07Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 388467238.0
num_examples: 523715
- name: validation
num_bytes: 20585393.0
num_examples: 27315
- name: test
num_bytes: 20619773.0
num_examples: 27315
download_size: 191077076
dataset_size: 429672404.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
marcov/glue_wnli_promptsource | marcov | "2024-11-29T20:09:01Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:08:57Z" | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 1800360.0
num_examples: 3175
- name: validation
num_bytes: 203141.0
num_examples: 355
- name: test
num_bytes: 546936.0
num_examples: 730
download_size: 468652
dataset_size: 2550437.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
marcov/glue_stsb_promptsource | marcov | "2024-11-29T20:09:58Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:09:51Z" | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: float32
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 18511253.0
num_examples: 28745
- name: validation
num_bytes: 5021140.0
num_examples: 7500
- name: test
num_bytes: 4336388.0
num_examples: 6895
download_size: 7459539
dataset_size: 27868781.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
marcov/glue_ax_promptsource | marcov | "2024-11-29T20:10:20Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:10:17Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: test
num_bytes: 4630204.0
num_examples: 5520
download_size: 698656
dataset_size: 4630204.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
marcov/glue_sst2_promptsource | marcov | "2024-11-29T20:11:56Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:11:37Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 121349382.0
num_examples: 336745
- name: validation
num_bytes: 2027412.0
num_examples: 4360
- name: test
num_bytes: 4185889.0
num_examples: 9105
download_size: 34100881
dataset_size: 127562683.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
marcov/glue_cola_promptsource | marcov | "2024-11-29T20:12:52Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:12:46Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': unacceptable
'1': acceptable
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 17174176.0
num_examples: 42755
- name: validation
num_bytes: 2106582.0
num_examples: 5215
- name: test
num_bytes: 2137976.0
num_examples: 5315
download_size: 3422798
dataset_size: 21418734.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
marcov/glue_rte_promptsource | marcov | "2024-11-29T20:13:50Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:13:41Z" | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 11829720.0
num_examples: 12450
- name: validation
num_bytes: 1280676.0
num_examples: 1385
- name: test
num_bytes: 13784530.0
num_examples: 15000
download_size: 12326416
dataset_size: 26894926.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
marcov/glue_mnli_promptsource | marcov | "2024-11-29T20:36:38Z" | 3 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:27:12Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: idx
dtype: int32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 3542330327.0
num_examples: 5890530
- name: validation_matched
num_bytes: 87592314.0
num_examples: 147225
- name: validation_mismatched
num_bytes: 91111255.0
num_examples: 147480
- name: test_matched
num_bytes: 87805424.0
num_examples: 146940
- name: test_mismatched
num_bytes: 91034428.0
num_examples: 147705
download_size: 1658407677
dataset_size: 3899873748.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation_matched
path: data/validation_matched-*
- split: validation_mismatched
path: data/validation_mismatched-*
- split: test_matched
path: data/test_matched-*
- split: test_mismatched
path: data/test_mismatched-*
---
|
FrancophonIA/Budget_Belgium | FrancophonIA | "2024-11-29T20:32:42Z" | 3 | 0 | [
"task_categories:translation",
"language:fr",
"language:nl",
"region:us"
] | [
"translation"
] | "2024-11-29T20:31:05Z" | ---
language:
- fr
- nl
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19222
## Description
This resource contains a translation memory (TMX) of the budget of Belgium in Dutch and French. It is based on the report of the 2018 budget and contains highly relevant terminology. The donated resource contains 6 files:

1. begroting.tmx (the aligned TMX)
2. Publicatie van de Algemene Uitgavenbegroting aangepaste 2018.xlsx (the original file, also available online through Belgium's open data portal)
3. begroting_omschrijving_NL.txt and begroting_omschrijving_FR.txt (the short descriptions of budget items in Dutch and French, with the same items per line)
4. begroting_lange_omschrijving_NL.txt and begroting_lange_omschrijving_FR.txt (same as the previous pair, but with longer descriptions for the same items, also aligned)
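Since the description files are line-parallel (the same budget item appears on the same line in the Dutch and French files), they can be zipped into translation pairs. The sketch below is a minimal, hypothetical helper, assuming UTF-8 encoding and that the files have been unpacked locally; file names follow the listing above.

```python
# Sketch: pair line-aligned Dutch/French description files into
# (nl, fr) translation tuples. Paths and encoding are assumptions;
# adjust them to wherever the resource is unpacked.
from pathlib import Path


def read_aligned_pair(nl_path, fr_path, encoding="utf-8"):
    """Read two line-parallel files and return a list of (nl, fr) tuples."""
    nl_lines = Path(nl_path).read_text(encoding=encoding).splitlines()
    fr_lines = Path(fr_path).read_text(encoding=encoding).splitlines()
    if len(nl_lines) != len(fr_lines):
        raise ValueError("Files are not line-aligned")
    return list(zip(nl_lines, fr_lines))
```

For example, `read_aligned_pair("begroting_omschrijving_NL.txt", "begroting_omschrijving_FR.txt")` would yield the short NL/FR description pairs.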
## Citation
```
Budget Belgium (2022). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19222
``` |
FrancophonIA/Coalition_Agreement_Belgium_2014 | FrancophonIA | "2024-11-29T20:35:30Z" | 3 | 0 | [
"task_categories:translation",
"language:fr",
"language:nl",
"region:us"
] | [
"translation"
] | "2024-11-29T20:33:38Z" | ---
language:
- fr
- nl
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19269
## Description
This resource is a Dutch-French translation memory (TMX) created from Belgium's 2014 coalition agreement and also includes the original (aligned) files as txts.
## Citation
```
Coalition Agreement Belgium 2014 (2022). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19269
``` |
saurabhy27-outcomes/finetune_speech_corpus_1111 | saurabhy27-outcomes | "2024-11-29T20:39:31Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:39:02Z" | ---
dataset_info:
- config_name: en
features:
- name: term
dtype: string
- name: text
dtype: string
- name: voice
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 1154959.2
num_examples: 8
- name: test
num_bytes: 288739.8
num_examples: 2
download_size: 1410263
dataset_size: 1443699.0
- config_name: zn
features:
- name: term
dtype: string
- name: text
dtype: string
- name: voice
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 1154959.2
num_examples: 8
- name: test
num_bytes: 288739.8
num_examples: 2
download_size: 1420991
dataset_size: 1443699.0
configs:
- config_name: en
data_files:
- split: train
path: en/train-*
- split: test
path: en/test-*
- config_name: zn
data_files:
- split: train
path: zn/train-*
- split: test
path: zn/test-*
---
|
FrancophonIA/Constituicao_da_Republica_Portuguesa | FrancophonIA | "2024-11-29T20:42:32Z" | 3 | 0 | [
"task_categories:translation",
"language:en",
"language:fr",
"language:pt",
"region:us"
] | [
"translation"
] | "2024-11-29T20:41:40Z" | ---
language:
- en
- fr
- pt
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19368
## Citation
```
Constituição da República Portuguesa (2022). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19368
``` |
marcov/swag_regular_promptsource | marcov | "2024-11-29T20:43:20Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:41:42Z" | ---
dataset_info:
features:
- name: video-id
dtype: string
- name: fold-ind
dtype: string
- name: startphrase
dtype: string
- name: sent1
dtype: string
- name: sent2
dtype: string
- name: gold-source
dtype: string
- name: ending0
dtype: string
- name: ending1
dtype: string
- name: ending2
dtype: string
- name: ending3
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 461680074.0
num_examples: 514822
- name: validation
num_bytes: 127975226.0
num_examples: 140042
- name: test
num_bytes: 127584122.0
num_examples: 140035
download_size: 254052350
dataset_size: 717239422.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
FrancophonIA/Praias_2007 | FrancophonIA | "2024-11-29T20:45:16Z" | 3 | 0 | [
"task_categories:translation",
"language:de",
"language:es",
"language:pt",
"language:en",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T20:43:06Z" | ---
language:
- de
- es
- pt
- en
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19401
## Citation
```
Praias 2007 (2022). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19401
``` |
marcov/biosses_promptsource | marcov | "2024-11-29T20:43:57Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T20:43:55Z" | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float32
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 1055334.0
num_examples: 1100
download_size: 213914
dataset_size: 1055334.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FrancophonIA/Museus_2007 | FrancophonIA | "2024-11-29T20:47:08Z" | 3 | 0 | [
"task_categories:translation",
"language:de",
"language:es",
"language:pt",
"language:en",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T20:45:37Z" | ---
language:
- de
- es
- pt
- en
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19402
## Citation
```
Museus 2007 (2022). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19402
``` |
FrancophonIA/Artigos_visitportugal_2007 | FrancophonIA | "2024-11-29T20:53:39Z" | 3 | 0 | [
"task_categories:translation",
"language:de",
"language:es",
"language:pt",
"language:en",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T20:52:37Z" | ---
language:
- de
- es
- pt
- en
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19412
## Citation
```
Artigos visitportugal 2007 (2022). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19412
``` |
FrancophonIA/Localidades_2007 | FrancophonIA | "2024-11-29T20:56:29Z" | 3 | 0 | [
"task_categories:translation",
"language:de",
"language:es",
"language:pt",
"language:en",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T20:54:38Z" | ---
language:
- de
- es
- pt
- en
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19413
## Citation
```
Localidades 2007 (2022). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19413
``` |
halltape/output | halltape | "2024-11-29T20:58:04Z" | 3 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-29T20:56:22Z" | ---
license: mit
---
|
FrancophonIA/Parques_e_reservas_2007 | FrancophonIA | "2024-11-29T20:58:07Z" | 3 | 0 | [
"task_categories:translation",
"language:de",
"language:es",
"language:pt",
"language:en",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T20:57:31Z" | ---
language:
- de
- es
- pt
- en
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19414
## Citation
```
Parques e reservas 2007 (2022). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19414
``` |
FrancophonIA/localidades_alentejo | FrancophonIA | "2024-11-29T20:59:58Z" | 3 | 0 | [
"task_categories:translation",
"language:pt",
"language:it",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T20:58:58Z" | ---
language:
- pt
- it
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19417
## Citation
```
localidades alentejo (2022). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19417
``` |
FrancophonIA/Taxa_municipal_turistica_faro | FrancophonIA | "2024-11-29T21:02:44Z" | 3 | 0 | [
"task_categories:translation",
"language:es",
"language:pt",
"language:en",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T21:01:27Z" | ---
language:
- es
- pt
- en
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/21403
## Citation
```
TAXA MUNICIPAL TURÍSTICA FARO (2023). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/21403
``` |
marcov/hans_promptsource | marcov | "2024-11-29T21:05:41Z" | 3 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T21:04:50Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': non-entailment
- name: parse_premise
dtype: string
- name: parse_hypothesis
dtype: string
- name: binary_parse_premise
dtype: string
- name: binary_parse_hypothesis
dtype: string
- name: heuristic
dtype: string
- name: subcase
dtype: string
- name: template
dtype: string
- name: template_name
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 240998620.0
num_examples: 300000
- name: validation
num_bytes: 240715490.0
num_examples: 300000
download_size: 80584756
dataset_size: 481714110.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
FrancophonIA/Lei_da_Paridade | FrancophonIA | "2024-11-29T21:07:02Z" | 3 | 0 | [
"task_categories:translation",
"language:pt",
"language:en",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T21:06:20Z" | ---
language:
- pt
- en
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/21417
## Citation
```
Lei da Paridade nos Órgãos Colegiais Representativos do Poder Político (2023). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/21417
``` |
marcov/craigslist_bargains_promptsource | marcov | "2024-11-29T21:07:27Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T21:07:13Z" | ---
dataset_info:
features:
- name: agent_info
sequence:
- name: Bottomline
dtype: string
- name: Role
dtype: string
- name: Target
dtype: float32
- name: agent_turn
sequence: int32
- name: dialogue_acts
sequence:
- name: intent
dtype: string
- name: price
dtype: float32
- name: utterance
sequence: string
- name: items
sequence:
- name: Category
dtype: string
- name: Images
dtype: string
- name: Price
dtype: float32
- name: Description
dtype: string
- name: Title
dtype: string
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 100587712.0
num_examples: 31482
- name: test
num_bytes: 16028723.0
num_examples: 5028
- name: validation
num_bytes: 11428215.0
num_examples: 3582
download_size: 30986149
dataset_size: 128044650.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
FrancophonIA/Codigo_de_Conduta_dos_Deputados | FrancophonIA | "2024-11-29T21:09:32Z" | 3 | 0 | [
"task_categories:translation",
"language:pt",
"language:en",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T21:08:48Z" | ---
language:
- pt
- en
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19418
## Citation
```
Código de Conduta dos Deputados à Assembleia da República (2023). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/21418
``` |
FrancophonIA/Regime_Juridico_Inqueritos_Parlamentares | FrancophonIA | "2024-11-29T21:11:09Z" | 3 | 0 | [
"task_categories:translation",
"language:pt",
"language:en",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T21:10:12Z" | ---
language:
- pt
- en
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19419
## Citation
```
Regime Jurídico dos Inquéritos Parlamentares (2023). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/21419
``` |
marcov/acronym_identification_promptsource | marcov | "2024-11-29T21:10:25Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T21:10:12Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': B-long
'1': B-short
'2': I-long
'3': I-short
'4': O
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: train
num_bytes: 221583682.52955875
num_examples: 80383
- name: validation
num_bytes: 27188873.57212192
num_examples: 9868
- name: test
num_bytes: 16108512.0
num_examples: 7000
download_size: 27551938
dataset_size: 264881068.10168067
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
FrancophonIA/Estatuto_dos_Funcionarios_Parlamentares | FrancophonIA | "2024-11-29T21:12:48Z" | 3 | 0 | [
"task_categories:translation",
"language:pt",
"language:en",
"language:fr",
"region:us"
] | [
"translation"
] | "2024-11-29T21:11:39Z" | ---
language:
- pt
- en
- fr
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19420
## Citation
```
Estatuto dos Funcionários Parlamentares (2023). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/21420
``` |
user9000/CLEVR-HOPE | user9000 | "2024-11-29T21:14:16Z" | 3 | 0 | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-11-29T21:14:16Z" | ---
license: cc-by-4.0
---
|
ziyu3141/rich_feedback_test_new_all | ziyu3141 | "2024-11-29T21:27:48Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T21:17:09Z" | ---
dataset_info:
features:
- name: Filename
dtype: string
- name: Aesthetics score
dtype: float64
- name: Artifact score
dtype: float64
- name: Misalignment score
dtype: float64
- name: Overall score
dtype: float64
- name: Artifact heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment token label
dtype: string
splits:
- name: train
num_bytes: 99534427480
num_examples: 15810
download_size: 181470749
dataset_size: 99534427480
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FrancophonIA/Esterm_2 | FrancophonIA | "2024-11-29T21:22:59Z" | 3 | 0 | [
"task_categories:translation",
"language:de",
"language:fr",
"language:en",
"language:ru",
"language:fi",
"language:et",
"language:la",
"region:us"
] | [
"translation"
] | "2024-11-29T21:20:16Z" | ---
language:
- de
- fr
- en
- ru
- fi
- et
- la
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19865
## Description
Esterm 2 is a multilingual terminology database of the Estonian Language Institute (EKI), which combines terminology from different fields. It contains information both from EKI's term projects and from terms researched in the course of responding to EKI's terminology queries.
## Citation
```
EKI ühendterminibaas Esterm 2 (2022). Version unspecified. [Dataset (Lexical/Conceptual Resource)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/lcr/19865
``` |
open-llm-leaderboard/DreadPoor__WIP-Acacia-8B-Model_Stock-details | open-llm-leaderboard | "2024-11-29T21:29:34Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T21:25:50Z" | ---
pretty_name: Evaluation run of DreadPoor/WIP-Acacia-8B-Model_Stock
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [DreadPoor/WIP-Acacia-8B-Model_Stock](https://huggingface.co/DreadPoor/WIP-Acacia-8B-Model_Stock)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/DreadPoor__WIP-Acacia-8B-Model_Stock-details\"\
,\n\tname=\"DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-29T21-25-50.060860](https://huggingface.co/datasets/open-llm-leaderboard/DreadPoor__WIP-Acacia-8B-Model_Stock-details/blob/main/DreadPoor__WIP-Acacia-8B-Model_Stock/results_2024-11-29T21-25-50.060860.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.6284658040665434,\n \"\
prompt_level_loose_acc_stderr,none\": 0.020794253888707582,\n \"inst_level_loose_acc,none\"\
: 0.7194244604316546,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"exact_match,none\": 0.15709969788519634,\n \"exact_match_stderr,none\"\
: 0.00946496305892503,\n \"acc_norm,none\": 0.4750291866649371,\n \
\ \"acc_norm_stderr,none\": 0.005373063781032417,\n \"acc,none\"\
: 0.37367021276595747,\n \"acc_stderr,none\": 0.004410571933521376,\n\
\ \"inst_level_strict_acc,none\": 0.6762589928057554,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.5730129390018485,\n \"prompt_level_strict_acc_stderr,none\": 0.021285933050061243,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5169241451136956,\n \"acc_norm_stderr,none\"\
: 0.0062288773189484396,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.816,\n\
\ \"acc_norm_stderr,none\": 0.02455581299422255\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6149732620320856,\n \"acc_norm_stderr,none\"\
: 0.03567936280544673\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.588,\n\
\ \"acc_norm_stderr,none\": 0.031191596026022818\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.5,\n \"acc_norm_stderr,none\": 0.031686212526223896\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\"\
: \" - leaderboard_bbh_geometric_shapes\",\n \"acc_norm,none\": 0.444,\n\
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \
\ \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.692,\n \"acc_norm_stderr,none\":\
\ 0.02925692860650181\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.392,\n \"acc_norm_stderr,none\":\
\ 0.030938207620401222\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\":\
\ 0.031145209846548512\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.596,\n \"acc_norm_stderr,none\":\
\ 0.03109668818482536\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.66,\n \"acc_norm_stderr,none\": 0.030020073605457876\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.584,\n \
\ \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.5342465753424658,\n \"acc_norm_stderr,none\": 0.04142522736934774\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.704,\n \
\ \"acc_norm_stderr,none\": 0.028928939388379697\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6179775280898876,\n \"acc_norm_stderr,none\"\
: 0.03652112637307604\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.784,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.268,\n\
\ \"acc_norm_stderr,none\": 0.02806876238252672\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.216,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\":\
\ 0.025537121574548162\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\":\
\ 0.02936106757521985\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.492,\n \"acc_norm_stderr,none\": 0.03168215643141386\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3062080536912752,\n\
\ \"acc_norm_stderr,none\": 0.013363479514082741,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3181818181818182,\n \"acc_norm_stderr,none\": 0.0331847733384533\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2948717948717949,\n\
\ \"acc_norm_stderr,none\": 0.01953225605335253\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.31473214285714285,\n \"acc_norm_stderr,none\"\
: 0.021965797142222607\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.5730129390018485,\n \"prompt_level_strict_acc_stderr,none\": 0.021285933050061243,\n\
\ \"inst_level_strict_acc,none\": 0.6762589928057554,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.6284658040665434,\n \"prompt_level_loose_acc_stderr,none\": 0.020794253888707582,\n\
\ \"inst_level_loose_acc,none\": 0.7194244604316546,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.15709969788519634,\n \"exact_match_stderr,none\"\
: 0.00946496305892503,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.31596091205211724,\n\
\ \"exact_match_stderr,none\": 0.026576416772305225\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.09848484848484848,\n\
\ \"exact_match_stderr,none\": 0.026033680930226354\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.03214285714285714,\n \"exact_match_stderr,none\": 0.01055955866175321\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.1038961038961039,\n\
\ \"exact_match_stderr,none\": 0.02466795220435413\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.29015544041450775,\n \"exact_match_stderr,none\"\
: 0.032752644677915166\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.022222222222222223,\n \"exact_match_stderr,none\"\
: 0.01273389971505968\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.37367021276595747,\n\
\ \"acc_stderr,none\": 0.004410571933521376\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.421957671957672,\n \"acc_norm_stderr,none\"\
: 0.01746179776757259,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.592,\n\
\ \"acc_norm_stderr,none\": 0.03114520984654851\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.34375,\n \"acc_norm_stderr,none\"\
: 0.029743078779677763\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.6284658040665434,\n \"prompt_level_loose_acc_stderr,none\": 0.020794253888707582,\n\
\ \"inst_level_loose_acc,none\": 0.7194244604316546,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.15709969788519634,\n \"exact_match_stderr,none\"\
: 0.00946496305892503,\n \"acc_norm,none\": 0.4750291866649371,\n \
\ \"acc_norm_stderr,none\": 0.005373063781032417,\n \"acc,none\": 0.37367021276595747,\n\
\ \"acc_stderr,none\": 0.004410571933521376,\n \"inst_level_strict_acc,none\"\
: 0.6762589928057554,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"prompt_level_strict_acc,none\": 0.5730129390018485,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.021285933050061243,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5169241451136956,\n \"acc_norm_stderr,none\"\
: 0.0062288773189484396,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.816,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6149732620320856,\n \"acc_norm_stderr,none\"\
: 0.03567936280544673\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\"\
: 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n },\n \"\
leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.5,\n \"acc_norm_stderr,none\": 0.031686212526223896\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.444,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.692,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.392,\n \"acc_norm_stderr,none\": 0.030938207620401222\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.66,\n \"acc_norm_stderr,none\": 0.030020073605457876\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.5342465753424658,\n\
\ \"acc_norm_stderr,none\": 0.04142522736934774\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.704,\n \"acc_norm_stderr,none\": 0.028928939388379697\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6179775280898876,\n \"acc_norm_stderr,none\"\
: 0.03652112637307604\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.784,\n \"acc_norm_stderr,none\": 0.02607865766373279\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.216,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\": 0.025537121574548162\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\": 0.02936106757521985\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.492,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3062080536912752,\n\
\ \"acc_norm_stderr,none\": 0.013363479514082741,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3181818181818182,\n\
\ \"acc_norm_stderr,none\": 0.0331847733384533\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2948717948717949,\n \"acc_norm_stderr,none\": 0.01953225605335253\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.31473214285714285,\n \"acc_norm_stderr,none\"\
: 0.021965797142222607\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.5730129390018485,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021285933050061243,\n \
\ \"inst_level_strict_acc,none\": 0.6762589928057554,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.6284658040665434,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.020794253888707582,\n \"inst_level_loose_acc,none\"\
: 0.7194244604316546,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.15709969788519634,\n\
\ \"exact_match_stderr,none\": 0.00946496305892503,\n \"alias\": \"\
\ - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.31596091205211724,\n \"exact_match_stderr,none\": 0.026576416772305225\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.09848484848484848,\n \"exact_match_stderr,none\"\
: 0.026033680930226354\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.03214285714285714,\n \"exact_match_stderr,none\"\
: 0.01055955866175321\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.1038961038961039,\n \"exact_match_stderr,none\": 0.02466795220435413\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.29015544041450775,\n \"exact_match_stderr,none\"\
: 0.032752644677915166\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.022222222222222223,\n \"exact_match_stderr,none\": 0.01273389971505968\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.37367021276595747,\n \"acc_stderr,none\": 0.004410571933521376\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.421957671957672,\n\
\ \"acc_norm_stderr,none\": 0.01746179776757259,\n \"alias\": \" -\
\ leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.34375,\n \"acc_norm_stderr,none\": 0.029743078779677763\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ }\n}\n```"
repo_url: https://huggingface.co/DreadPoor/WIP-Acacia-8B-Model_Stock
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_navigate
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_snarks
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_gpqa_extended
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_gpqa_main
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_gpqa_main_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_ifeval
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_ifeval_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_mmlu_pro
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_musr_object_placements
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-29T21-25-50.060860.jsonl'
- config_name: DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_29T21_25_50.060860
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-29T21-25-50.060860.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-29T21-25-50.060860.jsonl'
---
# Dataset Card for Evaluation run of DreadPoor/WIP-Acacia-8B-Model_Stock
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [DreadPoor/WIP-Acacia-8B-Model_Stock](https://huggingface.co/DreadPoor/WIP-Acacia-8B-Model_Stock)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/DreadPoor__WIP-Acacia-8B-Model_Stock-details",
name="DreadPoor__WIP-Acacia-8B-Model_Stock__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
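Once loaded, each configuration's split can be inspected like any other `datasets` split. As a small, self-contained illustration of working with the aggregated results instead, the sketch below summarizes per-task `"acc_norm,none"` scores from a dict shaped like the "Latest results" JSON further down (the three sample values are copied from this card; the `acc_norm_summary` helper is hypothetical, not part of any library):

```python
# Minimal sketch: extract per-task "acc_norm,none" scores from an
# aggregated-results dict shaped like the JSON shown in this card.
# The sample values below are copied verbatim from the card.
results = {
    "leaderboard_bbh_boolean_expressions": {"acc_norm,none": 0.816},
    "leaderboard_bbh_causal_judgement": {"acc_norm,none": 0.6149732620320856},
    "leaderboard_gpqa_diamond": {"acc_norm,none": 0.3181818181818182},
}

def acc_norm_summary(results):
    """Return {task_name: acc_norm} for every task reporting that metric."""
    return {
        task: metrics["acc_norm,none"]
        for task, metrics in results.items()
        if "acc_norm,none" in metrics
    }

summary = acc_norm_summary(results)
best_task = max(summary, key=summary.get)
print(best_task)  # leaderboard_bbh_boolean_expressions
```

The same pattern works for `"exact_match,none"` or `"acc,none"` keys by swapping the metric name.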
## Latest results
These are the [latest results from run 2024-11-29T21-25-50.060860](https://huggingface.co/datasets/open-llm-leaderboard/DreadPoor__WIP-Acacia-8B-Model_Stock-details/blob/main/DreadPoor__WIP-Acacia-8B-Model_Stock/results_2024-11-29T21-25-50.060860.json) (note that the repository may also contain results for other tasks if successive evaluation runs did not cover the same set of tasks; each is available in its own configuration's "latest" split):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.6284658040665434,
"prompt_level_loose_acc_stderr,none": 0.020794253888707582,
"inst_level_loose_acc,none": 0.7194244604316546,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.15709969788519634,
"exact_match_stderr,none": 0.00946496305892503,
"acc_norm,none": 0.4750291866649371,
"acc_norm_stderr,none": 0.005373063781032417,
"acc,none": 0.37367021276595747,
"acc_stderr,none": 0.004410571933521376,
"inst_level_strict_acc,none": 0.6762589928057554,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.5730129390018485,
"prompt_level_strict_acc_stderr,none": 0.021285933050061243,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5169241451136956,
"acc_norm_stderr,none": 0.0062288773189484396,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.444,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.392,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.66,
"acc_norm_stderr,none": 0.030020073605457876
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5342465753424658,
"acc_norm_stderr,none": 0.04142522736934774
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.704,
"acc_norm_stderr,none": 0.028928939388379697
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6179775280898876,
"acc_norm_stderr,none": 0.03652112637307604
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.784,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.216,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.492,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3062080536912752,
"acc_norm_stderr,none": 0.013363479514082741,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3181818181818182,
"acc_norm_stderr,none": 0.0331847733384533
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2948717948717949,
"acc_norm_stderr,none": 0.01953225605335253
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.31473214285714285,
"acc_norm_stderr,none": 0.021965797142222607
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.5730129390018485,
"prompt_level_strict_acc_stderr,none": 0.021285933050061243,
"inst_level_strict_acc,none": 0.6762589928057554,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.6284658040665434,
"prompt_level_loose_acc_stderr,none": 0.020794253888707582,
"inst_level_loose_acc,none": 0.7194244604316546,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.15709969788519634,
"exact_match_stderr,none": 0.00946496305892503,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.31596091205211724,
"exact_match_stderr,none": 0.026576416772305225
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.09848484848484848,
"exact_match_stderr,none": 0.026033680930226354
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.03214285714285714,
"exact_match_stderr,none": 0.01055955866175321
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.1038961038961039,
"exact_match_stderr,none": 0.02466795220435413
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.29015544041450775,
"exact_match_stderr,none": 0.032752644677915166
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.37367021276595747,
"acc_stderr,none": 0.004410571933521376
},
"leaderboard_musr": {
"acc_norm,none": 0.421957671957672,
"acc_norm_stderr,none": 0.01746179776757259,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.34375,
"acc_norm_stderr,none": 0.029743078779677763
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.6284658040665434,
"prompt_level_loose_acc_stderr,none": 0.020794253888707582,
"inst_level_loose_acc,none": 0.7194244604316546,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.15709969788519634,
"exact_match_stderr,none": 0.00946496305892503,
"acc_norm,none": 0.4750291866649371,
"acc_norm_stderr,none": 0.005373063781032417,
"acc,none": 0.37367021276595747,
"acc_stderr,none": 0.004410571933521376,
"inst_level_strict_acc,none": 0.6762589928057554,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.5730129390018485,
"prompt_level_strict_acc_stderr,none": 0.021285933050061243,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5169241451136956,
"acc_norm_stderr,none": 0.0062288773189484396,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.031686212526223896
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.444,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.392,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.66,
"acc_norm_stderr,none": 0.030020073605457876
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5342465753424658,
"acc_norm_stderr,none": 0.04142522736934774
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.704,
"acc_norm_stderr,none": 0.028928939388379697
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6179775280898876,
"acc_norm_stderr,none": 0.03652112637307604
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.784,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.216,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.492,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3062080536912752,
"acc_norm_stderr,none": 0.013363479514082741,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3181818181818182,
"acc_norm_stderr,none": 0.0331847733384533
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2948717948717949,
"acc_norm_stderr,none": 0.01953225605335253
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.31473214285714285,
"acc_norm_stderr,none": 0.021965797142222607
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.5730129390018485,
"prompt_level_strict_acc_stderr,none": 0.021285933050061243,
"inst_level_strict_acc,none": 0.6762589928057554,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.6284658040665434,
"prompt_level_loose_acc_stderr,none": 0.020794253888707582,
"inst_level_loose_acc,none": 0.7194244604316546,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.15709969788519634,
"exact_match_stderr,none": 0.00946496305892503,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.31596091205211724,
"exact_match_stderr,none": 0.026576416772305225
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.09848484848484848,
"exact_match_stderr,none": 0.026033680930226354
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.03214285714285714,
"exact_match_stderr,none": 0.01055955866175321
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.1038961038961039,
"exact_match_stderr,none": 0.02466795220435413
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.29015544041450775,
"exact_match_stderr,none": 0.032752644677915166
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.37367021276595747,
"acc_stderr,none": 0.004410571933521376
},
"leaderboard_musr": {
"acc_norm,none": 0.421957671957672,
"acc_norm_stderr,none": 0.01746179776757259,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.34375,
"acc_norm_stderr,none": 0.029743078779677763
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
FrancophonIA/ANNODIS | FrancophonIA | "2024-11-29T21:45:20Z" | 3 | 0 | [
"language:fr",
"license:cc-by-nc-sa-3.0",
"region:us"
] | null | "2024-11-29T21:40:07Z" | ---
language:
- fr
viewer: false
license: cc-by-nc-sa-3.0
---
> [!NOTE]
> Dataset origin: http://redac.univ-tlse2.fr/corpus/annodis/
## Presentation
The ANNODIS resource is a diverse collection of French texts manually enriched with discourse-structure annotations. It is the outcome of the ANNODIS (ANNOtation DIScursive) project, funded by the ANR. Its main characteristics:
- Two annotation layers (corresponding to two distinct approaches to discourse organization):
  - The rhetorical-relations annotation comprises the delimitation of 3188 Elementary Discourse Units (EDUs) and 1395 Complex Discourse Units (CDUs), linked by 3355 typed discourse relations (e.g. contrast, elaboration, result, attribution)
  - The multi-scale structure annotation provides 991 enumerative structures, 588 topical chains, and the full set of cues associated with them (e.g. 3456 topical expressions)
- Texts (687,000 words in total) drawn from four sources:
  - Est Républicain (39 articles, 10,000 words)
  - Wikipedia (30 articles + 30 excerpts, 242,000 words)
  - Proceedings of the Congrès Mondial de Linguistique Française 2008 (25 articles, 169,000 words)
  - Reports of the Institut Français de Relations Internationales (32 reports, 266,000 words)

The corpora were annotated with Glozz, a platform developed within the ANNODIS project.
## Citation
```
Muller P., Vergez-Couret M., Prévot L., Asher N., Benamara F., Bras M., Le Draoulec A., Vieu L. (2012).
Manuel d'annotation en relations de discours du projet ANNODIS. Carnets de Grammaire 21, 34p. [ PDF : http://w3.erss.univ-tlse2.fr/textes/publications/CarnetsGrammaire/carnGram21.pdf]
```
```
Colléter M., Fabre C., Ho-Dac L.-M., Péry-Woodley M.-P., Rebeyrolle J., Tanguy L. (2012).
La ressource ANNODIS multi-échelle : guide d'annotation et "bonus" Carnets de Grammaire 20, 63p. [ PDF : http://w3.erss.univ-tlse2.fr/textes/publications/CarnetsGrammaire/carnGram20.pdf ]
``` |
FrancophonIA/ClaimsKG | FrancophonIA | "2024-11-29T22:14:20Z" | 3 | 0 | [
"multilinguality:multilingual",
"language:fr",
"language:en",
"region:us"
] | null | "2024-11-29T22:07:38Z" | ---
language:
- fr
- en
multilinguality:
- multilingual
viewer: false
---
> [!NOTE]
> Dataset origin: https://data.gesis.org/claimskg/
## Description
ClaimsKG is a knowledge graph of metadata for fact-checked claims scraped from popular fact-checking sites. In addition to providing a single dataset of claims and associated metadata, it harmonizes truth ratings and provides additional information for each claim, e.g., about mentioned entities. Please see https://data.gesis.org/claimskg/ for further details about the data model, query examples, and statistics.
The dataset facilitates structured queries about claims, their truth values, involved entities, authors, dates, and other kinds of metadata. ClaimsKG is generated through a (semi-)automated pipeline, which harvests claim-related data from popular fact-checking web sites, annotates them with related entities from DBpedia/Wikipedia, and lifts all data to RDF using established vocabularies (such as schema.org).
The latest release of ClaimsKG covers 74066 claims and 72127 claim reviews. This fourth release of the dataset was scraped up to Jan 31, 2023 and contains claims published between 1996 and 2023 from 13 fact-checking websites: Fullfact, Politifact, TruthOrFiction, Checkyourfact, Vishvanews, AFP (French), AFP, Polygraph, EU factcheck, Factograph, Fatabyyano, Snopes, and Africacheck. The claim-review (fact-checking) period ranges from 1996 to 2023. As in the previous release, the Entity fishing Python client (https://github.com/hirmeos/entity-fishing-client-python) has been used for entity linking and disambiguation. Improvements have been made in the web-scraping and data-preprocessing pipeline to extract more entities from both claims and claim reviews. Currently, ClaimsKG contains 3408386 entities detected and referenced with DBpedia.
This latest release of ClaimsKG supersedes the previous versions: it contains all the claims from earlier versions together with newly added claims, and its improved entity annotation yields a higher number of entities.
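Since the graph uses established vocabularies such as schema.org's ClaimReview, claims can be retrieved with SPARQL. The sketch below builds such a query in Python; note that the exact predicate names and any endpoint URL are assumptions based on the schema.org vocabulary mentioned above, not guaranteed by this card — consult the ClaimsKG site for the deployed schema.

```python
# Sketch: build a SPARQL query for ClaimsKG (predicate names are assumed
# from the schema.org ClaimReview vocabulary; verify against the live schema).

def build_claims_query(rating: str = "FALSE", limit: int = 10) -> str:
    """Return a SPARQL query selecting claims with a given normalized rating."""
    return f"""
    PREFIX schema: <http://schema.org/>
    SELECT ?claim ?text ?date WHERE {{
        ?review a schema:ClaimReview ;
                schema:itemReviewed ?claim ;
                schema:datePublished ?date .
        ?claim schema:text ?text .
        ?review schema:reviewRating ?rating .
        ?rating schema:alternateName "{rating}" .
    }}
    LIMIT {limit}
    """

query = build_claims_query(rating="FALSE", limit=5)

# To run it against a live endpoint (URL hypothetical), one could use SPARQLWrapper:
#   from SPARQLWrapper import SPARQLWrapper, JSON
#   sparql = SPARQLWrapper("https://data.gesis.org/claimskg/sparql")
#   sparql.setQuery(query)
#   sparql.setReturnFormat(JSON)
#   results = sparql.query().convert()
print(query)
```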
## Citation
```
@misc{SDN-10.7802-2620,
author = "Gangopadhyay, Susmita and Schellhammer, Sebastian and Boland, Katarina and Sch{\"u}ller, Sascha and Todorov, Konstantin and Tchechmedjiev, Andon and Zapilko, Benjamin and Fafalios, Pavlos and Jabeen, Hajira and Dietze, Stefan",
title = "ClaimsKG - A Knowledge Graph of Fact-Checked Claims (January, 2023)",
year = "2023",
howpublished = "GESIS, Cologne. Data File Version 2.0.0, https://doi.org/10.7802/2620",
doi = "10.7802/2620",
}
``` |
DT4LM/albertbasev2_rte_pair_faster-alzantot | DT4LM | "2024-11-29T22:41:06Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T22:41:03Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 47528
num_examples: 147
download_size: 39911
dataset_size: 47528
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/albertbasev2_rte_pair_faster-alzantot_original | DT4LM | "2024-11-29T22:41:10Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T22:41:07Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 47205
num_examples: 147
download_size: 39680
dataset_size: 47205
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amuvarma/qa_large_0_4_speechq | amuvarma | "2024-11-29T23:57:02Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T23:52:30Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 11822414204.0
num_examples: 80000
download_size: 10988942555
dataset_size: 11822414204.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sufianstek/phc_vital_signs | sufianstek | "2024-11-30T00:33:00Z" | 3 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-30T00:32:57Z" | ---
license: mit
---
|
oakwood/demo_curtain | oakwood | "2024-11-30T00:34:50Z" | 3 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"curtain"
] | [
"robotics"
] | "2024-11-30T00:34:36Z" | ---
task_categories:
- robotics
tags:
- LeRobot
- curtain
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
ashercn97/reasoning-data-v2-2 | ashercn97 | "2024-11-30T00:45:35Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T00:45:34Z" | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 24418126
num_examples: 4000
download_size: 12107288
dataset_size: 24418126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
WANGYJ0325/crag | WANGYJ0325 | "2024-11-30T01:05:34Z" | 3 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-30T00:51:52Z" | ---
license: apache-2.0
---
|
oakwood/test_20241130 | oakwood | "2024-11-30T01:11:20Z" | 3 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | "2024-11-30T01:11:09Z" | ---
task_categories:
- robotics
tags:
- LeRobot
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
mpanda27/common_voice_16_0_ro_pseudo_labelled | mpanda27 | "2024-11-30T01:37:07Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T01:28:23Z" | ---
dataset_info:
config_name: ro
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: condition_on_prev
sequence: int64
- name: whisper_transcript
dtype: string
splits:
- name: train
num_bytes: 650572662.0
num_examples: 734
- name: validation
num_bytes: 485173030.0
num_examples: 546
- name: test
num_bytes: 528897106.0
num_examples: 597
download_size: 1513163928
dataset_size: 1664642798.0
configs:
- config_name: ro
data_files:
- split: train
path: ro/train-*
- split: validation
path: ro/validation-*
- split: test
path: ro/test-*
---
|
ashercn97/test-distiset-1 | ashercn97 | "2024-11-30T01:44:54Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-11-30T01:43:31Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': illogical
'1': logical
splits:
- name: train
num_bytes: 27171
num_examples: 100
download_size: 17095
dataset_size: 27171
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for test-distiset-1
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/test-distiset-1/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/ashercn97/test-distiset-1/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 1,
"text": "Just because 7 out of 10 people prefer pizza over burgers does not necessarily mean that 7/10 people prefer pizza, because the sample may not be representative of the entire population and we are rounding the result which is an approximation."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("ashercn97/test-distiset-1", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("ashercn97/test-distiset-1")
```
</details>
|
EdsonKanou/sql_training | EdsonKanou | "2024-11-30T02:05:22Z" | 3 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T02:01:10Z" | ---
license: mit
dataset_info:
features:
- name: role
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 80840
num_examples: 50
download_size: 12074
dataset_size: 80840
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ashercn97/reasoning-data-v3-1 | ashercn97 | "2024-11-30T03:09:52Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T03:09:51Z" | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8123629
num_examples: 1000
download_size: 3773971
dataset_size: 8123629
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ashercn97/reasoning-data-v3-2 | ashercn97 | "2024-11-30T03:34:47Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T03:34:45Z" | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 45229809
num_examples: 4000
download_size: 20190882
dataset_size: 45229809
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ashercn97/reasoning-data-v4-1 | ashercn97 | "2024-11-30T04:01:07Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T04:01:05Z" | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 39095272
num_examples: 2000
download_size: 16191447
dataset_size: 39095272
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Taylor658/fluoroscopy_techniques | Taylor658 | "2024-11-30T05:00:09Z" | 3 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | null | "2024-11-30T04:58:32Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': digital-fluoroscopy
'1': mobile-fluoroscopy
'2': conventional-fluoroscopy
splits:
- name: train
num_bytes: 61388
num_examples: 250
download_size: 24571
dataset_size: 61388
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
# Dataset Card for fluoroscopy_techniques
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/Taylor658/fluoroscopy_techniques/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/Taylor658/fluoroscopy_techniques/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"labels": [
0
],
"text": "This new imaging technology uses a flat-panel detector to provide continuous X-ray images in real-time, allowing for dynamic viewing of moving structures without the need for sequential exposures."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("Taylor658/fluoroscopy_techniques", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("Taylor658/fluoroscopy_techniques")
```
</details>
|
ashwiniai/anatomy-corpus-test | ashwiniai | "2024-11-30T05:13:04Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T05:10:04Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: page_idx
dtype: int64
- name: document_name
dtype: string
- name: file_path
dtype: string
- name: file_url
dtype: string
- name: loader_name
dtype: string
splits:
- name: pdfplumbertextloader
num_bytes: 23313
num_examples: 6
- name: pypdf2textloader
num_bytes: 23554
num_examples: 6
- name: pymupdf4llmtextloader
num_bytes: 22607
num_examples: 6
download_size: 51369
dataset_size: 69474
configs:
- config_name: default
data_files:
- split: pdfplumbertextloader
path: data/pdfplumbertextloader-*
- split: pypdf2textloader
path: data/pypdf2textloader-*
- split: pymupdf4llmtextloader
path: data/pymupdf4llmtextloader-*
---
|
maanasharma5/arabic_sft_data | maanasharma5 | "2024-11-30T05:27:39Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T05:27:38Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: refusal
dtype: string
splits:
- name: train
num_bytes: 24078936
num_examples: 15000
download_size: 9974963
dataset_size: 24078936
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rathore11/snoopy | rathore11 | "2024-11-30T05:31:54Z" | 3 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-30T05:29:56Z" | ---
license: apache-2.0
---
|
amuvarma/luna-full-conversations-250 | amuvarma | "2024-11-30T05:30:42Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T05:30:26Z" | ---
dataset_info:
features:
- name: messsages
sequence: string
splits:
- name: train
num_bytes: 207067.0
num_examples: 250
download_size: 133978
dataset_size: 207067.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hexuan21/math-sft-mix-full-w4-sub-1 | hexuan21 | "2024-11-30T06:13:11Z" | 3 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T06:12:04Z" | ---
license: apache-2.0
---
|
JsZe/distributed-computing-complex | JsZe | "2024-11-30T06:46:14Z" | 3 | 0 | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering",
"text-generation"
] | "2024-11-30T06:32:09Z" | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100<n<1K
source_datasets:
- original
task_categories:
- question-answering
- text-generation
task_ids:
- open-domain-qa
---
# Distributed Systems Q&A Dataset
This dataset is a collection of question-and-answer pairs about distributed systems, compiled from commonly asked questions in a college-level course.
This dataset is designed to assist educators, researchers, and developers working on tuning AI models, chatbots, or educational tools in the field of distributed systems.
### Key Features:
- **Questions**: A variety of questions covering fundamental distributed systems concepts.
- **Answers**: Detailed, accurate, and explanatory answers.
- **Shuffled Order**: Entries are shuffled for non-sequential learning.
---
## Dataset Structure
The dataset is provided in CSV format, with the following columns:
| Column | Description |
|----------|-------------------------------------------------|
| Question | A question about distributed systems. |
| Answer | A corresponding answer explaining the concept. |
### Sample Entries:
| Question | Answer |
|-----------------------------------------------|-----------------------------------------------------------------------------------------|
| What are the main properties of a distributed transaction? | The main properties of a distributed transaction are atomicity, consistency, isolation, and durability (ACID). Atomicity ensures all operations are completed or none at all. Consistency ensures the system remains in a valid state. Isolation ensures transactions do not interfere with each other. Durability ensures results are permanent. |
| How do distributed systems handle 'Deadlock Detection'? | Distributed systems handle deadlock detection by monitoring resource allocation and communication patterns. Algorithms like wait-for graphs and probe-based methods identify cycles or unresolved dependencies, allowing the system to detect and resolve deadlocks promptly. |
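Rows like the samples above can be mapped into chat-style fine-tuning examples. A minimal sketch follows; the message schema (`role`/`content` pairs) is an illustrative assumption, not something prescribed by the dataset.

```python
# Sketch: turn a Question/Answer row into a chat-style training example.
# The message format below is illustrative; adapt it to your tuning framework.

def to_chat_example(question: str, answer: str) -> list[dict]:
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]

# Sample row, taken from the table above (answer shortened).
row = {
    "Question": "What are the main properties of a distributed transaction?",
    "Answer": "Atomicity, consistency, isolation, and durability (ACID).",
}
example = to_chat_example(row["Question"], row["Answer"])
print(example)
```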
---
## Licensing
This dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).
## Citation
If you use this dataset in your research or applications, please cite it as follows:
```
Author(s): Jeffrey Zhou, K. Mani Chandy, Sachin Adlakha
Title: Distributed Systems Q&A Dataset
URL: https://huggingface.co/datasets/JsZe/distributed-computing-complex
License: MIT License
Date: [2024-07-14]
```
|
open-llm-leaderboard/mkxu__llama-3-8b-po1-details | open-llm-leaderboard | "2024-11-30T06:59:05Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T06:55:30Z" | ---
pretty_name: Evaluation run of mkxu/llama-3-8b-po1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [mkxu/llama-3-8b-po1](https://huggingface.co/mkxu/llama-3-8b-po1)\nThe dataset\
\ is composed of 38 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 1 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/mkxu__llama-3-8b-po1-details\"\
,\n\tname=\"mkxu__llama-3-8b-po1__leaderboard_bbh_boolean_expressions\",\n\tsplit=\"\
latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from run\
\ 2024-11-30T06-55-29.252571](https://huggingface.co/datasets/open-llm-leaderboard/mkxu__llama-3-8b-po1-details/blob/main/mkxu__llama-3-8b-po1/results_2024-11-30T06-55-29.252571.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_loose_acc,none\": 0.5623501199040767,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.3438077634011091,\n \"prompt_level_strict_acc_stderr,none\": 0.020439793487859976,\n\
\ \"exact_match,none\": 0.0702416918429003,\n \"exact_match_stderr,none\"\
: 0.006863454031669159,\n \"acc,none\": 0.3562167553191489,\n \
\ \"acc_stderr,none\": 0.004365923714430882,\n \"prompt_level_loose_acc,none\"\
: 0.4491682070240296,\n \"prompt_level_loose_acc_stderr,none\": 0.021405093233588298,\n\
\ \"acc_norm,none\": 0.4525878842910883,\n \"acc_norm_stderr,none\"\
: 0.005350400462318365,\n \"inst_level_strict_acc,none\": 0.4724220623501199,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.4943586182954348,\n \"acc_norm_stderr,none\": 0.006221085972017242,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6737967914438503,\n\
\ \"acc_norm_stderr,none\": 0.03437574439341202\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\":\
\ 0.03166998503010743\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.552,\n\
\ \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\":\
\ 0.031636489531544396\n },\n \"leaderboard_bbh_hyperbaton\": {\n\
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.676,\n \"acc_norm_stderr,none\": 0.029658294924545567\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.356,\n \"acc_norm_stderr,none\": 0.0303436806571532\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.652,\n \"acc_norm_stderr,none\":\
\ 0.030186568464511673\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.5342465753424658,\n \"acc_norm_stderr,none\": 0.04142522736934774\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.7,\n \
\ \"acc_norm_stderr,none\": 0.029040893477575786\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\":\
\ 0.03160397514522374\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.550561797752809,\n \"acc_norm_stderr,none\": 0.037389649660569645\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.788,\n \"acc_norm_stderr,none\": 0.025901884690541117\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.152,\n\
\ \"acc_norm_stderr,none\": 0.022752024491765464\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\": 0.02572139890141637\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.232,\n \"acc_norm_stderr,none\":\
\ 0.026750070374865202\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\":\
\ 0.030266288057359866\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.29697986577181207,\n\
\ \"acc_norm_stderr,none\": 0.0132451965442603,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2878787878787879,\n \"acc_norm_stderr,none\": 0.03225883512300998\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2857142857142857,\n\
\ \"acc_norm_stderr,none\": 0.019351013185102753\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.31473214285714285,\n \"acc_norm_stderr,none\"\
: 0.021965797142222607\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.3438077634011091,\n \"prompt_level_strict_acc_stderr,none\": 0.020439793487859976,\n\
\ \"inst_level_strict_acc,none\": 0.4724220623501199,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.4491682070240296,\n \"prompt_level_loose_acc_stderr,none\": 0.021405093233588298,\n\
\ \"inst_level_loose_acc,none\": 0.5623501199040767,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.0702416918429003,\n \"exact_match_stderr,none\"\
: 0.006863454031669159,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.13029315960912052,\n\
\ \"exact_match_stderr,none\": 0.019243609597826783\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.032520325203252036,\n \"exact_match_stderr,none\": 0.016058998205879745\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.007575757575757576,\n\
\ \"exact_match_stderr,none\": 0.007575757575757577\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.0035714285714285713,\n \"exact_match_stderr,none\": 0.0035714285714285713\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.07142857142857142,\n\
\ \"exact_match_stderr,none\": 0.020820824576076338\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.15025906735751296,\n \"exact_match_stderr,none\"\
: 0.025787723180723855\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.05185185185185185,\n \"exact_match_stderr,none\"\
: 0.019154368449050496\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.3562167553191489,\n\
\ \"acc_stderr,none\": 0.004365923714430882\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.37962962962962965,\n \"acc_norm_stderr,none\"\
: 0.017120448240540476,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.552,\n\
\ \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.28125,\n \"acc_norm_stderr,none\"\
: 0.028155620586096754\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.308,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ }\n },\n \"leaderboard\": {\n \"inst_level_loose_acc,none\"\
: 0.5623501199040767,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"prompt_level_strict_acc,none\": 0.3438077634011091,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.020439793487859976,\n \"exact_match,none\": 0.0702416918429003,\n \
\ \"exact_match_stderr,none\": 0.006863454031669159,\n \"acc,none\":\
\ 0.3562167553191489,\n \"acc_stderr,none\": 0.004365923714430882,\n \
\ \"prompt_level_loose_acc,none\": 0.4491682070240296,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.021405093233588298,\n \"acc_norm,none\": 0.4525878842910883,\n \
\ \"acc_norm_stderr,none\": 0.005350400462318365,\n \"inst_level_strict_acc,none\"\
: 0.4724220623501199,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \
\ \"acc_norm,none\": 0.4943586182954348,\n \"acc_norm_stderr,none\": 0.006221085972017242,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"\
acc_norm,none\": 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6737967914438503,\n \"acc_norm_stderr,none\"\
: 0.03437574439341202\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.676,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.356,\n \"acc_norm_stderr,none\": 0.0303436806571532\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.652,\n \"acc_norm_stderr,none\": 0.030186568464511673\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.5342465753424658,\n\
\ \"acc_norm_stderr,none\": 0.04142522736934774\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.550561797752809,\n \"acc_norm_stderr,none\"\
: 0.037389649660569645\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.788,\n \"acc_norm_stderr,none\": 0.025901884690541117\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.152,\n \"acc_norm_stderr,none\": 0.022752024491765464\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\": 0.02572139890141637\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.232,\n \"acc_norm_stderr,none\": 0.026750070374865202\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.29697986577181207,\n\
\ \"acc_norm_stderr,none\": 0.0132451965442603,\n \"alias\": \" -\
\ leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\"\
: \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2878787878787879,\n\
\ \"acc_norm_stderr,none\": 0.03225883512300998\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2857142857142857,\n \"acc_norm_stderr,none\": 0.019351013185102753\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.31473214285714285,\n \"acc_norm_stderr,none\"\
: 0.021965797142222607\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.3438077634011091,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.020439793487859976,\n \
\ \"inst_level_strict_acc,none\": 0.4724220623501199,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.4491682070240296,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.021405093233588298,\n \"inst_level_loose_acc,none\"\
: 0.5623501199040767,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.0702416918429003,\n\
\ \"exact_match_stderr,none\": 0.006863454031669159,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.13029315960912052,\n \"exact_match_stderr,none\": 0.019243609597826783\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.032520325203252036,\n \"exact_match_stderr,none\": 0.016058998205879745\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.007575757575757576,\n \"exact_match_stderr,none\"\
: 0.007575757575757577\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0035714285714285713,\n \"exact_match_stderr,none\"\
: 0.0035714285714285713\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.07142857142857142,\n \"exact_match_stderr,none\": 0.020820824576076338\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.15025906735751296,\n \"exact_match_stderr,none\"\
: 0.025787723180723855\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.05185185185185185,\n \"exact_match_stderr,none\": 0.019154368449050496\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.3562167553191489,\n \"acc_stderr,none\": 0.004365923714430882\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.37962962962962965,\n\
\ \"acc_norm_stderr,none\": 0.017120448240540476,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.28125,\n \"acc_norm_stderr,none\": 0.028155620586096754\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.308,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ }\n}\n```"
repo_url: https://huggingface.co/mkxu/llama-3-8b-po1
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_navigate
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_snarks
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_gpqa_extended
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_gpqa_main
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_gpqa_main_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_ifeval
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_ifeval_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_mmlu_pro
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_musr_object_placements
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-30T06-55-29.252571.jsonl'
- config_name: mkxu__llama-3-8b-po1__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_30T06_55_29.252571
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-30T06-55-29.252571.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-30T06-55-29.252571.jsonl'
---
# Dataset Card for Evaluation run of mkxu/llama-3-8b-po1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mkxu/llama-3-8b-po1](https://huggingface.co/mkxu/llama-3-8b-po1)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, with the split named after the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/mkxu__llama-3-8b-po1-details",
name="mkxu__llama-3-8b-po1__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-11-30T06-55-29.252571](https://huggingface.co/datasets/open-llm-leaderboard/mkxu__llama-3-8b-po1-details/blob/main/mkxu__llama-3-8b-po1/results_2024-11-30T06-55-29.252571.json). Note that the repository may contain results for other tasks if successive evaluation runs did not cover the same tasks; each can be found in its configuration's "latest" split:
```python
{
"all": {
"leaderboard": {
"inst_level_loose_acc,none": 0.5623501199040767,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.3438077634011091,
"prompt_level_strict_acc_stderr,none": 0.020439793487859976,
"exact_match,none": 0.0702416918429003,
"exact_match_stderr,none": 0.006863454031669159,
"acc,none": 0.3562167553191489,
"acc_stderr,none": 0.004365923714430882,
"prompt_level_loose_acc,none": 0.4491682070240296,
"prompt_level_loose_acc_stderr,none": 0.021405093233588298,
"acc_norm,none": 0.4525878842910883,
"acc_norm_stderr,none": 0.005350400462318365,
"inst_level_strict_acc,none": 0.4724220623501199,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4943586182954348,
"acc_norm_stderr,none": 0.006221085972017242,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6737967914438503,
"acc_norm_stderr,none": 0.03437574439341202
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.356,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.652,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5342465753424658,
"acc_norm_stderr,none": 0.04142522736934774
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.550561797752809,
"acc_norm_stderr,none": 0.037389649660569645
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.152,
"acc_norm_stderr,none": 0.022752024491765464
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.232,
"acc_norm_stderr,none": 0.026750070374865202
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_gpqa": {
"acc_norm,none": 0.29697986577181207,
"acc_norm_stderr,none": 0.0132451965442603,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2878787878787879,
"acc_norm_stderr,none": 0.03225883512300998
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2857142857142857,
"acc_norm_stderr,none": 0.019351013185102753
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.31473214285714285,
"acc_norm_stderr,none": 0.021965797142222607
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.3438077634011091,
"prompt_level_strict_acc_stderr,none": 0.020439793487859976,
"inst_level_strict_acc,none": 0.4724220623501199,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.4491682070240296,
"prompt_level_loose_acc_stderr,none": 0.021405093233588298,
"inst_level_loose_acc,none": 0.5623501199040767,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0702416918429003,
"exact_match_stderr,none": 0.006863454031669159,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.13029315960912052,
"exact_match_stderr,none": 0.019243609597826783
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.032520325203252036,
"exact_match_stderr,none": 0.016058998205879745
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.007575757575757576,
"exact_match_stderr,none": 0.007575757575757577
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.07142857142857142,
"exact_match_stderr,none": 0.020820824576076338
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.15025906735751296,
"exact_match_stderr,none": 0.025787723180723855
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05185185185185185,
"exact_match_stderr,none": 0.019154368449050496
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3562167553191489,
"acc_stderr,none": 0.004365923714430882
},
"leaderboard_musr": {
"acc_norm,none": 0.37962962962962965,
"acc_norm_stderr,none": 0.017120448240540476,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.28125,
"acc_norm_stderr,none": 0.028155620586096754
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.308,
"acc_norm_stderr,none": 0.02925692860650181
}
},
"leaderboard": {
"inst_level_loose_acc,none": 0.5623501199040767,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.3438077634011091,
"prompt_level_strict_acc_stderr,none": 0.020439793487859976,
"exact_match,none": 0.0702416918429003,
"exact_match_stderr,none": 0.006863454031669159,
"acc,none": 0.3562167553191489,
"acc_stderr,none": 0.004365923714430882,
"prompt_level_loose_acc,none": 0.4491682070240296,
"prompt_level_loose_acc_stderr,none": 0.021405093233588298,
"acc_norm,none": 0.4525878842910883,
"acc_norm_stderr,none": 0.005350400462318365,
"inst_level_strict_acc,none": 0.4724220623501199,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4943586182954348,
"acc_norm_stderr,none": 0.006221085972017242,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6737967914438503,
"acc_norm_stderr,none": 0.03437574439341202
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.356,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.652,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5342465753424658,
"acc_norm_stderr,none": 0.04142522736934774
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.550561797752809,
"acc_norm_stderr,none": 0.037389649660569645
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.152,
"acc_norm_stderr,none": 0.022752024491765464
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.232,
"acc_norm_stderr,none": 0.026750070374865202
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_gpqa": {
"acc_norm,none": 0.29697986577181207,
"acc_norm_stderr,none": 0.0132451965442603,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2878787878787879,
"acc_norm_stderr,none": 0.03225883512300998
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2857142857142857,
"acc_norm_stderr,none": 0.019351013185102753
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.31473214285714285,
"acc_norm_stderr,none": 0.021965797142222607
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.3438077634011091,
"prompt_level_strict_acc_stderr,none": 0.020439793487859976,
"inst_level_strict_acc,none": 0.4724220623501199,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.4491682070240296,
"prompt_level_loose_acc_stderr,none": 0.021405093233588298,
"inst_level_loose_acc,none": 0.5623501199040767,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0702416918429003,
"exact_match_stderr,none": 0.006863454031669159,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.13029315960912052,
"exact_match_stderr,none": 0.019243609597826783
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.032520325203252036,
"exact_match_stderr,none": 0.016058998205879745
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.007575757575757576,
"exact_match_stderr,none": 0.007575757575757577
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0035714285714285713,
"exact_match_stderr,none": 0.0035714285714285713
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.07142857142857142,
"exact_match_stderr,none": 0.020820824576076338
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.15025906735751296,
"exact_match_stderr,none": 0.025787723180723855
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05185185185185185,
"exact_match_stderr,none": 0.019154368449050496
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3562167553191489,
"acc_stderr,none": 0.004365923714430882
},
"leaderboard_musr": {
"acc_norm,none": 0.37962962962962965,
"acc_norm_stderr,none": 0.017120448240540476,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.28125,
"acc_norm_stderr,none": 0.028155620586096754
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.308,
"acc_norm_stderr,none": 0.02925692860650181
}
}
```
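As a sketch of how the results object above can be consumed programmatically: each sub-task entry pairs a primary metric (`acc_norm,none`, `exact_match,none`, or `acc,none`) with a matching `*_stderr,none` field. The snippet below (not part of the card; the sample dict is a small excerpt of the numbers above, included so it is self-contained) pulls the primary metric out of each task entry.

```python
# Sample excerpt mirroring the structure of the "Latest results" JSON above.
results = {
    "leaderboard_bbh_boolean_expressions": {
        "alias": " - leaderboard_bbh_boolean_expressions",
        "acc_norm,none": 0.74,
        "acc_norm_stderr,none": 0.027797315752644335,
    },
    "leaderboard_gpqa_main": {
        "alias": " - leaderboard_gpqa_main",
        "acc_norm,none": 0.31473214285714285,
        "acc_norm_stderr,none": 0.021965797142222607,
    },
    "leaderboard_math_algebra_hard": {
        "alias": " - leaderboard_math_algebra_hard",
        "exact_match,none": 0.13029315960912052,
        "exact_match_stderr,none": 0.019243609597826783,
    },
}


def primary_metric(task_results: dict) -> tuple:
    """Return the first metric key/value pair, skipping aliases and stderr fields."""
    for key, value in task_results.items():
        if key == "alias" or key.endswith("_stderr,none"):
            continue
        return key, value
    raise ValueError("no metric found in task results")


for task, task_results in results.items():
    metric, value = primary_metric(task_results)
    print(f"{task}: {metric} = {value:.4f}")
```

The same helper works on the full dict loaded from the JSON file linked above, since every task entry follows this key layout.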
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]