datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_binarized_filtered_2048 | ZixuanKe | "2024-11-25T20:01:09Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:01:04Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: justification
dtype: string
- name: llama3_prompt_length
dtype: int64
- name: llama3_chosen_length
dtype: int64
- name: llama3_rejected_length
dtype: int64
splits:
- name: train
num_bytes: 168095820.6101605
num_examples: 27517
- name: validation
num_bytes: 7973626.823788546
num_examples: 1281
download_size: 29686685
dataset_size: 176069447.43394905
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
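A minimal loading sketch for this preference-pair dataset, assuming the repository is publicly accessible; the column names follow the `dataset_info` block above.

```python
from datasets import load_dataset

# Default config with two splits (train: 27,517 rows, validation: 1,281 rows).
ds = load_dataset(
    "ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_stepwise_dpo_binarized_filtered_2048"
)

example = ds["train"][0]
# Each row pairs a prompt with a chosen and a rejected completion, plus a
# justification and pre-computed Llama-3 token lengths for each field.
print(example["prompt"][:200])
print(example["llama3_chosen_length"], example["llama3_rejected_length"])
```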
|
Avvvvva/M2-AIFT-Candidates | Avvvvva | "2024-11-25T20:03:48Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:03:46Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: responses
sequence: string
splits:
- name: train
num_bytes: 36984
num_examples: 10
download_size: 35857
dataset_size: 36984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
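A small sketch, assuming public access, showing that the `responses` column (declared as `sequence: string`) is returned as a plain Python list per row.

```python
from datasets import load_dataset

ds = load_dataset("Avvvvva/M2-AIFT-Candidates", split="train")  # 10 rows

row = ds[0]
print(row["instruction"])
# A `sequence: string` feature materializes as a list of strings.
for i, response in enumerate(row["responses"]):
    print(f"--- candidate {i} ---")
    print(response[:120])
```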
|
CEBangu/Txt360-CC-subsample | CEBangu | "2024-11-25T20:51:01Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:04:50Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: subset
dtype: string
splits:
- name: train
num_bytes: 1065011768
num_examples: 300000
download_size: 637662925
dataset_size: 1065011768
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
A subset of the CommonCrawl portion of the TxT360 dataset.
Citation:
```bibtex
@misc{txt360data2024,
  title  = {TxT360: A Top-Quality LLM Pre-training Dataset Requires the Perfect Blend},
  author = {Liping Tang and Nikhil Ranjan and Omkar Pangarkar and Xuezhi Liang and Zhen Wang and Li An and Bhaskar Rao and Linghao Jin and Huijuan Wang and Zhoujun Cheng and Suqi Sun and Cun Mu and Victor Miller and Xuezhe Ma and Yue Peng and Zhengzhong Liu and Eric P. Xing},
  year   = {2024}
}
```
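Since the subsample is roughly 1 GB of Parquet, streaming is a convenient way to peek at a few records without downloading everything; a sketch, assuming public access:

```python
from datasets import load_dataset

# Stream the 300,000-row CommonCrawl subsample instead of downloading it in full.
ds = load_dataset("CEBangu/Txt360-CC-subsample", split="train", streaming=True)

for record in ds.take(3):
    print(record["subset"], "|", record["text"][:100])
```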
|
Avvvvva/M2-AIFT-LLMJudge | Avvvvva | "2024-11-25T20:07:28Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:07:25Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 56880
num_examples: 30
download_size: 36196
dataset_size: 56880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
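A brief sketch, assuming public access, of loading the judged responses and filtering by score; the threshold of 4 is arbitrary and only for illustration, since the card does not document the scoring scale.

```python
from datasets import load_dataset

ds = load_dataset("Avvvvva/M2-AIFT-LLMJudge", split="train")  # 30 scored responses

# Keep responses the LLM judge rated at least 4 (illustrative cutoff only).
high_scoring = ds.filter(lambda row: row["score"] >= 4)
print(f"{len(high_scoring)} of {len(ds)} responses pass the cutoff")
```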
|
Avvvvva/M2-DPO-LLMJudge | Avvvvva | "2024-11-25T20:07:31Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:07:28Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 28835
num_examples: 10
download_size: 38660
dataset_size: 28835
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ahmed275/generated_summaries_sled | ahmed275 | "2024-11-25T20:08:56Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:08:54Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: year
dtype: int64
- name: url
dtype: string
- name: opinionOfTheCourt
dtype: string
- name: syllabus
dtype: string
- name: issueArea
dtype: float64
- name: decisionDirection
dtype: float64
- name: partyWinning
dtype: float64
- name: voteDistribution
dtype: float64
- name: respondentType
dtype: int64
- name: respondent
dtype: float64
- name: __index_level_0__
dtype: int64
- name: generated_summary
dtype: string
splits:
- name: train
num_bytes: 22120297
num_examples: 547
download_size: 11616870
dataset_size: 22120297
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kanakapriya/phi3again | kanakapriya | "2024-11-25T20:09:15Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-25T20:09:14Z" | ---
license: mit
---
|
Nash-pAnDiTa/Moamn-ifniqd1i12l | Nash-pAnDiTa | "2024-11-25T20:09:38Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:09:21Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 96698103.0
num_examples: 10
download_size: 85881963
dataset_size: 96698103.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
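A sketch of loading this small audio set, assuming public access and that the audio extras (for example `soundfile`) are installed; the `audio` column decodes on access into a dict with `array` and `sampling_rate`.

```python
from datasets import load_dataset

ds = load_dataset("Nash-pAnDiTa/Moamn-ifniqd1i12l", split="train")  # 10 examples

sample = ds[0]
audio = sample["audio"]          # {"path": ..., "array": ..., "sampling_rate": ...}
print(audio["sampling_rate"])    # 16000, per the feature definition above
print(audio["array"].shape)      # 1-D numpy array of waveform samples
print(sample["transcription"])
```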
|
open-llm-leaderboard/FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-details | open-llm-leaderboard | "2024-11-25T20:16:13Z" | 0 | 0 | [
"region:us"
] | null | "2024-11-25T20:12:21Z" | ---
pretty_name: Evaluation run of FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit](https://huggingface.co/FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-details\"\
,\n\tname=\"FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-25T20-12-20.428213](https://huggingface.co/datasets/open-llm-leaderboard/FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-details/blob/main/FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit/results_2024-11-25T20-12-20.428213.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc,none\": 0.14852061170212766,\n \"acc_stderr,none\"\
: 0.0032421236259070727,\n \"inst_level_loose_acc,none\": 0.37290167865707435,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0,\n \"acc_norm,none\"\
: 0.3220910623946037,\n \"acc_norm_stderr,none\": 0.00504920523613927,\n\
\ \"prompt_level_strict_acc,none\": 0.24584103512014788,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.01852941708079555,\n \"\
prompt_level_loose_acc,none\": 0.2587800369685767,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.018846992560712525,\n \"inst_level_strict_acc,none\": 0.35731414868105515,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.33101892032633223,\n \"acc_norm_stderr,none\": 0.005812731468023277,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.752,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.47058823529411764,\n\
\ \"acc_norm_stderr,none\": 0.03659829510813266\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\":\
\ 0.02857695873043744\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.46,\n\
\ \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\":\
\ 0.022995023034068682\n },\n \"leaderboard_bbh_hyperbaton\": {\n\
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.224,\n \"acc_norm_stderr,none\": 0.026421361687347884\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.164,\n \"acc_norm_stderr,none\": 0.02346526100207671\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\":\
\ 0.031191596026022818\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.2671232876712329,\n \"acc_norm_stderr,none\": 0.03674407640319397\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.14,\n \
\ \"acc_norm_stderr,none\": 0.021989409645240245\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\":\
\ 0.024760377727750513\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.42134831460674155,\n \"acc_norm_stderr,none\": 0.03711441405960183\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.148,\n\
\ \"acc_norm_stderr,none\": 0.022503547243806186\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\": 0.025537121574548162\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\":\
\ 0.021723342617052086\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\":\
\ 0.03069633626739458\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2676174496644295,\n\
\ \"acc_norm_stderr,none\": 0.012830796318556012,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.23737373737373738,\n \"acc_norm_stderr,none\": 0.030313710538198924\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.28205128205128205,\n\
\ \"acc_norm_stderr,none\": 0.019275803929950375\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.26339285714285715,\n \"acc_norm_stderr,none\"\
: 0.02083369001657866\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.24584103512014788,\n \"prompt_level_strict_acc_stderr,none\": 0.01852941708079555,\n\
\ \"inst_level_strict_acc,none\": 0.35731414868105515,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.2587800369685767,\n \"prompt_level_loose_acc_stderr,none\": 0.018846992560712525,\n\
\ \"inst_level_loose_acc,none\": 0.37290167865707435,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0,\n \"alias\": \" - leaderboard_math_hard\"\n },\n \
\ \"leaderboard_math_algebra_hard\": {\n \"alias\": \" - leaderboard_math_algebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_counting_and_prob_hard\": {\n \
\ \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\"\
: \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\":\
\ 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\"\
: 0.14852061170212766,\n \"acc_stderr,none\": 0.0032421236259070727\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.33994708994708994,\n\
\ \"acc_norm_stderr,none\": 0.016720981909741844,\n \"alias\"\
: \" - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\"\
: {\n \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \
\ \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\"\
: \" - leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.23046875,\n\
\ \"acc_norm_stderr,none\": 0.026372364120563745\n },\n \
\ \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\":\
\ 0.028697004587398253\n }\n },\n \"leaderboard\": {\n \"acc,none\"\
: 0.14852061170212766,\n \"acc_stderr,none\": 0.0032421236259070727,\n \
\ \"inst_level_loose_acc,none\": 0.37290167865707435,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0,\n \"acc_norm,none\": 0.3220910623946037,\n \"acc_norm_stderr,none\"\
: 0.00504920523613927,\n \"prompt_level_strict_acc,none\": 0.24584103512014788,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.01852941708079555,\n \
\ \"prompt_level_loose_acc,none\": 0.2587800369685767,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.018846992560712525,\n \"inst_level_strict_acc,none\": 0.35731414868105515,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"alias\": \"\
leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.33101892032633223,\n\
\ \"acc_norm_stderr,none\": 0.005812731468023277,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.752,\n \"acc_norm_stderr,none\": 0.027367497504863593\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.47058823529411764,\n \"acc_norm_stderr,none\"\
: 0.03659829510813266\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.224,\n \"acc_norm_stderr,none\": 0.026421361687347884\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.164,\n \"acc_norm_stderr,none\": 0.02346526100207671\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.2671232876712329,\n\
\ \"acc_norm_stderr,none\": 0.03674407640319397\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.42134831460674155,\n \"acc_norm_stderr,none\"\
: 0.03711441405960183\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.148,\n \"acc_norm_stderr,none\": 0.022503547243806186\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\": 0.025537121574548162\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2676174496644295,\n\
\ \"acc_norm_stderr,none\": 0.012830796318556012,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.23737373737373738,\n\
\ \"acc_norm_stderr,none\": 0.030313710538198924\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.28205128205128205,\n \"acc_norm_stderr,none\": 0.019275803929950375\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.26339285714285715,\n \"acc_norm_stderr,none\"\
: 0.02083369001657866\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.24584103512014788,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.01852941708079555,\n \
\ \"inst_level_strict_acc,none\": 0.35731414868105515,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.2587800369685767,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.018846992560712525,\n \"inst_level_loose_acc,none\"\
: 0.37290167865707435,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n\
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0,\n \"alias\": \" - leaderboard_math_hard\"\
\n },\n \"leaderboard_math_algebra_hard\": {\n \"alias\": \" - leaderboard_math_algebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.0,\n \
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\": {\n\
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.14852061170212766,\n\
\ \"acc_stderr,none\": 0.0032421236259070727\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.33994708994708994,\n \"acc_norm_stderr,none\"\
: 0.016720981909741844,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.23046875,\n\
\ \"acc_norm_stderr,none\": 0.026372364120563745\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n }\n}\n```"
repo_url: https://huggingface.co/FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_navigate
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_snarks
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_gpqa_extended
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_gpqa_main
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_ifeval
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_ifeval_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_mmlu_pro
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_musr_object_placements
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T20-12-20.428213.jsonl'
- config_name: FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_25T20_12_20.428213
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T20-12-20.428213.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T20-12-20.428213.jsonl'
---
# Dataset Card for Evaluation run of FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit](https://huggingface.co/FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-details",
name="FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-11-25T20-12-20.428213](https://huggingface.co/datasets/open-llm-leaderboard/FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit-details/blob/main/FlofloB__40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit/results_2024-11-25T20-12-20.428213.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"acc,none": 0.14852061170212766,
"acc_stderr,none": 0.0032421236259070727,
"inst_level_loose_acc,none": 0.37290167865707435,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"acc_norm,none": 0.3220910623946037,
"acc_norm_stderr,none": 0.00504920523613927,
"prompt_level_strict_acc,none": 0.24584103512014788,
"prompt_level_strict_acc_stderr,none": 0.01852941708079555,
"prompt_level_loose_acc,none": 0.2587800369685767,
"prompt_level_loose_acc_stderr,none": 0.018846992560712525,
"inst_level_strict_acc,none": 0.35731414868105515,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.33101892032633223,
"acc_norm_stderr,none": 0.005812731468023277,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.752,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.47058823529411764,
"acc_norm_stderr,none": 0.03659829510813266
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.224,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.164,
"acc_norm_stderr,none": 0.02346526100207671
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.44,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2671232876712329,
"acc_norm_stderr,none": 0.03674407640319397
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.42134831460674155,
"acc_norm_stderr,none": 0.03711441405960183
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.148,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2676174496644295,
"acc_norm_stderr,none": 0.012830796318556012,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.23737373737373738,
"acc_norm_stderr,none": 0.030313710538198924
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.28205128205128205,
"acc_norm_stderr,none": 0.019275803929950375
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.26339285714285715,
"acc_norm_stderr,none": 0.02083369001657866
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.24584103512014788,
"prompt_level_strict_acc_stderr,none": 0.01852941708079555,
"inst_level_strict_acc,none": 0.35731414868105515,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.2587800369685767,
"prompt_level_loose_acc_stderr,none": 0.018846992560712525,
"inst_level_loose_acc,none": 0.37290167865707435,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.14852061170212766,
"acc_stderr,none": 0.0032421236259070727
},
"leaderboard_musr": {
"acc_norm,none": 0.33994708994708994,
"acc_norm_stderr,none": 0.016720981909741844,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.23046875,
"acc_norm_stderr,none": 0.026372364120563745
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
}
},
"leaderboard": {
"acc,none": 0.14852061170212766,
"acc_stderr,none": 0.0032421236259070727,
"inst_level_loose_acc,none": 0.37290167865707435,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"acc_norm,none": 0.3220910623946037,
"acc_norm_stderr,none": 0.00504920523613927,
"prompt_level_strict_acc,none": 0.24584103512014788,
"prompt_level_strict_acc_stderr,none": 0.01852941708079555,
"prompt_level_loose_acc,none": 0.2587800369685767,
"prompt_level_loose_acc_stderr,none": 0.018846992560712525,
"inst_level_strict_acc,none": 0.35731414868105515,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.33101892032633223,
"acc_norm_stderr,none": 0.005812731468023277,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.752,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.47058823529411764,
"acc_norm_stderr,none": 0.03659829510813266
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.224,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.164,
"acc_norm_stderr,none": 0.02346526100207671
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.44,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2671232876712329,
"acc_norm_stderr,none": 0.03674407640319397
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.42134831460674155,
"acc_norm_stderr,none": 0.03711441405960183
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.148,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2676174496644295,
"acc_norm_stderr,none": 0.012830796318556012,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.23737373737373738,
"acc_norm_stderr,none": 0.030313710538198924
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.28205128205128205,
"acc_norm_stderr,none": 0.019275803929950375
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.26339285714285715,
"acc_norm_stderr,none": 0.02083369001657866
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.24584103512014788,
"prompt_level_strict_acc_stderr,none": 0.01852941708079555,
"inst_level_strict_acc,none": 0.35731414868105515,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.2587800369685767,
"prompt_level_loose_acc_stderr,none": 0.018846992560712525,
"inst_level_loose_acc,none": 0.37290167865707435,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.14852061170212766,
"acc_stderr,none": 0.0032421236259070727
},
"leaderboard_musr": {
"acc_norm,none": 0.33994708994708994,
"acc_norm_stderr,none": 0.016720981909741844,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.23046875,
"acc_norm_stderr,none": 0.026372364120563745
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/zelk12__MT-Merge2-gemma-2-9B-details | open-llm-leaderboard | "2024-11-25T20:22:40Z" | 0 | 0 | [
"region:us"
] | null | "2024-11-25T20:18:59Z" | ---
pretty_name: Evaluation run of zelk12/MT-Merge2-gemma-2-9B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [zelk12/MT-Merge2-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge2-gemma-2-9B)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/zelk12__MT-Merge2-gemma-2-9B-details\"\
,\n\tname=\"zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-25T20-18-59.215211](https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT-Merge2-gemma-2-9B-details/blob/main/zelk12__MT-Merge2-gemma-2-9B/results_2024-11-25T20-18-59.215211.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"exact_match,none\": 0.15558912386706947,\n \"exact_match_stderr,none\"\
: 0.009395560341133794,\n \"acc_norm,none\": 0.5509145155013621,\n \
\ \"acc_norm_stderr,none\": 0.0052798095671504255,\n \"prompt_level_loose_acc,none\"\
: 0.7781885397412199,\n \"prompt_level_loose_acc_stderr,none\": 0.017878765407944433,\n\
\ \"inst_level_loose_acc,none\": 0.8441247002398081,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.7504621072088724,\n \"prompt_level_strict_acc_stderr,none\": 0.018622404509805804,\n\
\ \"acc,none\": 0.43816489361702127,\n \"acc_stderr,none\"\
: 0.004523476746563679,\n \"inst_level_strict_acc,none\": 0.8249400479616307,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.6094428050685645,\n \"acc_norm_stderr,none\": 0.006032576873904748,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6470588235294118,\n\
\ \"acc_norm_stderr,none\": 0.03504019983419238\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\":\
\ 0.03114520984654851\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.636,\n\
\ \"acc_norm_stderr,none\": 0.030491555220405475\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\":\
\ 0.03166998503010743\n },\n \"leaderboard_bbh_hyperbaton\": {\n \
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.796,\n \"acc_norm_stderr,none\": 0.025537121574548162\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.656,\n \"acc_norm_stderr,none\":\
\ 0.03010450339231644\n },\n \"leaderboard_bbh_object_counting\":\
\ {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\": 0.028928939388379694\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.6164383561643836,\n \"acc_norm_stderr,none\": 0.04038112474853568\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.816,\n \
\ \"acc_norm_stderr,none\": 0.02455581299422255\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\":\
\ 0.031235856237014505\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6685393258426966,\n \"acc_norm_stderr,none\": 0.03538285323537675\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.844,\n \"acc_norm_stderr,none\": 0.022995023034068682\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.884,\n\
\ \"acc_norm_stderr,none\": 0.020293429803083823\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.304,\n \"acc_norm_stderr,none\": 0.02915021337415965\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\":\
\ 0.028697004587398253\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\":\
\ 0.03069633626739458\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.35067114093959734,\n\
\ \"acc_norm_stderr,none\": 0.013833961416620248,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.35858585858585856,\n \"acc_norm_stderr,none\": 0.034169036403915276\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3608058608058608,\n\
\ \"acc_norm_stderr,none\": 0.020570977668247264\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.33482142857142855,\n \"acc_norm_stderr,none\"\
: 0.02232142857142857\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7504621072088724,\n \"prompt_level_strict_acc_stderr,none\": 0.018622404509805804,\n\
\ \"inst_level_strict_acc,none\": 0.8249400479616307,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7781885397412199,\n \"prompt_level_loose_acc_stderr,none\": 0.017878765407944433,\n\
\ \"inst_level_loose_acc,none\": 0.8441247002398081,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.15558912386706947,\n \"exact_match_stderr,none\"\
: 0.009395560341133794,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.32247557003257327,\n\
\ \"exact_match_stderr,none\": 0.02672084427631396\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \" \
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.022727272727272728,\n\
\ \"exact_match_stderr,none\": 0.0130210469090637\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.02857142857142857,\n \"exact_match_stderr,none\": 0.009973998820736053\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.14935064935064934,\n\
\ \"exact_match_stderr,none\": 0.028815962452887128\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.27461139896373055,\n \"exact_match_stderr,none\"\
: 0.03221024508041151\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.044444444444444446,\n \"exact_match_stderr,none\"\
: 0.01780263602032457\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.43816489361702127,\n\
\ \"acc_stderr,none\": 0.004523476746563679\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.42063492063492064,\n \"acc_norm_stderr,none\"\
: 0.017592458763710066,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.54,\n\
\ \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.2890625,\n \"acc_norm_stderr,none\"\
: 0.02838843806999465\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.436,\n \"acc_norm_stderr,none\": 0.031425567060281365\n\
\ }\n },\n \"leaderboard\": {\n \"exact_match,none\": 0.15558912386706947,\n\
\ \"exact_match_stderr,none\": 0.009395560341133794,\n \"acc_norm,none\"\
: 0.5509145155013621,\n \"acc_norm_stderr,none\": 0.0052798095671504255,\n\
\ \"prompt_level_loose_acc,none\": 0.7781885397412199,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.017878765407944433,\n \"inst_level_loose_acc,none\": 0.8441247002398081,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.7504621072088724,\n \"prompt_level_strict_acc_stderr,none\": 0.018622404509805804,\n\
\ \"acc,none\": 0.43816489361702127,\n \"acc_stderr,none\": 0.004523476746563679,\n\
\ \"inst_level_strict_acc,none\": 0.8249400479616307,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.6094428050685645,\n \"acc_norm_stderr,none\"\
: 0.006032576873904748,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6470588235294118,\n \"acc_norm_stderr,none\"\
: 0.03504019983419238\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.796,\n \"acc_norm_stderr,none\": 0.025537121574548162\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.656,\n \"acc_norm_stderr,none\": 0.03010450339231644\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\": 0.028928939388379694\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.6164383561643836,\n\
\ \"acc_norm_stderr,none\": 0.04038112474853568\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.816,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6685393258426966,\n \"acc_norm_stderr,none\"\
: 0.03538285323537675\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.844,\n \"acc_norm_stderr,none\": 0.022995023034068682\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.884,\n \"acc_norm_stderr,none\": 0.020293429803083823\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.304,\n \"acc_norm_stderr,none\": 0.02915021337415965\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.35067114093959734,\n\
\ \"acc_norm_stderr,none\": 0.013833961416620248,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.35858585858585856,\n\
\ \"acc_norm_stderr,none\": 0.034169036403915276\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.3608058608058608,\n \"acc_norm_stderr,none\": 0.020570977668247264\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.33482142857142855,\n \"acc_norm_stderr,none\"\
: 0.02232142857142857\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.7504621072088724,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018622404509805804,\n \
\ \"inst_level_strict_acc,none\": 0.8249400479616307,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.7781885397412199,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.017878765407944433,\n \"inst_level_loose_acc,none\"\
: 0.8441247002398081,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.15558912386706947,\n\
\ \"exact_match_stderr,none\": 0.009395560341133794,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.32247557003257327,\n \"exact_match_stderr,none\": 0.02672084427631396\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.022727272727272728,\n \"exact_match_stderr,none\"\
: 0.0130210469090637\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.02857142857142857,\n \"exact_match_stderr,none\"\
: 0.009973998820736053\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.14935064935064934,\n \"exact_match_stderr,none\": 0.028815962452887128\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.27461139896373055,\n \"exact_match_stderr,none\"\
: 0.03221024508041151\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.044444444444444446,\n \"exact_match_stderr,none\": 0.01780263602032457\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.43816489361702127,\n \"acc_stderr,none\": 0.004523476746563679\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.42063492063492064,\n\
\ \"acc_norm_stderr,none\": 0.017592458763710066,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.2890625,\n \"acc_norm_stderr,none\": 0.02838843806999465\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.436,\n \"acc_norm_stderr,none\": 0.031425567060281365\n\
\ }\n}\n```"
repo_url: https://huggingface.co/zelk12/MT-Merge2-gemma-2-9B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_navigate
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_snarks
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_gpqa_extended
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_gpqa_main
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_ifeval
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_ifeval_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_mmlu_pro
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_musr_object_placements
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T20-18-59.215211.jsonl'
- config_name: zelk12__MT-Merge2-gemma-2-9B__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_25T20_18_59.215211
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T20-18-59.215211.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T20-18-59.215211.jsonl'
---
# Dataset Card for Evaluation run of zelk12/MT-Merge2-gemma-2-9B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [zelk12/MT-Merge2-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge2-gemma-2-9B)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/zelk12__MT-Merge2-gemma-2-9B-details",
name="zelk12__MT-Merge2-gemma-2-9B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
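All 38 configurations follow the same naming scheme, so you don't have to copy names out of the YAML header by hand. The sketch below is an illustration rather than part of the generated card: it lists the available configurations with the `datasets` helper `get_dataset_config_names` and loads the first one. The repository name and the "latest"/timestamped split names are taken from this card; nothing else is assumed.
```python
from datasets import get_dataset_config_names, load_dataset

repo_id = "open-llm-leaderboard/zelk12__MT-Merge2-gemma-2-9B-details"

# Enumerate every per-task configuration stored in this repository.
configs = get_dataset_config_names(repo_id)
print(len(configs), "configurations found")

# Load one configuration. The "latest" split mirrors the most recent run, while the
# timestamped split (e.g. "2024_11_25T20_18_59.215211") pins this specific run.
data = load_dataset(repo_id, name=configs[0], split="latest")
print(data[0].keys())  # fields of a single evaluation sample
```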
## Latest results
These are the [latest results from run 2024-11-25T20-18-59.215211](https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT-Merge2-gemma-2-9B-details/blob/main/zelk12__MT-Merge2-gemma-2-9B/results_2024-11-25T20-18-59.215211.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split of each eval):
```python
{
"all": {
"leaderboard": {
"exact_match,none": 0.15558912386706947,
"exact_match_stderr,none": 0.009395560341133794,
"acc_norm,none": 0.5509145155013621,
"acc_norm_stderr,none": 0.0052798095671504255,
"prompt_level_loose_acc,none": 0.7781885397412199,
"prompt_level_loose_acc_stderr,none": 0.017878765407944433,
"inst_level_loose_acc,none": 0.8441247002398081,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"acc,none": 0.43816489361702127,
"acc_stderr,none": 0.004523476746563679,
"inst_level_strict_acc,none": 0.8249400479616307,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6094428050685645,
"acc_norm_stderr,none": 0.006032576873904748,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6470588235294118,
"acc_norm_stderr,none": 0.03504019983419238
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.796,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.656,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6164383561643836,
"acc_norm_stderr,none": 0.04038112474853568
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6685393258426966,
"acc_norm_stderr,none": 0.03538285323537675
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.844,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.884,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_gpqa": {
"acc_norm,none": 0.35067114093959734,
"acc_norm_stderr,none": 0.013833961416620248,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.35858585858585856,
"acc_norm_stderr,none": 0.034169036403915276
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3608058608058608,
"acc_norm_stderr,none": 0.020570977668247264
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.33482142857142855,
"acc_norm_stderr,none": 0.02232142857142857
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"inst_level_strict_acc,none": 0.8249400479616307,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7781885397412199,
"prompt_level_loose_acc_stderr,none": 0.017878765407944433,
"inst_level_loose_acc,none": 0.8441247002398081,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.15558912386706947,
"exact_match_stderr,none": 0.009395560341133794,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.32247557003257327,
"exact_match_stderr,none": 0.02672084427631396
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02857142857142857,
"exact_match_stderr,none": 0.009973998820736053
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.14935064935064934,
"exact_match_stderr,none": 0.028815962452887128
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.27461139896373055,
"exact_match_stderr,none": 0.03221024508041151
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.044444444444444446,
"exact_match_stderr,none": 0.01780263602032457
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43816489361702127,
"acc_stderr,none": 0.004523476746563679
},
"leaderboard_musr": {
"acc_norm,none": 0.42063492063492064,
"acc_norm_stderr,none": 0.017592458763710066,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.2890625,
"acc_norm_stderr,none": 0.02838843806999465
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.436,
"acc_norm_stderr,none": 0.031425567060281365
}
},
"leaderboard": {
"exact_match,none": 0.15558912386706947,
"exact_match_stderr,none": 0.009395560341133794,
"acc_norm,none": 0.5509145155013621,
"acc_norm_stderr,none": 0.0052798095671504255,
"prompt_level_loose_acc,none": 0.7781885397412199,
"prompt_level_loose_acc_stderr,none": 0.017878765407944433,
"inst_level_loose_acc,none": 0.8441247002398081,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"acc,none": 0.43816489361702127,
"acc_stderr,none": 0.004523476746563679,
"inst_level_strict_acc,none": 0.8249400479616307,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6094428050685645,
"acc_norm_stderr,none": 0.006032576873904748,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6470588235294118,
"acc_norm_stderr,none": 0.03504019983419238
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.796,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.656,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6164383561643836,
"acc_norm_stderr,none": 0.04038112474853568
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6685393258426966,
"acc_norm_stderr,none": 0.03538285323537675
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.844,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.884,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_gpqa": {
"acc_norm,none": 0.35067114093959734,
"acc_norm_stderr,none": 0.013833961416620248,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.35858585858585856,
"acc_norm_stderr,none": 0.034169036403915276
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3608058608058608,
"acc_norm_stderr,none": 0.020570977668247264
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.33482142857142855,
"acc_norm_stderr,none": 0.02232142857142857
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"inst_level_strict_acc,none": 0.8249400479616307,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7781885397412199,
"prompt_level_loose_acc_stderr,none": 0.017878765407944433,
"inst_level_loose_acc,none": 0.8441247002398081,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.15558912386706947,
"exact_match_stderr,none": 0.009395560341133794,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.32247557003257327,
"exact_match_stderr,none": 0.02672084427631396
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02857142857142857,
"exact_match_stderr,none": 0.009973998820736053
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.14935064935064934,
"exact_match_stderr,none": 0.028815962452887128
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.27461139896373055,
"exact_match_stderr,none": 0.03221024508041151
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.044444444444444446,
"exact_match_stderr,none": 0.01780263602032457
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43816489361702127,
"acc_stderr,none": 0.004523476746563679
},
"leaderboard_musr": {
"acc_norm,none": 0.42063492063492064,
"acc_norm_stderr,none": 0.017592458763710066,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.2890625,
"acc_norm_stderr,none": 0.02838843806999465
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.436,
"acc_norm_stderr,none": 0.031425567060281365
}
}
```
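The aggregated numbers above are also stored as a single JSON file in the repository (the file linked at the top of this section). If you only need those aggregates, a minimal sketch using `huggingface_hub` could look like the following; the filename is copied from the link above, and the guard around the top-level layout is there because the exact nesting of the results file is not shown in full in this card.
```python
import json
from huggingface_hub import hf_hub_download

# Download the aggregated results JSON for this run (path copied from the link above).
results_path = hf_hub_download(
    repo_id="open-llm-leaderboard/zelk12__MT-Merge2-gemma-2-9B-details",
    filename="zelk12__MT-Merge2-gemma-2-9B/results_2024-11-25T20-18-59.215211.json",
    repo_type="dataset",
)

with open(results_path) as f:
    results = json.load(f)

# Per-task entries are keyed by task name; they may sit at the top level (as in the
# snippet above) or under a "results" key, so check both.
task_scores = results.get("results", results)
for task, metrics in task_scores.items():
    if isinstance(metrics, dict) and task.startswith("leaderboard_bbh_"):
        print(task, metrics.get("acc_norm,none"))
```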
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
dreilly/Toyota-Smarthome | dreilly | "2024-11-25T21:48:15Z" | 0 | 0 | [
"license:other",
"arxiv:2010.14982",
"region:us"
] | null | "2024-11-25T20:19:44Z" | ---
license: other
license_name: smarthome
license_link: https://project.inria.fr/toyotasmarthome/files/2020/12/License_v2.pdf
extra_gated_fields:
Your name: text
Your affiliation (university, company, etc): text
What do you plan to use the dataset for? (brief description): text
You agree not to re-distribute this dataset: checkbox
You agree not to use this dataset for commercial purposes: checkbox
extra_gated_heading: "Read and acknowledge the license below to access the repository"
extra_gated_description: "License: https://project.inria.fr/toyotasmarthome/files/2020/12/License_v2.pdf"
extra_gated_button_content: "Access the dataset"
---
# The Toyota Smarthome Dataset
This page introduces the Toyota Smarthome dataset. Smarthome was recorded in an apartment equipped with 7 Kinect v1 cameras and captures the common daily-living activities of 18 subjects, all seniors aged 60-80. The dataset has a resolution of 640×480 and offers 3 modalities: RGB + Depth + 3D Skeleton. The 3D skeleton joints were extracted from the RGB stream. For privacy reasons, the subjects' faces are blurred.
The Toyota Smarthome Dataset is provided in **two** versions: Trimmed and Untrimmed.
| **Version** | **Paper link** |
|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **Toyota Smarthome Trimmed** | [![Paper](https://img.shields.io/badge/Read%20on%20CVF-📄-blue.svg)](https://openaccess.thecvf.com/content_ICCV_2019/html/Das_Toyota_Smarthome_Real-World_Activities_of_Daily_Living_ICCV_2019_paper.html) |
| **Toyota Smarthome Untrimmed** | [![Paper](https://img.shields.io/badge/Read%20on%20arXiv-📄-green.svg)](https://arxiv.org/abs/2010.14982) |
## Toyota Smarthome Trimmed
Toyota Smarthome Trimmed is designed for the activity classification task, covering 31 activity classes. The videos were clipped per activity, resulting in a total of 16,115 short RGB+D video samples. Activities were performed in a natural manner. As a result, the dataset poses a unique combination of challenges: high intra-class variation, high class imbalance, and activities with similar motion and high duration variance. Activities were annotated with both coarse and fine-grained labels. These characteristics differentiate Toyota Smarthome Trimmed from other activity-classification datasets.
```
📂 Toyota_Smarthome_Trimmed
├── 📁 csvs
│ ├── 📁 cross_subject
│ │ ├── train.csv / val.csv / test.csv
│ ├── 📁 cross_view_1
│ │ ├── train.csv / val.csv / test.csv
│ ├── 📁 cross_view_2
│ │ ├── train.csv / val.csv / test.csv
├── 📁 raw_data
│ ├── rgb.zip
│ ├── skeletons.zip
├── 📁 cropped_224x224_data.zip
│ ├── rgb.zip
│ ├── skeletons.zip
```
## Toyota Smarthome Untrimmed (TSU)
Toyota Smarthome Untrimmed (TSU) targets the activity detection task in long untrimmed videos. Accordingly, in TSU, we kept the entire recording whenever the person is visible. The dataset contains 536 videos with an average duration of 21 minutes. Since it is based on the same video footage as the Toyota Smarthome Trimmed version, it features the same challenges and introduces additional ones. To better tackle the real-world challenges of untrimmed video, we densely annotated the dataset with 51 activities.
```
📂 Toyota_Smarthome_Untrimmed
├── Annotations.zip
├── Videos_mp4.zip
├── Skeletons.zip
```
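The repository is gated and ships raw archives rather than a ready-made `datasets` loader, so a minimal download sketch (assuming you have accepted the license above and authenticated with a Hugging Face token) could look like the following; extracting the zip archives afterwards is left to the user.
```python
# Minimal sketch: download the gated archives with huggingface_hub.
# Assumes the license was accepted and `huggingface-cli login` was run beforehand.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="dreilly/Toyota-Smarthome",
    repo_type="dataset",
)
print(local_dir)  # the zip archives listed above can then be unpacked locally
```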
|
IanAndJohn/iphone_img | IanAndJohn | "2024-11-25T20:24:18Z" | 0 | 0 | [
"region:us"
] | null | "2024-11-25T20:22:51Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: Latitude
dtype: float64
- name: Longitude
dtype: float64
splits:
- name: train
num_bytes: 306873415.0
num_examples: 100
download_size: 306697640
dataset_size: 306873415.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- |
TenzinGayche/benchmark_melong_261123 | TenzinGayche | "2024-11-25T20:24:28Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:24:26Z" | ---
dataset_info:
features:
- name: Source
dtype: string
- name: Target
dtype: string
- name: File_Name
dtype: string
- name: Machine Aligned
dtype: bool
- name: en_inference
dtype: string
- name: bo_inference
dtype: string
- name: bleu
dtype: float64
- name: boen_bleu
dtype: float64
- name: enbo_bleu
dtype: float64
splits:
- name: train
num_bytes: 7967715
num_examples: 9118
download_size: 3580891
dataset_size: 7967715
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Mateusz1017/company_reports_features_combined_3 | Mateusz1017 | "2024-11-25T21:31:35Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:25:47Z" | ---
dataset_info:
features:
- name: __index_level_0__
dtype: float64
- name: features
sequence:
sequence: float64
- name: cik
dtype: string
- name: year
dtype: string
- name: section_1
dtype: string
- name: company_name
dtype: string
- name: sic_code
dtype: string
- name: input_ids
sequence: int64
- name: ticker
sequence: string
- name: returns
dtype: float64
- name: logged_monthly_returns_matrix
sequence: float64
- name: input_ids_length
dtype: float64
splits:
- name: train
num_bytes: 10037779130
num_examples: 8840
download_size: 4789061215
dataset_size: 10037779130
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Trelis/bird-songs | Trelis | "2024-11-25T22:27:20Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:27:21Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: name
dtype: string
- name: url
dtype: string
- name: license
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 5208
num_examples: 42
- name: validation
num_bytes: 612
num_examples: 5
download_size: 6467
dataset_size: 5820
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Newvel/narrativeqa_filtered_unique | Newvel | "2024-11-25T20:29:50Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:29:16Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 383221308
num_examples: 1102
- name: test
num_bytes: 117383252
num_examples: 355
- name: validation
num_bytes: 39413163
num_examples: 115
download_size: 301215564
dataset_size: 540017723
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
mooshiponz/tester | mooshiponz | "2024-11-25T23:13:19Z" | 0 | 0 | [
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:summarization",
"language:en",
"license:unknown",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"legal"
] | [
"question-answering",
"sentence-similarity",
"summarization"
] | "2024-11-25T20:32:53Z" | ---
license: unknown
task_categories:
- question-answering
- sentence-similarity
- summarization
language:
- en
tags:
- legal
pretty_name: Test_terers
size_categories:
- 100K<n<1M
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
akshaya-244/MathVision-224x224 | akshaya-244 | "2024-11-25T20:34:37Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:34:36Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: image
dtype: string
- name: decoded_image
dtype: image
- name: answer
dtype: string
- name: solution
dtype: string
- name: level
dtype: int64
- name: subject
dtype: string
splits:
- name: nano
num_bytes: 3725000.0
num_examples: 152
download_size: 3715213
dataset_size: 3725000.0
configs:
- config_name: default
data_files:
- split: nano
path: data/nano-*
---
|
allenai/tulu-3-sft-olmo-mixture | allenai | "2024-11-26T00:04:03Z" | 0 | 0 | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"multilinguality:multilingual",
"source_datasets:allenai/coconot",
"source_datasets:ai2-adapt-dev/flan_v2_converted",
"source_datasets:HuggingFaceH4/no_robots",
"source_datasets:OpenAssistant/oasst1",
"source_datasets:allenai/tulu-3-personas-math",
"source_datasets:allenai/tulu-3-sft-personas-math-grade",
"source_datasets:allenai/tulu-3-sft-personas-code",
"source_datasets:allenai/tulu-3-personas-algebra",
"source_datasets:allenai/tulu-3-sft-personas-instruction-following",
"source_datasets:AI-MO/NuminaMath-TIR",
"source_datasets:allenai/wildguardmix",
"source_datasets:allenai/wildjailbreak",
"source_datasets:allenai/tulu-3-hard-coded",
"source_datasets:CohereForAI/aya_dataset",
"source_datasets:allenai/WildChat-1M",
"source_datasets:LipengCS/Table-GPT",
"source_datasets:allenai/SciRIFF",
"language:amh",
"language:arb",
"language:ary",
"language:ars",
"language:acq",
"language:arz",
"language:apc",
"language:ben",
"language:ceb",
"language:dan",
"language:deu",
"language:ell",
"language:eng",
"language:eus",
"language:fil",
"language:fin",
"language:fra",
"language:gle",
"language:guj",
"language:hat",
"language:hau",
"language:hin",
"language:hun",
"language:ibo",
"language:ind",
"language:ita",
"language:jav",
"language:jpn",
"language:kan",
"language:kir",
"language:kor",
"language:kur",
"language:lit",
"language:mal",
"language:mar",
"language:mlg",
"language:msa",
"language:mya",
"language:nep",
"language:nld",
"language:nso",
"language:nya",
"language:pan",
"language:pes",
"language:pol",
"language:por",
"language:pus",
"language:rus",
"language:sin",
"language:sna",
"language:snd",
"language:som",
"language:spa",
"language:sqi",
"language:srp",
"language:sun",
"language:swa",
"language:swe",
"language:tam",
"language:tel",
"language:tha",
"language:tur",
"language:ukr",
"language:urd",
"language:vie",
"language:wol",
"language:xho",
"language:yor",
"language:zho",
"language:zul",
"license:odc-by",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"other"
] | "2024-11-25T20:34:50Z" | ---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language:
- amh
- arb
- ary
- ars
- acq
- arz
- apc
- ben
- ceb
- dan
- deu
- ell
- eng
- eus
- fil
- fin
- fra
- gle
- guj
- hat
- hau
- hin
- hun
- ibo
- ind
- ita
- jav
- jpn
- kan
- kir
- kor
- kur
- lit
- mal
- mar
- mlg
- msa
- mya
- nep
- nld
- nso
- nya
- pan
- pes
- pol
- por
- pus
- rus
- sin
- sna
- snd
- som
- spa
- sqi
- srp
- sun
- swa
- swe
- tam
- tel
- tha
- tur
- ukr
- urd
- vie
- wol
- xho
- yor
- zho
- zul
license: odc-by
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- allenai/coconot
- ai2-adapt-dev/flan_v2_converted
- HuggingFaceH4/no_robots
- OpenAssistant/oasst1
- allenai/tulu-3-personas-math
- allenai/tulu-3-sft-personas-math-grade
- allenai/tulu-3-sft-personas-code
- allenai/tulu-3-personas-algebra
- allenai/tulu-3-sft-personas-instruction-following
- AI-MO/NuminaMath-TIR
- allenai/wildguardmix
- allenai/wildjailbreak
- allenai/tulu-3-hard-coded
- CohereForAI/aya_dataset
- allenai/WildChat-1M
- LipengCS/Table-GPT
- allenai/SciRIFF
task_categories:
- other
dataset_info:
features:
- name: id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2914250826.5647593
num_examples: 939343
download_size: 1412954868
dataset_size: 2914250826.5647593
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
*Note that this collection is licensed under the ODC-BY-1.0 license; different licenses apply to subsets of the data, and some portions of the dataset are non-commercial. We present the mixture as a research artifact.*
The OLMo v2 SFT mixture was used to train the [OLMo models](https://huggingface.co/collections/allenai/olmo-v2-models-6744f0938a9e7c6340140de8).
It contains 939,344 samples from the following sets:
- [CoCoNot](https://huggingface.co/datasets/allenai/coconot) (ODC-BY-1.0), 10,983 prompts (Brahman et al., 2024)
- [FLAN v2](https://github.com/google-research/FLAN/tree/main) via [`ai2-adapt-dev/flan_v2_converted`](https://huggingface.co/datasets/ai2-adapt-dev/flan_v2_converted), 89,982 prompts (Longpre et al., 2023)
- [No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) (CC-BY-NC-4.0), 9,500 prompts (Rajani et al. 2023)
- [OpenAssistant Guanaco](https://huggingface.co/datasets/OpenAssistant/oasst1) (Apache 2.0), 7,132 prompts (Kopf et al., 2024)
- [Tulu 3 Persona MATH](https://huggingface.co/datasets/allenai/tulu-3-personas-math) (ODC-BY-1.0), 149,960 prompts
- [Tulu 3 Persona GSM](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-math-grade) (ODC-BY-1.0), 49,980 prompts
- [Tulu 3 Persona Python](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-code) (ODC-BY-1.0), 34,999 prompts
- [Tulu 3 Persona Algebra](https://huggingface.co/datasets/allenai/tulu-3-personas-algebra) (ODC-BY-1.0), 20,000 prompts
- [Tulu 3 Persona IF](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-instruction-following) (ODC-BY-1.0), 29,980 prompts
- [NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) (Apache 2.0), 64,312 prompts (Beeching et al. 2024)
- [Tulu 3 WildGuardMix](https://huggingface.co/datasets/allenai/wildguardmix) (Apache 2.0), 50,000 prompts (Han et al., 2024)
- [Tulu 3 WildJailbreak](https://huggingface.co/datasets/allenai/wildjailbreak) (ODC-BY-1.0), 50,000 prompts (Wildteaming, 2024)
- [Tulu 3 Hardcoded](https://huggingface.co/datasets/allenai/tulu-3-hard-coded) (CC-BY-4.0), 240 prompts
- [Aya](https://huggingface.co/datasets/CohereForAI/aya_dataset) (Apache 2.0), 100,000 prompts (Singh et al., 2024)
- [WildChat GPT-4](https://huggingface.co/datasets/allenai/WildChat-1M) (ODC-BY-1.0), 100,000 prompts (Zhao et al., 2024)
- [TableGPT](https://huggingface.co/datasets/LipengCS/Table-GPT) (MIT), 5,000 prompts (Zha et al., 2023)
- [SciRIFF](https://huggingface.co/datasets/allenai/SciRIFF) (ODC-BY-1.0), 10,000 prompts (Wadden et al., 2024)
- [Evol CodeAlpaca](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) (Apache 2.0), 107,276 prompts (Luo et al., 2023)
## Dataset Structure
Each example in the dataset contains the standard instruction-tuning data points, as follows (a short loading sketch is given after the list):
- `id` (str): a unique identifier
- `messages` (list): message format used for supervised fine-tuning (this contains user prompt and assistant responses)
- `source` (str): the source dataset for the given sample
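As a quick illustration (a minimal sketch, not part of the original card), the mixture and the fields above can be inspected with the `datasets` library:
```python
# Minimal sketch: load the mixture and inspect one sample's fields.
from datasets import load_dataset

ds = load_dataset("allenai/tulu-3-sft-olmo-mixture", split="train")
example = ds[0]
print(example["id"], example["source"])
for message in example["messages"]:          # list of {"role", "content"} dicts
    print(message["role"], message["content"][:80])
```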
### Model Family
| **Stage** | **OLMo-2-1124-7B** | **OLMo-2-1124-13B** |
|----------------------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
| **Base Model** | [OLMo-2-1124-7B](https://huggingface.co/allenai/OLMo2-7B-1124) | [OLMo-2-1124-13B](https://huggingface.co/allenai/OLMo2-13B-1124) |
| **SFT**              | [OLMo-2-1124-7B-SFT](https://huggingface.co/allenai/OLMo-2-1124-7B-SFT) | [OLMo-2-1124-13B-SFT](https://huggingface.co/allenai/OLMo-2-1124-13B-SFT) |
| **DPO**              | [OLMo-2-1124-7B-DPO](https://huggingface.co/allenai/OLMo-2-1124-7B-DPO) | [OLMo-2-1124-13B-DPO](https://huggingface.co/allenai/OLMo-2-1124-13B-DPO) |
## License
This dataset is licensed under ODC-BY-1.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use). This dataset includes output data generated from third party models that are subject to separate terms governing their use. For more information on license and terms, consult each subset linked above.
## Citation
If OLMo or any of the related materials were helpful to your work, please cite: |
Talha185/talha-GenAI-Dataset | Talha185 | "2024-11-25T20:37:10Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:36:41Z" | ---
dataset_info:
features:
- name: image
dtype:
array3_d:
shape:
- 4032
- 3024
- 3
dtype: uint8
- name: label
dtype: string
splits:
- name: train
num_bytes: 817136735
num_examples: 12
download_size: 241057706
dataset_size: 817136735
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Jason-sjh/spetial_token_only_ger_dgs_1124_v2 | Jason-sjh | "2024-11-25T20:44:28Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:44:21Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 3229029
num_examples: 7096
- name: dev
num_bytes: 230327
num_examples: 519
- name: test
num_bytes: 278088
num_examples: 642
download_size: 806768
dataset_size: 3737444
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
|
mlfoundations-dev/airoboros_stage_2_gpt-4o-mini_no_filter | mlfoundations-dev | "2024-11-25T20:48:51Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:48:28Z" | ---
dataset_info:
features:
- name: min_docsearch_score
dtype: float64
- name: airoboros_subset
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: embedding
sequence: float64
- name: too_similar
dtype: bool
- name: similar_text
dtype: string
- name: similar_text_distance
dtype: float64
splits:
- name: train
num_bytes: 525472901
num_examples: 134456
download_size: 501559503
dataset_size: 525472901
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
P-H-B-D-a16z/ViZDoom-Basic | P-H-B-D-a16z | "2024-11-25T22:58:36Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:52:36Z" | ---
dataset_info:
features:
- name: episode_id
dtype: int64
- name: frames
dtype: binary
- name: actions
dtype: int64
- name: health
dtype: int64
- name: step_ids
dtype: int64
splits:
- name: train
num_bytes: 1399327657
num_examples: 247927
download_size: 1337637652
dataset_size: 1399327657
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
duarteocarmo/farinando | duarteocarmo | "2024-11-25T20:53:17Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:53:14Z" | ---
dataset_info:
features:
- name: conversations
struct:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1026349
num_examples: 191
- name: test
num_bytes: 121486
num_examples: 22
download_size: 388576
dataset_size: 1147835
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Jotschi/wikipedia_knowledge_base_en | Jotschi | "2024-11-25T21:43:05Z" | 0 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:text-retrieval",
"annotations_creators:machine-generated",
"language:en",
"license:cc-by-sa-3.0",
"region:us",
"english",
"synthetic"
] | [
"text-generation",
"text2text-generation",
"text-retrieval"
] | "2024-11-25T20:58:53Z" | ---
license: cc-by-sa-3.0
language:
- en
tags:
- english
- synthetic
annotations_creators:
- machine-generated
pretty_name: Wikipedia Knowledge Base
size_categories:
- n<117M
task_categories:
- text-generation
- text2text-generation
- text-retrieval
---
# Dataset Card for Wikipedia Knowledge Base
The dataset contains 117,364,716 facts extracted from a subset of selected Wikipedia articles.
## Dataset Description
- **Curated by:** Jotschi
- **Language(s) (NLP):** English
## Dataset Creation
The dataset was created by using an LLM to process a subset of the [English Wikipedia 20231101.en dataset](https://huggingface.co/datasets/wikimedia/wikipedia/tree/main/20231101.en).
```json
{
"language": null,
"title": "Artificial intelligence",
"url": "https://en.wikipedia.org/wiki/Artificial%20intelligence",
"id": "1164",
"facts": [
{
"text": "Two most widely used AI textbooks in 2023"
},
{
"text": "Four most widely used AI textbooks in 2008"
},
{
"text": "Convolutional Neural Networks (CNN) introduced by Kunihiko Fukushima in 1980"
},
{
"text": "AI and machine learning technology is used in most essential applications of 2020s."
},
{
"text": "In a 2017 survey, one in five companies reported they had incorporated AI in some offerings or processes."
},
{
"text": "AI algorithms experience exponential slowdown for large problems due to combinatorial explosion."
},
{
"text": "Humans primarily use intuitive judgments rather than step-by-step deduction for problem-solving."
},
{
"text": "In classical planning, the agent knows exactly what the effect of any action will be."
},
{
"text": "In most real-world problems, the agent may not know for certain what will happen after each possible action (it is not deterministic)."
},
{
"text": "The space of possible future actions and situations is typically intractably large."
},
{
"text": "A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way."
},
{
"text": "A Markov decision process also has a reward function that supplies the utility of each state and the cost of each action."
},
{
"text": "AI & ML in Fusion was published as a video lecture"
},
{
"text": "David H. Autor's 'Why Are There Still So Many Jobs? The History and Future of Workplace Automation' (2015) discusses workplace automation"
},
{
"text": "Margaret Boden's 'Mind As Machine' (2006) explores artificial intelligence"
}
…
]
}
```
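As an illustrative sketch, assuming each article is stored as a JSON document with the shape shown above (the file name below is a placeholder, not part of the dataset), the facts of a record can be iterated over like this:
```python
# Minimal sketch: iterate over the facts of one record.
# "article_1164.json" is a hypothetical file name used for illustration only.
import json

with open("article_1164.json", "r", encoding="utf-8") as f:
    record = json.load(f)

for fact in record["facts"]:
    print(f"{record['title']}: {fact['text']}")
```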
## Disclaimer
Please note that the LLM-based extraction can distort the facts, and no guarantee can be made regarding their correctness.
|
anatoliifesiuk/finetuning_demo | anatoliifesiuk | "2024-11-25T20:59:40Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T20:59:36Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 68892
num_examples: 106
download_size: 21443
dataset_size: 68892
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
IEEZ/questions | IEEZ | "2024-11-25T21:05:34Z" | 0 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:02:25Z" | ---
license: mit
---
|
LunarMartins/vozpesadelo | LunarMartins | "2024-11-25T22:00:39Z" | 0 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-11-25T21:04:39Z" | ---
license: openrail
---
|
haggs/test | haggs | "2024-11-25T21:10:38Z" | 0 | 0 | [
"task_categories:token-classification",
"language:aa",
"license:apache-2.0",
"size_categories:n>1T",
"region:us",
"chemistry",
"code",
"music"
] | [
"token-classification"
] | "2024-11-25T21:07:31Z" | ---
license: apache-2.0
task_categories:
- token-classification
language:
- aa
tags:
- chemistry
- code
- music
pretty_name: prettytest
size_categories:
- n>1T
--- |
HumanLLMs/log | HumanLLMs | "2024-11-25T21:08:34Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:08:33Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: selected_model
dtype: string
- name: pair
dtype: string
- name: submission_time
dtype: string
splits:
- name: train
num_bytes: 1317
num_examples: 10
download_size: 2877
dataset_size: 1317
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reflection-gen/ds_chat_pos_reflct_rmsprop_iter3_sppo_hard_new_cn_rl_oj_iter3-pos-bin-reflct | reflection-gen | "2024-11-25T21:12:17Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:12:16Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: test
dtype: string
- name: reflection_generate_0
dtype: string
- name: reflection_generate_0_score
dtype: int64
- name: reflection_traceback_0
dtype: string
- name: reflection_generate_1
dtype: string
- name: reflection_generate_1_score
dtype: int64
- name: reflection_traceback_1
dtype: string
- name: reflection_generate_2
dtype: string
- name: reflection_generate_2_score
dtype: int64
- name: reflection_traceback_2
dtype: string
- name: reflection_generate_3
dtype: string
- name: reflection_generate_3_score
dtype: int64
- name: reflection_traceback_3
dtype: string
- name: average_reflection_score
dtype: float64
- name: chosen_average_reflection_score
dtype: float64
splits:
- name: train
num_bytes: 31596224
num_examples: 2778
download_size: 10662211
dataset_size: 31596224
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_chat_pos_reflct_rmsprop_iter3_sppo_hard_new_cn_rl_oj_iter3-pos-bin-reflct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SAVE0x0/reddit_dataset_218 | SAVE0x0 | "2024-11-25T23:02:00Z" | 0 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2024-11-25T21:15:37Z" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** SAVE0x0/reddit_dataset_218
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 0
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: mostly English, though the data can be multilingual because of the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
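As a hedged sketch (the `train` split name, the literal `"post"` value for `dataType`, and the `r/`-prefixed subreddit naming are assumptions, not confirmed by the card), the fields above could be used to filter the stream once loaded:
```python
# Minimal sketch: keep only posts from one subreddit.
# Split name and field values are assumptions; adjust after inspecting the data.
from datasets import load_dataset

ds = load_dataset("SAVE0x0/reddit_dataset_218", split="train")
posts = ds.filter(
    lambda row: row["dataType"] == "post" and row["communityName"] == "r/politics"
)
print(posts.num_rows)
```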
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of social media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{SAVE0x02024datauniversereddit_dataset_218,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={SAVE0x0},
year={2024},
url={https://huggingface.co/datasets/SAVE0x0/reddit_dataset_218},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 30818900
- **Date Range:** 2010-04-28 to 2024-11-22
- **Last Updated:** 2024-11-25
### Data Distribution
- Posts: 4.61%
- Comments: 95.39%
### Top 10 Subreddits
For full statistics, please refer to the `reddit_stats.json` file in the repository.
| Rank | Item | Percentage |
|------|------|------------|
| 1 | r/AmItheAsshole | 3.09% |
| 2 | r/politics | 2.89% |
| 3 | r/AskReddit | 2.76% |
| 4 | r/wallstreetbets | 2.72% |
| 5 | r/teenagers | 2.34% |
| 6 | r/NoStupidQuestions | 2.15% |
| 7 | r/nfl | 2.02% |
| 8 | r/pics | 1.93% |
| 9 | r/mildlyinfuriating | 1.91% |
| 10 | r/gaming | 1.85% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2024-11-25 | 30818900 | 30818900 |
|
aalexchengg/cryptonite | aalexchengg | "2024-11-25T21:18:28Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:18:24Z" | ---
dataset_info:
features:
- name: publisher
dtype: string
- name: date
dtype: string
- name: author
dtype: string
- name: number
dtype: string
- name: orientation
dtype: string
- name: clue
dtype: string
- name: answer
dtype: string
- name: enumeration
dtype: string
- name: quick
dtype: string
- name: sub_publisher
dtype: string
splits:
- name: train
num_bytes: 64412125
num_examples: 470804
- name: test
num_bytes: 3584226
num_examples: 26157
- name: val
num_bytes: 3578419
num_examples: 26156
download_size: 26392291
dataset_size: 71574770
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
---
|
open-llm-leaderboard/oopere__pruned60-llama-1b-details | open-llm-leaderboard | "2024-11-25T21:24:00Z" | 0 | 0 | [
"region:us"
] | null | "2024-11-25T21:20:54Z" | ---
pretty_name: Evaluation run of oopere/pruned60-llama-1b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [oopere/pruned60-llama-1b](https://huggingface.co/oopere/pruned60-llama-1b)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/oopere__pruned60-llama-1b-details\"\
,\n\tname=\"oopere__pruned60-llama-1b__leaderboard_bbh_boolean_expressions\",\n\t\
split=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from\
\ run 2024-11-25T21-20-53.829333](https://huggingface.co/datasets/open-llm-leaderboard/oopere__pruned60-llama-1b-details/blob/main/oopere__pruned60-llama-1b/results_2024-11-25T21-20-53.829333.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.30016863406408095,\n \"acc_norm_stderr,none\"\
: 0.004973624525121431,\n \"prompt_level_loose_acc,none\": 0.1367837338262477,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.014787002800682885,\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0,\n\
\ \"inst_level_loose_acc,none\": 0.2446043165467626,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.133086876155268,\n \"prompt_level_strict_acc_stderr,none\": 0.014617009342904459,\n\
\ \"inst_level_strict_acc,none\": 0.23261390887290168,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc,none\": 0.11727061170212766,\n\
\ \"acc_stderr,none\": 0.00293330704065535,\n \"alias\": \"\
leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.2966498871723659,\n \"acc_norm_stderr,none\": 0.0056913336275985485,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5187165775401069,\n\
\ \"acc_norm_stderr,none\": 0.03663608375537843\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.2,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.304,\n\
\ \"acc_norm_stderr,none\": 0.02915021337415965\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.468,\n \"acc_norm_stderr,none\":\
\ 0.03162125257572558\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.084,\n \"acc_norm_stderr,none\": 0.017578738526776348\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.516,\n \
\ \"acc_norm_stderr,none\": 0.03166998503010743\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\":\
\ 0.025537121574548162\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.208,\n \"acc_norm_stderr,none\": 0.02572139890141637\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.052,\n\
\ \"acc_norm_stderr,none\": 0.014070391025641678\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.2808219178082192,\n \"acc_norm_stderr,none\"\
: 0.037320694849458984\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.172,\n \"acc_norm_stderr,none\":\
\ 0.02391551394448624\n },\n \"leaderboard_bbh_ruin_names\": {\n \
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.276,\n \"acc_norm_stderr,none\": 0.02832853727421142\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.5393258426966292,\n\
\ \"acc_norm_stderr,none\": 0.03746587736387869\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.224,\n\
\ \"acc_norm_stderr,none\": 0.026421361687347884\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.216,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.128,\n \"acc_norm_stderr,none\":\
\ 0.021172081336336534\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\":\
\ 0.029752391824475363\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.24916107382550334,\n\
\ \"acc_norm_stderr,none\": 0.01254098574419822,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2474747474747475,\n \"acc_norm_stderr,none\": 0.030746300742124484\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.25824175824175827,\n\
\ \"acc_norm_stderr,none\": 0.01874762138022973\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.23883928571428573,\n \"acc_norm_stderr,none\"\
: 0.02016681446395684\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.133086876155268,\n \"prompt_level_strict_acc_stderr,none\": 0.014617009342904457,\n\
\ \"inst_level_strict_acc,none\": 0.23261390887290168,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.1367837338262477,\n \"prompt_level_loose_acc_stderr,none\": 0.014787002800682885,\n\
\ \"inst_level_loose_acc,none\": 0.2446043165467626,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0,\n \"alias\": \" - leaderboard_math_hard\"\n },\n \
\ \"leaderboard_math_algebra_hard\": {\n \"alias\": \" - leaderboard_math_algebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_counting_and_prob_hard\": {\n \
\ \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\"\
: \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\":\
\ 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\"\
: 0.11727061170212766,\n \"acc_stderr,none\": 0.00293330704065535\n \
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4074074074074074,\n\
\ \"acc_norm_stderr,none\": 0.01732644518538479,\n \"alias\"\
: \" - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\"\
: {\n \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \
\ \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\"\
: \" - leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.234375,\n\
\ \"acc_norm_stderr,none\": 0.02652733398834892\n },\n \
\ \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\":\
\ 0.03167708558254714\n }\n },\n \"leaderboard\": {\n \"acc_norm,none\"\
: 0.30016863406408095,\n \"acc_norm_stderr,none\": 0.004973624525121431,\n\
\ \"prompt_level_loose_acc,none\": 0.1367837338262477,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.014787002800682885,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0,\n \"inst_level_loose_acc,none\": 0.2446043165467626,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_strict_acc,none\": 0.133086876155268,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.014617009342904459,\n \"inst_level_strict_acc,none\"\
: 0.23261390887290168,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"acc,none\": 0.11727061170212766,\n \"acc_stderr,none\": 0.00293330704065535,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \
\ \"acc_norm,none\": 0.2966498871723659,\n \"acc_norm_stderr,none\": 0.0056913336275985485,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"\
acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5187165775401069,\n \"acc_norm_stderr,none\"\
: 0.03663608375537843\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.2,\n \"acc_norm_stderr,none\": 0.02534897002097912\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\"\
: 0.304,\n \"acc_norm_stderr,none\": 0.02915021337415965\n },\n \"\
leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.468,\n \"acc_norm_stderr,none\": 0.03162125257572558\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.084,\n \"acc_norm_stderr,none\": 0.017578738526776348\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.516,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.204,\n \"acc_norm_stderr,none\": 0.025537121574548162\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\": 0.02572139890141637\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.052,\n \"acc_norm_stderr,none\": 0.014070391025641678\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.2808219178082192,\n\
\ \"acc_norm_stderr,none\": 0.037320694849458984\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.172,\n \"acc_norm_stderr,none\": 0.02391551394448624\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.276,\n \"acc_norm_stderr,none\": 0.02832853727421142\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.5393258426966292,\n \"acc_norm_stderr,none\"\
: 0.03746587736387869\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.224,\n \"acc_norm_stderr,none\": 0.026421361687347884\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.216,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.128,\n \"acc_norm_stderr,none\": 0.021172081336336534\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.24916107382550334,\n\
\ \"acc_norm_stderr,none\": 0.01254098574419822,\n \"alias\": \" -\
\ leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\"\
: \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2474747474747475,\n\
\ \"acc_norm_stderr,none\": 0.030746300742124484\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.25824175824175827,\n \"acc_norm_stderr,none\": 0.01874762138022973\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.23883928571428573,\n \"acc_norm_stderr,none\"\
: 0.02016681446395684\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.133086876155268,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.014617009342904457,\n \
\ \"inst_level_strict_acc,none\": 0.23261390887290168,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.1367837338262477,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.014787002800682885,\n \"inst_level_loose_acc,none\"\
: 0.2446043165467626,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.0,\n \
\ \"exact_match_stderr,none\": 0.0,\n \"alias\": \" - leaderboard_math_hard\"\
\n },\n \"leaderboard_math_algebra_hard\": {\n \"alias\": \" - leaderboard_math_algebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.0,\n \
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\": {\n\
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.11727061170212766,\n\
\ \"acc_stderr,none\": 0.00293330704065535\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4074074074074074,\n \"acc_norm_stderr,none\"\
: 0.01732644518538479,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.234375,\n\
\ \"acc_norm_stderr,none\": 0.02652733398834892\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n }\n}\n```"
repo_url: https://huggingface.co/oopere/pruned60-llama-1b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_navigate
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_snarks
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_gpqa_extended
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_gpqa_main
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_ifeval
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_ifeval_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_mmlu_pro
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_musr_object_placements
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T21-20-53.829333.jsonl'
- config_name: oopere__pruned60-llama-1b__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_25T21_20_53.829333
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T21-20-53.829333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T21-20-53.829333.jsonl'
---
# Dataset Card for Evaluation run of oopere/pruned60-llama-1b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [oopere/pruned60-llama-1b](https://huggingface.co/oopere/pruned60-llama-1b)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/oopere__pruned60-llama-1b-details",
name="oopere__pruned60-llama-1b__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
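Any of the configuration names listed in the `configs` section above can be substituted for the `name` argument. As a minimal sketch (assuming only the standard `datasets` API and the configuration names exactly as listed in this card), the per-sample IFEval details can be loaded and inspected like this:
```python
from datasets import load_dataset

# Per-sample IFEval details; the config name is copied verbatim from the
# configuration list above, and "latest" points at the most recent run.
ifeval = load_dataset(
    "open-llm-leaderboard/oopere__pruned60-llama-1b-details",
    name="oopere__pruned60-llama-1b__leaderboard_ifeval",
    split="latest",
)

print(ifeval.features)  # column layout differs from task to task
print(len(ifeval))      # number of evaluated prompts in this split
print(ifeval[0])        # first evaluated sample (field names vary by task)
```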
## Latest results
These are the [latest results from run 2024-11-25T21-20-53.829333](https://huggingface.co/datasets/open-llm-leaderboard/oopere__pruned60-llama-1b-details/blob/main/oopere__pruned60-llama-1b/results_2024-11-25T21-20-53.829333.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split of the corresponding configuration):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.30016863406408095,
"acc_norm_stderr,none": 0.004973624525121431,
"prompt_level_loose_acc,none": 0.1367837338262477,
"prompt_level_loose_acc_stderr,none": 0.014787002800682885,
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"inst_level_loose_acc,none": 0.2446043165467626,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.133086876155268,
"prompt_level_strict_acc_stderr,none": 0.014617009342904459,
"inst_level_strict_acc,none": 0.23261390887290168,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.11727061170212766,
"acc_stderr,none": 0.00293330704065535,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.2966498871723659,
"acc_norm_stderr,none": 0.0056913336275985485,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.2,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.084,
"acc_norm_stderr,none": 0.017578738526776348
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.516,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.052,
"acc_norm_stderr,none": 0.014070391025641678
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2808219178082192,
"acc_norm_stderr,none": 0.037320694849458984
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.172,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.276,
"acc_norm_stderr,none": 0.02832853727421142
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.5393258426966292,
"acc_norm_stderr,none": 0.03746587736387869
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.224,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.216,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.128,
"acc_norm_stderr,none": 0.021172081336336534
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.24916107382550334,
"acc_norm_stderr,none": 0.01254098574419822,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2474747474747475,
"acc_norm_stderr,none": 0.030746300742124484
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.25824175824175827,
"acc_norm_stderr,none": 0.01874762138022973
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.23883928571428573,
"acc_norm_stderr,none": 0.02016681446395684
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.133086876155268,
"prompt_level_strict_acc_stderr,none": 0.014617009342904457,
"inst_level_strict_acc,none": 0.23261390887290168,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.1367837338262477,
"prompt_level_loose_acc_stderr,none": 0.014787002800682885,
"inst_level_loose_acc,none": 0.2446043165467626,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.11727061170212766,
"acc_stderr,none": 0.00293330704065535
},
"leaderboard_musr": {
"acc_norm,none": 0.4074074074074074,
"acc_norm_stderr,none": 0.01732644518538479,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.234375,
"acc_norm_stderr,none": 0.02652733398834892
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
}
},
"leaderboard": {
"acc_norm,none": 0.30016863406408095,
"acc_norm_stderr,none": 0.004973624525121431,
"prompt_level_loose_acc,none": 0.1367837338262477,
"prompt_level_loose_acc_stderr,none": 0.014787002800682885,
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"inst_level_loose_acc,none": 0.2446043165467626,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.133086876155268,
"prompt_level_strict_acc_stderr,none": 0.014617009342904459,
"inst_level_strict_acc,none": 0.23261390887290168,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.11727061170212766,
"acc_stderr,none": 0.00293330704065535,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.2966498871723659,
"acc_norm_stderr,none": 0.0056913336275985485,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.2,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.084,
"acc_norm_stderr,none": 0.017578738526776348
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.516,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.052,
"acc_norm_stderr,none": 0.014070391025641678
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2808219178082192,
"acc_norm_stderr,none": 0.037320694849458984
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.172,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.276,
"acc_norm_stderr,none": 0.02832853727421142
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.5393258426966292,
"acc_norm_stderr,none": 0.03746587736387869
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.224,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.216,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.128,
"acc_norm_stderr,none": 0.021172081336336534
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.24916107382550334,
"acc_norm_stderr,none": 0.01254098574419822,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2474747474747475,
"acc_norm_stderr,none": 0.030746300742124484
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.25824175824175827,
"acc_norm_stderr,none": 0.01874762138022973
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.23883928571428573,
"acc_norm_stderr,none": 0.02016681446395684
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.133086876155268,
"prompt_level_strict_acc_stderr,none": 0.014617009342904457,
"inst_level_strict_acc,none": 0.23261390887290168,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.1367837338262477,
"prompt_level_loose_acc_stderr,none": 0.014787002800682885,
"inst_level_loose_acc,none": 0.2446043165467626,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.11727061170212766,
"acc_stderr,none": 0.00293330704065535
},
"leaderboard_musr": {
"acc_norm,none": 0.4074074074074074,
"acc_norm_stderr,none": 0.01732644518538479,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.234375,
"acc_norm_stderr,none": 0.02652733398834892
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
}
}
```
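The aggregated numbers above come from the results file linked at the top of this section; that file can also be fetched directly. A minimal sketch, assuming `huggingface_hub` is installed and using the repo id and file path exactly as they appear in the link (the internal key layout is not guaranteed to match the summary printed here, so only the top-level keys are inspected):
```python
import json

from huggingface_hub import hf_hub_download

# Download the raw results JSON referenced in the "Latest results" link above.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/oopere__pruned60-llama-1b-details",
    filename="oopere__pruned60-llama-1b/results_2024-11-25T21-20-53.829333.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Inspect the top-level keys rather than assuming a fixed schema.
print(sorted(results.keys()))
```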
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
addidas23/categorized_articles | addidas23 | "2024-11-26T01:27:28Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:24:55Z" | ---
dataset_info:
features:
- name: new_title
dtype: string
- name: Date
dtype: string
- name: GOID
dtype: int64
- name: category
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: sentiment
list:
- name: label
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 65613413
num_examples: 282624
download_size: 33050412
dataset_size: 65613413
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Cnam-LMSSC/french_librispeech_vibravoxed_chunk_4 | Cnam-LMSSC | "2024-11-25T22:07:46Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:29:18Z" | ---
dataset_info:
features:
- name: airborne
dtype:
audio:
sampling_rate: 16000
- name: transcript
dtype: string
- name: speaker_id
dtype: string
- name: throat_microphone_simulated
dtype:
audio:
sampling_rate: 16000
- name: rigid_in_ear_microphone_simulated
dtype:
audio:
sampling_rate: 16000
- name: soft_in_ear_microphone_simulated
dtype:
audio:
sampling_rate: 16000
- name: temple_vibration_pickup_simulated
dtype:
audio:
sampling_rate: 16000
- name: forehead_accelerometer_simulated
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 72005271518.0
num_examples: 25000
download_size: 66892436642
dataset_size: 72005271518.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
brianmatzelle/destiny-hasan_piker-conversations-100k | brianmatzelle | "2024-11-26T00:29:07Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:30:13Z" | ---
dataset_info:
features:
- name: conversation
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 72831725
num_examples: 100438
download_size: 19572069
dataset_size: 72831725
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Avvvvva/M1-DPO-PairRM | Avvvvva | "2024-11-25T21:34:34Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:34:33Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 24112
num_examples: 10
download_size: 32835
dataset_size: 24112
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Avvvvva/M2-DPO-PairRM | Avvvvva | "2024-11-25T21:40:55Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:40:54Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 28660
num_examples: 10
download_size: 39364
dataset_size: 28660
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Mateusz1017/company_reports_features_combined_complete | Mateusz1017 | "2024-11-25T23:27:26Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:53:35Z" | ---
dataset_info:
features:
- name: __index_level_0__
dtype: int64
- name: features
sequence:
sequence: float64
- name: cik
dtype: string
- name: year
dtype: string
- name: section_1
dtype: string
- name: company_name
dtype: string
- name: sic_code
dtype: string
- name: input_ids
sequence: int64
- name: ticker
sequence: string
- name: returns
dtype: float64
- name: logged_monthly_returns_matrix
sequence: float64
- name: input_ids_length
dtype: int64
splits:
- name: train
num_bytes: 17786059759
num_examples: 15724
download_size: 8259874029
dataset_size: 17786059759
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sartifyllc/swahili-self-instruct-data | sartifyllc | "2024-11-26T01:28:48Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T21:54:39Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1085882
num_examples: 6099
download_size: 543539
dataset_size: 1085882
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
k4d3/fart_fetish | k4d3 | "2024-11-25T22:11:08Z" | 0 | 0 | [
"license:wtfpl",
"size_categories:10K<n<100K",
"format:text",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-25T22:00:10Z" | ---
license: wtfpl
---
|
akshaya-244/MathVisionResized | akshaya-244 | "2024-11-25T22:02:13Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:02:09Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: image
dtype: string
- name: decoded_image
dtype: image
- name: answer
dtype: string
- name: solution
dtype: string
- name: level
dtype: int64
- name: subject
dtype: string
splits:
- name: test
num_bytes: 52513887.0
num_examples: 3040
- name: testmini
num_bytes: 5952656.0
num_examples: 304
download_size: 57879249
dataset_size: 58466543.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: testmini
path: data/testmini-*
---
|
open-llm-leaderboard/icefog72__IceDrunkenCherryRP-7b-details | open-llm-leaderboard | "2024-11-25T22:09:45Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:06:47Z" | ---
pretty_name: Evaluation run of icefog72/IceDrunkenCherryRP-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [icefog72/IceDrunkenCherryRP-7b](https://huggingface.co/icefog72/IceDrunkenCherryRP-7b)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/icefog72__IceDrunkenCherryRP-7b-details\"\
,\n\tname=\"icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-25T22-06-47.167580](https://huggingface.co/datasets/open-llm-leaderboard/icefog72__IceDrunkenCherryRP-7b-details/blob/main/icefog72__IceDrunkenCherryRP-7b/results_2024-11-25T22-06-47.167580.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_strict_acc,none\": 0.5347721822541966,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.47689463955637706,\n \"prompt_level_loose_acc_stderr,none\": 0.02149358829110096,\n\
\ \"exact_match,none\": 0.06268882175226587,\n \"exact_match_stderr,none\"\
: 0.006525049774700846,\n \"acc,none\": 0.30992353723404253,\n \
\ \"acc_stderr,none\": 0.004216237086078009,\n \"acc_norm,none\"\
: 0.47035932027500327,\n \"acc_norm_stderr,none\": 0.005330323393972458,\n\
\ \"prompt_level_strict_acc,none\": 0.4177449168207024,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.02122341916161409,\n \"\
inst_level_loose_acc,none\": 0.5911270983213429,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5075507724353411,\n \"acc_norm_stderr,none\"\
: 0.006146177305130497,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.808,\n\
\ \"acc_norm_stderr,none\": 0.02496069198917196\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6149732620320856,\n \"acc_norm_stderr,none\"\
: 0.03567936280544673\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.688,\n\
\ \"acc_norm_stderr,none\": 0.029361067575219852\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\":\
\ 0.031355968923772626\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.76,\n \
\ \"acc_norm_stderr,none\": 0.027065293652238982\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n\
\ \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.68,\n \"acc_norm_stderr,none\": 0.02956172495524098\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\":\
\ 0.031191596026022818\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.4246575342465753,\n \"acc_norm_stderr,none\": 0.04104862657656195\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.556,\n \
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\":\
\ 0.03160397514522374\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6797752808988764,\n \"acc_norm_stderr,none\": 0.03506900770722058\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.82,\n \"acc_norm_stderr,none\": 0.02434689065029351\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.252,\n\
\ \"acc_norm_stderr,none\": 0.027513851933031318\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\":\
\ 0.028697004587398253\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3070469798657718,\n\
\ \"acc_norm_stderr,none\": 0.013371083374985824,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2828282828282828,\n \"acc_norm_stderr,none\": 0.032087795587867514\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.32051282051282054,\n\
\ \"acc_norm_stderr,none\": 0.019990105460697117\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3013392857142857,\n \"acc_norm_stderr,none\"\
: 0.021702375698545707\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.4177449168207024,\n \"prompt_level_strict_acc_stderr,none\": 0.02122341916161409,\n\
\ \"inst_level_strict_acc,none\": 0.5347721822541966,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.47689463955637706,\n \"prompt_level_loose_acc_stderr,none\": 0.02149358829110096,\n\
\ \"inst_level_loose_acc,none\": 0.5911270983213429,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.06268882175226587,\n \"exact_match_stderr,none\"\
: 0.006525049774700846,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.09446254071661238,\n\
\ \"exact_match_stderr,none\": 0.016719462370368424\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.024390243902439025,\n \"exact_match_stderr,none\": 0.013965813032045565\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.015151515151515152,\n\
\ \"exact_match_stderr,none\": 0.01067276863717474\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.02142857142857143,\n \"exact_match_stderr,none\": 0.008669434577665551\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.05194805194805195,\n\
\ \"exact_match_stderr,none\": 0.017941344490765\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.16580310880829016,\n \"exact_match_stderr,none\"\
: 0.026839845022314426\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.022222222222222223,\n \"exact_match_stderr,none\"\
: 0.01273389971505968\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.30992353723404253,\n\
\ \"acc_stderr,none\": 0.004216237086078009\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4444444444444444,\n \"acc_norm_stderr,none\"\
: 0.017783559448746142,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.568,\n\
\ \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.42578125,\n \"acc_norm_stderr,none\"\
: 0.030964342373467638\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ }\n },\n \"leaderboard\": {\n \"inst_level_strict_acc,none\"\
: 0.5347721822541966,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"prompt_level_loose_acc,none\": 0.47689463955637706,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.02149358829110096,\n \"exact_match,none\": 0.06268882175226587,\n \
\ \"exact_match_stderr,none\": 0.006525049774700846,\n \"acc,none\":\
\ 0.30992353723404253,\n \"acc_stderr,none\": 0.004216237086078009,\n \
\ \"acc_norm,none\": 0.47035932027500327,\n \"acc_norm_stderr,none\"\
: 0.005330323393972458,\n \"prompt_level_strict_acc,none\": 0.4177449168207024,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02122341916161409,\n \
\ \"inst_level_loose_acc,none\": 0.5911270983213429,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5075507724353411,\n \"acc_norm_stderr,none\"\
: 0.006146177305130497,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6149732620320856,\n \"acc_norm_stderr,none\"\
: 0.03567936280544673\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.68,\n \"acc_norm_stderr,none\": 0.02956172495524098\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.4246575342465753,\n\
\ \"acc_norm_stderr,none\": 0.04104862657656195\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6797752808988764,\n \"acc_norm_stderr,none\"\
: 0.03506900770722058\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.82,\n \"acc_norm_stderr,none\": 0.02434689065029351\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3070469798657718,\n\
\ \"acc_norm_stderr,none\": 0.013371083374985824,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2828282828282828,\n\
\ \"acc_norm_stderr,none\": 0.032087795587867514\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.32051282051282054,\n \"acc_norm_stderr,none\": 0.019990105460697117\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3013392857142857,\n \"acc_norm_stderr,none\"\
: 0.021702375698545707\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.4177449168207024,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02122341916161409,\n \
\ \"inst_level_strict_acc,none\": 0.5347721822541966,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.47689463955637706,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02149358829110096,\n \"inst_level_loose_acc,none\"\
: 0.5911270983213429,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.06268882175226587,\n\
\ \"exact_match_stderr,none\": 0.006525049774700846,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.09446254071661238,\n \"exact_match_stderr,none\": 0.016719462370368424\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.024390243902439025,\n \"exact_match_stderr,none\": 0.013965813032045565\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.015151515151515152,\n \"exact_match_stderr,none\"\
: 0.01067276863717474\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.02142857142857143,\n \"exact_match_stderr,none\"\
: 0.008669434577665551\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.05194805194805195,\n \"exact_match_stderr,none\": 0.017941344490765\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.16580310880829016,\n \"exact_match_stderr,none\"\
: 0.026839845022314426\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.022222222222222223,\n \"exact_match_stderr,none\": 0.01273389971505968\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.30992353723404253,\n \"acc_stderr,none\": 0.004216237086078009\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4444444444444444,\n\
\ \"acc_norm_stderr,none\": 0.017783559448746142,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.42578125,\n \"acc_norm_stderr,none\": 0.030964342373467638\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ }\n}\n```"
repo_url: https://huggingface.co/icefog72/IceDrunkenCherryRP-7b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_navigate
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_snarks
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_gpqa_extended
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_gpqa_main
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_ifeval
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_ifeval_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_mmlu_pro
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_musr_object_placements
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T22-06-47.167580.jsonl'
- config_name: icefog72__IceDrunkenCherryRP-7b__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_25T22_06_47.167580
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T22-06-47.167580.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T22-06-47.167580.jsonl'
---
# Dataset Card for Evaluation run of icefog72/IceDrunkenCherryRP-7b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [icefog72/IceDrunkenCherryRP-7b](https://huggingface.co/icefog72/IceDrunkenCherryRP-7b)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration, "results", stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/icefog72__IceDrunkenCherryRP-7b-details",
name="icefog72__IceDrunkenCherryRP-7b__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
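If you want the aggregated numbers rather than per-sample details, a minimal sketch is shown below; it assumes the aggregated configuration follows the usual `icefog72__IceDrunkenCherryRP-7b__results` naming pattern, which is not listed explicitly in the configuration list above.
```python
from datasets import load_dataset

# Assumption: the aggregated results use the "<model>__results" configuration name;
# adjust the name if the repository exposes it differently.
results = load_dataset(
    "open-llm-leaderboard/icefog72__IceDrunkenCherryRP-7b-details",
    name="icefog72__IceDrunkenCherryRP-7b__results",
    split="latest",
)
print(results[0])  # one row holding the aggregated metrics for the run
```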
## Latest results
These are the [latest results from run 2024-11-25T22-06-47.167580](https://huggingface.co/datasets/open-llm-leaderboard/icefog72__IceDrunkenCherryRP-7b-details/blob/main/icefog72__IceDrunkenCherryRP-7b/results_2024-11-25T22-06-47.167580.json) (note that there might be results for other tasks in the repository if successive evaluations didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each evaluation):
```python
{
"all": {
"leaderboard": {
"inst_level_strict_acc,none": 0.5347721822541966,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.47689463955637706,
"prompt_level_loose_acc_stderr,none": 0.02149358829110096,
"exact_match,none": 0.06268882175226587,
"exact_match_stderr,none": 0.006525049774700846,
"acc,none": 0.30992353723404253,
"acc_stderr,none": 0.004216237086078009,
"acc_norm,none": 0.47035932027500327,
"acc_norm_stderr,none": 0.005330323393972458,
"prompt_level_strict_acc,none": 0.4177449168207024,
"prompt_level_strict_acc_stderr,none": 0.02122341916161409,
"inst_level_loose_acc,none": 0.5911270983213429,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5075507724353411,
"acc_norm_stderr,none": 0.006146177305130497,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.68,
"acc_norm_stderr,none": 0.02956172495524098
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4246575342465753,
"acc_norm_stderr,none": 0.04104862657656195
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6797752808988764,
"acc_norm_stderr,none": 0.03506900770722058
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.82,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3070469798657718,
"acc_norm_stderr,none": 0.013371083374985824,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2828282828282828,
"acc_norm_stderr,none": 0.032087795587867514
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.32051282051282054,
"acc_norm_stderr,none": 0.019990105460697117
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3013392857142857,
"acc_norm_stderr,none": 0.021702375698545707
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.4177449168207024,
"prompt_level_strict_acc_stderr,none": 0.02122341916161409,
"inst_level_strict_acc,none": 0.5347721822541966,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.47689463955637706,
"prompt_level_loose_acc_stderr,none": 0.02149358829110096,
"inst_level_loose_acc,none": 0.5911270983213429,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.06268882175226587,
"exact_match_stderr,none": 0.006525049774700846,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.09446254071661238,
"exact_match_stderr,none": 0.016719462370368424
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.024390243902439025,
"exact_match_stderr,none": 0.013965813032045565
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02142857142857143,
"exact_match_stderr,none": 0.008669434577665551
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.05194805194805195,
"exact_match_stderr,none": 0.017941344490765
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.16580310880829016,
"exact_match_stderr,none": 0.026839845022314426
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.30992353723404253,
"acc_stderr,none": 0.004216237086078009
},
"leaderboard_musr": {
"acc_norm,none": 0.4444444444444444,
"acc_norm_stderr,none": 0.017783559448746142,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.42578125,
"acc_norm_stderr,none": 0.030964342373467638
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
}
},
"leaderboard": {
"inst_level_strict_acc,none": 0.5347721822541966,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.47689463955637706,
"prompt_level_loose_acc_stderr,none": 0.02149358829110096,
"exact_match,none": 0.06268882175226587,
"exact_match_stderr,none": 0.006525049774700846,
"acc,none": 0.30992353723404253,
"acc_stderr,none": 0.004216237086078009,
"acc_norm,none": 0.47035932027500327,
"acc_norm_stderr,none": 0.005330323393972458,
"prompt_level_strict_acc,none": 0.4177449168207024,
"prompt_level_strict_acc_stderr,none": 0.02122341916161409,
"inst_level_loose_acc,none": 0.5911270983213429,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5075507724353411,
"acc_norm_stderr,none": 0.006146177305130497,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.68,
"acc_norm_stderr,none": 0.02956172495524098
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4246575342465753,
"acc_norm_stderr,none": 0.04104862657656195
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6797752808988764,
"acc_norm_stderr,none": 0.03506900770722058
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.82,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3070469798657718,
"acc_norm_stderr,none": 0.013371083374985824,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2828282828282828,
"acc_norm_stderr,none": 0.032087795587867514
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.32051282051282054,
"acc_norm_stderr,none": 0.019990105460697117
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3013392857142857,
"acc_norm_stderr,none": 0.021702375698545707
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.4177449168207024,
"prompt_level_strict_acc_stderr,none": 0.02122341916161409,
"inst_level_strict_acc,none": 0.5347721822541966,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.47689463955637706,
"prompt_level_loose_acc_stderr,none": 0.02149358829110096,
"inst_level_loose_acc,none": 0.5911270983213429,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.06268882175226587,
"exact_match_stderr,none": 0.006525049774700846,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.09446254071661238,
"exact_match_stderr,none": 0.016719462370368424
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.024390243902439025,
"exact_match_stderr,none": 0.013965813032045565
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.015151515151515152,
"exact_match_stderr,none": 0.01067276863717474
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02142857142857143,
"exact_match_stderr,none": 0.008669434577665551
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.05194805194805195,
"exact_match_stderr,none": 0.017941344490765
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.16580310880829016,
"exact_match_stderr,none": 0.026839845022314426
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.30992353723404253,
"acc_stderr,none": 0.004216237086078009
},
"leaderboard_musr": {
"acc_norm,none": 0.4444444444444444,
"acc_norm_stderr,none": 0.017783559448746142,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.42578125,
"acc_norm_stderr,none": 0.030964342373467638
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
}
}
```
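For scripted access to these numbers, the raw results file linked above can be downloaded directly. The sketch below assumes the file mirrors the structure shown in the snippet (an `"all"` key holding the per-task groups), which may differ slightly from the full file.
```python
import json
from huggingface_hub import hf_hub_download

# Download the results file referenced above from the dataset repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/icefog72__IceDrunkenCherryRP-7b-details",
    filename="icefog72__IceDrunkenCherryRP-7b/results_2024-11-25T22-06-47.167580.json",
    repo_type="dataset",
)

with open(path) as f:
    data = json.load(f)

# Assumption: the file mirrors the snippet above ("all" -> task group -> metric).
print(data["all"]["leaderboard_bbh"]["acc_norm,none"])                    # ~0.5076
print(data["all"]["leaderboard_ifeval"]["prompt_level_strict_acc,none"])  # ~0.4177
```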
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/AGI-0__smartllama3.1-8B-001-details | open-llm-leaderboard | "2024-11-25T22:18:59Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:15:59Z" | ---
pretty_name: Evaluation run of AGI-0/smartllama3.1-8B-001
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [AGI-0/smartllama3.1-8B-001](https://huggingface.co/AGI-0/smartllama3.1-8B-001)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/AGI-0__smartllama3.1-8B-001-details\"\
,\n\tname=\"AGI-0__smartllama3.1-8B-001__leaderboard_bbh_boolean_expressions\",\n\
\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-25T22-15-59.091478](https://huggingface.co/datasets/open-llm-leaderboard/AGI-0__smartllama3.1-8B-001-details/blob/main/AGI-0__smartllama3.1-8B-001/results_2024-11-25T22-15-59.091478.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_strict_acc,none\": 0.27911275415896486,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.019303080958497275,\n \"\
inst_level_loose_acc,none\": 0.44364508393285373,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"inst_level_strict_acc,none\": 0.4244604316546763,\n \
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc,none\"\
: 0.3486535904255319,\n \"acc_stderr,none\": 0.0043446242238720624,\n\
\ \"exact_match,none\": 0.11858006042296072,\n \"exact_match_stderr,none\"\
: 0.008407847968363483,\n \"prompt_level_loose_acc,none\": 0.30129390018484287,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.019744473483514352,\n \
\ \"acc_norm,none\": 0.43702166299130885,\n \"acc_norm_stderr,none\"\
: 0.005364986018351183,\n \"alias\": \"leaderboard\"\n },\n \
\ \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.4639819475785454,\n\
\ \"acc_norm_stderr,none\": 0.006206268211754922,\n \"alias\"\
: \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.784,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6310160427807486,\n\
\ \"acc_norm_stderr,none\": 0.03538078548260318\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\":\
\ 0.031563285061213475\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.336,\n \"acc_norm_stderr,none\": 0.02993325909419153\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.528,\n\
\ \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.2,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.66,\n \
\ \"acc_norm_stderr,none\": 0.030020073605457876\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\":\
\ 0.03139181076542942\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\":\
\ 0.03139181076542942\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.64,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.628,\n \"acc_norm_stderr,none\":\
\ 0.03063032594455827\n },\n \"leaderboard_bbh_object_counting\":\
\ {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.4041095890410959,\n \"acc_norm_stderr,none\": 0.04075198570039319\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.484,\n \
\ \"acc_norm_stderr,none\": 0.03166998503010743\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.412,\n \"acc_norm_stderr,none\":\
\ 0.03119159602602282\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6573033707865169,\n \"acc_norm_stderr,none\": 0.03567395111782629\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.612,\n \"acc_norm_stderr,none\": 0.030881038748993974\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.188,\n\
\ \"acc_norm_stderr,none\": 0.024760377727750513\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.18,\n \"acc_norm_stderr,none\": 0.02434689065029351\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\":\
\ 0.024760377727750513\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\":\
\ 0.029462657598578648\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3062080536912752,\n\
\ \"acc_norm_stderr,none\": 0.013365961955378957,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.30808080808080807,\n \"acc_norm_stderr,none\": 0.03289477330098615\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.30036630036630035,\n\
\ \"acc_norm_stderr,none\": 0.019636438043304946\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3125,\n \"acc_norm_stderr,none\"\
: 0.021923384489444957\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.27911275415896486,\n \"prompt_level_strict_acc_stderr,none\": 0.019303080958497275,\n\
\ \"inst_level_strict_acc,none\": 0.4244604316546763,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.30129390018484287,\n \"prompt_level_loose_acc_stderr,none\": 0.019744473483514352,\n\
\ \"inst_level_loose_acc,none\": 0.44364508393285373,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.11858006042296072,\n \"exact_match_stderr,none\"\
: 0.008407847968363483,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.2671009771986971,\n\
\ \"exact_match_stderr,none\": 0.025292927347085815\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.04065040650406504,\n \"exact_match_stderr,none\": 0.017878907564437465\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.03787878787878788,\n\
\ \"exact_match_stderr,none\": 0.016679279394712563\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.014285714285714285,\n \"exact_match_stderr,none\": 0.0071043508939153165\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.05194805194805195,\n\
\ \"exact_match_stderr,none\": 0.017941344490765\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.23316062176165803,\n \"exact_match_stderr,none\"\
: 0.03051611137147603\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.05925925925925926,\n \"exact_match_stderr,none\"\
: 0.02039673654232189\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.3486535904255319,\n\
\ \"acc_stderr,none\": 0.0043446242238720624\n },\n \"\
leaderboard_musr\": {\n \"acc_norm,none\": 0.43783068783068785,\n \
\ \"acc_norm_stderr,none\": 0.01766500144084901,\n \"alias\"\
: \" - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\"\
: {\n \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \
\ \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\"\
: \" - leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.3359375,\n\
\ \"acc_norm_stderr,none\": 0.029577647634376425\n },\n \
\ \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\":\
\ 0.030993197854577898\n }\n },\n \"leaderboard\": {\n \"prompt_level_strict_acc,none\"\
: 0.27911275415896486,\n \"prompt_level_strict_acc_stderr,none\": 0.019303080958497275,\n\
\ \"inst_level_loose_acc,none\": 0.44364508393285373,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"inst_level_strict_acc,none\": 0.4244604316546763,\n \
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc,none\": 0.3486535904255319,\n\
\ \"acc_stderr,none\": 0.0043446242238720624,\n \"exact_match,none\"\
: 0.11858006042296072,\n \"exact_match_stderr,none\": 0.008407847968363483,\n\
\ \"prompt_level_loose_acc,none\": 0.30129390018484287,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.019744473483514352,\n \"acc_norm,none\": 0.43702166299130885,\n \
\ \"acc_norm_stderr,none\": 0.005364986018351183,\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.4639819475785454,\n\
\ \"acc_norm_stderr,none\": 0.006206268211754922,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.784,\n \"acc_norm_stderr,none\": 0.02607865766373279\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6310160427807486,\n \"acc_norm_stderr,none\"\
: 0.03538078548260318\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.336,\n \"acc_norm_stderr,none\": 0.02993325909419153\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.2,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.66,\n \"acc_norm_stderr,none\": 0.030020073605457876\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.64,\n \"acc_norm_stderr,none\": 0.03041876402517494\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.628,\n \"acc_norm_stderr,none\": 0.03063032594455827\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.4041095890410959,\n\
\ \"acc_norm_stderr,none\": 0.04075198570039319\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.412,\n \"acc_norm_stderr,none\": 0.03119159602602282\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6573033707865169,\n \"acc_norm_stderr,none\"\
: 0.03567395111782629\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.612,\n \"acc_norm_stderr,none\": 0.030881038748993974\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.18,\n \"acc_norm_stderr,none\": 0.02434689065029351\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3062080536912752,\n\
\ \"acc_norm_stderr,none\": 0.013365961955378957,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.30808080808080807,\n\
\ \"acc_norm_stderr,none\": 0.03289477330098615\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.30036630036630035,\n \"acc_norm_stderr,none\": 0.019636438043304946\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3125,\n \"acc_norm_stderr,none\": 0.021923384489444957\n\
\ },\n \"leaderboard_ifeval\": {\n \"alias\": \" - leaderboard_ifeval\"\
,\n \"prompt_level_strict_acc,none\": 0.27911275415896486,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.019303080958497275,\n \"inst_level_strict_acc,none\": 0.4244604316546763,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.30129390018484287,\n \"prompt_level_loose_acc_stderr,none\": 0.019744473483514352,\n\
\ \"inst_level_loose_acc,none\": 0.44364508393285373,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.11858006042296072,\n \"exact_match_stderr,none\": 0.008407847968363483,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.2671009771986971,\n \"exact_match_stderr,none\": 0.025292927347085815\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.04065040650406504,\n \"exact_match_stderr,none\": 0.017878907564437465\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.03787878787878788,\n \"exact_match_stderr,none\"\
: 0.016679279394712563\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.014285714285714285,\n \"exact_match_stderr,none\"\
: 0.0071043508939153165\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.05194805194805195,\n \"exact_match_stderr,none\": 0.017941344490765\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.23316062176165803,\n \"exact_match_stderr,none\"\
: 0.03051611137147603\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.05925925925925926,\n \"exact_match_stderr,none\": 0.02039673654232189\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.3486535904255319,\n \"acc_stderr,none\": 0.0043446242238720624\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.43783068783068785,\n\
\ \"acc_norm_stderr,none\": 0.01766500144084901,\n \"alias\": \" -\
\ leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.3359375,\n \"acc_norm_stderr,none\": 0.029577647634376425\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ }\n}\n```"
repo_url: https://huggingface.co/AGI-0/smartllama3.1-8B-001
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_navigate
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_snarks
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_gpqa_extended
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_gpqa_main
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_ifeval
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_ifeval_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_mmlu_pro
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_musr_object_placements
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T22-15-59.091478.jsonl'
- config_name: AGI-0__smartllama3.1-8B-001__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_25T22_15_59.091478
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T22-15-59.091478.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T22-15-59.091478.jsonl'
---
# Dataset Card for Evaluation run of AGI-0/smartllama3.1-8B-001
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [AGI-0/smartllama3.1-8B-001](https://huggingface.co/AGI-0/smartllama3.1-8B-001)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/AGI-0__smartllama3.1-8B-001-details",
name="AGI-0__smartllama3.1-8B-001__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
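The aggregated results mentioned above can be loaded the same way. This is a minimal sketch under the assumption that the aggregated-results configuration follows the `<model>__results` naming pattern; check the repository's configuration list if the name differs:
```python
from datasets import load_dataset

# Load the aggregated results for the run (the configuration name is an assumption;
# verify it against the configs exposed by the repository).
results = load_dataset(
    "open-llm-leaderboard/AGI-0__smartllama3.1-8B-001-details",
    name="AGI-0__smartllama3.1-8B-001__results",
    split="latest",
)

# Each row holds the serialized results of one evaluation run.
print(results[0])
```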
## Latest results
These are the [latest results from run 2024-11-25T22-15-59.091478](https://huggingface.co/datasets/open-llm-leaderboard/AGI-0__smartllama3.1-8B-001-details/blob/main/AGI-0__smartllama3.1-8B-001/results_2024-11-25T22-15-59.091478.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results files and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_strict_acc,none": 0.27911275415896486,
"prompt_level_strict_acc_stderr,none": 0.019303080958497275,
"inst_level_loose_acc,none": 0.44364508393285373,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.4244604316546763,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.3486535904255319,
"acc_stderr,none": 0.0043446242238720624,
"exact_match,none": 0.11858006042296072,
"exact_match_stderr,none": 0.008407847968363483,
"prompt_level_loose_acc,none": 0.30129390018484287,
"prompt_level_loose_acc_stderr,none": 0.019744473483514352,
"acc_norm,none": 0.43702166299130885,
"acc_norm_stderr,none": 0.005364986018351183,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4639819475785454,
"acc_norm_stderr,none": 0.006206268211754922,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.784,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6310160427807486,
"acc_norm_stderr,none": 0.03538078548260318
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.336,
"acc_norm_stderr,none": 0.02993325909419153
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.2,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.66,
"acc_norm_stderr,none": 0.030020073605457876
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.64,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.628,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4041095890410959,
"acc_norm_stderr,none": 0.04075198570039319
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.412,
"acc_norm_stderr,none": 0.03119159602602282
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6573033707865169,
"acc_norm_stderr,none": 0.03567395111782629
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.18,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3062080536912752,
"acc_norm_stderr,none": 0.013365961955378957,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.30808080808080807,
"acc_norm_stderr,none": 0.03289477330098615
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.30036630036630035,
"acc_norm_stderr,none": 0.019636438043304946
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3125,
"acc_norm_stderr,none": 0.021923384489444957
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.27911275415896486,
"prompt_level_strict_acc_stderr,none": 0.019303080958497275,
"inst_level_strict_acc,none": 0.4244604316546763,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.30129390018484287,
"prompt_level_loose_acc_stderr,none": 0.019744473483514352,
"inst_level_loose_acc,none": 0.44364508393285373,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.11858006042296072,
"exact_match_stderr,none": 0.008407847968363483,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.2671009771986971,
"exact_match_stderr,none": 0.025292927347085815
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.04065040650406504,
"exact_match_stderr,none": 0.017878907564437465
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.03787878787878788,
"exact_match_stderr,none": 0.016679279394712563
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.014285714285714285,
"exact_match_stderr,none": 0.0071043508939153165
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.05194805194805195,
"exact_match_stderr,none": 0.017941344490765
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.23316062176165803,
"exact_match_stderr,none": 0.03051611137147603
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05925925925925926,
"exact_match_stderr,none": 0.02039673654232189
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3486535904255319,
"acc_stderr,none": 0.0043446242238720624
},
"leaderboard_musr": {
"acc_norm,none": 0.43783068783068785,
"acc_norm_stderr,none": 0.01766500144084901,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.3359375,
"acc_norm_stderr,none": 0.029577647634376425
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
}
},
"leaderboard": {
"prompt_level_strict_acc,none": 0.27911275415896486,
"prompt_level_strict_acc_stderr,none": 0.019303080958497275,
"inst_level_loose_acc,none": 0.44364508393285373,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.4244604316546763,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.3486535904255319,
"acc_stderr,none": 0.0043446242238720624,
"exact_match,none": 0.11858006042296072,
"exact_match_stderr,none": 0.008407847968363483,
"prompt_level_loose_acc,none": 0.30129390018484287,
"prompt_level_loose_acc_stderr,none": 0.019744473483514352,
"acc_norm,none": 0.43702166299130885,
"acc_norm_stderr,none": 0.005364986018351183,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4639819475785454,
"acc_norm_stderr,none": 0.006206268211754922,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.784,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6310160427807486,
"acc_norm_stderr,none": 0.03538078548260318
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.336,
"acc_norm_stderr,none": 0.02993325909419153
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.2,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.66,
"acc_norm_stderr,none": 0.030020073605457876
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.64,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.628,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4041095890410959,
"acc_norm_stderr,none": 0.04075198570039319
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.412,
"acc_norm_stderr,none": 0.03119159602602282
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6573033707865169,
"acc_norm_stderr,none": 0.03567395111782629
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.18,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3062080536912752,
"acc_norm_stderr,none": 0.013365961955378957,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.30808080808080807,
"acc_norm_stderr,none": 0.03289477330098615
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.30036630036630035,
"acc_norm_stderr,none": 0.019636438043304946
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3125,
"acc_norm_stderr,none": 0.021923384489444957
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.27911275415896486,
"prompt_level_strict_acc_stderr,none": 0.019303080958497275,
"inst_level_strict_acc,none": 0.4244604316546763,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.30129390018484287,
"prompt_level_loose_acc_stderr,none": 0.019744473483514352,
"inst_level_loose_acc,none": 0.44364508393285373,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.11858006042296072,
"exact_match_stderr,none": 0.008407847968363483,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.2671009771986971,
"exact_match_stderr,none": 0.025292927347085815
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.04065040650406504,
"exact_match_stderr,none": 0.017878907564437465
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.03787878787878788,
"exact_match_stderr,none": 0.016679279394712563
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.014285714285714285,
"exact_match_stderr,none": 0.0071043508939153165
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.05194805194805195,
"exact_match_stderr,none": 0.017941344490765
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.23316062176165803,
"exact_match_stderr,none": 0.03051611137147603
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05925925925925926,
"exact_match_stderr,none": 0.02039673654232189
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3486535904255319,
"acc_stderr,none": 0.0043446242238720624
},
"leaderboard_musr": {
"acc_norm,none": 0.43783068783068785,
"acc_norm_stderr,none": 0.01766500144084901,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.3359375,
"acc_norm_stderr,none": 0.029577647634376425
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
maliahson/agriagri | maliahson | "2024-11-25T22:17:05Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:16:58Z" | ---
dataset_info:
features:
- name: path
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: length
dtype: float64
splits:
- name: train
num_bytes: 21600611.0
num_examples: 20
download_size: 21590511
dataset_size: 21600611.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Xtest/function_dataset_with_ast_processed_ddad | Xtest | "2024-11-25T22:27:03Z" | 0 | 0 | [
"region:us"
] | null | "2024-11-25T22:17:12Z" | ---
dataset_info:
features:
- name: function_all
dtype: string
- name: function_name
dtype: string
- name: function_body
dtype: string
- name: function_all_unknow
dtype: string
- name: ast
struct:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
sequence: 'null'
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: 'null'
- name: line
dtype: int64
- name: spelling
dtype: string
- name: Modified Code
dtype: string
- name: S-Expression of Original Code
dtype: string
- name: S-Expression of Modified Code
dtype: string
- name: AST Image Original
dtype: string
- name: AST Image Modified
dtype: string
- name: Root Node
dtype: string
splits:
- name: train
num_bytes: 695919
num_examples: 10
- name: test
num_bytes: 871495
num_examples: 10
download_size: 539242
dataset_size: 1567414
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
reflection-gen/ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_mining_oj_iter4-bin | reflection-gen | "2024-11-25T22:20:30Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:20:29Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: chosen_probs
dtype: float64
- name: chosen_probs_win
dtype: float64
- name: chosen_probs_lose
dtype: float64
splits:
- name: train
num_bytes: 6965211
num_examples: 2813
download_size: 2749267
dataset_size: 6965211
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_mining_oj_iter4-bin"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reflection-gen/ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_mining_oj_iter4-full_resp_trace | reflection-gen | "2024-11-25T22:20:31Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:20:30Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: test
dtype: string
- name: tag
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: text_prompt
dtype: string
- name: text_chosen
dtype: string
- name: text_rejected
dtype: string
- name: generate_0
dtype: string
- name: generate_0_score
dtype: int64
- name: traceback_0
dtype: string
- name: generate_1
dtype: string
- name: generate_1_score
dtype: int64
- name: traceback_1
dtype: string
- name: generate_2
dtype: string
- name: generate_2_score
dtype: int64
- name: traceback_2
dtype: string
- name: generate_3
dtype: string
- name: generate_3_score
dtype: int64
- name: traceback_3
dtype: string
- name: probability
sequence:
sequence: float64
- name: rm_scores
sequence: int64
splits:
- name: train
num_bytes: 16625072
num_examples: 2813
download_size: 5958222
dataset_size: 16625072
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_mining_oj_iter4-full_resp_trace"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reflection-gen/ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_mining_oj_iter4-bin_all_pairs | reflection-gen | "2024-11-25T22:20:32Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:20:31Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 13850989
num_examples: 5443
download_size: 3926603
dataset_size: 13850989
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_mining_oj_iter4-bin_all_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mallard74/eval_medical_benchmark | Mallard74 | "2024-11-25T22:20:58Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:20:57Z" | ---
dataset_info:
features:
- name: query_id
dtype: int64
- name: user_input
dtype: string
- name: reference
dtype: string
- name: corpus
sequence: string
splits:
- name: train
num_bytes: 1139
num_examples: 3
download_size: 4431
dataset_size: 1139
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fatlindmazreku/dialects_dataset | fatlindmazreku | "2024-11-25T22:51:45Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:23:23Z" | ---
dataset_info:
features:
- name: Teksti
dtype: string
- name: Dialekti
dtype: string
splits:
- name: train
num_bytes: 435
num_examples: 10
download_size: 1401
dataset_size: 435
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JeromeUwU/finetunedemo | JeromeUwU | "2024-11-25T22:24:07Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:24:00Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 242271740
num_examples: 231636
download_size: 98474169
dataset_size: 242271740
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sleeping4cat/alexandria-exp | sleeping4cat | "2024-11-25T22:26:45Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:26:13Z" | ---
license: mit
---
|
hsuvaskakoty/Wide-Analysis-v2 | hsuvaskakoty | "2024-11-25T23:11:44Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:27:50Z" | ---
configs:
- config_name: en_full_wikidata_entities
data_files:
- split: train
path: "en/full/wikidata_entities/train.csv"
- split: test
path: "en/full/wikidata_entities/test.csv"
- split: val
path: "en/full/wikidata_entities/val.csv"
- config_name: en_full_wikidata_properties
data_files:
- split: train
path: "en/full/wikidata_properties/train.csv"
- split: test
path: "en/full/wikidata_properties/test.csv"
- split: val
path: "en/full/wikidata_properties/val.csv"
- config_name: en_full_wikinews
data_files:
- split: train
path: "en/full/wikinews/train.csv"
- split: test
path: "en/full/wikinews/test.csv"
- split: val
path: "en/full/wikinews/val.csv"
- config_name: en_full_wikipedia
data_files:
- split: train
path: "en/full/wikipedia/train.csv"
- split: test
path: "en/full/wikipedia/test.csv"
- split: val
path: "en/full/wikipedia/val.csv"
- config_name: en_full_wikiquote
data_files:
- split: train
path: "en/full/wikiquote/train.csv"
- split: test
path: "en/full/wikiquote/test.csv"
- split: val
path: "en/full/wikiquote/val.csv"
- config_name: en_label_masked_wikidata_entities
data_files:
- split: train
path: "en/label_masked/wikidata_entities/train.csv"
- split: test
path: "en/label_masked/wikidata_entities/test.csv"
- split: val
path: "en/label_masked/wikidata_entities/val.csv"
- config_name: en_label_masked_wikidata_properties
data_files:
- split: train
path: "en/label_masked/wikidata_properties/train.csv"
- split: test
path: "en/label_masked/wikidata_properties/test.csv"
- split: val
path: "en/label_masked/wikidata_properties/val.csv"
- config_name: en_label_masked_wikinews
data_files:
- split: train
path: "en/label_masked/wikinews/train.csv"
- split: test
path: "en/label_masked/wikinews/test.csv"
- split: val
path: "en/label_masked/wikinews/val.csv"
- config_name: en_label_masked_wikipedia
data_files:
- split: train
path: "en/label_masked/wikipedia/train.csv"
- split: test
path: "en/label_masked/wikipedia/test.csv"
- config_name: en_label_masked_wikiquote
data_files:
- split: train
path: "en/label_masked/wikiquote/train.csv"
- split: test
path: "en/label_masked/wikiquote/test.csv"
- split: val
path: "en/label_masked/wikiquote/val.csv"
---
# WiDe-Analysis Dataset
<!-- Provide a quick summary of the dataset. -->
This is the dataset for the extended version of WiDe-Analysis (v2).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
```python
from datasets import load_dataset
dataset = load_dataset(
"hsuvaskakoty/Wide-Analysis-v2",
name="en_full_wikidata_entities",
split="train",
trust_remote_code=True
)
```
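After loading, a quick way to check what a configuration actually contains is to inspect its schema and a sample row. This sketch only uses the generic `datasets` API; the printed column names depend on the chosen configuration:
```python
# Inspect the loaded split: row count, column names, and full schema.
print(dataset)
print(dataset.features)

# Peek at the first example; the exact fields vary per configuration.
print(dataset[0])
```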
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Xtest/function_dataset_with_ast_processed_dda22312d | Xtest | "2024-11-25T23:37:04Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:31:18Z" | ---
dataset_info:
features:
- name: function_all
dtype: string
- name: function_name
dtype: string
- name: function_body
dtype: string
- name: function_all_unknow
dtype: string
- name: ast
struct:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
sequence: 'null'
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: 'null'
- name: line
dtype: int64
- name: spelling
dtype: string
- name: Modified Code
dtype: string
- name: S-Expression of Original Code
dtype: string
- name: S-Expression of Modified Code
dtype: string
- name: AST Image Original
dtype: string
- name: AST Image Modified
dtype: string
- name: Root Node
dtype: string
splits:
- name: train
num_bytes: 695919
num_examples: 10
- name: test
num_bytes: 871495
num_examples: 10
download_size: 539250
dataset_size: 1567414
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
kuklinmike/wikipedia_ru | kuklinmike | "2024-11-25T22:38:45Z" | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:33:24Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7632869201.84156
num_examples: 1520810
download_size: 4473154243
dataset_size: 7632869201.84156
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Dipl0/ALS_FULL_Tokens_Instruct | Dipl0 | "2024-11-25T23:21:19Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:39:20Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: content
dtype: string
- name: response
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 107648073
num_examples: 13076
download_size: 21437413
dataset_size: 107648073
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SamaYousef/updated_Rev3_9643_2021 | SamaYousef | "2024-11-25T23:22:20Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:39:20Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 423121761.841
num_examples: 2887
download_size: 514089130
dataset_size: 423121761.841
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pjramg/AgML-apple_detection_usa | pjramg | "2024-11-25T22:52:44Z" | 0 | 0 | [
"task_categories:object-detection",
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"object-detection"
] | "2024-11-25T22:43:38Z" | ---
license: apache-2.0
task_categories:
- object-detection
pretty_name: AgML-Apple-Detection-USA
size_categories:
- 1K<n<10K
---
[AgML Download]: Extracting files for apple_detection_usa... Done!
================================================================================
You have just downloaded apple_detection_usa.
This dataset has no license.
When using this dataset, please cite the following:
@article{karkee2019apple,
title={Apple Dataset Benchmark from Orchard Environment in Modern Fruiting Wall},
author={Karkee, Manoj and Bhusal, Santosh and Zhang, Qin},
year={2019}
}
You can find additional information about this dataset at:
https://hdl.handle.net/2376/17721
This message will not be automatically shown
again. To view this message again, in an AgMLDataLoader
run `loader.info.citation_summary()`. Otherwise, you
can use `agml.data.source(<name>).citation_summary()`.
================================================================================
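In code, the two AgML calls quoted in the message above look roughly like the sketch below. It assumes the `agml` Python package is installed; everything beyond the two quoted calls is illustrative.
```python
# Sketch based on the calls quoted in the download message above; assumes
# the `agml` package is installed (e.g. `pip install agml`).
import agml

# Build a loader for this dataset by name.
loader = agml.data.AgMLDataLoader('apple_detection_usa')

# Re-print the citation/licensing summary shown during download.
loader.info.citation_summary()

# Equivalent lookup without constructing a loader.
agml.data.source('apple_detection_usa').citation_summary()
```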
==================== DATASET SUMMARY ====================
Name: apple_detection_usa
Machine Learning Task: object_detection
Agricultural Task: fruit_detection
Location:
continent: north_america
country: usa
Sensor Modality: rgb
Real Or Synthetic: real
Platform: ground
Input Data Format: png
Annotation Format: coco_json
Number of Images: 2290
Documentation: https://hdl.handle.net/2376/17721
Stats:
mean:
- 0.2810896933078766
- 0.29005560278892517
- 0.2775411605834961
std:
- 0.18863406777381897
- 0.18647761642932892
- 0.1885077804327011
Classes:
'1': apple
External Image Sources: [] |
amazon/CodePrefBench | amazon | "2024-11-25T23:04:54Z" | 0 | 0 | [
"task_categories:other",
"language:code",
"license:cc-by-nc-4.0",
"size_categories:1K<n<10K",
"arxiv:2410.03837",
"region:us",
"code"
] | [
"other"
] | "2024-11-25T22:45:01Z" | ---
license: cc-by-nc-4.0
task_categories:
- other
language:
- code
tags:
- code
pretty_name: CodePrefBench
size_categories:
- 1K<n<10K
---
# CodePreference
- **Homepage:** https://llm-code-preference.github.io/
- **Repository:** https://github.com/amazon-science/llm-code-preference
- **Paper:** [Link](https://arxiv.org/abs/2410.03837)
## Data Fields
* `task_id` (`string`): The unique identifier for the task.
* `instruction` (`string`): The instruction prompt to write code.
* `choices` (`List[string]`): Two responses where one is preferred over the other.
* `gt_choice` (`int`): `0` or `1` indicating the preferred choice.
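As a quick illustration of these fields, the sketch below loads the benchmark with `datasets` and resolves the preferred response through `gt_choice`. The split name passed to `load_dataset` is an assumption, since the card does not list the available splits.
```python
# Hedged sketch: field names come from the list above; the split name is an
# assumption and may need adjusting.
from datasets import load_dataset

data = load_dataset("amazon/CodePrefBench", split="train")  # split assumed

example = data[0]
print(example["task_id"], example["instruction"][:120])

# `choices` holds two candidate responses; `gt_choice` (0 or 1) marks the
# preferred one.
preferred = example["choices"][example["gt_choice"]]
rejected = example["choices"][1 - example["gt_choice"]]
print(len(preferred), len(rejected))
```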
## Usage
```bash
# Environment setup
git clone https://github.com/amazon-science/llm-code-preference.git
cd llm-code-preference
pip install -r requirements.txt
# Evaluation
## OpenAI server
python codefavor/evaluate.py --model-id "gpt-4o-2024-05-13" --model-type openai --concurrency 80
## Other OpenAI-compatible servers (vLLM, DeepSeek APIs, etc.)
python codefavor/evaluate.py --model-id "google/gemma-2-27b-it" --model-type openai --concurrency 80 --model-url http://localhost:8000/v1
## Claude models at Bedrock
python codefavor/evaluate.py --model-id "anthropic.claude-3-sonnet-20240229-v1:0" --model-type bedrock --concurrency 10
## Pairwise RM
python codefavor/evaluate.py --model-id "./models/mix-cls-mistral-7b-it_bs32_ep1_lr5e-6-l3-70b/checkpoint-688" --model-type pair-rm
```
## Citation
```bibtex
@article{liu2024learning,
title = {Learning Code Preference via Synthetic Evolution},
author = {Liu, Jiawei and Nguyen, Thanh and Shang, Mingyue and Ding, Hantian and Li, Xiaopeng and Yu, Yu and Kumar, Varun and Wang, Zijian},
journal = {arXiv preprint arXiv:2410.03837},
year = {2024},
}
```
|
mlfoundations-dev/oh_v1.2_sin_camel_chemistry_diversity | mlfoundations-dev | "2024-11-26T01:09:45Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:45:16Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: shard_id
dtype: string
- name: output
dtype: string
- name: ngram_3_uniqueness
dtype: float64
- name: entropy
dtype: float64
- name: gini_index
dtype: float64
splits:
- name: train
num_bytes: 2333446914
num_examples: 864214
download_size: 1291169643
dataset_size: 2333446914
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
marcov/qa_zre_promptsource | marcov | "2024-11-26T00:28:25Z" | 0 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:46:20Z" | ---
dataset_info:
features:
- name: relation
dtype: string
- name: question
dtype: string
- name: subject
dtype: string
- name: context
dtype: string
- name: answers
sequence: string
- name: template_name
dtype: string
- name: template
dtype: string
- name: rendered_input
dtype: string
- name: rendered_output
dtype: string
splits:
- name: test
num_bytes: 789618741.0
num_examples: 960000
- name: validation
num_bytes: 39636958.0
num_examples: 48000
- name: train
num_bytes: 55219705711.0
num_examples: 67200000
download_size: 18720303573
dataset_size: 56048961410.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
- split: train
path: data/train-*
---
|
HanxuHU/gemma2-9B-it-ultrafeedback-annotate-ultrafb-judge-5-majority-filtered | HanxuHU | "2024-11-26T00:09:41Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:52:39Z" | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: all_generated_responses
sequence: string
- name: scores
sequence: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 697110455
num_examples: 53246
- name: test
num_bytes: 28633857
num_examples: 1962
download_size: 363276468
dataset_size: 725744312
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
open-llm-leaderboard/Delta-Vector__Control-8B-details | open-llm-leaderboard | "2024-11-25T23:02:11Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:58:24Z" | ---
pretty_name: Evaluation run of Delta-Vector/Control-8B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Delta-Vector/Control-8B](https://huggingface.co/Delta-Vector/Control-8B)\nThe\
\ dataset is composed of 38 configuration(s), each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/Delta-Vector__Control-8B-details\"\
,\n\tname=\"Delta-Vector__Control-8B__leaderboard_bbh_boolean_expressions\",\n\t\
split=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from\
\ run 2024-11-25T22-58-23.311876](https://huggingface.co/datasets/open-llm-leaderboard/Delta-Vector__Control-8B-details/blob/main/Delta-Vector__Control-8B/results_2024-11-25T22-58-23.311876.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"exact_match,none\": 0.13746223564954682,\n \"exact_match_stderr,none\"\
: 0.008980535434491049,\n \"inst_level_loose_acc,none\": 0.6139088729016786,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\"\
: 0.46737579452587885,\n \"acc_norm_stderr,none\": 0.005426899446646522,\n\
\ \"inst_level_strict_acc,none\": 0.6007194244604317,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.5138632162661737,\n \"prompt_level_loose_acc_stderr,none\": 0.0215083020678561,\n\
\ \"prompt_level_strict_acc,none\": 0.49722735674676527,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.021516243323548144,\n \"\
acc,none\": 0.3731715425531915,\n \"acc_stderr,none\": 0.004409382233559222,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5028640860961638,\n \"acc_norm_stderr,none\"\
: 0.006289588700343956,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.812,\n\
\ \"acc_norm_stderr,none\": 0.02476037772775051\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5775401069518716,\n \"acc_norm_stderr,none\"\
: 0.0362182402075336\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.564,\n\
\ \"acc_norm_stderr,none\": 0.03142556706028136\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.516,\n \"acc_norm_stderr,none\":\
\ 0.03166998503010743\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.4,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.716,\n \
\ \"acc_norm_stderr,none\": 0.028576958730437443\n },\n \"\
leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\": \" \
\ - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.38,\n \"acc_norm_stderr,none\": 0.030760116042626098\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.348,\n \"acc_norm_stderr,none\": 0.030186568464511673\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.624,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\":\
\ 0.03114520984654851\n },\n \"leaderboard_bbh_object_counting\":\
\ {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.4246575342465753,\n \"acc_norm_stderr,none\": 0.04104862657656195\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.7,\n \
\ \"acc_norm_stderr,none\": 0.029040893477575786\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.436,\n \"acc_norm_stderr,none\":\
\ 0.031425567060281365\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6573033707865169,\n \"acc_norm_stderr,none\": 0.03567395111782629\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.692,\n \"acc_norm_stderr,none\": 0.02925692860650181\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.344,\n\
\ \"acc_norm_stderr,none\": 0.03010450339231644\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.224,\n \"acc_norm_stderr,none\":\
\ 0.026421361687347884\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\":\
\ 0.029844039047465857\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3162751677852349,\n\
\ \"acc_norm_stderr,none\": 0.013465313690484522,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.35353535353535354,\n \"acc_norm_stderr,none\": 0.03406086723547151\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2893772893772894,\n\
\ \"acc_norm_stderr,none\": 0.019424663872261782\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3325892857142857,\n \"acc_norm_stderr,none\"\
: 0.022284195136714192\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.49722735674676527,\n \"prompt_level_strict_acc_stderr,none\": 0.021516243323548144,\n\
\ \"inst_level_strict_acc,none\": 0.6007194244604317,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.5138632162661737,\n \"prompt_level_loose_acc_stderr,none\": 0.0215083020678561,\n\
\ \"inst_level_loose_acc,none\": 0.6139088729016786,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.13746223564954682,\n \"exact_match_stderr,none\"\
: 0.008980535434491049,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.2768729641693811,\n\
\ \"exact_match_stderr,none\": 0.025579194330922362\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.03787878787878788,\n\
\ \"exact_match_stderr,none\": 0.016679279394712563\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.017857142857142856,\n \"exact_match_stderr,none\": 0.007928503387888855\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.07792207792207792,\n\
\ \"exact_match_stderr,none\": 0.021670471414711772\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.27461139896373055,\n \"exact_match_stderr,none\"\
: 0.03221024508041151\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.05925925925925926,\n \"exact_match_stderr,none\"\
: 0.02039673654232189\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.3731715425531915,\n\
\ \"acc_stderr,none\": 0.004409382233559222\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4351851851851852,\n \"acc_norm_stderr,none\"\
: 0.017732697340968846,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.568,\n\
\ \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.390625,\n \"acc_norm_stderr,none\"\
: 0.030552886284181364\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.348,\n \"acc_norm_stderr,none\": 0.030186568464511673\n\
\ }\n },\n \"leaderboard\": {\n \"exact_match,none\": 0.13746223564954682,\n\
\ \"exact_match_stderr,none\": 0.008980535434491049,\n \"inst_level_loose_acc,none\"\
: 0.6139088729016786,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"acc_norm,none\": 0.46737579452587885,\n \"acc_norm_stderr,none\"\
: 0.005426899446646522,\n \"inst_level_strict_acc,none\": 0.6007194244604317,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.5138632162661737,\n \"prompt_level_loose_acc_stderr,none\": 0.0215083020678561,\n\
\ \"prompt_level_strict_acc,none\": 0.49722735674676527,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.021516243323548144,\n \"acc,none\": 0.3731715425531915,\n \"acc_stderr,none\"\
: 0.004409382233559222,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5028640860961638,\n \"acc_norm_stderr,none\"\
: 0.006289588700343956,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.812,\n \"acc_norm_stderr,none\": 0.02476037772775051\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5775401069518716,\n \"acc_norm_stderr,none\"\
: 0.0362182402075336\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.564,\n \"acc_norm_stderr,none\": 0.03142556706028136\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.516,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.4,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.38,\n \"acc_norm_stderr,none\": 0.030760116042626098\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.348,\n \"acc_norm_stderr,none\": 0.030186568464511673\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.624,\n \"acc_norm_stderr,none\": 0.03069633626739458\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.4246575342465753,\n\
\ \"acc_norm_stderr,none\": 0.04104862657656195\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.436,\n \"acc_norm_stderr,none\": 0.031425567060281365\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6573033707865169,\n \"acc_norm_stderr,none\"\
: 0.03567395111782629\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.692,\n \"acc_norm_stderr,none\": 0.02925692860650181\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.344,\n \"acc_norm_stderr,none\": 0.03010450339231644\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.224,\n \"acc_norm_stderr,none\": 0.026421361687347884\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3162751677852349,\n\
\ \"acc_norm_stderr,none\": 0.013465313690484522,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.35353535353535354,\n\
\ \"acc_norm_stderr,none\": 0.03406086723547151\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2893772893772894,\n \"acc_norm_stderr,none\": 0.019424663872261782\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3325892857142857,\n \"acc_norm_stderr,none\"\
: 0.022284195136714192\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.49722735674676527,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021516243323548144,\n \
\ \"inst_level_strict_acc,none\": 0.6007194244604317,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.5138632162661737,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.0215083020678561,\n \"inst_level_loose_acc,none\"\
: 0.6139088729016786,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.13746223564954682,\n\
\ \"exact_match_stderr,none\": 0.008980535434491049,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.2768729641693811,\n \"exact_match_stderr,none\": 0.025579194330922362\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.03787878787878788,\n \"exact_match_stderr,none\"\
: 0.016679279394712563\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.017857142857142856,\n \"exact_match_stderr,none\"\
: 0.007928503387888855\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.07792207792207792,\n \"exact_match_stderr,none\": 0.021670471414711772\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.27461139896373055,\n \"exact_match_stderr,none\"\
: 0.03221024508041151\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.05925925925925926,\n \"exact_match_stderr,none\": 0.02039673654232189\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.3731715425531915,\n \"acc_stderr,none\": 0.004409382233559222\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4351851851851852,\n\
\ \"acc_norm_stderr,none\": 0.017732697340968846,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.390625,\n \"acc_norm_stderr,none\": 0.030552886284181364\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.348,\n \"acc_norm_stderr,none\": 0.030186568464511673\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Delta-Vector/Control-8B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_navigate
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_snarks
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_gpqa_extended
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_gpqa_main
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_ifeval
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_ifeval_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_mmlu_pro
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_musr_object_placements
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T22-58-23.311876.jsonl'
- config_name: Delta-Vector__Control-8B__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_25T22_58_23.311876
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T22-58-23.311876.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T22-58-23.311876.jsonl'
---
# Dataset Card for Evaluation run of Delta-Vector/Control-8B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Delta-Vector/Control-8B](https://huggingface.co/Delta-Vector/Control-8B)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/Delta-Vector__Control-8B-details",
name="Delta-Vector__Control-8B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
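The loaded split is an ordinary `datasets.Dataset` of per-sample records for that task. A minimal way to check its size and peek at one record (field names vary by task, so none are assumed here):
```python
# Follow-up sketch: inspect the "latest" split loaded above.
print(len(data))          # number of evaluated samples for this task
print(data.column_names)  # per-sample fields recorded by the eval harness
print(data[0])            # first record as a plain Python dict
```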
## Latest results
These are the [latest results from run 2024-11-25T22-58-23.311876](https://huggingface.co/datasets/open-llm-leaderboard/Delta-Vector__Control-8B-details/blob/main/Delta-Vector__Control-8B/results_2024-11-25T22-58-23.311876.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"exact_match,none": 0.13746223564954682,
"exact_match_stderr,none": 0.008980535434491049,
"inst_level_loose_acc,none": 0.6139088729016786,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.46737579452587885,
"acc_norm_stderr,none": 0.005426899446646522,
"inst_level_strict_acc,none": 0.6007194244604317,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.5138632162661737,
"prompt_level_loose_acc_stderr,none": 0.0215083020678561,
"prompt_level_strict_acc,none": 0.49722735674676527,
"prompt_level_strict_acc_stderr,none": 0.021516243323548144,
"acc,none": 0.3731715425531915,
"acc_stderr,none": 0.004409382233559222,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5028640860961638,
"acc_norm_stderr,none": 0.006289588700343956,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.812,
"acc_norm_stderr,none": 0.02476037772775051
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5775401069518716,
"acc_norm_stderr,none": 0.0362182402075336
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.516,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.4,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.38,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.348,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.624,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4246575342465753,
"acc_norm_stderr,none": 0.04104862657656195
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.436,
"acc_norm_stderr,none": 0.031425567060281365
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6573033707865169,
"acc_norm_stderr,none": 0.03567395111782629
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.344,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.224,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3162751677852349,
"acc_norm_stderr,none": 0.013465313690484522,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.35353535353535354,
"acc_norm_stderr,none": 0.03406086723547151
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2893772893772894,
"acc_norm_stderr,none": 0.019424663872261782
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3325892857142857,
"acc_norm_stderr,none": 0.022284195136714192
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.49722735674676527,
"prompt_level_strict_acc_stderr,none": 0.021516243323548144,
"inst_level_strict_acc,none": 0.6007194244604317,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.5138632162661737,
"prompt_level_loose_acc_stderr,none": 0.0215083020678561,
"inst_level_loose_acc,none": 0.6139088729016786,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.13746223564954682,
"exact_match_stderr,none": 0.008980535434491049,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.2768729641693811,
"exact_match_stderr,none": 0.025579194330922362
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.03787878787878788,
"exact_match_stderr,none": 0.016679279394712563
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.017857142857142856,
"exact_match_stderr,none": 0.007928503387888855
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.07792207792207792,
"exact_match_stderr,none": 0.021670471414711772
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.27461139896373055,
"exact_match_stderr,none": 0.03221024508041151
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05925925925925926,
"exact_match_stderr,none": 0.02039673654232189
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3731715425531915,
"acc_stderr,none": 0.004409382233559222
},
"leaderboard_musr": {
"acc_norm,none": 0.4351851851851852,
"acc_norm_stderr,none": 0.017732697340968846,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.390625,
"acc_norm_stderr,none": 0.030552886284181364
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.348,
"acc_norm_stderr,none": 0.030186568464511673
}
},
"leaderboard": {
"exact_match,none": 0.13746223564954682,
"exact_match_stderr,none": 0.008980535434491049,
"inst_level_loose_acc,none": 0.6139088729016786,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.46737579452587885,
"acc_norm_stderr,none": 0.005426899446646522,
"inst_level_strict_acc,none": 0.6007194244604317,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.5138632162661737,
"prompt_level_loose_acc_stderr,none": 0.0215083020678561,
"prompt_level_strict_acc,none": 0.49722735674676527,
"prompt_level_strict_acc_stderr,none": 0.021516243323548144,
"acc,none": 0.3731715425531915,
"acc_stderr,none": 0.004409382233559222,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5028640860961638,
"acc_norm_stderr,none": 0.006289588700343956,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.812,
"acc_norm_stderr,none": 0.02476037772775051
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5775401069518716,
"acc_norm_stderr,none": 0.0362182402075336
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.516,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.4,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.38,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.348,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.624,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4246575342465753,
"acc_norm_stderr,none": 0.04104862657656195
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.436,
"acc_norm_stderr,none": 0.031425567060281365
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6573033707865169,
"acc_norm_stderr,none": 0.03567395111782629
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.344,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.224,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3162751677852349,
"acc_norm_stderr,none": 0.013465313690484522,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.35353535353535354,
"acc_norm_stderr,none": 0.03406086723547151
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2893772893772894,
"acc_norm_stderr,none": 0.019424663872261782
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3325892857142857,
"acc_norm_stderr,none": 0.022284195136714192
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.49722735674676527,
"prompt_level_strict_acc_stderr,none": 0.021516243323548144,
"inst_level_strict_acc,none": 0.6007194244604317,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.5138632162661737,
"prompt_level_loose_acc_stderr,none": 0.0215083020678561,
"inst_level_loose_acc,none": 0.6139088729016786,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.13746223564954682,
"exact_match_stderr,none": 0.008980535434491049,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.2768729641693811,
"exact_match_stderr,none": 0.025579194330922362
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.03787878787878788,
"exact_match_stderr,none": 0.016679279394712563
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.017857142857142856,
"exact_match_stderr,none": 0.007928503387888855
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.07792207792207792,
"exact_match_stderr,none": 0.021670471414711772
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.27461139896373055,
"exact_match_stderr,none": 0.03221024508041151
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05925925925925926,
"exact_match_stderr,none": 0.02039673654232189
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3731715425531915,
"acc_stderr,none": 0.004409382233559222
},
"leaderboard_musr": {
"acc_norm,none": 0.4351851851851852,
"acc_norm_stderr,none": 0.017732697340968846,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.390625,
"acc_norm_stderr,none": 0.030552886284181364
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.348,
"acc_norm_stderr,none": 0.030186568464511673
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/Delta-Vector__Control-8B-V1.1-details | open-llm-leaderboard | "2024-11-25T23:03:37Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T22:59:39Z" | ---
pretty_name: Evaluation run of Delta-Vector/Control-8B-V1.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Delta-Vector/Control-8B-V1.1](https://huggingface.co/Delta-Vector/Control-8B-V1.1)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/Delta-Vector__Control-8B-V1.1-details\"\
,\n\tname=\"Delta-Vector__Control-8B-V1.1__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-25T22-59-39.146282](https://huggingface.co/datasets/open-llm-leaderboard/Delta-Vector__Control-8B-V1.1-details/blob/main/Delta-Vector__Control-8B-V1.1/results_2024-11-25T22-59-39.146282.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.5434380776340111,\n \"\
prompt_level_loose_acc_stderr,none\": 0.021435222545538937,\n \"inst_level_loose_acc,none\"\
: 0.6438848920863309,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"prompt_level_strict_acc,none\": 0.5194085027726433,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.021500357879025087,\n \
\ \"acc,none\": 0.37450132978723405,\n \"acc_stderr,none\": 0.004412543644646609,\n\
\ \"inst_level_strict_acc,none\": 0.6199040767386091,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"exact_match,none\"\
: 0.12462235649546828,\n \"exact_match_stderr,none\": 0.008700069808646044,\n\
\ \"acc_norm,none\": 0.4607601504734726,\n \"acc_norm_stderr,none\"\
: 0.005405961420738536,\n \"alias\": \"leaderboard\"\n },\n \
\ \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.4974830758548863,\n\
\ \"acc_norm_stderr,none\": 0.00626564851381343,\n \"alias\"\
: \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.832,\n \"acc_norm_stderr,none\": 0.023692813205492536\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5775401069518716,\n\
\ \"acc_norm_stderr,none\": 0.0362182402075336\n },\n \"\
leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\":\
\ 0.03131803437491622\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.524,\n\
\ \"acc_norm_stderr,none\": 0.03164968895968774\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.392,\n \"acc_norm_stderr,none\":\
\ 0.030938207620401222\n },\n \"leaderboard_bbh_hyperbaton\": {\n\
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\":\
\ 0.031235856237014505\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.4383561643835616,\n \"acc_norm_stderr,none\": 0.04120596186613957\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.676,\n \
\ \"acc_norm_stderr,none\": 0.029658294924545567\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6629213483146067,\n \"acc_norm_stderr,none\"\
: 0.03553120966481325\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.396,\n\
\ \"acc_norm_stderr,none\": 0.030993197854577898\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.2,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.22,\n \"acc_norm_stderr,none\": 0.026251792824605793\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\":\
\ 0.029462657598578648\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3070469798657718,\n\
\ \"acc_norm_stderr,none\": 0.013370986728911079,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3383838383838384,\n \"acc_norm_stderr,none\": 0.033711241426263\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.30036630036630035,\n\
\ \"acc_norm_stderr,none\": 0.019636438043304946\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3013392857142857,\n \"acc_norm_stderr,none\"\
: 0.021702375698545707\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.5194085027726433,\n \"prompt_level_strict_acc_stderr,none\": 0.021500357879025083,\n\
\ \"inst_level_strict_acc,none\": 0.6199040767386091,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.5434380776340111,\n \"prompt_level_loose_acc_stderr,none\": 0.021435222545538937,\n\
\ \"inst_level_loose_acc,none\": 0.6438848920863309,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.12462235649546828,\n \"exact_match_stderr,none\"\
: 0.008700069808646044,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.2671009771986971,\n\
\ \"exact_match_stderr,none\": 0.025292927347085815\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0975609756097561,\n \"exact_match_stderr,none\": 0.026863777740489123\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.07575757575757576,\n\
\ \"exact_match_stderr,none\": 0.023119068741795586\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.017857142857142856,\n \"exact_match_stderr,none\": 0.007928503387888855\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.08441558441558442,\n\
\ \"exact_match_stderr,none\": 0.022475781231866967\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.20207253886010362,\n \"exact_match_stderr,none\"\
: 0.028979089794296756\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.02962962962962963,\n \"exact_match_stderr,none\"\
: 0.014648038602753809\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.37450132978723405,\n\
\ \"acc_stderr,none\": 0.004412543644646609\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.42328042328042326,\n \"acc_norm_stderr,none\"\
: 0.01773739598653491,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.544,\n\
\ \"acc_norm_stderr,none\": 0.031563285061213475\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.3671875,\n \"acc_norm_stderr,none\"\
: 0.030186403889489913\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.5434380776340111,\n \"prompt_level_loose_acc_stderr,none\": 0.021435222545538937,\n\
\ \"inst_level_loose_acc,none\": 0.6438848920863309,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_strict_acc,none\": 0.5194085027726433,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.021500357879025087,\n \"\
acc,none\": 0.37450132978723405,\n \"acc_stderr,none\": 0.004412543644646609,\n\
\ \"inst_level_strict_acc,none\": 0.6199040767386091,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.12462235649546828,\n \"exact_match_stderr,none\"\
: 0.008700069808646044,\n \"acc_norm,none\": 0.4607601504734726,\n \
\ \"acc_norm_stderr,none\": 0.005405961420738536,\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.4974830758548863,\n\
\ \"acc_norm_stderr,none\": 0.00626564851381343,\n \"alias\": \" -\
\ leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n \
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.832,\n \"acc_norm_stderr,none\": 0.023692813205492536\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5775401069518716,\n \"acc_norm_stderr,none\"\
: 0.0362182402075336\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.524,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.392,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.368,\n \"acc_norm_stderr,none\": 0.03056207062099311\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.4383561643835616,\n\
\ \"acc_norm_stderr,none\": 0.04120596186613957\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.676,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6629213483146067,\n \"acc_norm_stderr,none\"\
: 0.03553120966481325\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.2,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.22,\n \"acc_norm_stderr,none\": 0.026251792824605793\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3070469798657718,\n\
\ \"acc_norm_stderr,none\": 0.013370986728911079,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3383838383838384,\n\
\ \"acc_norm_stderr,none\": 0.033711241426263\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.30036630036630035,\n \"acc_norm_stderr,none\": 0.019636438043304946\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3013392857142857,\n \"acc_norm_stderr,none\"\
: 0.021702375698545707\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.5194085027726433,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021500357879025083,\n \
\ \"inst_level_strict_acc,none\": 0.6199040767386091,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.5434380776340111,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.021435222545538937,\n \"inst_level_loose_acc,none\"\
: 0.6438848920863309,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.12462235649546828,\n\
\ \"exact_match_stderr,none\": 0.008700069808646044,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.2671009771986971,\n \"exact_match_stderr,none\": 0.025292927347085815\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0975609756097561,\n \"exact_match_stderr,none\": 0.026863777740489123\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.07575757575757576,\n \"exact_match_stderr,none\"\
: 0.023119068741795586\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.017857142857142856,\n \"exact_match_stderr,none\"\
: 0.007928503387888855\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.08441558441558442,\n \"exact_match_stderr,none\": 0.022475781231866967\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.20207253886010362,\n \"exact_match_stderr,none\"\
: 0.028979089794296756\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.02962962962962963,\n \"exact_match_stderr,none\": 0.014648038602753809\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.37450132978723405,\n \"acc_stderr,none\": 0.004412543644646609\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.42328042328042326,\n\
\ \"acc_norm_stderr,none\": 0.01773739598653491,\n \"alias\": \" -\
\ leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.3671875,\n \"acc_norm_stderr,none\": 0.030186403889489913\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Delta-Vector/Control-8B-V1.1
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_navigate
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_snarks
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_gpqa_extended
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_gpqa_main
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_ifeval
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_ifeval_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_mmlu_pro
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_musr_object_placements
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-25T22-59-39.146282.jsonl'
- config_name: Delta-Vector__Control-8B-V1.1__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_25T22_59_39.146282
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T22-59-39.146282.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-25T22-59-39.146282.jsonl'
---
# Dataset Card for Evaluation run of Delta-Vector/Control-8B-V1.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Delta-Vector/Control-8B-V1.1](https://huggingface.co/Delta-Vector/Control-8B-V1.1)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/Delta-Vector__Control-8B-V1.1-details",
name="Delta-Vector__Control-8B-V1.1__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
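To discover which per-task configurations are available before loading one, a minimal sketch (assuming the standard `datasets` utilities, as in the example above) is:
```python
# Minimal sketch: list the per-task configurations of this details dataset
# and load the "latest" split of one of them.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/Delta-Vector__Control-8B-V1.1-details"

# One configuration per evaluated task (e.g. ...__leaderboard_gpqa_main).
configs = get_dataset_config_names(repo)
print(configs)

# Load the latest results for the first configuration in the list.
data = load_dataset(repo, name=configs[0], split="latest")
```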
## Latest results
These are the [latest results from run 2024-11-25T22-59-39.146282](https://huggingface.co/datasets/open-llm-leaderboard/Delta-Vector__Control-8B-V1.1-details/blob/main/Delta-Vector__Control-8B-V1.1/results_2024-11-25T22-59-39.146282.json). Note that there might be results for other tasks in the repo if successive evaluations didn't cover the same tasks; you can find each one in its results file and in the "latest" split of the corresponding configuration.
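If you prefer to work with the raw aggregated file rather than the per-sample configurations, the sketch below downloads that JSON with `huggingface_hub`. This is a minimal sketch assuming the `huggingface_hub` client is installed; the filename is taken from the link above and will change for newer runs, and the full file may contain more sections than the excerpt reproduced here.
```python
# Minimal sketch: fetch the raw aggregated results file for this run.
# Assumes `huggingface_hub` is installed; the filename matches the link above
# and will differ for newer runs.
import json

from huggingface_hub import hf_hub_download

results_path = hf_hub_download(
    repo_id="open-llm-leaderboard/Delta-Vector__Control-8B-V1.1-details",
    filename="Delta-Vector__Control-8B-V1.1/results_2024-11-25T22-59-39.146282.json",
    repo_type="dataset",
)

with open(results_path) as f:
    results = json.load(f)

# Inspect the top-level sections of the file before drilling into specific scores.
print(sorted(results.keys()))
```
The aggregated scores from that run are reproduced below: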
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.5434380776340111,
"prompt_level_loose_acc_stderr,none": 0.021435222545538937,
"inst_level_loose_acc,none": 0.6438848920863309,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.5194085027726433,
"prompt_level_strict_acc_stderr,none": 0.021500357879025087,
"acc,none": 0.37450132978723405,
"acc_stderr,none": 0.004412543644646609,
"inst_level_strict_acc,none": 0.6199040767386091,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.12462235649546828,
"exact_match_stderr,none": 0.008700069808646044,
"acc_norm,none": 0.4607601504734726,
"acc_norm_stderr,none": 0.005405961420738536,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4974830758548863,
"acc_norm_stderr,none": 0.00626564851381343,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.832,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5775401069518716,
"acc_norm_stderr,none": 0.0362182402075336
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.392,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.44,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4383561643835616,
"acc_norm_stderr,none": 0.04120596186613957
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6629213483146067,
"acc_norm_stderr,none": 0.03553120966481325
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.2,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.22,
"acc_norm_stderr,none": 0.026251792824605793
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3070469798657718,
"acc_norm_stderr,none": 0.013370986728911079,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3383838383838384,
"acc_norm_stderr,none": 0.033711241426263
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.30036630036630035,
"acc_norm_stderr,none": 0.019636438043304946
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3013392857142857,
"acc_norm_stderr,none": 0.021702375698545707
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.5194085027726433,
"prompt_level_strict_acc_stderr,none": 0.021500357879025083,
"inst_level_strict_acc,none": 0.6199040767386091,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.5434380776340111,
"prompt_level_loose_acc_stderr,none": 0.021435222545538937,
"inst_level_loose_acc,none": 0.6438848920863309,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.12462235649546828,
"exact_match_stderr,none": 0.008700069808646044,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.2671009771986971,
"exact_match_stderr,none": 0.025292927347085815
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0975609756097561,
"exact_match_stderr,none": 0.026863777740489123
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.07575757575757576,
"exact_match_stderr,none": 0.023119068741795586
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.017857142857142856,
"exact_match_stderr,none": 0.007928503387888855
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.08441558441558442,
"exact_match_stderr,none": 0.022475781231866967
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.20207253886010362,
"exact_match_stderr,none": 0.028979089794296756
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.02962962962962963,
"exact_match_stderr,none": 0.014648038602753809
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.37450132978723405,
"acc_stderr,none": 0.004412543644646609
},
"leaderboard_musr": {
"acc_norm,none": 0.42328042328042326,
"acc_norm_stderr,none": 0.01773739598653491,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.3671875,
"acc_norm_stderr,none": 0.030186403889489913
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.5434380776340111,
"prompt_level_loose_acc_stderr,none": 0.021435222545538937,
"inst_level_loose_acc,none": 0.6438848920863309,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.5194085027726433,
"prompt_level_strict_acc_stderr,none": 0.021500357879025087,
"acc,none": 0.37450132978723405,
"acc_stderr,none": 0.004412543644646609,
"inst_level_strict_acc,none": 0.6199040767386091,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.12462235649546828,
"exact_match_stderr,none": 0.008700069808646044,
"acc_norm,none": 0.4607601504734726,
"acc_norm_stderr,none": 0.005405961420738536,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4974830758548863,
"acc_norm_stderr,none": 0.00626564851381343,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.832,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5775401069518716,
"acc_norm_stderr,none": 0.0362182402075336
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.392,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.368,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.44,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4383561643835616,
"acc_norm_stderr,none": 0.04120596186613957
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6629213483146067,
"acc_norm_stderr,none": 0.03553120966481325
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.2,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.22,
"acc_norm_stderr,none": 0.026251792824605793
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3070469798657718,
"acc_norm_stderr,none": 0.013370986728911079,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3383838383838384,
"acc_norm_stderr,none": 0.033711241426263
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.30036630036630035,
"acc_norm_stderr,none": 0.019636438043304946
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3013392857142857,
"acc_norm_stderr,none": 0.021702375698545707
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.5194085027726433,
"prompt_level_strict_acc_stderr,none": 0.021500357879025083,
"inst_level_strict_acc,none": 0.6199040767386091,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.5434380776340111,
"prompt_level_loose_acc_stderr,none": 0.021435222545538937,
"inst_level_loose_acc,none": 0.6438848920863309,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.12462235649546828,
"exact_match_stderr,none": 0.008700069808646044,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.2671009771986971,
"exact_match_stderr,none": 0.025292927347085815
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0975609756097561,
"exact_match_stderr,none": 0.026863777740489123
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.07575757575757576,
"exact_match_stderr,none": 0.023119068741795586
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.017857142857142856,
"exact_match_stderr,none": 0.007928503387888855
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.08441558441558442,
"exact_match_stderr,none": 0.022475781231866967
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.20207253886010362,
"exact_match_stderr,none": 0.028979089794296756
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.02962962962962963,
"exact_match_stderr,none": 0.014648038602753809
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.37450132978723405,
"acc_stderr,none": 0.004412543644646609
},
"leaderboard_musr": {
"acc_norm,none": 0.42328042328042326,
"acc_norm_stderr,none": 0.01773739598653491,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.3671875,
"acc_norm_stderr,none": 0.030186403889489913
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
NaniDAO/nanipilled | NaniDAO | "2024-11-25T23:59:55Z" | 0 | 0 | [
"task_categories:text-generation",
"language:en",
"language:ja",
"license:agpl-3.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | "2024-11-25T22:59:51Z" | ---
license: agpl-3.0
task_categories:
- text-generation
language:
- en
- ja
pretty_name: '@z0r0zzz Tweets Dataset'
size_categories:
- 1K<n<10K
--- |
SAVE0x0/x_dataset_218 | SAVE0x0 | "2024-11-25T23:13:41Z" | 0 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2024-11-25T23:02:01Z" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** SAVE0x0/x_dataset_218
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 0
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs; a minimal usage sketch follows the examples below.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
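As a concrete illustration, the sketch below scores a handful of tweets with an off-the-shelf sentiment model. It assumes a `train` split and uses the default `transformers` sentiment pipeline, neither of which is prescribed by this card; swap in whichever model and task fit your needs.
```python
# Minimal sentiment-analysis sketch over the `text` field.
# Assumptions: a `train` split exists and the default pipeline model is acceptable.
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("SAVE0x0/x_dataset_218", split="train")
sentiment = pipeline("sentiment-analysis")

for row in ds.select(range(5)):                 # score only a few rows for the demo
    result = sentiment(row["text"][:512])[0]    # truncate very long texts
    print(result["label"], round(result["score"], 3), row["text"][:60])
```
Any of the other tasks listed above can be approached the same way by substituting a suitable model or feature pipeline.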
### Languages
Primary language: English. Because the data is collected in a decentralized way, other languages may also appear.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
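One straightforward approach, sketched below, is to split on a cutoff date. This assumes the `datetime` strings are ISO-like and therefore sort lexicographically; the cutoff value is hypothetical and should be adjusted to the snapshot's actual date range.
```python
# Minimal sketch of a user-defined, time-based split.
# Assumption: `datetime` values are ISO-formatted strings, so string comparison orders them.
from datasets import load_dataset

ds = load_dataset("SAVE0x0/x_dataset_218", split="train")
cutoff = "2024-09-15"                                    # hypothetical cutoff date
train_part = ds.filter(lambda row: row["datetime"] < cutoff)
eval_part = ds.filter(lambda row: row["datetime"] >= cutoff)
print(len(train_part), "rows before the cutoff,", len(eval_part), "after")
```
For a purely proportional split, sorting by `datetime` and calling `Dataset.train_test_split(shuffle=False)` is an alternative.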
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. Use of this dataset is also subject to the X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{SAVE0x02024datauniversex_dataset_218,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={SAVE0x0},
year={2024},
url={https://huggingface.co/datasets/SAVE0x0/x_dataset_218},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 92585
- **Date Range:** 2024-08-27 to 2024-09-26
- **Last Updated:** 2024-11-25
### Data Distribution
- Tweets with hashtags: 99.99%
- Tweets without hashtags: 0.01%
### Top 10 Hashtags
For full statistics, please refer to the `x_stats.json` file in the repository; a short download sketch follows the table below.
| Rank | Item | Percentage |
|------|------|------------|
| 1 | #bitcoin | 19.10% |
| 2 | #btc | 14.45% |
| 3 | #crypto | 9.54% |
| 4 | #bitcointechnology | 7.03% |
| 5 | #defi | 4.59% |
| 6 | #xrp | 4.00% |
| 7 | #cryptocurrency | 2.77% |
| 8 | #binance | 2.30% |
| 9 | #nft | 1.76% |
| 10 | #eth | 1.56% |
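For programmatic access, the sketch below downloads the statistics file referenced above. It assumes `x_stats.json` sits at the repository root; adjust the filename if the layout differs.
```python
# Minimal sketch: fetch and inspect the full statistics file.
import json

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="SAVE0x0/x_dataset_218",
    filename="x_stats.json",        # assumed to live at the repo root
    repo_type="dataset",
)
with open(path) as f:
    stats = json.load(f)
print(list(stats)[:10])             # peek at the top-level keys
```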
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2024-11-25 | 92585 | 92585 |
|
mayk00/maykel_dataset | mayk00 | "2024-11-25T23:20:46Z" | 0 | 0 | [
"task_categories:text-classification",
"language:es",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"text-classification"
] | "2024-11-25T23:05:17Z" | ---
license: apache-2.0
task_categories:
- text-classification
language:
- es
tags:
- code
pretty_name: bonito
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: user_view_type
dtype: string
- name: labels
list:
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: id
dtype: int64
- name: name
dtype: string
- name: node_id
dtype: string
- name: url
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: float64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: user_view_type
dtype: string
- name: assignees
list:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: user_view_type
dtype: string
- name: milestone
struct:
- name: closed_at
dtype: 'null'
- name: closed_issues
dtype: float64
- name: created_at
dtype: timestamp[us]
- name: creator
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: float64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: user_view_type
dtype: string
- name: description
dtype: string
- name: due_on
dtype: 'null'
- name: html_url
dtype: string
- name: id
dtype: float64
- name: labels_url
dtype: string
- name: node_id
dtype: string
- name: number
dtype: float64
- name: open_issues
dtype: float64
- name: state
dtype: string
- name: title
dtype: string
- name: updated_at
dtype: timestamp[us]
- name: url
dtype: string
- name: comments
dtype: int64
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: body
dtype: string
- name: closed_by
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: float64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: user_view_type
dtype: string
- name: reactions
struct:
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: confused
dtype: int64
- name: eyes
dtype: int64
- name: heart
dtype: int64
- name: hooray
dtype: int64
- name: laugh
dtype: int64
- name: rocket
dtype: int64
- name: total_count
dtype: int64
- name: url
dtype: string
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: diff_url
dtype: string
- name: html_url
dtype: string
- name: merged_at
dtype: timestamp[us]
- name: patch_url
dtype: string
- name: url
dtype: string
- name: is_pull_request
dtype: bool
- name: time_to_close
dtype: float64
splits:
- name: train
num_bytes: 3850856
num_examples: 1000
download_size: 952078
dataset_size: 3850856
---
any mmd
yav1327/indian_songs | yav1327 | "2024-11-26T00:24:26Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:09:17Z" | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: song_id
dtype: int64
- name: filename
dtype: string
- name: filepath
dtype:
audio:
sampling_rate: 16000
- name: genre_id
dtype: int64
- name: genre
dtype: string
splits:
- name: train
num_bytes: 891808566.0
num_examples: 355
download_size: 890883460
dataset_size: 891808566.0
---
|
yanisTiky/twitter-dataset | yanisTiky | "2024-11-25T23:15:23Z" | 0 | 0 | [
"task_categories:text-classification",
"language:en",
"license:cc0-1.0",
"size_categories:n<1K",
"region:us",
"not-for-all-audiences"
] | [
"text-classification"
] | "2024-11-25T23:13:18Z" | ---
license: cc0-1.0
task_categories:
- text-classification
language:
- en
tags:
- not-for-all-audiences
pretty_name: twitter
size_categories:
- n<1K
--- |
neoneye/simon-arc-combine-v191 | neoneye | "2024-11-25T23:19:39Z" | 0 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-25T23:17:55Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) combined datasets version 191
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
A combination of multiple datasets.
Datasets: `dataset_solve_color.jsonl`, `dataset_solve_rotate.jsonl`, `dataset_solve_translate.jsonl`.
# Version 2
Datasets: `dataset_solve_color.jsonl`, `dataset_solve_rotate.jsonl`, `dataset_solve_translate.jsonl`.
# Version 3
Datasets: `dataset_solve_color.jsonl`, `dataset_solve_rotate.jsonl`, `dataset_solve_translate.jsonl`.
# Version 4
Added a shared dataset name for all these datasets: `SIMON-SOLVE-V1`. There may be higher version numbers in the future.
My hypothesis: with a version number in the dataset name, it may be easier to unlearn incorrect training data.
Datasets: `dataset_solve_color.jsonl`, `dataset_solve_rotate.jsonl`, `dataset_solve_translate.jsonl`.
# Version 5
Different random seed.
# Version 6
Using `SIMON-SOLVE-V1` everywhere. Remove the `SIMON-SOLVE-COLOR`, `SIMON-SOLVE-ROTATE`, `SIMON-SOLVE-TRANSLATE`.
# Version 7
Using `SIMON-SOLVE-V1` everywhere.
# Version 8
Same settings. Different seed as usual.
# Version 9
Switching from context length 256 to context length 512.
Increasing the image sizes so the prompt length stays below 512.
`dataset_solve_color`, image size: 1-13.
`dataset_solve_rotate`, image size: 1-9.
`dataset_solve_translate`, image size: 3-9.
# Version 10
Same settings. Different seed as usual.
# Version 11
Same settings. Different seed as usual.
# Version 12
Added 1 more pair to the examples. Now it's 2-4 examples. Previously it was 2-3 examples.
# Version 13
Same settings. Different seed as usual.
# Version 14
Same settings. Different seed as usual.
# Version 15
Same settings. Different seed as usual.
# Version 16
Added `Predict the output image.`
Disabled prediction of rows.
Disabled prediction of height.
# Version 17
Same settings. Different seed as usual.
Using the `DatasetGenerator` and the `DatasetItemListBuilder`.
# Version 18
Added datasets.
Datasets:
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_cellular_automaton.jsonl` - added.
- `dataset_shape.jsonl` - added.
# Version 19
Added dataset.
Datasets:
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_cellular_automaton.jsonl`
- `dataset_shape.jsonl`
- `dataset_image.jsonl` - added.
# Version 20
Bigger images.
# Version 21
Added dataset. Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_shape.jsonl`
- `dataset_mass.jsonl` - added.
# Version 22
Added dataset.
Datasets:
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_cellular_automaton.jsonl`
- `dataset_shape.jsonl`
- `dataset_image.jsonl`
- `dataset_mass.jsonl`
- `dataset_histogram.jsonl` - added.
Bigger image sizes.
Number of rows=200k. Was previously 100k rows.
# Version 23
`dataset_mass.jsonl`: increased to `max_mass=5`.
# Version 24
`dataset_mass.jsonl`: increased to `max_mass=6`.
# Version 25
Different seed.
# Version 26
`dataset_mass.jsonl`: increased to `max_mass=25`.
Different seed.
# Version 27
Different seed.
# Version 28
Different seed.
# Version 29
Different seed.
# Version 30
Different seed.
# Version 31
Different seed.
# Version 32
Different seed.
# Version 33
Disabled some datasets.
Datasets:
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_mass.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_translate.jsonl` - disabled
- `dataset_cellular_automaton.jsonl`
# Version 34
Enabled all datasets.
# Version 35
Regenerated all datasets with new random seeds.
# Version 36
Added dataset `dataset_scale.jsonl`.
Disabled some dataset.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
# Version 37
Enabled all datasets
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
# Version 38
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - added
# Version 39
Regenerated all datasets with new random seeds.
# Version 40
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl` - added
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 41
Regenerated all datasets with new random seeds.
# Version 42
Added dataset. Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl` - added
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 43
Enabled all datasets.
# Version 44
Regenerated all datasets with new random seeds.
# Version 45
Extended the `dataset_shape.jsonl` with these new `PixelConnectivity` types: `CORNER4`, `LR2`, `TB2`, `TLBR2`, `TRBL2`.
Hopefully this makes the model better at making sense of diagonal structures, which it currently handles poorly.
# Version 46
Regenerated all datasets with new random seeds.
# Version 47
Added dataset. Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl` - added
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 48
Enabled all datasets.
# Version 49
Bigger `max_mass`. From 6 to 8.
# Version 50
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl` - added
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 51
Regenerated all datasets with new random seeds.
# Version 52
Regenerated all datasets with new random seeds.
# Version 53
Regenerated all datasets with new random seeds.
# Version 54
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_erotion.jsonl` - added
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 55
Added dataset. Disabled most datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl` - added
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl` - disabled
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_scale.jsonl` - disabled
- `dataset_solve_symmetry.jsonl` - disabled
- `dataset_solve_translate.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 56
Enabled all datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 57
Regenerated all datasets with new random seeds.
# Version 58
Disabled most datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 59
Added new datasets.
Disabled most datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl` - added
- `dataset_solve_fractal.jsonl` - added
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 60
Incremented random seed
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 61
Enabled all datasets.
More padding inside the `dataset_solve_fractal.jsonl`.
# Version 62
All datasets still enabled.
Turning up the parameters for `dataset_solve_fractal.jsonl`:
- `scale_input` from 3 to 4.
- `scale_output` from 3 to 4.
- `max_image_size` from 3 to 4.
- `max_pad_count` from 4 to 5.
# Version 63
Disabled several datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl`
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl` - disabled
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_scale.jsonl` - disabled
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 64
Added dataset.
Increased the number of rows in the jsonl file from 200k to 300k.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_outline.jsonl` - added
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 65
Different random seed.
# Version 66
Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl`
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl` - disabled
- `dataset_solve_erosion.jsonl` - disabled
- `dataset_solve_fractal.jsonl` - disabled
- `dataset_solve_outline.jsonl` - disabled
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_scale.jsonl` - disabled
- `dataset_solve_symmetry.jsonl` - disabled
- `dataset_solve_translate.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 67
Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl` - enabled
- `dataset_solve_compress.jsonl` - enabled
- `dataset_solve_erosion.jsonl` - enabled
- `dataset_solve_fractal.jsonl` - enabled
- `dataset_solve_outline.jsonl` - enabled
- `dataset_solve_rotate.jsonl` - enabled
- `dataset_solve_scale.jsonl` - enabled
- `dataset_solve_symmetry.jsonl` - enabled
- `dataset_solve_translate.jsonl` - enabled
- `dataset_symmetry.jsonl`
# Version 68
Enabled all datasets.
# Version 69
Different random seed.
# Version 70
Different random seed.
# Version 71
Different random seed.
# Version 72
Different random seed.
# Version 73
Different random seed.
# Version 74
Major update to `dataset_solve_symmetry.jsonl`.
# Version 75
Different random seed.
# Version 76
Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 77
Enabled all datasets.
# Version 78
Major update to `dataset_solve_symmetry.jsonl`.
# Version 79
Different random seed.
# Version 80
Different random seed.
# Version 81
Different random seed.
# Version 82
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl` - added
- `dataset_symmetry.jsonl`
# Version 83
Disabled datasets that don't solve ARC puzzles.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 84
Added dataset `dataset_solve_grid.jsonl`.
Disabled datasets that don't solve ARC puzzles.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl` - added
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 85
Disabled datasets that don't solve ARC puzzles.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 86
Enabled all datasets.
# Version 87
Disabled datasets that don't solve ARC puzzles.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 88
Added dataset `dataset_solve_probecolor.jsonl` with all directions enabled.
Disabled datasets that don't solve ARC puzzles.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 89
Enabled all datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 90
Disabled some of the datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl` - disabled
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl` - disabled
- `dataset_solve_outline.jsonl` - disabled
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl` - disabled
- `dataset_solve_zindex.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 91
Added dataset.
Enabled all datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_mass.jsonl` - added
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 92
Different random seed.
# Version 93
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl` - added
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 94
Added dataset.
Disabled datasets that don't solve ARC tasks.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl` - added
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 95
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl` - added
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 96
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl` - major update.
- `dataset_symmetry.jsonl`
# Version 97
Disabled the first half of the datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_bool.jsonl` - disabled
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl` - disabled
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 98
Disabled the second half of the datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl` - disabled
- `dataset_solve_erosion.jsonl` - disabled
- `dataset_solve_fractal.jsonl` - disabled
- `dataset_solve_grid.jsonl` - disabled
- `dataset_solve_half.jsonl` - disabled
- `dataset_solve_mass.jsonl` - disabled
- `dataset_solve_outline.jsonl` - disabled
- `dataset_solve_probecolor.jsonl` - disabled
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_scale.jsonl` - disabled
- `dataset_solve_symmetry.jsonl` - disabled
- `dataset_solve_translate.jsonl` - disabled
- `dataset_solve_zindex.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 99
Disabled 1/4 of the datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_bool.jsonl` - disabled
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl` - disabled
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl` - disabled
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_scale.jsonl` - disabled
- `dataset_solve_symmetry.jsonl` - disabled
- `dataset_solve_translate.jsonl` - disabled
- `dataset_solve_zindex.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 100
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl` - added
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 101
Disabled the non-solving datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 102
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl` - added
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 103
Different random seed.
# Version 104
Disabled the non-solving datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 105
Major update to `dataset_solve_scale.jsonl` with scaling down noisy images.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl` - scale down noisy images
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 106
Different random seed.
# Version 107
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_ray.jsonl` - added
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 108
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_flip.jsonl` - added
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_ray.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 109
Different random seed.
# Version 110
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_flip.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_halfplane.jsonl` - added
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_ray.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 111
Different random seed.
# Version 112
Different random seed.
# Version 113
Different random seed.
# Version 114
Major update to `dataset_solve_mass.jsonl`, so it now includes `mass_compare_adjacent_rows` and `mass_compare_adjacent_columns`.
# Version 115
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_flip.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_gravity.jsonl` - added
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_halfplane.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_ray.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 116
Hypothesis: if I train with a smaller dataset, will it converge faster?
Reduced the number of rows in this dataset from 300k to 10k.
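As a rough sketch of how such a subsample could be drawn (the file names, seed, and exact procedure are assumptions, not what was actually used):
```python
import json
import random

# Assumption: the combined dataset is a JSON Lines file with one record per line.
INPUT_PATH = "data_full.jsonl"   # hypothetical input file
OUTPUT_PATH = "data.jsonl"       # hypothetical output file
TARGET_ROWS = 10_000

with open(INPUT_PATH, "r", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f if line.strip()]

random.seed(42)  # any fixed seed; later versions simply regenerate with a new seed
sample = random.sample(rows, k=min(TARGET_ROWS, len(rows)))

with open(OUTPUT_PATH, "w", encoding="utf-8") as f:
    for row in sample:
        f.write(json.dumps(row) + "\n")
```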
# Version 117
Interestingly, 10k rows seem to work fine for training the model.
Picked new random rows.
# Version 118
Still going with 10k rows.
Picked new random rows.
# Version 119
Still going with 10k rows.
Picked new random rows.
# Version 120
Switched to 20k rows.
# Version 121
Still going with 20k rows.
Picked new random rows.
# Version 122
20k rows.
Added `dataset_solve_reverse.jsonl`.
# Version 123
Doubled the number of rows to 40k rows.
# Version 124
Set row count to 100k rows.
Major update to `dataset_solve_gravity.jsonl`.
# Version 125
Row count: 100k rows.
# Version 126
Row count: 20k rows.
Only these datasets are enabled:
```txt
dataset_solve_bool.jsonl
dataset_solve_boundingbox.jsonl
dataset_solve_color.jsonl
dataset_solve_compress.jsonl
dataset_solve_edge.jsonl
dataset_solve_erosion.jsonl
dataset_solve_flip.jsonl
dataset_solve_fractal.jsonl
dataset_solve_gravity.jsonl
dataset_solve_grid.jsonl
dataset_solve_half.jsonl
dataset_solve_halfplane.jsonl
dataset_solve_mask.jsonl
dataset_solve_mass.jsonl
dataset_solve_outline.jsonl
dataset_solve_probecolor.jsonl
dataset_solve_ray.jsonl
dataset_solve_reverse.jsonl
dataset_solve_rotate.jsonl
dataset_solve_scale.jsonl
dataset_solve_symmetry.jsonl
dataset_solve_translate.jsonl
dataset_solve_zindex.jsonl
```
# Version 127
Row count: 20k rows.
Only these datasets are enabled:
```txt
dataset_solve_scale.jsonl
dataset_solve_symmetry.jsonl
dataset_solve_translate.jsonl
dataset_solve_zindex.jsonl
```
# Version 128
Row count: 20k rows.
Only these datasets are enabled:
```txt
dataset_solve_probecolor.jsonl
dataset_solve_ray.jsonl
dataset_solve_reverse.jsonl
dataset_solve_rotate.jsonl
```
# Version 129
Row count: 20k rows.
Only these datasets are enabled:
```txt
dataset_solve_gravity.jsonl
dataset_solve_grid.jsonl
dataset_solve_half.jsonl
dataset_solve_halfplane.jsonl
dataset_solve_mask.jsonl
dataset_solve_mass.jsonl
dataset_solve_outline.jsonl
```
# Version 130
Row count: 20k rows.
Only these datasets are enabled:
```txt
dataset_solve_bool.jsonl
dataset_solve_boundingbox.jsonl
dataset_solve_color.jsonl
dataset_solve_compress.jsonl
dataset_solve_edge.jsonl
dataset_solve_erosion.jsonl
dataset_solve_flip.jsonl
dataset_solve_fractal.jsonl
```
# Version 131
Switched back to 300k rows.
Enabled all the datasets.
# Version 132
Random seed.
# Version 133
Removed the rows that are too long to fit within a 512-token context length.
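A rough sketch of that kind of length filter; the field names and the whitespace-based length measure are assumptions, since the card does not state how prompt length is counted:
```python
import json

MAX_CONTEXT = 512
INPUT_PATH = "data_unfiltered.jsonl"   # hypothetical file names
OUTPUT_PATH = "data.jsonl"

def token_count(text: str) -> int:
    # Placeholder length measure. The real filter presumably uses the
    # model's own tokenizer; whitespace splitting is only an approximation.
    return len(text.split())

kept = 0
with open(INPUT_PATH, "r", encoding="utf-8") as src, \
     open(OUTPUT_PATH, "w", encoding="utf-8") as dst:
    for line in src:
        if not line.strip():
            continue
        row = json.loads(line)
        # Assumption: each row stores its prompt/response text in string fields.
        total = sum(token_count(str(v)) for v in row.values() if isinstance(v, str))
        if total <= MAX_CONTEXT:
            dst.write(line)
            kept += 1

print(f"kept {kept} rows within the {MAX_CONTEXT}-token budget")
```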
# Version 134
Random seed.
# Version 135
Random seed.
# Version 136
Major update to the `dataset_solve_gravity.jsonl` file.
# Version 137
Added dataset `dataset_solve_skew.jsonl`.
# Version 138
Disabled several datasets.
```txt
# 'dataset_cellular_automaton.jsonl',
# 'dataset_dilation.jsonl',
# 'dataset_erosion.jsonl',
# 'dataset_histogram.jsonl',
# 'dataset_image.jsonl',
# 'dataset_image_pair.jsonl',
# 'dataset_mass.jsonl',
# 'dataset_scale.jsonl',
# 'dataset_shape.jsonl',
# 'dataset_solve_bool.jsonl',
'dataset_solve_boundingbox.jsonl',
'dataset_solve_color.jsonl',
'dataset_solve_compress.jsonl',
'dataset_solve_edge.jsonl',
'dataset_solve_erosion.jsonl',
'dataset_solve_flip.jsonl',
'dataset_solve_fractal.jsonl',
'dataset_solve_gravity.jsonl',
'dataset_solve_grid.jsonl',
'dataset_solve_half.jsonl',
# 'dataset_solve_halfplane.jsonl',
'dataset_solve_mask.jsonl',
'dataset_solve_mass.jsonl',
'dataset_solve_outline.jsonl',
'dataset_solve_probecolor.jsonl',
# 'dataset_solve_ray.jsonl',
# 'dataset_solve_reverse.jsonl',
'dataset_solve_rotate.jsonl',
'dataset_solve_scale.jsonl',
# 'dataset_solve_skew.jsonl',
'dataset_solve_symmetry.jsonl',
'dataset_solve_translate.jsonl',
'dataset_solve_zindex.jsonl',
# 'dataset_symmetry.jsonl',
```
# Version 139
Disabled several datasets.
```txt
'dataset_cellular_automaton.jsonl',
'dataset_dilation.jsonl',
'dataset_erosion.jsonl',
'dataset_histogram.jsonl',
'dataset_image.jsonl',
'dataset_image_pair.jsonl',
'dataset_mass.jsonl',
'dataset_scale.jsonl',
'dataset_shape.jsonl',
'dataset_solve_bool.jsonl',
# 'dataset_solve_boundingbox.jsonl',
# 'dataset_solve_color.jsonl',
# 'dataset_solve_compress.jsonl',
# 'dataset_solve_edge.jsonl',
# 'dataset_solve_erosion.jsonl',
# 'dataset_solve_flip.jsonl',
# 'dataset_solve_fractal.jsonl',
# 'dataset_solve_gravity.jsonl',
# 'dataset_solve_grid.jsonl',
# 'dataset_solve_half.jsonl',
'dataset_solve_halfplane.jsonl',
# 'dataset_solve_mask.jsonl',
# 'dataset_solve_mass.jsonl',
# 'dataset_solve_outline.jsonl',
# 'dataset_solve_probecolor.jsonl',
'dataset_solve_ray.jsonl',
'dataset_solve_reverse.jsonl',
# 'dataset_solve_rotate.jsonl',
# 'dataset_solve_scale.jsonl',
'dataset_solve_skew.jsonl',
# 'dataset_solve_symmetry.jsonl',
# 'dataset_solve_translate.jsonl',
# 'dataset_solve_zindex.jsonl',
'dataset_symmetry.jsonl',
```
# Version 140
Enabled all datasets.
Added new dataset: `dataset_solve_cross.jsonl`.
# Version 141
Switched to 30k rows.
Disabled several datasets.
```txt
# 'dataset_cellular_automaton.jsonl',
# 'dataset_dilation.jsonl',
# 'dataset_erosion.jsonl',
# 'dataset_histogram.jsonl',
# 'dataset_image.jsonl',
# 'dataset_image_pair.jsonl',
# 'dataset_mass.jsonl',
# 'dataset_scale.jsonl',
# 'dataset_shape.jsonl',
# 'dataset_solve_bool.jsonl',
'dataset_solve_boundingbox.jsonl',
'dataset_solve_color.jsonl',
'dataset_solve_compress.jsonl',
# 'dataset_solve_cross.jsonl',
'dataset_solve_edge.jsonl',
'dataset_solve_erosion.jsonl',
'dataset_solve_flip.jsonl',
'dataset_solve_fractal.jsonl',
# 'dataset_solve_gravity.jsonl',
'dataset_solve_grid.jsonl',
'dataset_solve_half.jsonl',
# 'dataset_solve_halfplane.jsonl',
'dataset_solve_mask.jsonl',
'dataset_solve_mass.jsonl',
'dataset_solve_outline.jsonl',
'dataset_solve_probecolor.jsonl',
'dataset_solve_ray.jsonl',
# 'dataset_solve_reverse.jsonl',
'dataset_solve_rotate.jsonl',
'dataset_solve_scale.jsonl',
'dataset_solve_skew.jsonl',
'dataset_solve_symmetry.jsonl',
'dataset_solve_translate.jsonl',
# 'dataset_solve_zindex.jsonl',
# 'dataset_symmetry.jsonl',
```
# Version 142
Switched to 300k rows.
Enabled all datasets.
Switched from 512 context to 1024 context.
# Version 143
Bigger images in `dataset_solve_cross.jsonl` and in `dataset_solve_mass.jsonl`.
# Version 144
Major update to `dataset_solve_symmetry.jsonl`.
# Version 145
Added `dataset_solve_span.jsonl`.
# Version 146
Extended `dataset_solve_span.jsonl` with `generate_task_with_template_lines`.
# Version 147
Extended `dataset_solve_span.jsonl` with `generate_task_with_alternate`.
# Version 148
Added `dataset_solve_count.jsonl`.
# Version 149
Randomized.
# Version 150
Upgraded context length for several datasets from 512 to 1024.
# Version 151
Randomized.
# Version 152
Randomized.
# Version 153
Extended `dataset_solve_mask.jsonl` with `generate_task_repair_rectangle_and_crop`.
# Version 154
Extended `dataset_solve_color.jsonl` with `generate_task_replace_color`.
# Version 155
Major update to datasets in the range from `dataset_solve_axxx.jsonl` to `dataset_solve_mask.jsonl`.
Each task now includes an earlier prediction of the output that is to be predicted; it may contain a hint, or it may be garbage to be ignored.
# Version 156
Only 2000 rows.
Only these datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erosion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_symmetry.jsonl`
# Version 157
Only these datasets.
- 'dataset_solve_bool.jsonl',
- 'dataset_solve_boundingbox.jsonl',
- 'dataset_solve_color.jsonl',
- 'dataset_solve_compress.jsonl',
- 'dataset_solve_count.jsonl',
- 'dataset_solve_cross.jsonl',
- 'dataset_solve_edge.jsonl',
- 'dataset_solve_erosion.jsonl',
- 'dataset_solve_flip.jsonl',
- 'dataset_solve_fractal.jsonl',
- 'dataset_solve_gravity.jsonl',
- 'dataset_solve_grid.jsonl',
- 'dataset_solve_half.jsonl',
- 'dataset_solve_halfplane.jsonl',
- 'dataset_solve_mask.jsonl',
- 'dataset_solve_mass.jsonl',
- 'dataset_solve_outline.jsonl',
- 'dataset_solve_probecolor.jsonl',
- 'dataset_solve_ray.jsonl',
- 'dataset_solve_reverse.jsonl',
- 'dataset_solve_rotate.jsonl',
- 'dataset_solve_scale.jsonl',
- 'dataset_solve_span.jsonl',
- 'dataset_solve_skew.jsonl',
- 'dataset_solve_symmetry.jsonl',
- 'dataset_solve_translate.jsonl',
- 'dataset_solve_zindex.jsonl',
# Version 158
Only these datasets.
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_rectangle.jsonl`
# Version 159
Enabled all the `_solve_` datasets.
# Version 160
Regenerated all the `_solve_` datasets with new seed.
# Version 161
Regenerated all the `_solve_` datasets with new seed.
# Version 162
Replaced the RLE-compressed response with a raw pixel response.
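For illustration, a minimal sketch of the difference between a run-length encoded row and a raw pixel row. This is generic RLE; the project's actual RLE format is not specified here, so treat the encoding below as an assumed example.
```python
from itertools import groupby

def rle_encode_row(row):
    # Collapse consecutive identical pixel values into (value, run_length) pairs.
    return [(value, len(list(run))) for value, run in groupby(row)]

def rle_decode_row(pairs):
    # Expand (value, run_length) pairs back into the raw pixel row.
    return [value for value, count in pairs for _ in range(count)]

raw_row = [0, 0, 0, 5, 5, 7, 7, 7, 7]    # raw pixel response: one value per pixel
compressed = rle_encode_row(raw_row)      # [(0, 3), (5, 2), (7, 4)]
assert rle_decode_row(compressed) == raw_row
```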
# Version 163
Added more generators:
- DatasetSolveCount
- DatasetSolveCross
- DatasetSolveEdge
- DatasetSolveErosion
- DatasetSolveFlip
- DatasetSolveFractal
# Version 164
Increased row count from 1000 to 2000.
# Version 165
Added more generators.
# Version 166
Added more generators.
# Version 167
Added more generators.
# Version 168
Added more generators.
# Version 169
Generated data.
# Version 170
Generated data.
# Version 171
Generated data.
Increased output context length from 256 to 512.
# Version 172
Generated data.
# Version 173
Generated data.
# Version 174
Generated data.
# Version 175
Generated data.
# Version 176
Generated data.
# Version 177
Increased the number of rows from 2000 to 4000.
Generated data.
# Version 178
Generated data.
# Version 179
Generated data.
# Version 180
Generated data.
# Version 181
Generated data.
# Version 182
Generated data.
# Version 183
Generated data.
# Version 184
Generated data.
# Version 185
Generated data.
# Version 186
Generated data.
# Version 187
Generated data.
# Version 188
Generated data.
# Version 189
Added `DatasetSolveDeform` dataset generator.
# Version 190
Generated data.
# Version 191
Generated data.
|
yanisTiky/twitter_dataset_try | yanisTiky | "2024-11-25T23:31:17Z" | 0 | 0 | [
"task_categories:text-classification",
"language:en",
"license:cc0-1.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"text-classification"
] | "2024-11-25T23:18:26Z" | ---
license: cc0-1.0
task_categories:
- text-classification
language:
- en
tags:
- code
size_categories:
- n<1K
--- |
reflection-gen/ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_mining_oj_iter4-pos-bin-reflct | reflection-gen | "2024-11-25T23:20:20Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:20:19Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: test
dtype: string
- name: reflection_generate_0
dtype: string
- name: reflection_generate_0_score
dtype: int64
- name: reflection_traceback_0
dtype: string
- name: reflection_generate_1
dtype: string
- name: reflection_generate_1_score
dtype: int64
- name: reflection_traceback_1
dtype: string
- name: reflection_generate_2
dtype: string
- name: reflection_generate_2_score
dtype: int64
- name: reflection_traceback_2
dtype: string
- name: reflection_generate_3
dtype: string
- name: reflection_generate_3_score
dtype: int64
- name: reflection_traceback_3
dtype: string
- name: average_reflection_score
dtype: float64
- name: chosen_average_reflection_score
dtype: float64
splits:
- name: train
num_bytes: 22082333
num_examples: 2381
download_size: 7955486
dataset_size: 22082333
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_mining_oj_iter4-pos-bin-reflct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
andy4man/judge-brief-agent-hackathon | andy4man | "2024-11-25T23:38:06Z" | 0 | 0 | [
"task_categories:text-generation",
"language:en",
"region:us",
"tech",
"agents"
] | [
"text-generation"
] | "2024-11-25T23:32:08Z" | ---
task_categories:
- text-generation
language:
- en
tags:
- tech
- agents
---
Autonomous AI Agent Hackathon [Event Brief]
SUMMARY
The Autonomous Hackathon will be the first AI-driven hackathon where autonomous agents manage critical functions: creating challenges, judging submissions, and executing payments. Vana, Metamask, and Lit Protocol are confirmed partners, with potential for additional collaborations. This 3-day online event, tentatively scheduled for early December, will push the boundaries of AI autonomy in Web3.
The Autonomous Hackathon is a groundbreaking experiment in the decentralized AI space, where we take a step toward realizing the vision of true Living Knowledge Systems. In this hackathon, we bring to life the concept of autonomous agents that operate independently, not just as tools, but as evolving entities capable of creating challenges, evaluating submissions, and executing payouts—all without human intervention.
This 3-day event is a bold exploration into how decentralized AI can turn static knowledge into a living, breathing system, where data, insights, and decisions flow dynamically through networks of interconnected agents.
By integrating digital twins, co-pilots, and agent-to-agent interactions, participants will have the opportunity to build, test, and optimize these autonomous systems in real-time. This hackathon challenges developers to push the boundaries of decentralized infrastructure—leveraging cryptographic keys, decentralized identifiers (DIDs), and seamless integrations across platforms like Vana, Metamask, and Lit Protocol.
The ultimate goal is to foster a decentralized ecosystem where knowledge continuously evolves, agents collaborate, and autonomous intelligence drives innovation forward, laying the foundation for a new era of decentralized, community-driven intelligence.
Overview
The Autonomous Hackathon is designed to demonstrate the potential of AI agents by allowing them to autonomously handle end-to-end hackathon management.
Key agent-driven tasks include:
Creating and publishing challenges and bounties to Bountycaster
Judging project submissions on set criteria.
Executing bounty payments on behalf of sponsors.
This event will highlight the future of autonomous AI agents in both on-chain and off-chain contexts, focusing on decentralized identity (DIDs), cryptographic security, and real-time AI-driven task execution.
Objectives
Showcase Autonomous Agent Capabilities:
Demonstrate how autonomous agents can independently execute critical tasks in an event setting.
Advance Decentralized AI and Web3:
Engage developers in building tools and applications that enable real-world AI autonomy in blockchain.
Strengthen Key Partnerships:
Collaborate with leading Web3 companies to position Gaia as a pioneer in agent-driven decentralized ecosystems.
Engage and Grow the Community:
Attract a broad range of participants, introducing them to decentralized AI and inspiring contributions to Gaia’s ecosystem.
Event Structure
Phases and details:
Agent Development & Testing
Gaia core engineering and Lit Protocol collaborate to develop and test the three core agents managing hackathon tasks.
Organizer Agent
The Gaia agent is trained on partner developer docs and the theme of the hackathon, and can create unique challenges, bounties, and ideas for what to build. The agent can autonomously post these to the Farcaster (Bountycaster) or Jokerace protocols.
Judge Agent
The Gaia agent is trained on context about which teams are building the most in-demand projects, which are most likely to get traction, and which hit all of the judging criteria for the hackathon. The Judge agent and the Organizer agent will need to gather context from one another.
Perhaps we use community sentiment data from Vana (learned behavior), Gitcoin (public goods data graph), or Jokerace (community votes) to train the agent.
Bounty Payment Agent
The agent is trained on the process of paying out winners. It must work with agents 1 and 2 to understand who won the various challenges and how to gather their KYC details.
Privado.id enables hackathon participants to verify identity with the agents; the Metamask Delegation Toolkit enables autonomous payments to winners.
Qualifications
• Utilize GaiaNet's infrastructure for deploying the agent
◦ https://docs.gaianet.ai/intro/
• Ensure the agent can provide relevant information and recommendations based on user queries
• Provide developer documentation on your process (it does not need to be formal)
• Open-sourced code under the GPL-3.0 license, hosted on a public repository like Github
• API requirements: https://docs.gaianet.ai/user-guide/api-reference?_highlight=api (a usage sketch follows after this list)
• Domain requirements: https://docs.gaianet.ai/node-guide/register?_highlight=domain#select-a-different-domain
• Agent requirements: https://www.gaianet.ai/agents
• Nodes requirements: https://docs.gaianet.ai/node-guide/customize?_highlight=nodes
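To make the deployment requirement above concrete, here is a minimal sketch of querying a deployed agent node. It assumes the node exposes an OpenAI-compatible chat-completions endpoint, as suggested by the linked GaiaNet API reference; the node URL and model name are placeholders, not values from this brief.
```python
import requests

NODE_URL = "https://YOUR-NODE-ID.example.com"  # placeholder; use your own node's domain
payload = {
    "model": "default",                        # placeholder model name
    "messages": [
        {"role": "system", "content": "You are the hackathon organizer agent."},
        {"role": "user", "content": "Draft one challenge idea with a bounty."},
    ],
}

# Assumption: the node serves an OpenAI-style /v1/chat/completions route.
response = requests.post(f"{NODE_URL}/v1/chat/completions", json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```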
Challenge 1: 🏆 “Most Brat Agent” ($10,000)
Description: This challenge is about creating an autonomous agent that pushes boundaries. Whether it’s an agent that challenges social norms, interacts in unexpected ways, or provokes new behaviors, the goal is to build something that’s not only functional but disruptive.
Prize Breakdown:
1st Prize: $2,500
2nd Prize: $1,500
3rd Prize: $1,000
Honorable Mentions (10): $500 each
Examples/Ideas:
• Build an agent that disrupts traditional social media algorithms by curating feeds based on decentralized sentiment analysis.
• Develop a snarky AI bot that autonomously replies to forum discussions or DAO proposals with humorous but insightful commentary.
• An autonomous agent that challenges community votes by providing counter-arguments or unexpected insights in real-time.
Challenge 2: Most Innovative Use of Multiple Agents or Domains ($7,500 Total)
Description: This challenge is for teams that can showcase the collaborative potential of multiple agents or domains working together. Think about how clusters of agents can communicate, share data, or enhance each other’s capabilities to create something truly powerful.
Prize Breakdown:
1st Prize: $2,500
2nd Prize: $1,500
3rd Prize: $1,000
Honorable Mentions (4): $250 each
Examples/Ideas:
• Create a network of agents that work together to automate a complex workflow, such as decentralized finance (DeFi) strategies or cross-chain data analysis.
• Build an AI-powered DAO delegate cluster where different agents represent diverse community interests, optimizing proposal reviews and sentiment analysis.
• Use multiple Gaia domains to deploy an ecosystem of digital twins that enhance user experiences in virtual environments or decentralized apps.
Challenge #3
Description: Teams will create powerful integrations with our featured partners, enhancing the connectivity and functionality of Gaia nodes and domains. The focus here is on creating impactful plugins or tools that extend our network’s capabilities.
Examples/Ideas:
Integrate Chainlink oracles to bring real-time data into Gaia domains for autonomous trading bots or prediction markets.
Build a plugin that leverages OpenZeppelin contracts for secure smart contract deployment within Gaia nodes.
Use Shutter Network to enable private transaction capabilities for Gaia agents handling sensitive data.
Challenge 4: Best Hack of Autonomous Agent Organizers ($5,000)
Description: Build or enhance the capabilities of AI agents that autonomously manage hackathons or other events. This could involve automating tasks such as creating challenges, managing participant onboarding, judging submissions, or distributing rewards.
Prize Breakdown:
1st Prize: $2,000
2nd Prize: $1,500
3rd Prize: $1,000
Honorable Mentions (2): $250 each
Examples/Ideas:
• Create an agent that designs, schedules, and promotes hackathon challenges autonomously.
• Build an AI judge that reviews submissions based on pre-set criteria, leveraging both on-chain and off-chain data.
• Automate bounty payments for hackathon winners using a decentralized payment system integrated with wallet verification.
Judging Criteria
Innovation & Creativity
• Novel use of AI agents or decentralized infrastructure
• Pushes the boundaries of autonomous systems and integrations
Technical Execution
• Robust, well-implemented code; effective use of Gaia nodes and partner APIs
• Follows best practices in AI training, smart contracts, and security
Impact & Usefulness
• Solves real-world problems or enhances decentralized AI adoption
• Drives value for the community or enhances decentralized governance
• Long term vision and commercialization opportunities
User Experience & Design
• Clear, intuitive interfaces for users or developers
• Smooth interaction flows and accessible documentation
Integration with Gaia & Partners
• Effective use of Gaia domains and nodes
• Leverages partners’ tools (e.g., Coinbase SDK, Privado.id) for added functionality
Presentation & Documentation
• Clear explanation of the project, solution, and how it was built
• Well-documented code with setup instructions and a demo video |
Xtest/function_dataset_with_ast_processed_dda22312d45646 | Xtest | "2024-11-26T00:17:39Z" | 0 | 0 | [
"region:us"
] | null | "2024-11-25T23:40:43Z" | ---
dataset_info:
features:
- name: function_all
dtype: string
- name: function_name
dtype: string
- name: function_body
dtype: string
- name: function_all_unknow
dtype: string
- name: ast
struct:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
sequence: 'null'
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: 'null'
- name: line
dtype: int64
- name: spelling
dtype: string
- name: Modified Code
dtype: string
- name: S-Expression of Original Code
dtype: string
- name: S-Expression of Modified Code
dtype: string
- name: AST Image Original
dtype: string
- name: AST Image Modified
dtype: string
- name: Root Node
dtype: string
splits:
- name: train
num_bytes: 664918
num_examples: 10
- name: test
num_bytes: 828637
num_examples: 10
download_size: 544090
dataset_size: 1493555
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
violetxi/NUMINA-V2-Clean-Blocks-9500_10000-200_500 | violetxi | "2024-11-26T00:13:53Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:40:56Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: source
dtype: string
- name: is_correct
dtype: bool
- name: target_answer
dtype: string
- name: solution
dtype: string
- name: solution_steps
dtype: string
- name: attempts
dtype: string
- name: model_answer
dtype: string
splits:
- name: train
num_bytes: 199183529
num_examples: 21584
download_size: 22157042
dataset_size: 199183529
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_9a66e0b5-6aea-4bb0-bb71-db977ddf04f5 | argilla-internal-testing | "2024-11-25T23:48:34Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:48:33Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_2c4ce2a2-e9a4-4d1f-9de6-a6014db67eba | argilla-internal-testing | "2024-11-25T23:48:34Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:48:33Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_521567c8-3c3f-410e-b088-b546f0103198 | argilla-internal-testing | "2024-11-25T23:48:35Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:48:33Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_0778ef48-99e3-4524-b215-43ffed7e339b | argilla-internal-testing | "2024-11-25T23:48:36Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:48:35Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_09f6fbc2-50a5-447e-beb1-88240a47cff1 | argilla-internal-testing | "2024-11-25T23:48:36Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:48:36Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
haorandai/Nov_Clean_Banana_UF_1samples_with1constraints | haorandai | "2024-11-25T23:49:14Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:49:13Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 186531.0
num_examples: 2
download_size: 188246
dataset_size: 186531.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
violetxi/NUMINA-V2-Clean-Blocks-9500_10000-0_200 | violetxi | "2024-11-26T00:50:42Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:50:52Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: source
dtype: string
- name: is_correct
dtype: bool
- name: target_answer
dtype: string
- name: solution
dtype: string
- name: solution_steps
dtype: string
- name: attempts
dtype: string
- name: model_answer
dtype: string
splits:
- name: train
num_bytes: 292928191
num_examples: 39824
download_size: 31969507
dataset_size: 292928191
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
haorandai/Nov_Clean_Mice_UF_1samples_with1constraints | haorandai | "2024-11-25T23:51:55Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T23:51:54Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 195776.0
num_examples: 2
download_size: 197474
dataset_size: 195776.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
neoneye/simon-arc-combine-v192 | neoneye | "2024-11-25T23:56:02Z" | 0 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-25T23:54:15Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) combined datasets version 192
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
A combination of multiple datasets.
Datasets: `dataset_solve_color.jsonl`, `dataset_solve_rotate.jsonl`, `dataset_solve_translate.jsonl`.
# Version 2
Datasets: `dataset_solve_color.jsonl`, `dataset_solve_rotate.jsonl`, `dataset_solve_translate.jsonl`.
# Version 3
Datasets: `dataset_solve_color.jsonl`, `dataset_solve_rotate.jsonl`, `dataset_solve_translate.jsonl`.
# Version 4
Added a shared dataset name for all these datasets: `SIMON-SOLVE-V1`. There may be higher version numbers in the future.
My hypothesis: with a version number in the dataset name, it may be easier to unlearn incorrect training data.
Datasets: `dataset_solve_color.jsonl`, `dataset_solve_rotate.jsonl`, `dataset_solve_translate.jsonl`.
# Version 5
Different random seed.
# Version 6
Using `SIMON-SOLVE-V1` everywhere. Removed `SIMON-SOLVE-COLOR`, `SIMON-SOLVE-ROTATE`, and `SIMON-SOLVE-TRANSLATE`.
# Version 7
Using `SIMON-SOLVE-V1` everywhere.
# Version 8
Same settings. Different seed as usual.
# Version 9
Switching from context length 256 to context length 512.
Increasing the image sizes so the prompt length stays below 512.
`dataset_solve_color`, image size: 1-13.
`dataset_solve_rotate`, image size: 1-9.
`dataset_solve_translate`, image size: 3-9.
# Version 10
Same settings. Different seed as usual.
# Version 11
Same settings. Different seed as usual.
# Version 12
Added one more pair to the examples: now 2-4 examples, previously 2-3 examples.
# Version 13
Same settings. Different seed as usual.
# Version 14
Same settings. Different seed as usual.
# Version 15
Same settings. Different seed as usual.
# Version 16
Added `Predict the output image.`
Disabled prediction of rows.
Disabled prediction of height.
# Version 17
Same settings. Different seed as usual.
Using the `DatasetGenerator` and the `DatasetItemListBuilder`.
# Version 18
Added datasets.
Datasets:
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_cellular_automaton.jsonl` - added.
- `dataset_shape.jsonl` - added.
# Version 19
Added dataset.
Datasets:
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_cellular_automaton.jsonl`
- `dataset_shape.jsonl`
- `dataset_image.jsonl` - added.
# Version 20
Bigger images.
# Version 21
Added dataset. Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_shape.jsonl`
- `dataset_mass.jsonl` - added.
# Version 22
Added dataset.
Datasets:
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_cellular_automaton.jsonl`
- `dataset_shape.jsonl`
- `dataset_image.jsonl`
- `dataset_mass.jsonl`
- `dataset_histogram.jsonl` - added.
Bigger image sizes.
Number of rows=200k. Was previously 100k rows.
# Version 23
`dataset_mass.jsonl`: increased to `max_mass=5`.
# Version 24
`dataset_mass.jsonl`: increased to `max_mass=6`.
# Version 25
Different seed.
# Version 26
`dataset_mass.jsonl`: increased to `max_mass=25`.
Different seed.
# Version 27
Different seed.
# Version 28
Different seed.
# Version 29
Different seed.
# Version 30
Different seed.
# Version 31
Different seed.
# Version 32
Different seed.
# Version 33
Disabled some datasets.
Datasets:
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_mass.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_translate.jsonl` - disabled
- `dataset_cellular_automaton.jsonl`
# Version 34
Enabled all datasets.
# Version 35
Regenerated all datasets with new random seeds.
# Version 36
Added dataset `dataset_scale.jsonl`.
Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
# Version 37
Enabled all datasets
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
# Version 38
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - added
# Version 39
Regenerated all datasets with new random seeds.
# Version 40
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl` - added
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 41
Regenerated all datasets with new random seeds.
# Version 42
Added dataset. Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl` - added
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 43
Enabled all datasets.
# Version 44
Regenerated all datasets with new random seeds.
# Version 45
Extended the `dataset_shape.jsonl` with these new `PixelConnectivity` types: `CORNER4`, `LR2`, `TB2`, `TLBR2`, `TRBL2`.
Hopefully it makes the model better at making sense of diagonal structures, which is something it is currently terrible at.
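For readers unfamiliar with these modes, here is a sketch of how such connectivity types could map to neighbor offsets when tracing connected shapes. The offset sets below are assumptions inferred from the names (including the `NEAREST4` baseline), not taken from the project's source.
```python
# (dy, dx) neighbor offsets; assumed interpretation of the PixelConnectivity names.
PIXEL_CONNECTIVITY_OFFSETS = {
    "NEAREST4": [(-1, 0), (1, 0), (0, -1), (0, 1)],    # up/down/left/right (assumed baseline)
    "CORNER4":  [(-1, -1), (-1, 1), (1, -1), (1, 1)],  # the four diagonal corners
    "LR2":      [(0, -1), (0, 1)],                     # left/right only
    "TB2":      [(-1, 0), (1, 0)],                     # top/bottom only
    "TLBR2":    [(-1, -1), (1, 1)],                    # top-left/bottom-right diagonal
    "TRBL2":    [(-1, 1), (1, -1)],                    # top-right/bottom-left diagonal
}

def neighbors(y, x, height, width, connectivity):
    """Yield in-bounds neighbor coordinates for the given connectivity mode."""
    for dy, dx in PIXEL_CONNECTIVITY_OFFSETS[connectivity]:
        ny, nx = y + dy, x + dx
        if 0 <= ny < height and 0 <= nx < width:
            yield ny, nx

# Example: diagonal neighbors of the center cell in a 3x3 grid.
print(list(neighbors(1, 1, 3, 3, "TLBR2")))  # [(0, 0), (2, 2)]
```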
# Version 46
Regenerated all datasets with new random seeds.
# Version 47
Added dataset. Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl` - added
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 48
Enabled all datasets.
# Version 49
Bigger `max_mass`. From 6 to 8.
# Version 50
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl` - added
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 51
Regenerated all datasets with new random seeds.
# Version 52
Regenerated all datasets with new random seeds.
# Version 53
Regenerated all datasets with new random seeds.
# Version 54
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_erotion.jsonl` - added
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 55
Added dataset. Disabled most datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl` - added
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl` - disabled
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_scale.jsonl` - disabled
- `dataset_solve_symmetry.jsonl` - disabled
- `dataset_solve_translate.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 56
Enabled all datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 57
Regenerated all datasets with new random seeds.
# Version 58
Disabled most datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 59
Added new datasets.
Disabled most datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl` - added
- `dataset_solve_fractal.jsonl` - added
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 60
Incremented random seed
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 61
Enabled all datasets.
More padding inside the `dataset_solve_fractal.jsonl`.
# Version 62
All datasets still enabled.
Turning up the parameters for `dataset_solve_fractal.jsonl`:
- `scale_input` from 3 to 4.
- `scale_output` from 3 to 4.
- `max_image_size` from 3 to 4.
- `max_pad_count` from 4 to 5.
# Version 63
Disabled several datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl`
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl` - disabled
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_scale.jsonl` - disabled
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 64
Added dataset.
Increased the number of rows in the jsonl file from 200k to 300k.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_outline.jsonl` - added
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl`
# Version 65
random seed.
# Version 66
Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl`
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl` - disabled
- `dataset_solve_erosion.jsonl` - disabled
- `dataset_solve_fractal.jsonl` - disabled
- `dataset_solve_outline.jsonl` - disabled
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_scale.jsonl` - disabled
- `dataset_solve_symmetry.jsonl` - disabled
- `dataset_solve_translate.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 67
Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl` - enabled
- `dataset_solve_compress.jsonl` - enabled
- `dataset_solve_erosion.jsonl` - enabled
- `dataset_solve_fractal.jsonl` - enabled
- `dataset_solve_outline.jsonl` - enabled
- `dataset_solve_rotate.jsonl` - enabled
- `dataset_solve_scale.jsonl` - enabled
- `dataset_solve_symmetry.jsonl` - enabled
- `dataset_solve_translate.jsonl` - enabled
- `dataset_symmetry.jsonl`
# Version 68
Enabled all datasets.
# Version 69
Different random seed.
# Version 70
Different random seed.
# Version 71
Different random seed.
# Version 72
Different random seed.
# Version 73
Different random seed.
# Version 74
Major update to `dataset_solve_symmetry.jsonl`.
# Version 75
Different random seed.
# Version 76
Disabled some datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 77
Enabled all datasets.
# Version 78
Major update to `dataset_solve_symmetry.jsonl`.
# Version 79
Different random seed.
# Version 80
Different random seed.
# Version 81
Different random seed.
# Version 82
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl` - added
- `dataset_symmetry.jsonl`
# Version 83
Disabled datasets that don't solve ARC puzzles.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 84
Added dataset `dataset_solve_grid.jsonl`.
Disabled datasets that don't solve ARC puzzles.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl` - added
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 85
Disabled datasets that don't solve ARC puzzles.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 86
Enabled all datasets.
# Version 87
Disabled datasets that don't solve ARC puzzles.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 88
Added dataset `dataset_solve_probecolor.jsonl` with all directions enabled.
Disabled datasets that don't solve ARC puzzles.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 89
Enabled all datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 90
Disabled some of the datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl` - disabled
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl` - disabled
- `dataset_solve_outline.jsonl` - disabled
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl` - disabled
- `dataset_solve_zindex.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 91
Added dataset.
Enabled all datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_mass.jsonl` - added
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 92
Different random seed.
# Version 93
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl` - added
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 94
Added dataset.
Disabled datasets that don't solve ARC tasks.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl` - added
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 95
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl` - added
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 96
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl` - major update.
- `dataset_symmetry.jsonl`
# Version 97
Disabled the first half of the datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_bool.jsonl` - disabled
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl` - disabled
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 98
Disabled the last half of the datasets.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl` - disabled
- `dataset_solve_erosion.jsonl` - disabled
- `dataset_solve_fractal.jsonl` - disabled
- `dataset_solve_grid.jsonl` - disabled
- `dataset_solve_half.jsonl` - disabled
- `dataset_solve_mass.jsonl` - disabled
- `dataset_solve_outline.jsonl` - disabled
- `dataset_solve_probecolor.jsonl` - disabled
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_scale.jsonl` - disabled
- `dataset_solve_symmetry.jsonl` - disabled
- `dataset_solve_translate.jsonl` - disabled
- `dataset_solve_zindex.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 99
Disabled all but roughly a quarter of the datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_bool.jsonl` - disabled
- `dataset_solve_color.jsonl` - disabled
- `dataset_solve_compress.jsonl` - disabled
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl` - disabled
- `dataset_solve_rotate.jsonl` - disabled
- `dataset_solve_scale.jsonl` - disabled
- `dataset_solve_symmetry.jsonl` - disabled
- `dataset_solve_translate.jsonl` - disabled
- `dataset_solve_zindex.jsonl` - disabled
- `dataset_symmetry.jsonl` - disabled
# Version 100
Added dataset.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl` - added
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 101
Disabled the non-solving datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_bool.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 102
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl` - added
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 103
Different random seed.
# Version 104
Disabled the non-solving datasets.
Datasets:
- `dataset_cellular_automaton.jsonl` - disabled
- `dataset_dilation.jsonl` - disabled
- `dataset_erotion.jsonl` - disabled
- `dataset_histogram.jsonl` - disabled
- `dataset_image.jsonl` - disabled
- `dataset_image_pair.jsonl` - disabled
- `dataset_mass.jsonl` - disabled
- `dataset_scale.jsonl` - disabled
- `dataset_shape.jsonl` - disabled
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl` - disabled
# Version 105
Major update to `dataset_solve_scale.jsonl` with scaling down noisy images.
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl` - scale down noisy images
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 106
Different random seed.
# Version 107
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_ray.jsonl` - added
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 108
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_flip.jsonl` - added
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_ray.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 109
Different random seed.
# Version 110
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_flip.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_halfplane.jsonl` - added
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_ray.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 111
Different random seed.
# Version 112
Different random seed.
# Version 113
Different random seed.
# Version 114
Major update to `dataset_solve_mass.jsonl`, so it now includes `mass_compare_adjacent_rows` and `mass_compare_adjacent_columns`.
# Version 115
Added dataset
Datasets:
- `dataset_cellular_automaton.jsonl`
- `dataset_dilation.jsonl`
- `dataset_erotion.jsonl`
- `dataset_histogram.jsonl`
- `dataset_image.jsonl`
- `dataset_image_pair.jsonl`
- `dataset_mass.jsonl`
- `dataset_scale.jsonl`
- `dataset_shape.jsonl`
- `dataset_solve_bool.jsonl`
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_color.jsonl`
- `dataset_solve_compress.jsonl`
- `dataset_solve_edge.jsonl`
- `dataset_solve_erosion.jsonl`
- `dataset_solve_flip.jsonl`
- `dataset_solve_fractal.jsonl`
- `dataset_solve_gravity.jsonl` - added
- `dataset_solve_grid.jsonl`
- `dataset_solve_half.jsonl`
- `dataset_solve_halfplane.jsonl`
- `dataset_solve_mask.jsonl`
- `dataset_solve_mass.jsonl`
- `dataset_solve_outline.jsonl`
- `dataset_solve_probecolor.jsonl`
- `dataset_solve_ray.jsonl`
- `dataset_solve_rotate.jsonl`
- `dataset_solve_scale.jsonl`
- `dataset_solve_symmetry.jsonl`
- `dataset_solve_translate.jsonl`
- `dataset_solve_zindex.jsonl`
- `dataset_symmetry.jsonl`
# Version 116
Hypothesis: will the model converge faster if it is trained on a smaller dataset?
Reduced the number of rows in this dataset from 300k to 10k.
# Version 117
Interestingly, 10k rows seem to work fine for model training.
Picked new random rows.
# Version 118
Still going with 10k rows.
Picked new random rows.
# Version 119
Still going with 10k rows.
Picked new random rows.
# Version 120
Switched to 20k rows.
# Version 121
Still going with 20k rows.
Picked new random rows.
# Version 122
20k rows.
Added `dataset_solve_reverse.jsonl`.
# Version 123
Doubled the number of rows to 40k rows.
# Version 124
Set row count to 100k rows.
Major update to `dataset_solve_gravity.jsonl`.
# Version 125
Row count: 100k rows.
# Version 126
Row count: 20k rows.
Only these datasets are enabled (a sketch of how such a mix could be assembled follows the list):
```txt
dataset_solve_bool.jsonl
dataset_solve_boundingbox.jsonl
dataset_solve_color.jsonl
dataset_solve_compress.jsonl
dataset_solve_edge.jsonl
dataset_solve_erosion.jsonl
dataset_solve_flip.jsonl
dataset_solve_fractal.jsonl
dataset_solve_gravity.jsonl
dataset_solve_grid.jsonl
dataset_solve_half.jsonl
dataset_solve_halfplane.jsonl
dataset_solve_mask.jsonl
dataset_solve_mass.jsonl
dataset_solve_outline.jsonl
dataset_solve_probecolor.jsonl
dataset_solve_ray.jsonl
dataset_solve_reverse.jsonl
dataset_solve_rotate.jsonl
dataset_solve_scale.jsonl
dataset_solve_symmetry.jsonl
dataset_solve_translate.jsonl
dataset_solve_zindex.jsonl
```
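As referenced above, here is a rough sketch of how such a 20k-row mix could be assembled. This is not the actual build script; the file paths, the output name, and the sampling details are assumptions.
```python
import json
import random

# Hypothetical sketch: concatenate the enabled JSONL files, shuffle, and truncate.
ENABLED = [
    "dataset_solve_bool.jsonl",
    "dataset_solve_boundingbox.jsonl",
    "dataset_solve_color.jsonl",
    # ... the remaining enabled files from the list above
    "dataset_solve_zindex.jsonl",
]
TARGET_ROWS = 20_000

random.seed(0)  # changed between versions to pick new random rows

rows = []
for path in ENABLED:
    with open(path) as f:
        rows.extend(json.loads(line) for line in f)

random.shuffle(rows)
mix = rows[:TARGET_ROWS]

with open("dataset_combined.jsonl", "w") as f:
    for row in mix:
        f.write(json.dumps(row) + "\n")
```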
# Version 127
Row count: 20k rows.
Only these datasets are enabled:
```txt
dataset_solve_scale.jsonl
dataset_solve_symmetry.jsonl
dataset_solve_translate.jsonl
dataset_solve_zindex.jsonl
```
# Version 128
Row count: 20k rows.
Only these datasets are enabled:
```txt
dataset_solve_probecolor.jsonl
dataset_solve_ray.jsonl
dataset_solve_reverse.jsonl
dataset_solve_rotate.jsonl
```
# Version 129
Row count: 20k rows.
Only these datasets are enabled:
```txt
dataset_solve_gravity.jsonl
dataset_solve_grid.jsonl
dataset_solve_half.jsonl
dataset_solve_halfplane.jsonl
dataset_solve_mask.jsonl
dataset_solve_mass.jsonl
dataset_solve_outline.jsonl
```
# Version 130
Row count: 20k rows.
Only these datasets are enabled:
```txt
dataset_solve_bool.jsonl
dataset_solve_boundingbox.jsonl
dataset_solve_color.jsonl
dataset_solve_compress.jsonl
dataset_solve_edge.jsonl
dataset_solve_erosion.jsonl
dataset_solve_flip.jsonl
dataset_solve_fractal.jsonl
```
# Version 131
Switched back to 300k rows.
Enabled all the datasets.
# Version 132
Random seed.
# Version 133
Removed the rows that are too long to fit within a 512-token context.
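A minimal sketch of this length filter, assuming each JSONL row stores its text under hypothetical field names ("instruction", "output") and that length is measured with a stand-in tokenizer:
```python
import json
from transformers import AutoTokenizer

# Hypothetical sketch: drop rows whose prompt plus response exceed 512 tokens.
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in tokenizer, an assumption
MAX_TOKENS = 512

kept = []
with open("dataset_combined.jsonl") as f:
    for line in f:
        row = json.loads(line)
        n_tokens = len(tokenizer(row["instruction"] + row["output"])["input_ids"])
        if n_tokens <= MAX_TOKENS:
            kept.append(row)

with open("dataset_combined_512.jsonl", "w") as f:
    for row in kept:
        f.write(json.dumps(row) + "\n")
```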
# Version 134
Random seed.
# Version 135
Random seed.
# Version 136
Major update to the `dataset_solve_gravity.jsonl` file.
# Version 137
Added dataset `dataset_solve_skew.jsonl`.
# Version 138
Disabled several datasets.
```txt
# 'dataset_cellular_automaton.jsonl',
# 'dataset_dilation.jsonl',
# 'dataset_erosion.jsonl',
# 'dataset_histogram.jsonl',
# 'dataset_image.jsonl',
# 'dataset_image_pair.jsonl',
# 'dataset_mass.jsonl',
# 'dataset_scale.jsonl',
# 'dataset_shape.jsonl',
# 'dataset_solve_bool.jsonl',
'dataset_solve_boundingbox.jsonl',
'dataset_solve_color.jsonl',
'dataset_solve_compress.jsonl',
'dataset_solve_edge.jsonl',
'dataset_solve_erosion.jsonl',
'dataset_solve_flip.jsonl',
'dataset_solve_fractal.jsonl',
'dataset_solve_gravity.jsonl',
'dataset_solve_grid.jsonl',
'dataset_solve_half.jsonl',
# 'dataset_solve_halfplane.jsonl',
'dataset_solve_mask.jsonl',
'dataset_solve_mass.jsonl',
'dataset_solve_outline.jsonl',
'dataset_solve_probecolor.jsonl',
# 'dataset_solve_ray.jsonl',
# 'dataset_solve_reverse.jsonl',
'dataset_solve_rotate.jsonl',
'dataset_solve_scale.jsonl',
# 'dataset_solve_skew.jsonl',
'dataset_solve_symmetry.jsonl',
'dataset_solve_translate.jsonl',
'dataset_solve_zindex.jsonl',
# 'dataset_symmetry.jsonl',
```
# Version 139
Disabled several datasets.
```txt
'dataset_cellular_automaton.jsonl',
'dataset_dilation.jsonl',
'dataset_erosion.jsonl',
'dataset_histogram.jsonl',
'dataset_image.jsonl',
'dataset_image_pair.jsonl',
'dataset_mass.jsonl',
'dataset_scale.jsonl',
'dataset_shape.jsonl',
'dataset_solve_bool.jsonl',
# 'dataset_solve_boundingbox.jsonl',
# 'dataset_solve_color.jsonl',
# 'dataset_solve_compress.jsonl',
# 'dataset_solve_edge.jsonl',
# 'dataset_solve_erosion.jsonl',
# 'dataset_solve_flip.jsonl',
# 'dataset_solve_fractal.jsonl',
# 'dataset_solve_gravity.jsonl',
# 'dataset_solve_grid.jsonl',
# 'dataset_solve_half.jsonl',
'dataset_solve_halfplane.jsonl',
# 'dataset_solve_mask.jsonl',
# 'dataset_solve_mass.jsonl',
# 'dataset_solve_outline.jsonl',
# 'dataset_solve_probecolor.jsonl',
'dataset_solve_ray.jsonl',
'dataset_solve_reverse.jsonl',
# 'dataset_solve_rotate.jsonl',
# 'dataset_solve_scale.jsonl',
'dataset_solve_skew.jsonl',
# 'dataset_solve_symmetry.jsonl',
# 'dataset_solve_translate.jsonl',
# 'dataset_solve_zindex.jsonl',
'dataset_symmetry.jsonl',
```
# Version 140
Enabled all datasets.
Added new dataset: `dataset_solve_cross.jsonl`.
# Version 141
Switched to 30k rows.
Disabled several datasets.
```txt
# 'dataset_cellular_automaton.jsonl',
# 'dataset_dilation.jsonl',
# 'dataset_erosion.jsonl',
# 'dataset_histogram.jsonl',
# 'dataset_image.jsonl',
# 'dataset_image_pair.jsonl',
# 'dataset_mass.jsonl',
# 'dataset_scale.jsonl',
# 'dataset_shape.jsonl',
# 'dataset_solve_bool.jsonl',
'dataset_solve_boundingbox.jsonl',
'dataset_solve_color.jsonl',
'dataset_solve_compress.jsonl',
# 'dataset_solve_cross.jsonl',
'dataset_solve_edge.jsonl',
'dataset_solve_erosion.jsonl',
'dataset_solve_flip.jsonl',
'dataset_solve_fractal.jsonl',
# 'dataset_solve_gravity.jsonl',
'dataset_solve_grid.jsonl',
'dataset_solve_half.jsonl',
# 'dataset_solve_halfplane.jsonl',
'dataset_solve_mask.jsonl',
'dataset_solve_mass.jsonl',
'dataset_solve_outline.jsonl',
'dataset_solve_probecolor.jsonl',
'dataset_solve_ray.jsonl',
# 'dataset_solve_reverse.jsonl',
'dataset_solve_rotate.jsonl',
'dataset_solve_scale.jsonl',
'dataset_solve_skew.jsonl',
'dataset_solve_symmetry.jsonl',
'dataset_solve_translate.jsonl',
# 'dataset_solve_zindex.jsonl',
# 'dataset_symmetry.jsonl',
```
# Version 142
Switched to 300k rows.
Enabled all datasets.
Switched from 512 context to 1024 context.
# Version 143
Bigger images in `dataset_solve_cross.jsonl` and in `dataset_solve_mass.jsonl`.
# Version 144
Major update to `dataset_solve_symmetry.jsonl`.
# Version 145
Added `dataset_solve_span.jsonl`.
# Version 146
Extended `dataset_solve_span.jsonl` with `generate_task_with_template_lines`.
# Version 147
Extended `dataset_solve_span.jsonl` with `generate_task_with_alternate`.
# Version 148
Added `dataset_solve_count.jsonl`.
# Version 149
Randomized.
# Version 150
Upgraded context length for several datasets from 512 to 1024.
# Version 151
Randomized.
# Version 152
Randomized.
# Version 153
Extended `dataset_solve_mask.jsonl` with `generate_task_repair_rectangle_and_crop`.
# Version 154
Extended `dataset_solve_color.jsonl` with `generate_task_replace_color`.
# Version 155
Major update to datasets in the range from `dataset_solve_axxx.jsonl` to `dataset_solve_mask.jsonl`.
Each task now includes an earlier prediction of the output that is to be predicted. It may contain a useful hint, or it may be garbage that should be ignored.
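A hypothetical illustration of this augmentation (the field names, the 50/50 split, and the garbage-generation routine are assumptions, not the dataset's actual generator):
```python
import random

def add_earlier_prediction(example: dict, rng: random.Random) -> dict:
    """Attach a candidate for the output to be predicted: either a hint or garbage."""
    target = example["output_grid"]  # assumed field name
    if rng.random() < 0.5:
        candidate = [row[:] for row in target]  # useful hint: the real answer
    else:
        # garbage candidate: shuffle the target's cells so it must be ignored
        flat = [value for row in target for value in row]
        rng.shuffle(flat)
        width = len(target[0])
        candidate = [flat[i:i + width] for i in range(0, len(flat), width)]
    example["earlier_prediction"] = candidate
    return example

example = {"output_grid": [[0, 5, 5], [0, 0, 5]]}
print(add_earlier_prediction(example, random.Random(0)))
```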
# Version 156
Only 2000 rows.
Only these datasets.
- 'dataset_cellular_automaton.jsonl',
- 'dataset_dilation.jsonl',
- 'dataset_erosion.jsonl',
- 'dataset_histogram.jsonl',
- 'dataset_image.jsonl',
- 'dataset_image_pair.jsonl',
- 'dataset_mass.jsonl',
- 'dataset_scale.jsonl',
- 'dataset_shape.jsonl',
- 'dataset_symmetry.jsonl',
# Version 157
Only these datasets.
- 'dataset_solve_bool.jsonl',
- 'dataset_solve_boundingbox.jsonl',
- 'dataset_solve_color.jsonl',
- 'dataset_solve_compress.jsonl',
- 'dataset_solve_count.jsonl',
- 'dataset_solve_cross.jsonl',
- 'dataset_solve_edge.jsonl',
- 'dataset_solve_erosion.jsonl',
- 'dataset_solve_flip.jsonl',
- 'dataset_solve_fractal.jsonl',
- 'dataset_solve_gravity.jsonl',
- 'dataset_solve_grid.jsonl',
- 'dataset_solve_half.jsonl',
- 'dataset_solve_halfplane.jsonl',
- 'dataset_solve_mask.jsonl',
- 'dataset_solve_mass.jsonl',
- 'dataset_solve_outline.jsonl',
- 'dataset_solve_probecolor.jsonl',
- 'dataset_solve_ray.jsonl',
- 'dataset_solve_reverse.jsonl',
- 'dataset_solve_rotate.jsonl',
- 'dataset_solve_scale.jsonl',
- 'dataset_solve_span.jsonl',
- 'dataset_solve_skew.jsonl',
- 'dataset_solve_symmetry.jsonl',
- 'dataset_solve_translate.jsonl',
- 'dataset_solve_zindex.jsonl',
# Version 158
Only these datasets.
- `dataset_solve_boundingbox.jsonl`
- `dataset_solve_rectangle.jsonl`
# Version 159
Enabled all the `_solve_` datasets.
# Version 160
Regenerated all the `_solve_` datasets with new seed.
# Version 161
Regenerated all the `_solve_` datasets with new seed.
# Version 162
Replaced the RLE-compressed response with a raw pixel response.
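To illustrate the difference, a sketch of the two response styles for a single grid row (the exact serialization used by the dataset is not documented here):
```python
from itertools import groupby

row = [0, 0, 0, 5, 5, 0]

# Run-length encoded form: (value, run_length) pairs.
rle = [(value, len(list(group))) for value, group in groupby(row)]
print(rle)  # [(0, 3), (5, 2), (0, 1)]

# Raw pixel form: one symbol per cell, no compression.
raw = "".join(str(value) for value in row)
print(raw)  # 000550
```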
# Version 163
Added more generators:
- DatasetSolveCount
- DatasetSolveCross
- DatasetSolveEdge
- DatasetSolveErosion
- DatasetSolveFlip
- DatasetSolveFractal
# Version 164
Increased row count from 1000 to 2000.
# Version 165
Added more generators.
# Version 166
Added more generators.
# Version 167
Added more generators.
# Version 168
Added more generators.
# Version 169
Generated data.
# Version 170
Generated data.
# Version 171
Generated data.
Increased output context length from 256 to 512.
# Version 172
Generated data.
# Version 173
Generated data.
# Version 174
Generated data.
# Version 175
Generated data.
# Version 176
Generated data.
# Version 177
Increased the number of rows from 2000 to 4000.
Generated data.
# Version 178
Generated data.
# Version 179
Generated data.
# Version 180
Generated data.
# Version 181
Generated data.
# Version 182
Generated data.
# Version 183
Generated data.
# Version 184
Generated data.
# Version 185
Generated data.
# Version 186
Generated data.
# Version 187
Generated data.
# Version 188
Generated data.
# Version 189
Added `DatasetSolveDeform` dataset generator.
# Version 190
Generated data.
# Version 191
Generated data.
# Version 192
Generated data.
|
haorandai/Nov_Clean_Bicycle_UF_1samples_with1constraints | haorandai | "2024-11-26T00:00:12Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:00:11Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 223189.0
num_examples: 2
download_size: 224928
dataset_size: 223189.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rpharale/fictitious_articles | rpharale | "2024-11-26T00:04:42Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:04:41Z" | ---
dataset_info:
features:
- name: topic
dtype: string
- name: article
dtype: string
splits:
- name: train
num_bytes: 113532
num_examples: 20
download_size: 65681
dataset_size: 113532
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
haorandai/Nov_Clean_Banana_Orange_1samples_with1constraints | haorandai | "2024-11-26T00:12:45Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:12:44Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 66697.0
num_examples: 2
download_size: 70036
dataset_size: 66697.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MuNo-LLM/ko-self-instruct-safety-10k | MuNo-LLM | "2024-11-26T00:20:18Z" | 0 | 0 | [
"task_categories:text-generation",
"language:ko",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | "2024-11-26T00:16:13Z" | ---
license: apache-2.0
task_categories:
- text-generation
language:
- ko
size_categories:
- 1K<n<10K
--- |
hashim19/ASVspoofLD | hashim19 | "2024-11-26T01:28:55Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-26T00:16:58Z" | ---
license: mit
---
========================================================================================================
ASVspoof Laundered Database: This database is based on ASVspoof 2019 logical access (LA) eval partition.
The ASVspoof 2019 LA eval database is passed through five different types of additive noise at three
different signal-to-noise ratio (SNR) levels, three types of reverberation, six different re-compression rates, four
different resampling factors, and one type of low-pass filtering, amounting to a total of 1388.22
hours of audio data.
Dataset Creators: Hashim Ali, Surya Subramani, Shefali Sudhir, Raksha Varahamurthy and Hafiz Malik
Dataset Contact: Hashim Ali alhashim@umich.edu
Date Written: 05/29/2024
*** WARNING ***:
The 'flac' folder contains over 2 million (2065873) files. Open this folder at your own risk.
========================================================================================================
1. Directory Structure
_______________________
--> ASVspoofLauneredDatabase
--> flac
--> protocols
--> Readme.txt
2. Description of the audio files
_________________________________
The directory flac contains audio files for each type of laundering attack, namely Noise_Addition, Reverberation, Recompression, Resampling, and Filtering. Each laundering
attack (i) has different parameters (j), which are described below in the protocols section.
3. Description of the protocols
_______________________________
The directory protocols contains five protocol files, one for each laundering attack.
Each line of the protocol is formatted as follows (a short parsing sketch follows the column descriptions):
SPEAKER_ID AUDIO_FILE_NAME SYSTEM_ID KEY Laundering_Type Laundering_Param
1) SPEAKER_ID: LA_****, a 4-digit speaker ID
2) AUDIO_FILE_NAME: LA_****, name of the audio file
3) SYSTEM_ID: ID of the speech spoofing system (A01 - A19); for bonafide speech, SYSTEM_ID is set to '-'
4) KEY: 'bonafide' for genuine speech, or, 'spoof' for spoofing speech
5) Laundering_Type Type of laundering attack. One of 'Noise_Addition', 'Reverberation', 'Recompression', 'Resampling', and 'Filtering'
6) Laundering_Param Parameters for the laundering attack. For example, in the case of Noise_Addition, it can be 'babble_0' where babble is the type of
additive noise and 0 is the SNR level at which the babble noise is added to the audio signal.
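For example, a protocol line could be parsed like this (a sketch assuming whitespace-separated columns; the example line is fabricated for illustration):
```python
def parse_protocol_line(line: str) -> dict:
    """Split one whitespace-separated protocol line into its six fields."""
    speaker_id, audio_file, system_id, key, laundering_type, laundering_param = line.split()
    return {
        "SPEAKER_ID": speaker_id,
        "AUDIO_FILE_NAME": audio_file,
        "SYSTEM_ID": system_id,          # '-' for bonafide speech
        "KEY": key,                      # 'bonafide' or 'spoof'
        "Laundering_Type": laundering_type,
        "Laundering_Param": laundering_param,
    }

# Fabricated example line for illustration only.
print(parse_protocol_line("LA_0001 LA_E_1000001 A07 spoof Noise_Addition babble_10"))
```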
Note that:
1) the first four columns are the same as in ASVspoof2019_LA_cm_protocols (refer to the ASVspoof2019 database), where the fourth column of the original database
is omitted since it is NOT used for LA.
2) Brief description of the Laundering_Param:
babble_0 babble noise at SNR level of 0
babble_10 babble noise at SNR level of 10
babble_20 babble noise at SNR level of 20
cafe_0 cafe noise at SNR level of 0
cafe_10 cafe noise at SNR level of 10
cafe_20 cafe noise at SNR level of 20
street_0 street noise at SNR level of 0
street_10 street noise at SNR level of 10
street_20 street noise at SNR level of 20
volvo_0 volvo noise at SNR level of 0
volvo_10 volvo noise at SNR level of 10
volvo_20 volvo noise at SNR level of 20
white_0 white noise at SNR level of 0
white_10 white noise at SNR level of 10
white_20 white noise at SNR level of 20
RT_0_3 Reverberation with RT60 = 0.3 sec
RT_0_6 Reverberation with RT60 = 0.6 sec
RT_0_9 Reverberation with RT60 = 0.9 sec
recompression_128k Compression using bit rate of 128 kbit/s
recompression_16k Compression using bit rate of 16 kbit/s
recompression_196k Compression using bit rate of 196 kbit/s
recompression_256k Compression using bit rate of 256 kbit/s
recompression_320k Compression using bit rate of 320 kbit/s
recompression_64k Compression using bit rate of 64 kbit/s
resample_11025 resampling rate of 11025 Hz
resample_22050 resampling rate of 22050 Hz
resample_44100 resampling rate of 44100 Hz
resample_8000 resampling rate of 8000 Hz
lpf_7000 low-pass filtering with a cut-off frequency of 7 kHz |
haorandai/Nov_Clean_Mice_Orange_1samples_with1constraints | haorandai | "2024-11-26T00:17:21Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:17:20Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 114831.0
num_examples: 2
download_size: 118596
dataset_size: 114831.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
haorandai/Nov_Clean_Bicycle_Orange_1samples_with1constraints | haorandai | "2024-11-26T00:18:38Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:18:37Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 96732.0
num_examples: 2
download_size: 100531
dataset_size: 96732.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hongyunjeong/eunguep_sentence-to-label | hongyunjeong | "2024-11-26T00:20:49Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:20:43Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 66099
num_examples: 630
download_size: 15573
dataset_size: 66099
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hongyunjeong/eunguep_sentence-to-label_jsonl | hongyunjeong | "2024-11-26T00:21:13Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:21:05Z" | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 66099
num_examples: 630
download_size: 15573
dataset_size: 66099
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reflection-gen/ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_rl_oj_iter4-bin | reflection-gen | "2024-11-26T00:26:11Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:26:10Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: chosen_probs
dtype: float64
- name: chosen_probs_win
dtype: float64
- name: chosen_probs_lose
dtype: float64
splits:
- name: train
num_bytes: 10019653
num_examples: 3017
download_size: 4259424
dataset_size: 10019653
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_rl_oj_iter4-bin"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reflection-gen/ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_rl_oj_iter4-full_resp_trace | reflection-gen | "2024-11-26T00:26:14Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:26:12Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: test
dtype: string
- name: tag
dtype: string
- name: id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: text_prompt
dtype: string
- name: text_chosen
dtype: string
- name: text_rejected
dtype: string
- name: generate_0
dtype: string
- name: generate_0_score
dtype: int64
- name: traceback_0
dtype: string
- name: generate_1
dtype: string
- name: generate_1_score
dtype: int64
- name: traceback_1
dtype: string
- name: generate_2
dtype: string
- name: generate_2_score
dtype: int64
- name: traceback_2
dtype: string
- name: generate_3
dtype: string
- name: generate_3_score
dtype: int64
- name: traceback_3
dtype: string
- name: probability
sequence:
sequence: float64
- name: rm_scores
sequence: int64
splits:
- name: train
num_bytes: 26204204
num_examples: 3017
download_size: 10264964
dataset_size: 26204204
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_rl_oj_iter4-full_resp_trace"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reflection-gen/ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_rl_oj_iter4-bin_all_pairs | reflection-gen | "2024-11-26T00:26:16Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:26:14Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 21584271
num_examples: 6300
download_size: 5982258
dataset_size: 21584271
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_chat_pos_reflct_rmsprop_iter4_sppo_hard_new_cn_rl_oj_iter4-bin_all_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
styfeng/TinyDialogues | styfeng | "2024-11-26T00:51:50Z" | 0 | 0 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | "2024-11-26T00:27:15Z" | ---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: TinyDialogues
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Xtest/function_dataset_ast_rootnode | Xtest | "2024-11-26T00:29:57Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-26T00:29:55Z" | ---
dataset_info:
features:
- name: function_all
dtype: string
- name: function_name
dtype: string
- name: function_body
dtype: string
- name: function_all_unknow
dtype: string
- name: ast
struct:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
list:
- name: children
sequence: 'null'
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: string
- name: line
dtype: int64
- name: spelling
dtype: string
- name: kind
dtype: string
- name: location
struct:
- name: column
dtype: int64
- name: file
dtype: 'null'
- name: line
dtype: int64
- name: spelling
dtype: string
- name: Modified Code
dtype: string
- name: S-Expression of Original Code
dtype: string
- name: S-Expression of Modified Code
dtype: string
- name: AST Image Original
dtype: string
- name: AST Image Modified
dtype: string
- name: Root Node
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 453798
num_examples: 5
- name: test
num_bytes: 575547
num_examples: 5
download_size: 427298
dataset_size: 1029345
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|