datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---|
justus27/qwq_synthetic_sft_data_math | justus27 | "2024-12-11T21:31:55Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-11T21:29:13Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 50430692
num_examples: 5318
download_size: 19747573
dataset_size: 50430692
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
julietz/korean_paimon22050_2 | julietz | "2024-12-11T22:28:42Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-11T22:28:40Z" | ---
dataset_info:
features:
- name: Audio Path
dtype: string
- name: Sampling Rate
dtype: int64
- name: Waveform Data
dtype: string
- name: Language
dtype: string
- name: Transcription
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 1431249
num_examples: 4611
download_size: 572652
dataset_size: 1431249
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/stackexchange_movies | mlfoundations-dev | "2024-12-23T18:09:03Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T00:26:14Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: completion
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 206385556
num_examples: 50000
download_size: 120003350
dataset_size: 206385556
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen0_run0_llama2-7b_xlsum_doc1000_real64_synt64 | dgambettaphd | "2024-12-12T01:32:39Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T01:32:36Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 635765
num_examples: 1000
download_size: 417522
dataset_size: 635765
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
saketupadhyay/Function_BasicBlock_Features_NIST_Juliet1_3_C_CPP | saketupadhyay | "2024-12-12T03:25:32Z" | 32 | 0 | [
"task_categories:tabular-classification",
"license:mit",
"size_categories:10K<n<100K",
"arxiv:2412.08100",
"region:us",
"software"
] | [
"tabular-classification"
] | "2024-12-12T02:54:13Z" | ---
license: mit
task_categories:
- tabular-classification
tags:
- software
size_categories:
- 10K<n<100K
paper:
- arxiv.org/abs/2412.08100
---
# Function and BasicBlock Binary Classification Features based on NIST Juliet1.3 C/C++
The final Basic Block and Function features are extracted in Semicolon-Separated Values (SSV) format.
To load the features in pandas, use:
```python
import pandas as pd
data = pd.read_csv("FNFeatures.csv", sep=";")
```
Assuming `FNFeatures.csv` is the target feature file.
## Function Features
The generated dataset is a list of functions with various characteristics and a label indicating whether each function is
vulnerable or not. The data is structured into 15 columns, which are described below:
1. **Function ID**: A unique identifier for each function.
2. **Function Name**: The name of the function.
3. **Instructions**: The number of instructions (e.g., Intermediate Instructions) in the function.
4. **BBs** (Basic Blocks): The number of basic blocks in the function. A basic block is a sequence of instructions that
are executed together without any control flow changes.
5. **In-degree**: The number of incoming edges to the function in the call graph, indicating how many other functions
call this one.
6. **Out-degree**: The number of outgoing edges from the function in the call graph, indicating how many other functions
are called by this one.
7. **Num Loops**: The number of loops (e.g., for, while, do-while) present in the function.
8. **Static Allocations**: The number of static memory allocations made by the function.
9. **Dynamic Allocations**: The number of dynamic memory allocations made by the function (e.g., using `malloc`,
`realloc`).
10. **MemOps** (Memory Operations): The number of memory-related operations performed by the function (e.g., reads,
writes).
11. **CondBranches** (Conditional Branches): The number of conditional branches (e.g., if-else statements) in the
function.
12. **UnCondBranches** (Unconditional Branches): The number of unconditional branches (e.g., jumps, returns) in the
function.
13. **DirectCalls**: The number of direct function calls made by the function.
14. **InDirectCalls** (Indirect Calls): The number of indirect function calls made by the function (e.g., through a
pointer or a table).
15. **VULNERABLE**: A binary label indicating whether the function is vulnerable (1) or not (0).
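As a minimal sketch (assuming the feature file has been loaded as `data` with the snippet above, and that the CSV headers match the column names documented here), the binary label can be split from the numeric features for training a classifier:
```python
# Drop identifier columns and keep the binary label separate from the numeric features.
# Column names follow the documentation above; the actual CSV headers may differ slightly.
X = data.drop(columns=["Function ID", "Function Name", "VULNERABLE"])
y = data["VULNERABLE"]
```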
## Basic Block Features
The generated Basic Block dataset is a collection of basic blocks (BBs) from functions with various characteristics and a label
indicating whether each block is vulnerable or not. The data is structured into 13 columns, which are described below:
1. **Block ID**: A unique identifier for each basic block.
2. **Block Name**: The name of the block, with the following structure: `BB_<block #>_<demangled parent function>`
3. **Instructions**: The number of instructions (e.g., assembly code operations) in the basic block.
4. **In-degree**: The number of incoming edges to the basic block in the control flow graph, indicating how many other
blocks lead to this one.
5. **Out-degree**: The number of outgoing edges from the basic block in the control flow graph, indicating how many
other blocks are reachable from this one.
6. **Static Allocations**: The number of static memory allocations made by the basic block.
7. **Dynamic Allocations**: The number of dynamic memory allocations made by the basic block (e.g., using `new`,
`malloc`).
8. **MemOps** (Memory Operations): The number of memory-related operations performed by the basic block (e.g., reads,
writes).
9. **CondBranches** (Conditional Branches): The number of conditional branches (e.g., if-else statements) in the basic
block.
10. **UnCondBranches** (Unconditional Branches): The number of unconditional branches (e.g., jumps, returns) in the
basic block.
11. **DirectCalls**: The number of direct function calls made by the basic block.
12. **InDirectCalls** (Indirect Calls): The number of indirect function calls made by the basic block (e.g., through a
pointer or a table).
13. **VULNERABLE**: A binary label indicating whether the basic block is vulnerable (1) or not (0).
---
### A Note on Branches in Basic Blocks
Conditional branches (CondBranches) and unconditional branches (UnCondBranches) primarily serve as sanity checks and do
not significantly impact the categorization of basic blocks (including them might actually harm accuracy). Let's analyze the
possible values of \( N \) (the number of conditional branches) and \( M \) (the number of unconditional branches).
A basic block can contain at most one conditional branch. A conditional branch is typically used to terminate the block
and transfer control to another location within the code. If there were multiple conditional branches, they would need
to be combined into a single decision point using logical operators, which would not increase the count of separate
conditional branches.
$$
\therefore N \in \{0, 1\}
$$
where \( N \) is either \( 0 \) (no conditional branch) or \( 1 \) (one conditional branch).
Similarly, a basic block can have at most one unconditional branch. An unconditional branch is typically used to exit
the block and jump to another location in the code. If there were multiple unconditional branches, they would be
redundant, as only one of them would be executed.
$$
\therefore M \in \{0, 1\}
$$
where \( M \) is either \( 0 \) (no unconditional branch) or \( 1 \) (one unconditional branch).
If a basic block contains a conditional branch (\( N = 1 \)), it cannot also contain an unconditional branch, so \( M = 0 \):
the control flow out of the block is determined solely by the conditional branch. Conversely, if a basic block contains an
unconditional branch (\( M = 1 \)), it cannot contain a conditional branch, so \( N = 0 \), since the unconditional branch
would override any conditional decision.
Logically -
$$
N \times M = 0
$$
$$
(N = 1) \Rightarrow (M = 0)
$$
$$
(M = 1) \Rightarrow (N = 0)
$$
This means that at most one of \( N \) and \( M \) can be \( 1 \) at any given time: if \( N = 1 \), then \( M \) must be \( 0 \), and vice versa.
We can use this relationship to check the functionality of our BB compiler pass and the sanity of our training dataset, as shown in the sketch below.
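A minimal sketch of such a check, assuming the basic-block features have been loaded into a pandas DataFrame (the file name `BBFeatures.csv` is hypothetical; column names follow the list above):
```python
import pandas as pd

bb = pd.read_csv("BBFeatures.csv", sep=";")  # hypothetical file name, SSV format as above

n = bb["CondBranches"]    # N: conditional branches per block
m = bb["UnCondBranches"]  # M: unconditional branches per block

# Each block has at most one of each kind of branch, and never both.
assert n.isin([0, 1]).all(), "block with more than one conditional branch"
assert m.isin([0, 1]).all(), "block with more than one unconditional branch"
assert (n * m == 0).all(), "block with both a conditional and an unconditional branch"
```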
---
## Cite
If you utilize this project or any portion thereof, please ensure proper citation of the following work:
```text
@misc{upadhyay2024fuzzdistillintelligentfuzzingtarget,
title={FuzzDistill: Intelligent Fuzzing Target Selection using Compile-Time Analysis and Machine Learning},
author={Saket Upadhyay},
year={2024},
eprint={2412.08100},
archivePrefix={arXiv},
primaryClass={cs.SE},
url={https://arxiv.org/abs/2412.08100},
}
```
|
xlzoiolx/Modelo_01 | xlzoiolx | "2024-12-12T04:25:38Z" | 32 | 0 | [
"license:cc",
"region:us"
] | null | "2024-12-12T04:24:35Z" | ---
license: cc
---
|
DT4LM/gp_mr_faster-alzantot_differential_original | DT4LM | "2024-12-12T04:34:48Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T04:34:46Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 46513.11462450593
num_examples: 358
download_size: 33703
dataset_size: 46513.11462450593
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
historyHulk/MoDeTrans | historyHulk | "2024-12-15T09:28:27Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T07:16:48Z" | ---
dataset_info:
features:
- name: filename
dtype: string
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 164401783.625
num_examples: 2043
download_size: 161098467
dataset_size: 164401783.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- |
SeppeV/results_joke_gen_of_mistral_curry_dpo_iter4_1000_jo | SeppeV | "2024-12-12T08:19:20Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T08:19:19Z" | ---
dataset_info:
features:
- name: jokeText
dtype: string
- name: userId
dtype: int64
- name: score
dtype: float32
splits:
- name: train
num_bytes: 87807
num_examples: 125
download_size: 52540
dataset_size: 87807
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ShahadMAlshalawi/VQAv2-Encoder-Violet-Captions | ShahadMAlshalawi | "2024-12-12T09:29:53Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T09:18:29Z" | ---
dataset_info:
features:
- name: metadata
struct:
- name: image_id
dtype: int64
- name: question_id
dtype: int64
- name: question_type
dtype: string
- name: answer_type
dtype: string
- name: image
dtype: image
- name: question
struct:
- name: en
dtype: string
- name: ar
dtype: string
- name: answers
sequence:
- name: en
dtype: string
- name: ar
dtype: string
- name: confidence
dtype: string
- name: id
dtype: int32
- name: multiple_choice_answer
struct:
- name: en
dtype: string
- name: ar
dtype: string
- name: features
sequence:
sequence:
sequence: float32
- name: captions
list:
- name: caption
dtype: string
splits:
- name: validation
num_bytes: 35946803566.0
num_examples: 214354
download_size: 7978460816
dataset_size: 35946803566.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_7da325d6-c13b-48de-b42d-9a43d06126a8 | argilla-internal-testing | "2024-12-12T09:58:09Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T09:58:08Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_1b6e623c-e080-42a6-94b6-404ef25b1a71 | argilla-internal-testing | "2024-12-12T10:06:18Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T10:06:17Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_b7e24593-07a1-4eca-9c47-009394a7d199 | argilla-internal-testing | "2024-12-12T10:34:54Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T10:34:53Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_51ea3487-50d6-4732-a5f2-1ae181099158 | argilla-internal-testing | "2024-12-12T10:35:07Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T10:35:06Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vantral/selkup_me_12.12.2024 | vantral | "2024-12-12T11:06:17Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T11:06:15Z" | ---
dataset_info:
features:
- name: all
struct:
- name: interlinear-text
list:
- name: item
struct:
- name: source
dtype: string
- name: paragraph
list:
- name: item
struct:
- name: speaker
dtype: string
- name: phrase
list:
- name: item
struct:
- name: ft
dtype: string
- name: id
dtype: string
- name: participant
dtype: string
- name: timestamp
sequence: string
- name: word
list:
list:
- name: item
struct:
- name: grammar_tags
sequence: string
- name: translation
sequence: string
- name: txt
dtype: string
- name: morph
list:
- name: item
struct:
- name: gls
dtype: string
- name: id
dtype: string
- name: txt
dtype: string
- name: item
dtype: 'null'
splits:
- name: train
num_bytes: 29025
num_examples: 1
download_size: 23253
dataset_size: 29025
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
khadimhadi/my-hand-captioning-dataset_2 | khadimhadi | "2024-12-12T14:40:25Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T14:39:33Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 944896143.816
num_examples: 11076
download_size: 695731060
dataset_size: 944896143.816
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yin30lei/wildlife_underexposed_clahe | yin30lei | "2024-12-12T19:19:22Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T19:19:07Z" | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: image_id
dtype: int64
- name: width
dtype: int64
- name: height
dtype: int64
- name: image
dtype: image
- name: labels
dtype: string
splits:
- name: train
num_bytes: 69352583.886
num_examples: 1137
download_size: 66999367
dataset_size: 69352583.886
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yin30lei/wildlife_verydark_afifi | yin30lei | "2024-12-12T20:20:52Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T20:20:36Z" | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: image_id
dtype: int64
- name: width
dtype: int64
- name: height
dtype: int64
- name: image
dtype: image
- name: labels
dtype: string
splits:
- name: train
num_bytes: 41580565.196
num_examples: 1318
download_size: 42071612
dataset_size: 41580565.196
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mesjenet/kitten_cleaned_dataset | mesjenet | "2024-12-12T20:28:58Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T20:28:54Z" | ---
dataset_info:
features:
- name: pattern_name
dtype: string
- name: difficulty
dtype: string
- name: notes
dtype: string
- name: materials
sequence: string
- name: abbreviations
struct:
- name: '* *'
dtype: string
- name: Rnd
dtype: string
- name: ch
dtype: string
- name: dec
dtype: string
- name: inc
dtype: string
- name: sc
dtype: string
- name: st
dtype: string
- name: instruction_section
dtype: string
- name: description
dtype: string
- name: yarn
dtype: string
- name: steps
sequence: string
splits:
- name: train
num_bytes: 6246
num_examples: 5
download_size: 11503
dataset_size: 6246
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Shannnh/baseline-dataset-t5-base-1 | Shannnh | "2024-12-12T21:06:58Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T21:06:45Z" | ---
dataset_info:
features:
- name: document
dtype: string
- name: question
dtype: string
- name: short_answers
dtype: string
- name: predicted_answer
dtype: string
splits:
- name: validation
num_bytes: 170485115
num_examples: 4289
download_size: 88050056
dataset_size: 170485115
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
amuvarma/va-10k-310k-snac | amuvarma | "2024-12-12T23:17:11Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T22:05:30Z" | ---
dataset_info:
features:
- name: split_name
dtype: string
- name: index
dtype: string
- name: round
dtype: string
- name: question
dtype: string
- name: question_audio
dtype: audio
- name: answer
dtype: string
- name: answer_snac
dtype: string
splits:
- name: train
num_bytes: 140739558965.77032
num_examples: 300000
download_size: 140029603324
dataset_size: 140739558965.77032
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yin30lei/wildlife_less_saturated_clahe | yin30lei | "2024-12-12T22:58:29Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T22:58:14Z" | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: image_id
dtype: int64
- name: width
dtype: int64
- name: height
dtype: int64
- name: image
dtype: image
- name: labels
dtype: string
splits:
- name: train
num_bytes: 73363607.211
num_examples: 1137
download_size: 72171288
dataset_size: 73363607.211
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/math_llamagen-flat | Asap7772 | "2024-12-12T23:15:50Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T23:09:10Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: responses
dtype: string
- name: response_answers
dtype: string
- name: correctness
dtype: bool
splits:
- name: train
num_bytes: 77897673
num_examples: 32000
download_size: 27910158
dataset_size: 77897673
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amuvarma/va-310k-320k-snac | amuvarma | "2024-12-12T23:19:24Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-12T23:17:12Z" | ---
dataset_info:
features:
- name: split_name
dtype: string
- name: index
dtype: string
- name: round
dtype: string
- name: question
dtype: string
- name: question_audio
dtype: audio
- name: answer
dtype: string
- name: answer_snac
dtype: string
splits:
- name: train
num_bytes: 4691318592.192344
num_examples: 10000
download_size: 4766812084
dataset_size: 4691318592.192344
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RyanYr/reflect_llama8b-t0_mistlarge-t12_om2-300k | RyanYr | "2024-12-13T01:21:22Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T01:20:54Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: answer
dtype: string
- name: problem_source
dtype: string
- name: response@0
sequence: string
- name: response@1
sequence: string
- name: response@2
sequence: string
splits:
- name: train
num_bytes: 1803290197
num_examples: 300000
download_size: 767924468
dataset_size: 1803290197
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/math_train | 1231czx | "2024-12-13T01:36:14Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T01:36:13Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: gt
sequence: string
splits:
- name: train
num_bytes: 8722272
num_examples: 7500
download_size: 2988652
dataset_size: 8722272
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen10_run0_llama2-7b_xlsum_doc1000_real64_synt64 | dgambettaphd | "2024-12-13T01:41:49Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T01:41:47Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 637770
num_examples: 1000
download_size: 413233
dataset_size: 637770
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sangkm/pash | sangkm | "2024-12-13T04:28:14Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T04:28:11Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 5183021
num_examples: 17098
- name: validation
num_bytes: 646309
num_examples: 2137
- name: test
num_bytes: 647389
num_examples: 2138
download_size: 1300607
dataset_size: 6476719
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
sangkm/merged | sangkm | "2024-12-13T04:29:02Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T04:28:59Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 25645853
num_examples: 83016
- name: validation
num_bytes: 3215867
num_examples: 10426
- name: test
num_bytes: 3969461
num_examples: 12721
download_size: 6504888
dataset_size: 32831181
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
zuozhuan/so100_anything | zuozhuan | "2024-12-13T09:30:49Z" | 32 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | "2024-12-13T06:06:22Z" | ---
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
procit009/female_facebook_data | procit009 | "2024-12-13T06:59:11Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T06:56:10Z" | ---
dataset_info:
features:
- name: audio_id
dtype: string
- name: language
dtype:
class_label:
names:
'0': en
'1': de
'2': fr
'3': es
'4': pl
'5': it
'6': ro
'7': hu
'8': cs
'9': nl
'10': fi
'11': hr
'12': sk
'13': sl
'14': et
'15': lt
'16': en_accented
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: raw_text
dtype: string
- name: normalized_text
dtype: string
- name: gender
dtype: string
- name: speaker_id
dtype: string
- name: is_gold_transcript
dtype: bool
- name: accent
dtype: string
splits:
- name: train
num_bytes: 3636046995.495452
num_examples: 6939
download_size: 3102219316
dataset_size: 3636046995.495452
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ferrazzipietro/IK_llama3.1-8b_diann_16_16_0.01 | ferrazzipietro | "2024-12-13T08:11:18Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T08:11:14Z" | ---
dataset_info:
features:
- name: inference_prompt
dtype: string
- name: sentence
dtype: string
- name: model_responses
dtype: string
- name: ground_truth
dtype: string
splits:
- name: validation
num_bytes: 696634
num_examples: 364
- name: test
num_bytes: 876526
num_examples: 480
download_size: 675188
dataset_size: 1573160
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/IK_llama3.1-8b_diann_16_64_0.05 | ferrazzipietro | "2024-12-13T08:11:36Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T08:11:33Z" | ---
dataset_info:
features:
- name: inference_prompt
dtype: string
- name: sentence
dtype: string
- name: model_responses
dtype: string
- name: ground_truth
dtype: string
splits:
- name: validation
num_bytes: 696690
num_examples: 364
- name: test
num_bytes: 876741
num_examples: 480
download_size: 675041
dataset_size: 1573431
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/IK_llama3.1-8b_diann_32_16_0.05 | ferrazzipietro | "2024-12-13T08:11:42Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T08:11:39Z" | ---
dataset_info:
features:
- name: inference_prompt
dtype: string
- name: sentence
dtype: string
- name: model_responses
dtype: string
- name: ground_truth
dtype: string
splits:
- name: validation
num_bytes: 696756
num_examples: 364
- name: test
num_bytes: 876561
num_examples: 480
download_size: 675682
dataset_size: 1573317
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ferrazzipietro/IK_llama3.1-8b_diann_32_64_0.05 | ferrazzipietro | "2024-12-13T08:11:54Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T08:11:51Z" | ---
dataset_info:
features:
- name: inference_prompt
dtype: string
- name: sentence
dtype: string
- name: model_responses
dtype: string
- name: ground_truth
dtype: string
splits:
- name: validation
num_bytes: 696920
num_examples: 364
- name: test
num_bytes: 877092
num_examples: 480
download_size: 676247
dataset_size: 1574012
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Asap7772/elix_generations_gpt4o_pref_train | Asap7772 | "2024-12-13T08:52:15Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T08:52:04Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: level_x
dtype: string
- name: level_id_x
dtype: int64
- name: model_name_x
dtype: string
- name: response_x
dtype: string
- name: level_y
dtype: string
- name: level_id_y
dtype: int64
- name: model_name_y
dtype: string
- name: response_y
dtype: string
- name: scorer_level
dtype: string
- name: scorer_level_id
dtype: int64
- name: label
dtype: int64
- name: __index_level_0__
dtype: int64
- name: det_choice
dtype: int64
- name: choice1
dtype: string
- name: reason1
dtype: string
- name: choice2
dtype: string
- name: reason2
dtype: string
splits:
- name: train
num_bytes: 1126048672
num_examples: 234738
download_size: 70407357
dataset_size: 1126048672
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/elix_generations_gpt4o_pref_test | Asap7772 | "2024-12-13T09:06:01Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T09:05:56Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: level_x
dtype: string
- name: level_id_x
dtype: int64
- name: model_name_x
dtype: string
- name: response_x
dtype: string
- name: level_y
dtype: string
- name: level_id_y
dtype: int64
- name: model_name_y
dtype: string
- name: response_y
dtype: string
- name: scorer_level
dtype: string
- name: scorer_level_id
dtype: int64
- name: label
dtype: int64
- name: __index_level_0__
dtype: int64
- name: det_choice
dtype: int64
- name: choice1
dtype: string
- name: reason1
dtype: string
- name: choice2
dtype: string
- name: reason2
dtype: string
splits:
- name: train
num_bytes: 122271646
num_examples: 26082
download_size: 7736326
dataset_size: 122271646
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/gpt2_rte_pair_faster-alzantot | DT4LM | "2024-12-13T09:59:22Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T09:54:56Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 49320
num_examples: 155
download_size: 39145
dataset_size: 49320
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Bright8192/astralprojection | Bright8192 | "2024-12-13T10:17:06Z" | 32 | 1 | [
"license:mit",
"region:us"
] | null | "2024-12-13T10:17:06Z" | ---
license: mit
---
|
mlfoundations-dev/stackexchange_pets | mlfoundations-dev | "2024-12-23T17:33:48Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T14:21:20Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: completion
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 120122024
num_examples: 20452
download_size: 67549592
dataset_size: 120122024
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_744c6044-11fc-43ca-94d1-9b6d6ac4e921 | argilla-internal-testing | "2024-12-13T16:20:13Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T16:20:12Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heegarthur/dutch-words-to-dutch | heegarthur | "2024-12-13T16:37:12Z" | 32 | 0 | [
"license:c-uda",
"size_categories:1K<n<10K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-12-13T16:33:51Z" | ---
license: c-uda
---
In this dataset:
--- Dutch word = Dutch explanation
_This doesn't contain that much info, only a few thousand words._ |
paulrichmond/hep_th_test1_temp07 | paulrichmond | "2024-12-13T17:04:55Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T17:04:54Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: submitter
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: journal-ref
dtype: string
- name: doi
dtype: string
- name: report-no
dtype: string
- name: categories
dtype: string
- name: license
dtype: string
- name: orig_abstract
dtype: string
- name: versions
list:
- name: created
dtype: string
- name: version
dtype: string
- name: update_date
dtype: string
- name: authors_parsed
sequence:
sequence: string
- name: abstract
dtype: string
- name: prompt
dtype: string
- name: y_true
dtype: string
- name: comp_s1-L-3.1-8B-base
dtype: string
- name: preds_s1-L-3.1-8B-base
dtype: string
- name: comp_s3-L-3.1-8B-base_v3
dtype: string
- name: preds_s3-L-3.1-8B-base_v3
dtype: string
- name: comp_Llama-3.1-8B
dtype: string
- name: preds_Llama-3.1-8B
dtype: string
- name: comp_s2-L-3.1-8B-base
dtype: string
- name: preds_s2-L-3.1-8B-base
dtype: string
- name: comp_llama
dtype: string
- name: preds_llama
dtype: string
splits:
- name: test
num_bytes: 140983
num_examples: 10
download_size: 141736
dataset_size: 140983
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_12498dd6-181e-4da8-8867-4ca00866ee9a | argilla-internal-testing | "2024-12-13T17:11:07Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T17:11:06Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/stackexchange_webapps | mlfoundations-dev | "2024-12-23T17:57:46Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T17:46:34Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: completion
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 234694702
num_examples: 50000
download_size: 124485517
dataset_size: 234694702
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/gpt2_sst2_faster-alzantot_advtraining | DT4LM | "2024-12-19T21:35:07Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T18:23:35Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 158162
num_examples: 2235
download_size: 108953
dataset_size: 158162
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
WendyHoang/corpus_pl_sop | WendyHoang | "2024-12-13T20:41:36Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T20:40:19Z" | ---
dataset_info:
features:
- name: sentence1
sequence: string
- name: sentence2
sequence: string
- name: label
sequence: int64
splits:
- name: train
num_bytes: 2803594048
num_examples: 381015
download_size: 1021949717
dataset_size: 2803594048
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JesseLiu/Trouble_Makers | JesseLiu | "2024-12-13T23:26:53Z" | 32 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-13T23:25:10Z" | ---
license: apache-2.0
---
|
bustamiyusoef/deeplearning_lmm | bustamiyusoef | "2024-12-16T07:10:41Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T23:41:09Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: jw_text
dtype: string
- name: rm_text
dtype: string
splits:
- name: train
num_bytes: 29289647.0
num_examples: 5000
download_size: 26142589
dataset_size: 29289647.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/elix_gen_eval_4shot_infipo_beta0.05-pair_winrate_gpt4o_pref_train | Asap7772 | "2024-12-13T23:56:12Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T23:56:08Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: level_x
dtype: string
- name: level_id_x
dtype: int64
- name: model_name_x
dtype: string
- name: response_x
dtype: string
- name: level_y
dtype: string
- name: level_id_y
dtype: int64
- name: model_name_y
dtype: string
- name: response_y
dtype: string
- name: scorer_level
dtype: string
- name: scorer_level_id
dtype: int64
- name: label
dtype: int64
- name: det_choice
dtype: int64
- name: choice1
dtype: string
- name: reason1
dtype: string
- name: choice2
dtype: string
- name: reason2
dtype: string
splits:
- name: train
num_bytes: 11373377
num_examples: 2114
download_size: 2953352
dataset_size: 11373377
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/elix_gen_persona_eval_4shot_infsft-pair_winrate_gpt4o_pref_train | Asap7772 | "2024-12-13T23:57:19Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-13T23:57:15Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: level_x
dtype: string
- name: level_id_x
dtype: int64
- name: model_name_x
dtype: string
- name: response_x
dtype: string
- name: level_y
dtype: string
- name: level_id_y
dtype: int64
- name: model_name_y
dtype: string
- name: response_y
dtype: string
- name: scorer_level
dtype: string
- name: scorer_level_id
dtype: int64
- name: label
dtype: int64
- name: det_choice
dtype: int64
- name: choice1
dtype: string
- name: reason1
dtype: string
- name: choice2
dtype: string
- name: reason2
dtype: string
splits:
- name: train
num_bytes: 11427988
num_examples: 2114
download_size: 2920209
dataset_size: 11427988
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
myselfrew/baseline_math_70b_correct_llama3_8b_filtered_sft | myselfrew | "2024-12-14T03:41:32Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-14T03:12:05Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: my_solu
sequence: string
- name: turn
dtype: int64
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1241222980
num_examples: 137732
download_size: 433948626
dataset_size: 1241222980
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dogtooth/uf_Meta-Llama-3.1-8B-Instruct_3 | dogtooth | "2024-12-14T06:18:38Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-14T06:18:23Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_completion
dtype: string
- name: reference_completion
dtype: string
splits:
- name: train
num_bytes: 1267596344
num_examples: 183405
download_size: 419269434
dataset_size: 1267596344
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jxtse/DSGram | jxtse | "2024-12-22T06:49:30Z" | 32 | 0 | [
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"modality:text",
"arxiv:2412.12832",
"region:us",
"Grammatical Error Correction"
] | null | "2024-12-14T07:43:35Z" | ---
license: apache-2.0
language:
- en
tags:
- Grammatical Error Correction
---
### Dataset Card for DSGram Datasets
#### Dataset Summary
The DSGram datasets are designed for the evaluation and development of Grammatical Error Correction (GEC) models in the era of large language models (LLMs). These datasets address key evaluation challenges by incorporating human annotations and LLM-generated scores. Two subsets are provided:
1. **DSGram-LLMs**: A simulated dataset containing GPT-4-annotated sentence pairs, enabling fine-tuning and cost-effective evaluation of GEC models.
2. **DSGram-Eval**: A manually annotated dataset providing high-quality, human-scored examples to benchmark the DSGram framework.
The datasets facilitate the evaluation of corrections based on three sub-metrics:
- **Semantic Coherence**: Preservation of original meaning.
- **Edit Level**: Appropriateness of modifications.
- **Fluency**: Grammatical correctness and natural flow.
#### Dataset Structure
##### DSGram-LLMs
- **Input**: Original and corrected sentences from CoNLL-2014 and BEA-2019 test sets.
- **Annotations**: Scores generated by GPT-4 using prompt engineering techniques (Chain-of-Thought, few-shot prompting).
- **Size**: ~2,500 entries.
##### DSGram-Eval
- **Input**: Sentences from CoNLL-2014.
- **Annotations**: Human-scored sentence pairs evaluated based on the three sub-metrics.
- **Size**: ~200 entries with multiple annotators for consistency.
#### Dataset Usage
##### Intended Use
- Fine-tuning open-source LLMs for GEC evaluation.
- Benchmarking GEC models with robust and context-sensitive metrics.
- Research on evaluation frameworks for text correction tasks.
#### Citation
If you use these datasets, please cite our paper.
```
@misc{xie2024dsgramdynamicweightingsubmetrics,
title={DSGram: Dynamic Weighting Sub-Metrics for Grammatical Error Correction in the Era of Large Language Models},
author={Jinxiang Xie and Yilin Li and Xunjian Yin and Xiaojun Wan},
year={2024},
eprint={2412.12832},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.12832},
}
``` |
0x00raghu/so100_bimanual_test_3 | 0x00raghu | "2024-12-14T08:02:31Z" | 32 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | "2024-12-14T08:02:13Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100_bimanual",
"total_episodes": 2,
"total_frames": 2386,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
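As a minimal sketch, the `data_path` template above can be expanded to locate a single episode's parquet file; reading it with `pandas` is an assumption here, and any parquet reader would do:
```python
import pandas as pd

# Expand the data_path template from meta/info.json for chunk 0, episode 0.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
episode_file = data_path.format(episode_chunk=0, episode_index=0)
# -> "data/chunk-000/episode_000000.parquet"

# Each row is one frame; "action" and "observation.state" hold the 12-dim vectors listed above.
frames = pd.read_parquet(episode_file)
print(len(frames), frames.columns.tolist())
```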
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ferrazzipietro/IK_llama3.1-8b_diann_64_64_0.01 | ferrazzipietro | "2024-12-14T08:26:27Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-14T08:26:25Z" | ---
dataset_info:
features:
- name: inference_prompt
dtype: string
- name: sentence
dtype: string
- name: model_responses
dtype: string
- name: ground_truth
dtype: string
splits:
- name: validation
num_bytes: 488039
num_examples: 364
- name: test
num_bytes: 624103
num_examples: 480
download_size: 628287
dataset_size: 1112142
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
0x01/lerobot-so100-1 | 0x01 | "2024-12-14T16:47:07Z" | 32 | 0 | [
"license:cdla-permissive-2.0",
"region:us"
] | null | "2024-12-14T16:47:07Z" | ---
license: cdla-permissive-2.0
---
|
myselfrew/math_70b_correct_llama3_8b_filtered_sft_chat | myselfrew | "2024-12-14T19:35:30Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-14T19:35:12Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: code
sequence: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
- name: turn
dtype: int64
splits:
- name: train
num_bytes: 1428054570
num_examples: 137732
download_size: 498616885
dataset_size: 1428054570
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amuvarma/zuck-3-snacced | amuvarma | "2024-12-15T03:07:00Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T03:06:58Z" | ---
dataset_info:
features:
- name: transcript
dtype: string
- name: codes_list
sequence: int64
splits:
- name: train
num_bytes: 10034392
num_examples: 972
download_size: 2548150
dataset_size: 10034392
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen1_run3_llama2-7b_wiki_doc1000_real96_synt32 | dgambettaphd | "2024-12-15T03:32:56Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T03:32:54Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 643915
num_examples: 1000
download_size: 410241
dataset_size: 643915
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Billtrancend/story_outline | Billtrancend | "2024-12-15T04:49:06Z" | 32 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T04:44:44Z" | ---
license: mit
---
This dataset includes 1722 Chinese story outlines that contain multiple tags and characters.
The prompt contains the following template:
Mandatory keywords:
${mandatory_words}
Optional keywords:
${optional_words}
Based on these words, you need to create a story outline. From the optional keywords, pick at least 6 relevant ones, then think of 6 additional words of your own as story elements.
Use divergent thinking to expand on all kinds of story elements.
The story must have at least ${n_characters} characters,
at least ${n_scenes} scenes,
and at least ${n_events} events.
Strictly follow the format below:
## Mandatory keywords
1. ...
2. ...
3. ...
## Optional keywords
1. ...
## Self-created keywords
1. ...
## Story atmosphere
What is the atmosphere of the story? You may use several words to describe the mood of the story at different stages.
## Story background
What era is it set in? What characterizes this era?
How do people live?
Are there any events with far-reaching consequences?
## Characters
### 1. Name, basic information, identity
- Appearance and typical clothing
- Personality
- Major life experiences
- Motivations and pursuits
## Scenes
- Presented as a numbered list.
- Each scene includes the events that will happen in it, as well as the usable objects, scenery, and atmosphere.
## Goals
- What are the characters' goals?
- What are each character's motivations, needs, pains, and pursuits?
- What problems must be resolved?
- Why does this goal matter?
## Climax and ending
- What is the climax of the story?
- Is the ending a tragedy or a comedy, or does it trail off and leave the reader thinking?
## Event outline
- Select usable scenes and characters.
- Create suspenseful plot points.
- Build anticipation and dramatic tension. Anticipation means placing a character in a situation likely to produce interesting events, for example a seemingly weak character who conceals great strength.
- Aim for a story with ups and downs, in which the characters change, whether in circumstance or in personality.
- List each rough event as a numbered list.
<EOF>
# Detailed outline
## 1. A summary of this event goes here
Fill the outline with more detail, such as logical connections, to make the events more plausible.
Describe each event in detail using multiple paragraphs, using imagination to fill in gaps.
The number of detailed-outline entries should match the number of events in the event outline.
## 2. ...
|
amuvarma/zuck-l-snacced | amuvarma | "2024-12-15T05:42:59Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T05:42:55Z" | ---
dataset_info:
features:
- name: transcript
dtype: string
- name: codes_list
sequence: int64
splits:
- name: train
num_bytes: 3351742
num_examples: 326
download_size: 926079
dataset_size: 3351742
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RyanYr/reflect_llama8b-t0_mistlarge-t12_om2-300k_correction_150k | RyanYr | "2024-12-15T06:22:54Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T06:20:38Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: response@0_correctness
dtype: bool
- name: response@2_correctness
dtype: bool
splits:
- name: train
num_bytes: 837079308.2287472
num_examples: 152866
download_size: 306448410
dataset_size: 837079308.2287472
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hasav/llama3 | hasav | "2024-12-15T09:57:35Z" | 32 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T07:45:58Z" | ---
license: apache-2.0
---
|
myselfrew/llama3_8b_math_new_prompt2 | myselfrew | "2024-12-15T08:27:35Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T08:27:04Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
splits:
- name: train
num_bytes: 2535049324
num_examples: 555000
download_size: 840803816
dataset_size: 2535049324
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amuvarma/luna-snacced-ds | amuvarma | "2024-12-15T08:44:31Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T08:44:23Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: emotion
dtype: string
- name: codes_list
sequence: int64
splits:
- name: train
num_bytes: 22599766
num_examples: 5674
download_size: 6048975
dataset_size: 22599766
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Yusufabddd/data_pengangguran_gen_z | Yusufabddd | "2024-12-15T10:17:36Z" | 32 | 0 | [
"task_categories:table-question-answering",
"license:other",
"size_categories:1K<n<10K",
"region:us",
"code"
] | [
"table-question-answering"
] | "2024-12-15T09:50:05Z" | ---
license: other
license_name: project
license_link: LICENSE
task_categories:
- table-question-answering
tags:
- code
pretty_name: dataset prngangguran gen z
size_categories:
- 1K<n<10K
---
# Data-Pengangguran-Gen-Z
A small script about Gen Z unemployment data:
```python
import pandas as pd

# Create a DataFrame with Gen Z unemployment data
data = {
    'Usia': [15, 16, 17, 18, 19, 20, 21, 22, 23, 24],
    'Jumlah_Pengangguran': [3600000, 400000, 500000, 600000, 700000, 800000, 900000, 1000000, 1100000, 1200000],
    'Tingkat_Pengangguran': [9.37, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0]
}
df = pd.DataFrame(data)

# Display the DataFrame
print("Data Pengangguran Gen Z:")
print(df)

# Analyze the data
total_pengangguran = df['Jumlah_Pengangguran'].sum()
rata_tingkat_pengangguran = df['Tingkat_Pengangguran'].mean()

print("\nTotal Jumlah Pengangguran Gen Z:", total_pengangguran)
print("Rata-rata Tingkat Pengangguran Gen Z:", rata_tingkat_pengangguran)
```
|
prosodyntax/vxp-perspeak-withprom-final-v2 | prosodyntax | "2024-12-16T07:43:15Z" | 32 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T09:52:00Z" | ---
dataset_info:
- config_name: cs
features:
- name: unit
dtype: string
- name: unit_duration
sequence: float64
- name: phones
sequence: string
- name: phones_duration
sequence:
sequence: float64
- name: multitoken_word
sequence: string
- name: joint_pronunciation
sequence: string
- name: pos
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: ud_id
sequence: string
- name: sentence_id
dtype: string
- name: speaker
dtype: string
- name: prominence_strength
dtype: float64
- name: boundary_strength
dtype: float64
- name: chunk_lab
sequence: string
splits:
- name: train
num_bytes: 106346383
num_examples: 371570
download_size: 20236220
dataset_size: 106346383
- config_name: de
features:
- name: unit
dtype: string
- name: unit_duration
sequence: float64
- name: phones
sequence: string
- name: phones_duration
sequence:
sequence: float64
- name: multitoken_word
sequence: string
- name: joint_pronunciation
sequence: string
- name: pos
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: ud_id
sequence: string
- name: sentence_id
dtype: string
- name: speaker
dtype: string
- name: prominence_strength
dtype: float64
- name: boundary_strength
dtype: float64
- name: chunk_lab
sequence: string
splits:
- name: train
num_bytes: 501113529
num_examples: 1819234
download_size: 93811535
dataset_size: 501113529
- config_name: es
features:
- name: unit
dtype: string
- name: unit_duration
sequence: float64
- name: phones
sequence: string
- name: phones_duration
sequence:
sequence: float64
- name: multitoken_word
sequence: string
- name: joint_pronunciation
sequence: string
- name: pos
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: ud_id
sequence: string
- name: sentence_id
dtype: string
- name: speaker
dtype: string
- name: prominence_strength
dtype: float64
- name: boundary_strength
dtype: float64
- name: chunk_lab
sequence: string
splits:
- name: train
num_bytes: 82384522
num_examples: 299732
download_size: 15691425
dataset_size: 82384522
- config_name: fr
features:
- name: unit
dtype: string
- name: unit_duration
sequence: float64
- name: phones
sequence: string
- name: phones_duration
sequence:
sequence: float64
- name: multitoken_word
sequence: string
- name: joint_pronunciation
sequence: string
- name: pos
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: ud_id
sequence: string
- name: sentence_id
dtype: string
- name: speaker
dtype: string
- name: prominence_strength
dtype: float64
- name: boundary_strength
dtype: float64
- name: chunk_lab
sequence: string
splits:
- name: train
num_bytes: 410688586
num_examples: 1604903
download_size: 79214111
dataset_size: 410688586
- config_name: hr
features:
- name: unit
dtype: string
- name: unit_duration
sequence: float64
- name: phones
sequence: string
- name: phones_duration
sequence:
sequence: float64
- name: multitoken_word
sequence: string
- name: joint_pronunciation
sequence: string
- name: pos
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: ud_id
sequence: string
- name: sentence_id
dtype: string
- name: speaker
dtype: string
- name: prominence_strength
dtype: float64
- name: boundary_strength
dtype: float64
- name: chunk_lab
sequence: string
splits:
- name: train
num_bytes: 38240129
num_examples: 132370
download_size: 7162550
dataset_size: 38240129
- config_name: hu
features:
- name: unit
dtype: string
- name: unit_duration
sequence: float64
- name: phones
sequence: string
- name: phones_duration
sequence:
sequence: float64
- name: multitoken_word
sequence: string
- name: joint_pronunciation
sequence: string
- name: pos
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: ud_id
sequence: string
- name: sentence_id
dtype: string
- name: speaker
dtype: string
- name: prominence_strength
dtype: float64
- name: boundary_strength
dtype: float64
- name: chunk_lab
sequence: string
splits:
- name: train
num_bytes: 18171197
num_examples: 75998
download_size: 3469487
dataset_size: 18171197
- config_name: it
features:
- name: unit
dtype: string
- name: unit_duration
sequence: float64
- name: phones
sequence: string
- name: phones_duration
sequence:
sequence: float64
- name: multitoken_word
sequence: string
- name: joint_pronunciation
sequence: string
- name: pos
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: ud_id
sequence: string
- name: sentence_id
dtype: string
- name: speaker
dtype: string
- name: prominence_strength
dtype: float64
- name: boundary_strength
dtype: float64
- name: chunk_lab
sequence: string
splits:
- name: train
num_bytes: 56052283
num_examples: 196002
download_size: 11049236
dataset_size: 56052283
- config_name: nl
features:
- name: unit
dtype: string
- name: unit_duration
sequence: float64
- name: phones
sequence: string
- name: phones_duration
sequence:
sequence: float64
- name: multitoken_word
sequence: string
- name: joint_pronunciation
sequence: string
- name: pos
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: ud_id
sequence: string
- name: sentence_id
dtype: string
- name: speaker
dtype: string
- name: prominence_strength
dtype: float64
- name: boundary_strength
dtype: float64
- name: chunk_lab
sequence: string
splits:
- name: train
num_bytes: 12937007
num_examples: 48549
download_size: 2338946
dataset_size: 12937007
- config_name: pl
features:
- name: unit
dtype: string
- name: unit_duration
sequence: float64
- name: phones
sequence: string
- name: phones_duration
sequence:
sequence: float64
- name: multitoken_word
sequence: string
- name: joint_pronunciation
sequence: string
- name: pos
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: ud_id
sequence: string
- name: sentence_id
dtype: string
- name: speaker
dtype: string
- name: prominence_strength
dtype: float64
- name: boundary_strength
dtype: float64
- name: chunk_lab
sequence: string
splits:
- name: train
num_bytes: 189412686
num_examples: 630903
download_size: 36121632
dataset_size: 189412686
- config_name: ro
features:
- name: unit
dtype: string
- name: unit_duration
sequence: float64
- name: phones
sequence: string
- name: phones_duration
sequence:
sequence: float64
- name: multitoken_word
sequence: string
- name: joint_pronunciation
sequence: string
- name: pos
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: ud_id
sequence: string
- name: sentence_id
dtype: string
- name: speaker
dtype: string
- name: prominence_strength
dtype: float64
- name: boundary_strength
dtype: float64
- name: chunk_lab
sequence: string
splits:
- name: train
num_bytes: 20297520
num_examples: 78958
download_size: 3760029
dataset_size: 20297520
configs:
- config_name: cs
data_files:
- split: train
path: cs/train-*
- config_name: de
data_files:
- split: train
path: de/train-*
- config_name: es
data_files:
- split: train
path: es/train-*
- config_name: fr
data_files:
- split: train
path: fr/train-*
- config_name: hr
data_files:
- split: train
path: hr/train-*
- config_name: hu
data_files:
- split: train
path: hu/train-*
- config_name: it
data_files:
- split: train
path: it/train-*
- config_name: nl
data_files:
- split: train
path: nl/train-*
- config_name: pl
data_files:
- split: train
path: pl/train-*
- config_name: ro
data_files:
- split: train
path: ro/train-*
---
|
RyanYr/reflect_llama8b-t0_llama33-t12_om2-42 | RyanYr | "2024-12-15T16:16:15Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T16:16:13Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: answer
dtype: string
- name: problem_source
dtype: string
- name: response@0
sequence: string
- name: response@1
sequence: string
- name: response@2
sequence: string
splits:
- name: train
num_bytes: 89357607
num_examples: 10000
download_size: 36522638
dataset_size: 89357607
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RyanYr/reflect_llama8b-t0_llama33-t12_om2-130k | RyanYr | "2024-12-15T17:32:55Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T17:32:42Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: answer
dtype: string
- name: problem_source
dtype: string
- name: response@0
sequence: string
- name: response@1
sequence: string
- name: response@2
sequence: string
splits:
- name: train
num_bytes: 1083925223
num_examples: 130000
download_size: 448036414
dataset_size: 1083925223
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RyanYr/reflect_llama8b-t0_llama33-t12_om2-130k_llama_reflection | RyanYr | "2024-12-15T17:51:39Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T17:51:32Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: response@0_correctness
dtype: bool
- name: response@2_correctness
dtype: bool
splits:
- name: train
num_bytes: 295758280
num_examples: 67534
download_size: 105967729
dataset_size: 295758280
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
equiron-ai/translator_dpo | equiron-ai | "2024-12-15T19:47:37Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-15T19:44:36Z" | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 756129
num_examples: 185
download_size: 88819
dataset_size: 756129
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jclinton1/wedgit_stack | jclinton1 | "2024-12-16T02:52:57Z" | 32 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | "2024-12-15T23:53:41Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "unknown",
"total_episodes": 50,
"total_frames": 29612,
"total_tasks": 1,
"total_videos": 50,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"motor_0",
"motor_1",
"motor_2",
"motor_3",
"motor_4",
"motor_5"
]
}
},
"observation.images.webcam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 30.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"motor_0",
"motor_1",
"motor_2",
"motor_3",
"motor_4",
"motor_5"
]
}
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
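For orientation, here is a minimal loading sketch using the Hugging Face `datasets` library. The repo id and column names are taken from this card; per `info.json` above, the webcam stream (`observation.images.webcam`) is stored separately as mp4 files under `videos/`, so it is not part of the parquet rows loaded here.
```python
# Minimal sketch (assumption: the parquet files under data/ hold the non-video
# features listed in info.json; the webcam video lives in videos/*.mp4).
from datasets import load_dataset

ds = load_dataset("jclinton1/wedgit_stack", split="train")

frame = ds[0]
print(frame["observation.state"])   # 6 motor positions (float32)
print(frame["action"])              # 6 motor commands (float32)
print(frame["episode_index"], frame["frame_index"], frame["timestamp"])
```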
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
CatWithIcecream/eval_act_so100_test | CatWithIcecream | "2024-12-16T00:40:14Z" | 32 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial",
"eval"
] | [
"robotics"
] | "2024-12-16T00:39:12Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
- eval
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 10,
"total_frames": 11941,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
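As a quick starting point, a minimal sketch that loads the tabular part of this evaluation set and pulls out the frames of a single episode; the repo id, column names, and episode indexing are assumptions taken from the card above, and the laptop camera video is stored separately under `videos/`.
```python
# Minimal sketch (assumptions: parquet files under data/ hold the listed features;
# episode_index runs 0..9 for the 10 recorded evaluation episodes).
from datasets import load_dataset

ds = load_dataset("CatWithIcecream/eval_act_so100_test", split="train")

# Select the frames belonging to the first evaluation episode.
episode0 = ds.filter(lambda row: row["episode_index"] == 0)
print(len(episode0), "frames in episode 0")
print(episode0[0]["observation.state"])  # 6 joint values, ordered as in the card
```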
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
dgambettaphd/D_gen1_run0_llama2-7b_sciabs_doc1000_real64_synt64_vuw | dgambettaphd | "2024-12-16T01:58:43Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T01:58:40Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 688969
num_examples: 1000
download_size: 351173
dataset_size: 688969
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anonymousforemotion/merged_qwen_audio_dataset_subset | anonymousforemotion | "2024-12-16T03:28:16Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T03:28:11Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: chosen
dtype: audio
- name: rejected
dtype: audio
- name: qwen_input_text
dtype: string
splits:
- name: train
num_bytes: 613352668.0
num_examples: 2000
download_size: 142141187
dataset_size: 613352668.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
arthrod/ex-10-material-contracts-2023 | arthrod | "2024-12-17T02:27:34Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T05:54:39Z" | ---
dataset_info:
features:
- name: submission
struct:
- name: cik
dtype: string
- name: company_name
dtype: string
- name: form_type
dtype: string
- name: date_filed
dtype: timestamp[s]
- name: master_file
dtype: string
- name: submission_filename
dtype: string
- name: filing_url
dtype: string
- name: accession_number
dtype: string
- name: header
struct:
- name: sec_document
dtype: string
- name: acceptance_datetime
dtype: string
- name: description
dtype: string
- name: filing_form_type
dtype: string
- name: submission_type
dtype: string
- name: conformed_submission_type
dtype: string
- name: period_of_report
dtype: string
- name: conformed_period_of_report
dtype: string
- name: standard_industrial_classification
dtype: string
- name: classification_number
dtype: string
- name: accession_number
dtype: string
- name: public_document_count
dtype: string
- name: company_name
dtype: string
- name: sec_header
dtype: string
- name: filing_date
dtype: string
- name: sec-header-complete
dtype: string
- name: document_from_text
dtype: string
- name: document_metadata
struct:
- name: document_type
dtype: string
- name: sequence
dtype: string
- name: document_filename
dtype: string
- name: description
dtype: string
- name: title
dtype: string
- name: _id
dtype: string
- name: timestamp_collection
dtype: string
- name: doc_url
dtype: string
- name: raw_document_content
dtype: string
splits:
- name: train
num_bytes: 18746038334
num_examples: 51565
download_size: 4439489727
dataset_size: 18746038334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for SEC Exhibit 10 Material Contracts Dataset
## Dataset Details
### Dataset Description
This training dataset contains SEC filing documents specifically focused on Exhibit 10 (Material Contracts) from a specified quarter. Exhibit 10 includes material contracts and similar agreements that are required to be filed with the SEC. This collection provides structured access to full contract texts and associated metadata.
- **Curated by:** Arthur (arthur@cicero.chat)
- **Language(s):** English
- **License:** Likely public domain as US government data
- **Scope:** Material contracts filed as Exhibit 10 in SEC filings
## Dataset Structure
The dataset has a nested structure with the following fields:
### Top Level Fields
1. **submission**: Dictionary containing the full submission data
- Type: dict
- Contains nested document and header information
2. **header**: Dictionary containing filing header information
- Type: dict
- Contains SEC header metadata
3. **document_from_text**: String containing contract text
- Type: string
- Length: Variable
- Contains the full text of material contracts
4. **document_metadata**: Dictionary containing document-specific metadata
- Type: dict
- Contains filing metadata like CIK, form type, dates, etc.
5. **_id**: Unique identifier
- Type: string
- Present in all records
6. **timestamp_collection**: Timestamp of when record was collected
- Type: string
- Present in all records
7. **doc_url**: URL to the document on SEC website
- Type: string
- Present in all records
8. **raw_document_content**: Raw document content
- Type: string
- Contains unprocessed contract text and markup
### Document Metadata Fields
The document_metadata dictionary contains:
- cik: Company identifier
- company_name: Name of filing company
- form_type: Type of SEC form (all Exhibit 10 variants)
- date_filed: Filing date
- master_file: Reference to master index file
- submission_filename: Path to filing in EDGAR system
- submission_url: Direct link to filing
- accession_number: SEC accession number
### Header Fields
The header dictionary includes:
- sec_document: Document identifier and date
- acceptance_datetime: Filing acceptance timestamp
- filing_form_type: Type of SEC form
- submission_type: Type of submission
- period_of_report: Report period
- standard_industrial_classification: Industry classification
- classification_number: Industry code
- public_document_count: Number of documents in submission
- company_name: Filing company name
## Uses
### Direct Use
- Analysis of material business contracts
- Corporate relationship mapping
- Contract term analysis
- Legal document processing
- Identification of business arrangements and terms
- Research on contract structures and patterns
- Corporate governance analysis
### Out-of-Scope Use
- Analysis of other SEC filing types
- Real-time contract monitoring
- Legal advice or compliance determinations
- Analysis of non-material contracts
- Trading signals without proper analysis
## Limitations and Considerations
- Limited to material contracts (Exhibit 10)
- Focused on a specific quarter
- Large variance in document sizes
- Contains HTML/XML markup requiring processing
- May not include all exhibits referenced in contracts
- Historical data only
## Recommendations
- Implement proper HTML/XML parsing for clean text extraction (see the sketch after this list)
- Consider contract structure when processing documents
- Cross-reference with master index for complete context
- Consider industry classification when analyzing contracts
- Validate document completeness
- Process tables and formatted content appropriately
- Consider legal expertise for proper interpretation
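As a starting point for the parsing recommendation above, a minimal sketch that streams a few records and strips markup; field access follows the feature schema declared in the YAML header of this card, and BeautifulSoup is one possible parser, not something mandated by the dataset.
```python
# Minimal sketch: stream a few contracts and strip HTML/XML markup.
# Assumptions: field names follow the schema declared above; BeautifulSoup
# (pip install beautifulsoup4) is just one possible cleaning approach.
from datasets import load_dataset
from bs4 import BeautifulSoup

ds = load_dataset("arthrod/ex-10-material-contracts-2023", split="train", streaming=True)

for record in ds.take(3):
    raw = record["raw_document_content"] or record["document_from_text"]
    text = BeautifulSoup(raw, "html.parser").get_text(separator="\n")
    submission = record["submission"]
    print(submission["company_name"], submission["form_type"], submission["date_filed"])
    print(text[:300], "\n---")
```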
## Dataset Card Contact
Arthur (arthur@cicero.chat)
## Code
https://github.com/arthrod/sec-edgar-bulker
|
dgambettaphd/D_gen9_run0_llama2-7b_xlsum_doc1000_real64_synt64_vuw | dgambettaphd | "2024-12-16T06:51:41Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T06:51:38Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 463793
num_examples: 1000
download_size: 310465
dataset_size: 463793
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Dohyeon1/Pictory-ControlNet-Dataset | Dohyeon1 | "2024-12-16T07:15:59Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T07:15:50Z" | ---
dataset_info:
features:
- name: color
dtype: image
- name: hed
dtype: image
splits:
- name: train
num_bytes: 48649375.38287011
num_examples: 1700
- name: test
num_bytes: 1802888.6171298923
num_examples: 63
download_size: 45267356
dataset_size: 50452264.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
dgambettaphd/D_gen10_run0_llama2-7b_xlsum_doc1000_real64_synt64_vuw | dgambettaphd | "2024-12-16T07:24:48Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T07:24:45Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 463671
num_examples: 1000
download_size: 310345
dataset_size: 463671
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen4_run0_llama2-7b_xlsum_doc1000_real96_synt32_vuw | dgambettaphd | "2024-12-16T10:04:04Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T10:04:01Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 628540
num_examples: 1000
download_size: 423623
dataset_size: 628540
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/albertbasev2_mr_pair_leap_original_advtraining | DT4LM | "2024-12-16T10:55:15Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T10:54:40Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 92623
num_examples: 729
download_size: 63976
dataset_size: 92623
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
WendyHoang/corpus_test | WendyHoang | "2024-12-16T12:27:18Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T11:49:39Z" | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 18122
num_examples: 50
download_size: 8685
dataset_size: 18122
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/t5v1-1base_mr_pair_clare_original | DT4LM | "2024-12-16T16:36:36Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T16:32:14Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 100022
num_examples: 833
download_size: 69933
dataset_size: 100022
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DT4LM/t5v1-1base_mr_pair_faster-alzantot | DT4LM | "2025-01-02T04:49:10Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T17:34:11Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 74476
num_examples: 573
download_size: 53044
dataset_size: 74476
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen8_run0_llama2-7b_xlsum_doc1000_real32_synt96_vuw | dgambettaphd | "2024-12-16T20:58:41Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T20:58:37Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 355016
num_examples: 1000
download_size: 220013
dataset_size: 355016
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen9_run0_llama2-7b_xlsum_doc1000_real32_synt96_vuw | dgambettaphd | "2024-12-16T21:47:30Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T21:47:25Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 351996
num_examples: 1000
download_size: 219103
dataset_size: 351996
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jhkim64/NL2FOL_sentence_test | jhkim64 | "2024-12-16T22:42:01Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T22:41:57Z" | ---
dataset_info:
features:
- name: natural language
dtype: string
- name: Fol
dtype: string
splits:
- name: train
num_bytes: 75663
num_examples: 580
download_size: 36562
dataset_size: 75663
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
maxfortremblay/HealthCAN-M-decine-et-sant-Medicine-and-Health-NZ | maxfortremblay | "2024-12-16T23:38:53Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T23:38:51Z" | ---
dataset_info:
features:
- name: SUBJECT_EN
dtype: string
- name: TERM_EN
dtype: string
- name: TERM_EN_PARAMETER
dtype: string
- name: ABBREVIATION_EN
dtype: string
- name: ABBREVIATION_EN_PARAMETER
dtype: string
- name: SYNONYMS_EN
dtype: string
- name: SYNONYMS_EN_PARAMETERS
dtype: string
- name: TEXTUAL_SUPPORT_1_EN
dtype: string
- name: TEXTUAL_SUPPORT_2_EN
dtype: string
- name: TEXTUAL_SUPPORT_3_EN
dtype: string
- name: DOMAINE_FR
dtype: string
- name: TERME_FR
dtype: string
- name: TERME_FR_PARAMETRE
dtype: string
- name: ABBREVIATION_FR
dtype: string
- name: ABBREVIATION_FR_PARAMETRE
dtype: string
- name: SYNONYMES_FR
dtype: string
- name: SYNONYMES_FR_PARAMETRE
dtype: string
- name: JUSTIFICATION_1_FR
dtype: string
- name: JUSTIFICATION_2_FR
dtype: string
- name: JUSTIFICATION_3_FR
dtype: string
- name: UNIVERSAL_ENTRIES
dtype: string
- name: DOM_SUBJ_ES
dtype: string
- name: TERME_TERM_ES
dtype: string
- name: TERME_TERM_PARAM_ES
dtype: string
- name: ABBR_ES
dtype: string
- name: ABBR_PARAM_ES
dtype: string
- name: SYNO_ES
dtype: string
- name: SYNO_PARAM_ES
dtype: string
- name: JUST_TEXTSUPP_1_ES
dtype: string
- name: JUST_TEXTSUPP_2_ES
dtype: string
- name: JUST_TEXTSUPP_3_ES
dtype: string
- name: DOM_SUBJ_PT
dtype: string
- name: TERME_TERM_PT
dtype: string
- name: TERME_TERM_PARAM_PT
dtype: string
- name: ABBR_PT
dtype: string
- name: ABBR_PARAM_PT
dtype: string
- name: SYNO_PT
dtype: string
- name: SYNO_PARAM_PT
dtype: string
- name: JUST_TEXTSUPP_1_PT
dtype: string
- name: JUST_TEXTSUPP_2_PT
dtype: string
- name: JUST_TEXTSUPP_3_PT
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 3369076
num_examples: 4181
download_size: 1463051
dataset_size: 3369076
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
maxfortremblay/HealthCAN-M-decine-et-sant-Medicine-and-Health-NO | maxfortremblay | "2024-12-16T23:40:48Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T23:40:46Z" | ---
dataset_info:
features:
- name: SUBJECT_EN
dtype: string
- name: TERM_EN
dtype: string
- name: TERM_EN_PARAMETER
dtype: string
- name: ABBREVIATION_EN
dtype: string
- name: ABBREVIATION_EN_PARAMETER
dtype: string
- name: SYNONYMS_EN
dtype: string
- name: SYNONYMS_EN_PARAMETERS
dtype: string
- name: TEXTUAL_SUPPORT_1_EN
dtype: string
- name: TEXTUAL_SUPPORT_2_EN
dtype: string
- name: TEXTUAL_SUPPORT_3_EN
dtype: string
- name: DOMAINE_FR
dtype: string
- name: TERME_FR
dtype: string
- name: TERME_FR_PARAMETRE
dtype: string
- name: ABBREVIATION_FR
dtype: string
- name: ABBREVIATION_FR_PARAMETRE
dtype: string
- name: SYNONYMES_FR
dtype: string
- name: SYNONYMES_FR_PARAMETRE
dtype: string
- name: JUSTIFICATION_1_FR
dtype: string
- name: JUSTIFICATION_2_FR
dtype: string
- name: JUSTIFICATION_3_FR
dtype: string
- name: UNIVERSAL_ENTRIES
dtype: string
- name: DOM_SUBJ_ES
dtype: string
- name: TERME_TERM_ES
dtype: string
- name: TERME_TERM_PARAM_ES
dtype: string
- name: ABBR_ES
dtype: string
- name: ABBR_PARAM_ES
dtype: string
- name: SYNO_ES
dtype: string
- name: SYNO_PARAM_ES
dtype: string
- name: JUST_TEXTSUPP_1_ES
dtype: string
- name: JUST_TEXTSUPP_2_ES
dtype: string
- name: JUST_TEXTSUPP_3_ES
dtype: string
- name: DOM_SUBJ_PT
dtype: string
- name: TERME_TERM_PT
dtype: string
- name: TERME_TERM_PARAM_PT
dtype: string
- name: ABBR_PT
dtype: float64
- name: ABBR_PARAM_PT
dtype: float64
- name: SYNO_PT
dtype: string
- name: SYNO_PARAM_PT
dtype: string
- name: JUST_TEXTSUPP_1_PT
dtype: string
- name: JUST_TEXTSUPP_2_PT
dtype: string
- name: JUST_TEXTSUPP_3_PT
dtype: float64
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1806872
num_examples: 2625
download_size: 794306
dataset_size: 1806872
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_e6758603-63b4-48fc-b5a9-7581b58d397f | argilla-internal-testing | "2024-12-16T23:51:53Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T23:51:52Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/elix_generations_autolabel_gpt4o_pref_train | Asap7772 | "2025-01-08T00:06:12Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-17T00:39:46Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: level_x
dtype: string
- name: level_id_x
dtype: int64
- name: model_name_x
dtype: string
- name: response_x
dtype: string
- name: level_y
dtype: string
- name: level_id_y
dtype: int64
- name: model_name_y
dtype: string
- name: response_y
dtype: string
- name: scorer_level
dtype: string
- name: scorer_level_id
dtype: int64
- name: label
dtype: int64
- name: __index_level_0__
dtype: int64
- name: det_choice
dtype: int64
splits:
- name: train
num_bytes: 965480460
num_examples: 234738
download_size: 21228506
dataset_size: 965480460
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen7_run1_llama2-7b_wiki_doc1000_real32_synt96_vuw | dgambettaphd | "2024-12-17T01:32:28Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-17T01:32:25Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 270014
num_examples: 1000
download_size: 159029
dataset_size: 270014
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NZBM/Nnewresume_k_v2 | NZBM | "2024-12-17T04:09:10Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-17T04:09:05Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: knowledge
dtype: string
splits:
- name: train
num_bytes: 4954491
num_examples: 4088
- name: validation
num_bytes: 600403
num_examples: 505
- name: test
num_bytes: 591824
num_examples: 523
download_size: 1291856
dataset_size: 6146718
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
DT4LM/t5v1-1base_rte_clare_original_old | DT4LM | "2024-12-17T05:01:49Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-17T05:01:47Z" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 7897
num_examples: 34
download_size: 9640
dataset_size: 7897
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen3_run1_llama2-7b_wiki_doc1000_real96_synt32_vuw | dgambettaphd | "2024-12-17T05:05:14Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-17T05:05:11Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 558653
num_examples: 1000
download_size: 355250
dataset_size: 558653
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen8_run1_llama2-7b_xlsum_doc1000_real64_synt64_vuw | dgambettaphd | "2024-12-17T05:17:35Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-17T05:17:28Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 481444
num_examples: 1000
download_size: 320179
dataset_size: 481444
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen5_run1_llama2-7b_wiki_doc1000_real96_synt32_vuw | dgambettaphd | "2024-12-17T05:53:43Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-17T05:53:40Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 554899
num_examples: 1000
download_size: 353202
dataset_size: 554899
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen7_run1_llama2-7b_wiki_doc1000_real96_synt32_vuw | dgambettaphd | "2024-12-17T06:43:58Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-17T06:43:54Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 554039
num_examples: 1000
download_size: 352564
dataset_size: 554039
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HE-Baek/olympic-ragas-eval-dataset | HE-Baek | "2024-12-17T06:46:39Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-17T06:46:35Z" | ---
dataset_info:
features:
- name: user_input
dtype: string
- name: retrieved_contexts
sequence: string
- name: reference
dtype: string
- name: retrieve_contexts
sequence: string
splits:
- name: train
num_bytes: 40932
num_examples: 25
download_size: 18720
dataset_size: 40932
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|